Science.gov

Sample records for large volume simulations

  1. Large Eddy Simulations of Volume Restriction Effects on Canopy-Induced Increased-Uplift Regions

    NASA Astrophysics Data System (ADS)

    Chatziefstratiou, E.; Bohrer, G.; Velissariou, V.

    2012-12-01

    Previous modeling and empirical work have shown the development of important areas of increased uplift past forward-facing steps, and recirculation zones past backward-facing steps. Forest edges represent a special kind of step - a semi-porous one. Current models of the effects of forest edges on the flow represent the forest with a prescribed drag term and do not account for the effects of the solid volume in the forest that restricts the airflow. The RAMS-based Forest Large Eddy Simulation (RAFLES) resolves flows inside and above forested canopies. RAFLES is spatially explicit, and uses the finite volume method to solve a discretized set of Navier-Stokes equations. It accounts for vegetation drag effects on the flow and on the flux exchange between the canopy and the canopy air, proportional to the local leaf density. For a better representation of the vegetation structure in the numerical grid within the canopy sub-domain, the model uses a modified version of the cut-cell coordinate system. The hard volume of vegetation elements in forests, or of buildings in urban environments, within each numerical grid cell is represented via a sub-grid-scale process that shrinks the open apertures between grid cells and reduces the open cell volume. We used RAFLES to simulate the effects of a canopy of varying foliage and stem densities on flow over virtual cube-shaped barriers under neutrally buoyant conditions. We explicitly tested the effects of the numerical representation of volume restriction, independent of the effects of the leaf drag, by comparing drag-only simulations, where we prescribed no volume or aperture restriction to the flow; restriction-only simulations, where we prescribed no drag; and control simulations, where both drag and volume plus aperture restriction were included.
Our simulations show that representation of the effects of the volume and aperture restriction due to obstacles to flow is important (figure 1) and leads to differences in the

  2. A Parallel, Finite-Volume Algorithm for Large-Eddy Simulation of Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.

    1999-01-01

    A parallel, finite-volume algorithm has been developed for large-eddy simulation (LES) of compressible turbulent flows. This algorithm includes piecewise linear least-square reconstruction, trilinear finite-element interpolation, Roe flux-difference splitting, and second-order MacCormack time marching. Parallel implementation is done using the message-passing programming model. In this paper, the numerical algorithm is described. To validate the numerical method for turbulence simulation, LES of fully developed turbulent flow in a square duct is performed for a Reynolds number of 320 based on the average friction velocity and the hydraulic diameter of the duct. Direct numerical simulation (DNS) results are available for this test case, and the accuracy of this algorithm for turbulence simulations can be ascertained by comparing the LES solutions with the DNS results. The effects of grid resolution, upwind numerical dissipation, and subgrid-scale dissipation on the accuracy of the LES are examined. Comparison with DNS results shows that the standard Roe flux-difference splitting dissipation adversely affects the accuracy of the turbulence simulation. For accurate turbulence simulations, only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed.
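    The finding that only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed can be illustrated with a minimal scalar sketch. This is a hypothetical one-dimensional linear-advection version, not the paper's compressible scheme; the point is only that the upwind dissipation term is scaled by a small factor:

```python
def roe_flux_1d(u_left, u_right, a, eps=0.04):
    """Scalar Roe-type numerical flux for f(u) = a*u.

    eps scales the upwind dissipation term: eps = 1.0 recovers the
    standard Roe flux, while eps ~ 0.03-0.05 retains only the few
    percent of dissipation suggested above for accurate LES.
    """
    central = 0.5 * a * (u_left + u_right)           # non-dissipative part
    dissipation = 0.5 * abs(a) * (u_right - u_left)  # upwind dissipation part
    return central - eps * dissipation

# With eps = 1 and a > 0 the flux reduces to pure upwinding, a * u_left:
f_full = roe_flux_1d(1.0, 0.0, a=2.0, eps=1.0)   # -> 2.0
f_les = roe_flux_1d(1.0, 0.0, a=2.0, eps=0.04)   # nearly central: 1.04
```

    In an actual LES the remaining stabilization is provided by the subgrid-scale model, which is why most of the upwind dissipation can be removed without losing robustness.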

  3. Determination of the large scale volume weighted halo velocity bias in simulations

    NASA Astrophysics Data System (ADS)

    Zheng, Yi; Zhang, Pengjie; Jing, Yipeng

    2015-06-01

    A profound assumption in peculiar velocity cosmology is bv = 1 at sufficiently large scales, where bv is the volume-weighted halo (galaxy) velocity bias with respect to the matter velocity field. However, this fundamental assumption has not been robustly verified in numerical simulations. Furthermore, it is challenged by structure formation theory (Bardeen, Bond, Kaiser and Szalay, Astrophys. J. 304, 15 (1986); Desjacques and Sheth, Phys. Rev. D 81, 023526 (2010)), which predicts the existence of velocity bias (at least for proto-halos) due to the fact that halos reside in special regions (local density peaks). The major obstacle to measuring the volume-weighted velocity from N-body simulations is an unphysical sampling artifact. It is entangled in the measured velocity statistics and becomes significant for sparse populations. With recently improved understanding of the sampling artifact (Zhang, Zheng and Jing, 2015, PRD; Zheng, Zhang and Jing, 2015, PRD), for the first time we are able to appropriately correct this sampling artifact and then robustly measure the volume-weighted halo velocity bias. (1) We verify bv = 1 within 2% model uncertainty at k ≲ 0.1 h/Mpc and z = 0-2 for halos of mass ~10^12-10^13 h^-1 M⊙ and, therefore, consolidate a foundation for peculiar velocity cosmology. (2) We also find statistically significant signs of bv ≠ 1 at k ≳ 0.1 h/Mpc. Unfortunately, whether this is real or caused by a residual sampling artifact requires further investigation. Nevertheless, cosmology based on the k ≳ 0.1 h/Mpc velocity data should be careful with this potential velocity bias.

  4. Measurements of Elastic and Inelastic Properties under Simulated Earth's Mantle Conditions in Large Volume Apparatus

    NASA Astrophysics Data System (ADS)

    Mueller, H. J.

    2012-12-01

    The interpretation of highly resolved seismic data from Earth's deep interior requires measurements of the physical properties of Earth's materials under experimentally simulated mantle conditions. More than a decade ago, seismic tomography clearly showed that subduction of crustal material can reach the core-mantle boundary under specific circumstances. That means there is no longer room for the assumption that deep mantle rocks might be much less complex than the deep crustal rocks known from exhumation processes. Considering this, geophysical high-pressure research faces the challenge of increasing pressure and sample volume at the same time, in order to perform in situ experiments with representative complex samples. High-performance multi-anvil devices using novel materials are the most promising technique for this exciting task. Recent large volume presses provide sample volumes 3 to 7 orders of magnitude bigger than in diamond anvil cells, far beyond transition zone conditions. The sample size of several cubic millimeters allows elastic wave frequencies in the low to medium MHz range. Together with the small and even adjustable temperature gradients over the whole sample, this technique makes anisotropy and grain boundary effects in complex systems accessible, in principle, to measurements of elastic and inelastic properties. Measurements of both elastic wave velocities are also unrestricted for opaque and encapsulated samples. The application of triple-mode transducers and the data transfer function technique for ultrasonic interferometry reduces the time for saving the data during the experiment to about a minute or less. That makes real transient measurements under non-equilibrium conditions possible. A further benefit is that both elastic wave velocities are measured exactly simultaneously. Ultrasonic interferometry necessarily requires in situ sample deformation measurement by X-radiography. Time-resolved X-radiography makes in situ falling sphere viscosimetry and even the

  5. Inclusion of fluid-solid interaction in Volume of Fluid to simulate spreading and dewetting for large contact angles

    NASA Astrophysics Data System (ADS)

    Mahady, Kyle; Afkhami, Shahriar; Kondic, Lou

    2014-11-01

    The van der Waals (vdW) interaction between molecules is of fundamental importance in determining the behavior of three phase systems in fluid mechanics. This interaction gives rise to interfacial energies, and thus the contact angle for a droplet on a solid surface, and additionally leads to instability of very thin liquid films. We develop a hybrid method for including a Lennard-Jones type vdW interaction in a finite volume, Volume of Fluid (VoF) based solver for the full two-phase Navier-Stokes equations. This method includes the full interaction between each fluid phase and the solid substrate via a finite-volume approximation of the vdW body force. Our work is distinguished from conventional VoF based implementations in that the contact angle arises from simulation of the underlying physics, as well as successfully treating vdW induced film rupture. At the same time, it avoids the simplifications of calculations based on disjoining-pressure, where the vdW interaction is included as a pressure jump across the interface which is derived under the assumption of a flat film. This is especially relevant in the simulation of nanoscale film ruptures involving large contact angles, which have been studied recently in the context of bottom-up nanoparticle fabrication. This work is partially supported by the Grants NSF DMS-1320037 and CBET-1235710.

  6. Resolving the Effects of Aperture and Volume Restriction of the Flow by Semi-Porous Barriers Using Large-Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Chatziefstratiou, Efthalia K.; Velissariou, Vasilia; Bohrer, Gil

    2014-09-01

    The Regional Atmospheric Modelling System (RAMS)-based Forest Large-Eddy Simulation (RAFLES) model is used to simulate the effects of large rectangular prism-shaped semi-porous barriers of varying densities under neutrally buoyant conditions. RAFLES resolves flows inside and above forested canopies and other semi-porous barriers, and it accounts for barrier-induced drag on the flow and surface flux exchange between the barrier and the air. Unlike most other models, RAFLES also accounts for the barrier-induced volume and aperture restriction via a modified version of the cut-cell coordinate system. We explicitly tested the effects of the numerical representation of volume restriction, independent of the effects of the drag, by comparing drag-only simulations (where we prescribed neither volume nor aperture restrictions to the flow), restriction-only simulations (where we prescribed no drag), and control simulations where both drag and volume plus aperture restrictions were included. Previous modelling and empirical work have revealed the development of important areas of increased uplift upwind of forward-facing steps, and recirculation zones downwind of backward-facing steps. Our simulations show that representation of the effects of the volume and aperture restriction due to the presence of semi-porous barriers leads to differences in the strengths and locations of increased-updraft and recirculation zones, and the length and strength of impact and adjustment zones when compared to simulation solutions with a drag-only representation. These are mostly driven by differences to the momentum budget of the streamwise wind velocity by resolved turbulence and pressure gradient fields around the front and back edges of the barrier. We propose that volume plus aperture restriction is an important component of the flow system in semi-porous environments such as forests and cities and should be considered by large-eddy simulation (LES).
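    The volume and aperture restriction described above can be caricatured in a few lines. This is a schematic sketch, not the RAFLES cut-cell implementation; in particular, the linear scaling with solid fraction is an assumption made only for illustration:

```python
def effective_cell_geometry(cell_volume, face_area, solid_frac):
    """Schematic cut-cell-style sub-grid restriction: the solid fraction
    occupied by vegetation (or buildings) shrinks both the open cell
    volume and the open aperture between neighbouring cells.

    solid_frac is the fraction of the grid cell filled by hard volume.
    """
    open_volume = cell_volume * (1.0 - solid_frac)
    open_aperture = face_area * (1.0 - solid_frac)
    return open_volume, open_aperture

# A cell one-quarter filled by solid elements:
vol, ap = effective_cell_geometry(cell_volume=8.0, face_area=4.0,
                                  solid_frac=0.25)   # -> (6.0, 3.0)
```

    A drag-only model would leave both quantities at their full values and represent the obstacle purely as a momentum sink, which is exactly the difference the drag-only versus restriction-only experiments isolate.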

  7. Large-scale three-dimensional phase-field simulations for phase coarsening at ultrahigh volume fraction on high-performance architectures

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Wang, K. G.; Jones, Jim E.

    2016-06-01

    A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics of phase coarsening in the region of ultrahigh volume fraction is found. The parallel implementation is capable of harnessing the greater computer power available from high-performance architectures. The parallelized code enables an increase in three-dimensional simulation system size up to a 512^3 grid cube. Through the parallelized code, practical runtime can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis on speed-up and scalability is presented, showing good scalability which improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows good agreement with actual runtimes from numerical tests.
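    The abstract does not give the form of the paper's runtime model, but a common shape for such predictions splits the cost into a compute term that scales with the grid volume and a communication term that scales with subdomain surface area. The coefficients below are purely hypothetical:

```python
def predicted_runtime(n, p, t_comp=1e-7, t_comm=5e-4):
    """Toy runtime model for an n^3 phase-field grid on p processes.

    Compute time scales as n^3 / p; communication scales with the
    surface area of each process's subdomain, ~ n^2 / p^(2/3).
    Coefficients are illustrative, not fitted to the paper's data.
    """
    return t_comp * n**3 / p + t_comm * n**2 / p**(2 / 3)

def speedup(n, p):
    return predicted_runtime(n, 1) / predicted_runtime(n, p)
```

    A model of this shape reproduces the qualitative observation above: at fixed process count, the compute term grows faster than the communication term, so scalability improves with increasing problem size.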

  8. Large-eddy simulations of 3D Taylor-Green vortex: comparison of Smoothed Particle Hydrodynamics, Lattice Boltzmann and Finite Volume methods

    NASA Astrophysics Data System (ADS)

    Kajzer, A.; Pozorski, J.; Szewc, K.

    2014-08-01

    In the paper we present large-eddy simulation (LES) results for the 3D Taylor-Green vortex obtained by three different computational approaches: Smoothed Particle Hydrodynamics (SPH), the Lattice Boltzmann Method (LBM) and the Finite Volume Method (FVM). The Smagorinsky model was chosen as the subgrid-scale closure in LES for all considered methods, and a selection of spatial resolutions has been investigated. The SPH and LBM computations have been carried out with in-house codes executed on GPUs and compared, for validation purposes, with the FVM results obtained using the open-source CFD software OpenFOAM. A comparative study in terms of one-point statistics and turbulent energy spectra shows good agreement of the LES results for all methods. An analysis of GPU code efficiency and implementation difficulties has been made. It is shown that both SPH and LBM may offer a significant advantage over mesh-based CFD methods.
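    The Smagorinsky closure shared by all three solvers computes an eddy viscosity from the resolved strain rate, nu_t = (Cs*Delta)^2 * |S|. A minimal 2D sketch, using the common textbook constant Cs ≈ 0.17 rather than any value from the paper:

```python
import numpy as np

def smagorinsky_viscosity(dudx, dudy, dvdx, dvdy, delta, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (Cs*Delta)^2 * |S| for a 2D
    velocity-gradient sample, with |S| = sqrt(2 * S_ij * S_ij) built
    from the symmetric strain-rate tensor S_ij."""
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * delta) ** 2 * s_mag

# Pure shear du/dy = 1 with filter width delta = 1 gives |S| = 1,
# so nu_t = Cs^2:
nu_t = smagorinsky_viscosity(0.0, 1.0, 0.0, 0.0, delta=1.0)
```

    In a solver, delta is tied to the local grid (or particle) spacing, which is how the same closure transfers between the SPH, LBM and FVM discretizations.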

  9. Gyrokinetic large eddy simulations

    SciTech Connect

    Morel, P.; Navarro, A. Banon; Albrecht-Marc, M.; Carati, D.; Merz, F.; Goerler, T.; Jenko, F.

    2011-07-15

    The large eddy simulation approach is adapted to the study of plasma microturbulence in a fully three-dimensional gyrokinetic system. Ion temperature gradient driven turbulence is studied with the GENE code for both a standard resolution and a reduced resolution with a model for the sub-grid scale turbulence. A simple dissipative model for representing the effect of the sub-grid scales on the resolved scales is proposed and tested. Once calibrated, the model appears to be able to reproduce most of the features of the free energy spectra for various values of the ion temperature gradient.

  10. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L.; Rickert, M.

    1997-04-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual persons' behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed well above 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches depend both on the specific questions and on the prospective user community. The approaches range from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for statistical physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.
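    The single-bit cellular-automaton approach mentioned above is typified by the Nagel-Schreckenberg model. A minimal vectorized sketch follows, with textbook parameter choices (v_max = 5, random-slowdown probability 0.3) rather than those of any specific project:

```python
import numpy as np

def nasch_step(pos, vel, road_len, v_max=5, p_slow=0.3, rng=None):
    """One parallel update of the Nagel-Schreckenberg traffic
    cellular automaton on a circular road of road_len cells.

    pos and vel are integer arrays (one entry per vehicle).
    """
    rng = rng if rng is not None else np.random.default_rng()
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos) % road_len - 1    # empty cells ahead
    vel = np.minimum(vel + 1, v_max)                  # 1. accelerate
    vel = np.minimum(vel, gaps)                       # 2. brake (no collisions)
    vel = np.where(rng.random(len(vel)) < p_slow,
                   np.maximum(vel - 1, 0), vel)       # 3. random slowdown
    return (pos + vel) % road_len, vel                # 4. move

pos = np.array([0, 10, 20])
vel = np.zeros(3, dtype=int)
pos, vel = nasch_step(pos, vel, road_len=50)
```

    Because each rule is a simple elementwise minimum, maximum or comparison, the state fits in a few bits per cell and the update vectorizes trivially, which is what makes million-vehicle-per-second update rates plausible on the machines discussed above.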

  11. Challenges for Large Scale Simulations

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    2010-03-01

    With computational approaches becoming ubiquitous, the growing impact of large scale computing on research influences both theoretical and experimental work. I will review a few examples in condensed matter physics and quantum optics, including the impact of computer simulations in the search for supersolidity, thermometry in ultracold quantum gases, and the challenging search for novel phases in strongly correlated electron systems. While only a decade ago such simulations needed the fastest supercomputers, many simulations can now be performed on small workstation clusters or even a laptop: what was previously restricted to a few experts can now potentially be used by many. Only part of the gain in computational capabilities is due to Moore's law and improvement in hardware. Equally impressive is the performance gain due to new algorithms - as I will illustrate using some recently developed algorithms. At the same time modern peta-scale supercomputers offer unprecedented computational power and allow us to tackle new problems and address questions that were impossible to solve numerically only a few years ago. While there is a roadmap for future hardware developments to exascale and beyond, the main challenges are on the algorithmic and software infrastructure side. Among the problems that face the computational physicist are: the development of new algorithms that scale to thousands of cores and beyond, a software infrastructure that lifts code development to a higher level and speeds up the development of new simulation programs for large scale computing machines, tools to analyze the large volume of data obtained from such simulations, and, as an emerging field, provenance-aware software that aims for reproducibility of the complete computational workflow from model parameters to the final figures.
Interdisciplinary collaborations and collective efforts will be required, in contrast to the cottage-industry culture currently present in many areas of computational

  12. Large volume manufacture of dymalloy

    SciTech Connect

    1998-06-22

    The purpose of this research was to test the commercial viability and feasibility of Dymalloy, a high-thermal-conductivity composite material. Dymalloy was developed as part of a CRADA with Sun Microsystems. Sun Microsystems was a potential end user of Dymalloy as a substrate for MCMs (multi-chip modules). Sun had no desire to be involved in the manufacture of this material. The goal of this small business CRADA with Spectra Mat was to establish a high-volume commercial manufacturing source for the Dymalloy required by an end user such as Sun Microsystems. The difference between the fabrication technique developed during the CRADA and this proposed work related to the mechanical technique of coating the diamond powder. Mechanical parts for the high-volume diamond powder coating process existed; however, they needed to be installed in an existing coating system for evaluation. Sputtering systems similar to the one required for this project were available at LLNL. Once the diamond powder was coated, both LLNL and Spectra Mat could make and test the Dymalloy composites. Spectra Mat manufactured Dymalloy composites in order to evaluate and establish a reasonable cost estimate based on their existing processing capabilities. This information was used by Spectra Mat to define the market and cost-competitive products that could be commercialized from this new substrate material.

  13. Applied large eddy simulation.

    PubMed

    Tucker, Paul G; Lardeau, Sylvain

    2009-07-28

    Large eddy simulation (LES) is now seen more and more as a viable alternative to current industrial practice, usually based on problem-specific Reynolds-averaged Navier-Stokes (RANS) methods. Access to detailed flow physics is attractive to industry, especially in an environment in which computer modelling is bound to play an ever increasing role. However, the improvement in accuracy and flow detail comes at substantial cost. This has so far prevented wider industrial use of LES. The purpose of the applied LES discussion meeting was to address questions regarding what is achievable and what is not, given the current technology and knowledge, for an industrial practitioner who is interested in using LES. The use of LES was explored in an application-centred context between diverse fields. The general flow-governing equation form was explored along with various LES models. The errors occurring in LES were analysed. Also, the hybridization of RANS and LES was considered. The importance of modelling relative to boundary conditions, problem definition and other more mundane aspects was examined. It was to an extent concluded that for LES to make most rapid industrial impact, pragmatic hybrid use of LES, implicit LES and RANS elements will probably be needed. Added to this, further highly industrial-sector-specific model parametrizations will be required, with clear thought on the key target design parameter(s). The combination of good numerical modelling expertise, a sound understanding of turbulence, along with artistry, pragmatism and the use of recent developments in computer science should dramatically add impetus to the industrial uptake of LES. In the light of the numerous technical challenges that remain, it appears that for some time to come LES will have echoes of the high levels of technical knowledge required for safe use of RANS, but with much greater fidelity. PMID:19531503

  14. Large Eddy Simulation of Bubbly Flow and Slag Layer Behavior in Ladle with Discrete Phase Model (DPM)-Volume of Fluid (VOF) Coupled Model

    NASA Astrophysics Data System (ADS)

    Li, Linmin; Liu, Zhongqiu; Cao, Maoxue; Li, Baokuan

    2015-07-01

    In the ladle metallurgy process, the bubble movement and slag layer behavior are very important to the refining process and steel quality. For the bubble-liquid flow, bubble movement plays a significant role in the phase structure and causes the unsteady complex turbulent flow pattern. This is one of the most crucial shortcomings of the current two-fluid models. In the current work, a one-third scale water model is established to investigate the bubble movement and the slag open-eye formation. A new mathematical model using large eddy simulation (LES) is developed for the bubble-liquid-slag-air four-phase flow in the ladle. The Eulerian volume of fluid (VOF) model is used for tracking the liquid-slag-air free surfaces and the Lagrangian discrete phase model (DPM) is used for describing the bubble movement. The turbulent liquid flow is induced by bubble-liquid interactions and is solved by LES. The procedure of a bubble coming out of the liquid and getting into the air is modeled using a user-defined function. The results show that the present LES-DPM-VOF coupled model is good at predicting the unsteady bubble movement, slag eye formation, interface fluctuation, and slag entrainment.

  15. Volume Rendering of AMR Simulations

    NASA Astrophysics Data System (ADS)

    Labadens, M.; Pomarède, D.; Chapon, D.; Teyssier, R.; Bournaud, F.; Renaud, F.; Grandjouan, N.

    2013-04-01

    High-resolution simulations often rely on the Adaptive Mesh Refinement (AMR) technique to optimize memory consumption versus attainable precision. While this technique allows for dramatic improvements in terms of computing performance, the analysis and visualization of its data outputs remain challenging. The lack of effective volume renderers for the octree-based AMR used by the RAMSES simulation program has led to the development of the solutions presented in this paper. Two custom algorithms are discussed, based on the splatting and the ray-casting techniques. Their usage is illustrated in the context of the visualization of a high-resolution, 6000-processor simulation of a Milky Way-like galaxy. The performance obtained in terms of memory management and parallelism speedup is presented.

  16. LARGE BUILDING HVAC SIMULATION

    EPA Science Inventory

    The report discusses the monitoring and collection of data relating to indoor pressures and radon concentrations under several test conditions in a large school building in Bartow, Florida. The Florida Solar Energy Center (FSEC) used an integrated computational software, FSEC 3.0...

  17. Distributed shared memory for roaming large volumes.

    PubMed

    Castanié, Laurent; Mion, Christophe; Cavin, Xavier; Lévy, Bruno

    2006-01-01

    We present a cluster-based volume rendering system for roaming very large volumes. This system allows the user to move a gigabyte-sized probe inside a total volume of several tens or hundreds of gigabytes in real time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that aggregates both graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared to directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing and rendering for optimal volume roaming. PMID:17080865
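    The miss-handling policy described above (check peer nodes before falling back to local disk) can be sketched as a toy cache. All names, data structures and statistics here are illustrative, not taken from the actual system:

```python
class BrickCache:
    """Toy model of a remote-first brick cache: on a miss, peer nodes
    are consulted before the local disk, mirroring the lookup order
    described in the abstract (network fetch being ~4x faster)."""

    def __init__(self, peers, disk):
        self.local = {}          # bricks resident on this node
        self.peers = peers       # list of dicts: bricks cached on other nodes
        self.disk = disk         # dict standing in for the on-disk volume
        self.stats = {"local": 0, "peer": 0, "disk": 0}

    def fetch(self, brick_id):
        if brick_id in self.local:                  # 1. local hit
            self.stats["local"] += 1
            return self.local[brick_id]
        for peer in self.peers:                     # 2. remote (fast) fetch
            if brick_id in peer:
                self.stats["peer"] += 1
                self.local[brick_id] = peer[brick_id]
                return self.local[brick_id]
        self.stats["disk"] += 1                     # 3. local disk (slow)
        self.local[brick_id] = self.disk[brick_id]
        return self.local[brick_id]

cache = BrickCache(peers=[{1: "brick1"}],
                   disk={i: f"brick{i}" for i in range(4)})
cache.fetch(1)   # served by a peer node
cache.fetch(3)   # only on disk
cache.fetch(1)   # now resident locally
```

    The real system adds the pieces this sketch omits: asynchronous disk access, texture upload, and the pipelined sort-last compositing that overlaps with the fetches.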

  18. Large volumes and spectroscopy of walking theories

    NASA Astrophysics Data System (ADS)

    Del Debbio, L.; Lucini, B.; Patella, A.; Pica, C.; Rago, A.

    2016-03-01

    A detailed investigation of finite-size effects is performed for SU(2) gauge theory with two fermions in the adjoint representation, which previous lattice studies have shown to be inside the conformal window. The system is investigated with different spatial and temporal boundary conditions on lattices of various spatial and temporal extensions, for two values of the bare fermion mass representing a heavy and light fermion regime. Our study shows that the infinite-volume limit of masses and decay constants in the mesonic sector is reached only when the mass of the pseudoscalar particle M_PS and the spatial lattice size L satisfy the relation L·M_PS ≥ 15. This bound, which is at least a factor of three higher than what is observed in QCD, is a likely consequence of the different spectral signatures of the two theories, with the scalar isosinglet (0++ glueball) being the lightest particle in our model. In addition to stressing the importance of simulating large lattice sizes, our analysis emphasizes the need to understand quantitatively the full spectrum of the theory rather than just the spectrum in the mesonic isotriplet sector. While for the lightest fermion measuring masses from gluonic operators proves to be still challenging, reliable results for glueball states are obtained at the largest fermion mass and, in the mesonic sector, for both fermion masses. As a byproduct of our investigation, we perform a finite-size scaling of the pseudoscalar mass and decay constant. The data presented in this work support the conformal behavior of this theory with an anomalous dimension γ* ≃ 0.37.
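    The bound L·M_PS ≥ 15 translates directly into a minimal lattice extent for a given pseudoscalar mass in lattice units; a trivial helper, for illustration only:

```python
import math

def min_lattice_size(m_ps, bound=15.0):
    """Smallest integer spatial extent L (in lattice units) that
    satisfies L * M_PS >= bound; bound = 15 is the threshold the
    study reports for this theory."""
    return math.ceil(bound / m_ps)

# A pseudoscalar mass of 0.5 in lattice units would already demand
# a 30-site spatial extent:
L_min = min_lattice_size(0.5)
```

    For comparison, the analogous rule of thumb in QCD spectroscopy is closer to L·M_PS ≳ 4-5, which is the "factor of three" difference noted above.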

  19. Large-scale simulations of reionization

    SciTech Connect

    Kohler, Katharina; Gnedin, Nickolay Y.; Hamilton, Andrew J.S.; /JILA, Boulder

    2005-11-01

    We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range--from galaxies to rare bright quasars--we need to be able to cover a significant volume of the universe in our simulation without losing the important small scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280 h^-1 Mpc with 10 h^-1 Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-α forest.
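    The quoted box and cell sizes imply a fairly modest coarse grid, which is precisely why sub-cell clumping factors are needed to stand in for unresolved small-scale structure; the arithmetic:

```python
box_size = 1280.0    # h^-1 Mpc, full simulation box
cell_size = 10.0     # h^-1 Mpc, one radiative-transfer cell
cells_per_side = round(box_size / cell_size)   # 128 cells per side
total_cells = cells_per_side ** 3              # ~2.1 million coarse cells
```

    Each 10 h^-1 Mpc cell spans many galaxies and the density inhomogeneities within it, so a clumping factor from a separate high-resolution simulation corrects the recombination rate inside each coarse cell.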

  20. Large volume axionic Swiss cheese inflation

    NASA Astrophysics Data System (ADS)

    Misra, Aalok; Shukla, Pramod

    2008-09-01

    Continuing with the ideas of (Section 4 of) [A. Misra, P. Shukla, Moduli stabilization, large-volume dS minimum without anti-D3-branes, (non-)supersymmetric black hole attractors and two-parameter Swiss cheese Calabi Yau's, arXiv: 0707.0105 [hep-th], Nucl. Phys. B, in press], after inclusion of perturbative and non-perturbative α′ corrections to the Kähler potential and (D1- and D3-) instanton generated superpotential, we show the possibility of slow roll axionic inflation in the large volume limit of Swiss cheese Calabi-Yau orientifold compactifications of type IIB string theory. We also include one- and two-loop corrections to the Kähler potential but find them to be subdominant to the (perturbative and non-perturbative) α′ corrections. The NS-NS axions provide a flat direction for slow roll inflation to proceed from a saddle point to the nearest dS minimum.

  1. Large volume flow-through scintillating detector

    DOEpatents

    Gritzo, Russ E.; Fowler, Malcolm M.

    1995-01-01

    A large-volume flow-through radiation detector for use in large air flow situations, such as incinerator stacks or building air systems, comprises a plurality of flat plates made of a scintillating material arranged parallel to the air flow. Each scintillating plate has an attached light guide which transfers light generated inside the scintillating plate to an associated photomultiplier tube. The outputs of the photomultiplier tubes are connected to electronics which can record any radiation and provide an alarm if appropriate for the application.

  2. Large mode-volume, large beta, photonic crystal laser resonator

    SciTech Connect

    Dezfouli, Mohsen Kamandar; Dignam, Marc M.

    2014-12-15

    We propose an optical resonator formed from the coupling of 13 L2 defects in a triangular-lattice photonic crystal slab. Using a tight-binding formalism, we optimized the coupled-defect cavity design to obtain a resonator with predicted single-mode operation, a mode volume five times that of an L2-cavity mode, and a beta factor of 0.39. The results are confirmed using finite-difference time-domain simulations. This resonator is very promising for use as a single-mode photonic crystal vertical-cavity surface-emitting laser with high saturation output power compared to a laser consisting of one of the single-defect cavities.

  3. Mesoscale Ocean Large Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Pearson, Brodie; Fox-Kemper, Baylor; Bachman, Scott; Bryan, Frank

    2015-11-01

    The highest resolution global climate models (GCMs) can now resolve the largest scales of mesoscale dynamics in the ocean. This has the potential to increase the fidelity of GCMs. However, the effects of the smallest, unresolved, scales of mesoscale dynamics must still be parametrized. One such family of parametrizations are mesoscale ocean large eddy simulations (MOLES), but the effects of including MOLES in a GCM are not well understood. In this presentation, several MOLES schemes are implemented in a mesoscale-resolving GCM (CESM), and the resulting flow is compared with that produced by more traditional sub-grid parametrizations. Large eddy simulation (LES) is used to simulate flows where the largest scales of turbulent motion are resolved, but the smallest scales are not resolved. LES has traditionally been used to study 3D turbulence, but recently it has also been applied to idealized 2D and quasi-geostrophic (QG) turbulence. The MOLES presented here are based on 2D and QG LES schemes.

  4. LARGE volume string compactifications at finite temperature

    NASA Astrophysics Data System (ADS)

    Anguelova, Lilia; Calò, Vincenzo; Cicoli, Michele

    2009-10-01

    We present a detailed study of the finite-temperature behaviour of the LARGE Volume type IIB flux compactifications. We show that certain moduli can thermalise at high temperatures. Despite that, their contribution to the finite-temperature effective potential is always negligible and the latter has a runaway behaviour. We compute the maximal temperature T_max, above which the internal space decompactifies, as well as the temperature T_*, which is reached after the decay of the heaviest moduli. The natural constraint T_* < T_max implies a lower bound on the allowed values of the internal volume V. We find that this restriction rules out a significant range of values corresponding to smaller volumes of the order V ~ 10^4 l_s^6, which lead to standard GUT theories. Instead, the bound favours values of the order V ~ 10^15 l_s^6, which lead to TeV-scale SUSY desirable for solving the hierarchy problem. Moreover, our result favours low-energy inflationary scenarios with density perturbations generated by a field which is not the inflaton. In such a scenario, one could achieve both inflation and TeV-scale SUSY, although gravity waves would not be observable. Finally, we pose a two-fold challenge for the solution of the cosmological moduli problem. First, we show that the heavy moduli decay before they can begin to dominate the energy density of the Universe. Hence they are not able to dilute any unwanted relics. And second, we argue that, in order to obtain thermal inflation in the closed string moduli sector, one needs to go beyond the present EFT description.

  5. SUSY's Ladder: reframing sequestering at Large Volume

    NASA Astrophysics Data System (ADS)

    Reece, Matthew; Xue, Wei

    2016-04-01

    Theories with approximate no-scale structure, such as the Large Volume Scenario, have a distinctive hierarchy of multiple mass scales in between TeV gaugino masses and the Planck scale, which we call SUSY's Ladder. This is a particular realization of Split Supersymmetry in which the same small parameter suppresses gaugino masses relative to scalar soft masses, scalar soft masses relative to the gravitino mass, and the UV cutoff or string scale relative to the Planck scale. This scenario has many phenomenologically interesting properties, and can avoid dangers including the gravitino problem, flavor problems, and the moduli-induced LSP problem that plague other supersymmetric theories. We study SUSY's Ladder using a superspace formalism that makes the mysterious cancellations in previous computations manifest. This opens the possibility of a consistent effective field theory understanding of the phenomenology of these scenarios, based on power-counting in the small ratio of string to Planck scales. We also show that four-dimensional theories with approximate no-scale structure enforced by a single volume modulus arise only from two special higher-dimensional theories: five-dimensional supergravity and ten-dimensional type IIB supergravity. This gives a phenomenological argument in favor of ten-dimensional ultraviolet physics which is different from standard arguments based on the consistency of superstring theory.

  6. Comments on large-N volume independence

    SciTech Connect

    Poppitz, Erich; Unsal, Mithat; /SLAC /Stanford U., Phys. Dept.

    2010-06-02

    We study aspects of the large-N volume independence on R^3 x L^Γ, where L^Γ is a Γ-site lattice for Yang-Mills theory with adjoint Wilson fermions. We find the critical number of lattice sites above which the center-symmetry analysis on L^Γ agrees with the one on the continuum S^1. For the Wilson parameter set to one and Γ >= 2, the two analyses agree. One-loop radiative corrections to Wilson-line masses are finite, reminiscent of the UV-insensitivity of the Higgs mass in deconstruction/Little-Higgs theories. Even for theories with Γ = 1, volume independence in QCD(adj) may be guaranteed to work by tuning one low-energy effective field theory parameter. Within the parameter space of the theory, at most three operators of the 3d effective field theory exhibit one-loop UV-sensitivity. This opens the analytical prospect of studying 4d non-perturbative physics by using lower-dimensional field theories (d = 3, in our example).

  7. Temporal Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. D.; Thomas, B. C.

    2004-01-01

    In 1999, Stolz and Adams unveiled a subgrid-scale model for LES based upon approximately inverting (defiltering) the spatial grid-filter operator, termed the approximate deconvolution model (ADM). Subsequently, the utility and accuracy of the ADM were demonstrated in a posteriori analyses of flows as diverse as incompressible plane-channel flow and supersonic compression-ramp flow. In a prelude to the current paper, a parameterized temporal ADM (TADM) was developed and demonstrated in both a priori and a posteriori analyses for forced, viscous Burgers flow. The development of a time-filtered variant of the ADM was motivated primarily by the desire for a unifying theoretical and computational context to encompass direct numerical simulation (DNS), large-eddy simulation (LES), and Reynolds-averaged Navier-Stokes (RANS) simulation. The resultant methodology was termed temporal LES (TLES). To permit exploration of the parameter space, however, previous analyses of the TADM were restricted to Burgers flow, and it has remained to demonstrate the TADM and TLES methodology for three-dimensional flow. For several reasons, plane-channel flow presents an ideal test case for the TADM. Among these reasons, channel flow is anisotropic, yet it lends itself to highly efficient and accurate spectral numerical methods. Moreover, channel flow has been investigated extensively by DNS, and a highly accurate database of Moser et al. exists. In the present paper, we develop a fully anisotropic TADM model and demonstrate its utility in simulating incompressible plane-channel flow at nominal values of Re_tau = 180 and Re_tau = 590 by the TLES method. The TADM model is shown to perform nearly as well as the ADM at equivalent resolution, thereby establishing TLES as a viable alternative to LES. Moreover, as the current model is suboptimal in some respects, there is considerable room to improve TLES.

  8. A new large-volume multianvil system

    NASA Astrophysics Data System (ADS)

    Frost, D. J.; Poe, B. T.; Trønnes, R. G.; Liebske, C.; Duba, A.; Rubie, D. C.

    2004-06-01

    A scaled-up version of the 6-8 Kawai-type multianvil apparatus has been developed at the Bayerisches Geoinstitut for operation over ranges of pressure and temperature attainable in conventional systems but with much larger sample volumes. This split-cylinder multianvil system is used with a hydraulic press that can generate loads of up to 5000 t (50 MN). The six tool-steel outer anvils define a cubic cavity of 100 mm edge length in which eight 54 mm tungsten carbide cubic inner anvils are compressed. Experiments are performed using Cr2O3-doped MgO octahedra and pyrophyllite gaskets. Pressure calibrations at room temperature and high temperature have been performed with 14/8, 18/8, 18/11, 25/17 and 25/15 OEL/TEL (octahedral edge length/anvil truncation edge length, in millimetres) configurations. All configurations tested reach a limiting plateau where the sample pressure no longer increases with applied load. Calibrations with different configurations show that greater sample-pressure efficiency can be achieved by increasing the OEL/TEL ratio. With the 18/8 configuration the GaP transition is reached at a load of 2500 t, whereas using the 14/8 assembly this pressure cannot be reached even at substantially higher loads. With an applied load of 2000 t the 18/8 configuration can produce MgSiO3 perovskite at 1900 °C with a sample volume of ~20 mm^3, compared with <3 mm^3 in conventional multianvil systems at the same conditions. The large octahedron size and the use of a stepped LaCrO3 heater also result in significantly lower thermal gradients over the sample.

  9. Radiation from Large Gas Volumes and Heat Exchange in Steam Boiler Furnaces

    SciTech Connect

    Makarov, A. N.

    2015-09-15

    Radiation from large cylindrical gas volumes is studied as a means of simulating the flare in steam boiler furnaces. Calculations of heat exchange in a furnace by the zonal method and by simulation of the flare with cylindrical gas volumes are described. The latter method is more accurate and yields more reliable information on heat transfer processes taking place in furnaces.
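The grey-body Stefan-Boltzmann form underlying flare-to-wall heat exchange can be sketched as follows. This is the textbook net-flux expression, not Makarov's cylindrical-gas-volume method or the zonal method; the emissivity and temperatures are illustrative assumptions.

```python
def flare_to_wall_flux(eps_flare, t_flare_k, t_wall_k, sigma=5.670e-8):
    """Net radiative flux (W/m^2) from a flare at t_flare_k to a cooler
    waterwall at t_wall_k in a grey-body Stefan-Boltzmann approximation:
    q = eps * sigma * (T_flare^4 - T_wall^4)."""
    return eps_flare * sigma * (t_flare_k**4 - t_wall_k**4)

# Illustrative furnace values: flare emissivity 0.4, flare at 1800 K,
# waterwall at 700 K:
q = flare_to_wall_flux(0.4, 1800.0, 700.0)
print(round(q))  # on the order of a few hundred kW/m^2
```

The methods compared in the record refine this by resolving the geometry of the radiating gas volume (zones versus cylindrical volumes), which changes the effective view factors rather than the fourth-power temperature dependence.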

  10. Large area pulsed solar simulator

    NASA Technical Reports Server (NTRS)

    Kruer, Mark A. (Inventor)

    1999-01-01

    An advanced solar simulator illuminates the surface of a very large solar array, such as one twenty feet by twenty feet in area, from a distance of about twenty-six feet with an essentially uniform intensity field of pulsed light of an intensity of one AM0, enabling the solar array to be efficiently tested with light that emulates the sun. Light modifiers sculpt a portion of the light generated by an electrically powered high-power xenon lamp and, together with direct light from the lamp, provide uniform-intensity illumination throughout the solar array, compensating for the square-law and cosine-law reduction in direct light intensity, particularly at the corner locations of the array. At any location within the array the sum of the direct light and reflected light is essentially constant.
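The square-law and cosine-law falloff the record compensates for can be quantified with a small sketch: for a point source on the array axis, direct intensity on the flat array scales as cos(theta)/r^2, which combines to cos^3(theta) relative to the centre. The 20 ft array and 26 ft distance are from the record; the point-source model is an assumption.

```python
import math

def corner_to_center_ratio(half_width, half_height, distance):
    """Direct-beam intensity at an array corner relative to the centre for
    a point source on the array axis: the inverse-square (1/r^2) and
    cosine-of-incidence factors combine to cos^3(theta)."""
    r = math.sqrt(half_width**2 + half_height**2 + distance**2)
    return (distance / r) ** 3

# 20 ft x 20 ft array lit from 26 ft, as in the record:
print(corner_to_center_ratio(10.0, 10.0, 26.0))  # ~0.68
```

So the corners receive only about two-thirds of the centre's direct intensity, which is the deficit the reflected "sculpted" light must make up.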

  11. Lagrangian volume deformations around simulated galaxies

    NASA Astrophysics Data System (ADS)

    Robles, S.; Domínguez-Tenreiro, R.; Oñorbe, J.; Martínez-Serrano, F. J.

    2015-07-01

    We present a detailed analysis of the local evolution of 206 Lagrangian Volumes (LVs) selected at high redshift around galaxy seeds, identified in a large-volume Λ cold dark matter (ΛCDM) hydrodynamical simulation. The LVs have a mass range of 1-1500 × 10^10 M⊙. We follow the dynamical evolution of the density field inside these initially spherical LVs from z = 10 up to z_low = 0.05, witnessing highly non-linear, anisotropic mass rearrangements within them, leading to the emergence of the local cosmic web (CW). These mass arrangements have been analysed in terms of the reduced inertia tensor I_ij^r, focusing on the evolution of the principal axes of inertia and their corresponding eigendirections, and paying particular attention to the times when the evolution of these two structural elements declines. In addition, mass and component effects along this process have also been investigated. We have found that deformations are led by dark matter dynamics and that they transform most of the initially spherical LVs into prolate shapes, i.e. filamentary structures. An analysis of the individual freezing-out time distributions for shapes and eigendirections shows that most of the LVs first fix their three axes of symmetry (like a skeleton) early on, while accretion flows towards them still continue. Very remarkably, we have found that more massive LVs fix their skeleton earlier than less massive ones. We briefly discuss the astrophysical implications our findings could have, including the galaxy mass-morphology relation and the effects on the galaxy-galaxy merger parameter space, among others.

  12. Finite volume hydromechanical simulation in porous media

    NASA Astrophysics Data System (ADS)

    Nordbotten, Jan Martin

    2014-05-01

    Cell-centered finite volume methods are prevailing in numerical simulation of flow in porous media. However, due to the lack of cell-centered finite volume methods for mechanics, coupled flow and deformation is usually treated either by coupled finite-volume-finite element discretizations, or within a finite element setting. The former approach is unfavorable as it introduces two separate grid structures, while the latter approach loses the advantages of finite volume methods for the flow equation. Recently, we proposed a cell-centered finite volume method for elasticity. Herein, we explore the applicability of this novel method to provide a compatible finite volume discretization for coupled hydromechanic flows in porous media. We detail in particular the issue of coupling terms, and show how this is naturally handled. Furthermore, we observe how the cell-centered finite volume framework naturally allows for modeling fractured and fracturing porous media through internal boundary conditions. We support the discussion with a set of numerical examples: the convergence properties of the coupled scheme are first investigated; second, we illustrate the practical applicability of the method both for fractured and heterogeneous media.
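A minimal cell-centered finite volume sketch helps make the flow half of the record concrete: the standard two-point flux approximation (TPFA) for single-phase pressure in 1D, with harmonic-mean face transmissibilities. This illustrates only the classic flow discretization, not the paper's novel finite volume method for elasticity or its coupling terms; the grid and boundary values are assumptions.

```python
import numpy as np

def tpfa_pressure(n, k, p_left, p_right):
    """Cell-centred two-point flux finite volume solve of d/dx(k dp/dx) = 0
    on n unit-width cells with Dirichlet pressures at both ends. k holds one
    permeability per cell; interior face transmissibility is the harmonic
    mean of the two adjacent cell permeabilities."""
    T = 2.0 * k[:-1] * k[1:] / (k[:-1] + k[1:])   # interior faces
    A = np.zeros((n, n))
    b = np.zeros(n)
    for f, t in enumerate(T):                      # face between cells f, f+1
        A[f, f] += t
        A[f + 1, f + 1] += t
        A[f, f + 1] -= t
        A[f + 1, f] -= t
    A[0, 0] += 2.0 * k[0]                          # boundary half-cell distances
    b[0] += 2.0 * k[0] * p_left
    A[-1, -1] += 2.0 * k[-1]
    b[-1] += 2.0 * k[-1] * p_right
    return np.linalg.solve(A, b)

# Homogeneous medium: TPFA reproduces the linear pressure profile exactly.
p = tpfa_pressure(10, np.ones(10), 1.0, 0.0)
print(p[0], p[-1])  # cell-centre values 0.95 and 0.05
```

The appeal the record describes is that a compatible cell-centered discretization for mechanics lets displacement unknowns live on this same grid, avoiding a second (finite element) mesh.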

  13. Finite volume hydromechanical simulation in porous media

    PubMed Central

    Nordbotten, Jan Martin

    2014-01-01

    Cell-centered finite volume methods are prevailing in numerical simulation of flow in porous media. However, due to the lack of cell-centered finite volume methods for mechanics, coupled flow and deformation is usually treated either by coupled finite-volume-finite element discretizations, or within a finite element setting. The former approach is unfavorable as it introduces two separate grid structures, while the latter approach loses the advantages of finite volume methods for the flow equation. Recently, we proposed a cell-centered finite volume method for elasticity. Herein, we explore the applicability of this novel method to provide a compatible finite volume discretization for coupled hydromechanic flows in porous media. We detail in particular the issue of coupling terms, and show how this is naturally handled. Furthermore, we observe how the cell-centered finite volume framework naturally allows for modeling fractured and fracturing porous media through internal boundary conditions. We support the discussion with a set of numerical examples: the convergence properties of the coupled scheme are first investigated; second, we illustrate the practical applicability of the method both for fractured and heterogeneous media. PMID:25574061

  14. Changes in leg volume during microgravity simulation

    NASA Technical Reports Server (NTRS)

    Thornton, William E.; Hedge, Vickie; Coleman, Eugene; Uri, John J.; Moore, Thomas P.

    1992-01-01

    Little published information exists regarding the magnitude and time course of the cephalad fluid shift resulting from microgravity simulations. Six subjects were exposed to 150 min each of horizontal bed rest, 6-deg head-down tilt, and horizontal water immersion. Fluid shift was estimated by calculating leg volumes from eight serial girth measurements from groin to ankle before, during, and after exposure. Results were compared with data from the first 3 h of spaceflight. By the end of exposure, total leg volume for the six subjects decreased by 2.6 +/- 0.8 percent, 1.7 +/- 1.2 percent, and 4.0 +/- 1.6 percent for horizontal, head-down, and immersion, respectively. Changes had plateaued for horizontal and head-down and had slowed for immersion. Relatively more fluid was lost from the lower leg than the thigh for all three conditions, particularly head-down. During the first 3 h of spaceflight, total leg volume decreased by 8.6 percent, and relatively more fluid was lost from the thigh than the lower leg. The difference in volume changes in microgravity and simulated microgravity may be caused by the small transverse pressures still present in ground-based simulations and the extremely nonlinear compliance of tissue.
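One common way to turn serial girth measurements into a leg volume, which the record does not spell out, is to model each segment between adjacent girths as a conical frustum. The frustum model, the segment spacing, and the girth values below are all assumptions for illustration.

```python
import math

def leg_volume(girths_cm, segment_length_cm):
    """Leg volume (cm^3) from serial girth (circumference) measurements,
    treating each pair of adjacent girths as a conical frustum:
    V = h/3 * (A1 + A2 + sqrt(A1*A2)), with cross-section A = C^2/(4*pi)."""
    areas = [c**2 / (4.0 * math.pi) for c in girths_cm]
    h = segment_length_cm
    return sum(h / 3.0 * (a1 + a2 + math.sqrt(a1 * a2))
               for a1, a2 in zip(areas, areas[1:]))

# Eight illustrative girths (cm) from groin to ankle, 10 cm apart:
girths = [58, 55, 50, 44, 37, 35, 30, 24]
print(round(leg_volume(girths, 10.0), 1))
```

Repeating the calculation before and during exposure and differencing the totals gives the percent volume changes reported in the record.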

  15. Analysis of volume holographic storage allowing large-angle illumination

    NASA Astrophysics Data System (ADS)

    Shamir, Joseph

    2005-05-01

    Advanced technological developments have stimulated renewed interest in volume holography for applications such as information storage and wavelength multiplexing for communications and laser beam shaping. In these and many other applications, the information-carrying wave fronts usually possess narrow spatial-frequency bands, although they may propagate at large angles with respect to each other or a preferred optical axis. Conventional analytic methods are not capable of properly analyzing the optical architectures involved. For mitigation of the analytic difficulties, a novel approximation is introduced to treat narrow spatial-frequency band wave fronts propagating at large angles. This approximation is incorporated into the analysis of volume holography based on a plane-wave decomposition and Fourier analysis. As a result of the analysis, the recently introduced generalized Bragg selectivity is rederived for this more general case and is shown to provide enhanced performance for the above indicated applications. The power of the new theoretical description is demonstrated with the help of specific examples and computer simulations. The simulations reveal some interesting effects, such as coherent motion blur, that were predicted in an earlier publication.

  16. Large-volume sampling and preconcentration for trace explosives detection.

    SciTech Connect

    Linker, Kevin Lane

    2004-05-01

    A trace explosives detection system typically contains three subsystems: sample collection, preconcentration, and detection. Sample collection of trace explosives (vapor and particulate) through large volumes of airflow helps reduce sampling time while increasing the amount of dilute sample collected. Preconcentration of the collected sample before introduction into the detector improves the sensitivity of the detector because of the increase in sample concentration. By combining large-volume sample collection and preconcentration, an improvement in the detection of explosives is possible. Large-volume sampling and preconcentration is presented using a systems level approach. In addition, the engineering of large-volume sampling and preconcentration for the trace detection of explosives is explained.
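The benefit of combining large-volume sampling with preconcentration can be sketched as an idealized concentration gain: analyte captured from a large sampled air volume is released into a much smaller desorption volume ahead of the detector. The formula and all numbers are illustrative assumptions, not figures from the record.

```python
def preconcentration_gain(sampled_air_liters, collection_efficiency,
                          desorb_volume_liters):
    """Idealized concentration gain of a collection/preconcentration stage:
    (sampled volume * capture efficiency) / volume delivered to detector."""
    return sampled_air_liters * collection_efficiency / desorb_volume_liters

# e.g. 1000 L of air sampled at 60% capture efficiency, thermally
# desorbed into 0.1 L of carrier gas:
print(preconcentration_gain(1000.0, 0.6, 0.1))  # ~6000-fold gain
```

In practice losses during desorption and transfer reduce the gain, which is why the record treats collection and preconcentration as an engineered system rather than independent stages.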

  17. Large space systems technology, 1980, volume 1

    NASA Technical Reports Server (NTRS)

    Kopriver, F., III (Compiler)

    1981-01-01

    The technological and developmental efforts in support of large space systems technology are described. Three major areas of interest are emphasized: (1) technology pertinent to large antenna systems; (2) technology related to large platform systems; and (3) activities that support both antenna and platform systems.

  18. Simulation of hydrodynamics using large eddy simulation-second-order moment model in circulating fluidized beds

    NASA Astrophysics Data System (ADS)

    Juhui, Chen; Yanjia, Tang; Dan, Li; Pengfei, Xu; Huilin, Lu

    2013-07-01

    Flow behavior of gas and particles in circulating fluidized beds (CFBs) is predicted by a large eddy simulation-second-order moment (LES-SOM) model. This study shows that the solid volume fractions simulated along the bed height using a two-dimensional model are in agreement with experiments. The velocity, volume fraction and second-order moments of particles are computed, and the second-order moments of clusters are calculated. The solid volume fraction, velocity and second-order moments are compared for three different model constants.

  19. Technologies for imaging neural activity in large volumes.

    PubMed

    Ji, Na; Freeman, Jeremy; Smith, Spencer L

    2016-08-26

    Neural circuitry has evolved to form distributed networks that act dynamically across large volumes. Conventional microscopy collects data from individual planes and cannot sample circuitry across large volumes at the temporal resolution relevant to neural circuit function and behaviors. Here we review emerging technologies for rapid volume imaging of neural circuitry. We focus on two critical challenges: the inertia of optical systems, which limits image speed, and aberrations, which restrict the image volume. Optical sampling time must be long enough to ensure high-fidelity measurements, but optimized sampling strategies and point-spread function engineering can facilitate rapid volume imaging of neural activity within this constraint. We also discuss new computational strategies for processing and analyzing volume imaging data of increasing size and complexity. Together, optical and computational advances are providing a broader view of neural circuit dynamics and helping elucidate how brain regions work in concert to support behavior. PMID:27571194

  20. Large volume continuous counterflow dialyzer has high efficiency

    NASA Technical Reports Server (NTRS)

    Mandeles, S.; Woods, E. C.

    1967-01-01

    Dialyzer separates macromolecules from small molecules in large volumes of solution. It takes advantage of the high area/volume ratio in commercially available 1/4-inch dialysis tubing and maintains a high concentration gradient at the dialyzing surface by counterflow.
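The area/volume advantage of narrow tubing mentioned in the record follows directly from cylinder geometry: lateral area per unit volume is 4/d, so halving the diameter doubles the dialyzing surface per unit of solution. A small sketch (the comparison diameter is an illustrative assumption):

```python
import math

def area_to_volume_ratio(diameter_cm):
    """Lateral surface area per unit volume of cylindrical dialysis tubing.
    For a cylinder, (pi*d*L) / (pi*(d/2)^2*L) = 4/d, so thinner tubing
    offers more dialyzing surface per unit of solution it holds."""
    return 4.0 / diameter_cm

# 1/4-inch (0.635 cm) tubing versus hypothetical 1-inch (2.54 cm) tubing:
print(area_to_volume_ratio(0.635) / area_to_volume_ratio(2.54))  # ~4x
```

Counterflow then keeps the concentration gradient across that surface near its maximum along the whole tubing length, which is the second efficiency factor the record cites.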

  1. Large Volume Injection Techniques in Capillary Gas Chromatography

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Large volume injection (LVI) is a prerequisite of modern gas chromatographic (GC) analysis, especially when trace sample components have to be determined at very low concentration levels. Injection of larger than usual sample volumes increases sensitivity and/or reduces (or even eliminates) the need...

  2. Large Eddy Simulation of a Turbulent Jet

    NASA Technical Reports Server (NTRS)

    Webb, A. T.; Mansour, Nagi N.

    2001-01-01

    Here we present the results of a Large Eddy Simulation of a non-buoyant jet issuing from a circular orifice in a wall, and developing in neutral surroundings. The effects of the subgrid scales on the large eddies have been modeled with the dynamic large eddy simulation model applied to the fully 3D domain in spherical coordinates. The simulation captures the unsteady motions of the large-scales within the jet as well as the laminar motions in the entrainment region surrounding the jet. The computed time-averaged statistics (mean velocity, concentration, and turbulence parameters) compare well with laboratory data without invoking an empirical entrainment coefficient as employed by line integral models. The use of the large eddy simulation technique allows examination of unsteady and inhomogeneous features such as the evolution of eddies and the details of the entrainment process.

  3. Large-Volume Gravid Traps Enhance Collection of Culex Vectors.

    PubMed

    Popko, David A; Walton, William E

    2016-06-01

    Gravid mosquito collections were compared among several large-volume (infusion volume ≥ 35 liters) gravid trap designs and the small-volume (infusion volume = 6 liters) Centers for Disease Control and Prevention (CDC) gravid trap used routinely by vector control districts for vector and pathogen surveillance. The numbers of gravid Culex quinquefasciatus, Cx. tarsalis, and Cx. stigmatosoma collected by large gravid traps were greater than those collected by the CDC gravid trap during nearly all overnight trials. Large-volume gravid traps collected on average 6.6-fold more adult female Culex mosquitoes than small-volume CDC gravid traps across 3 seasons during the 3 years of the studies. The differences in gravid mosquito collections between large- and small-volume gravid traps were greatest during spring, when 8- to 56-fold more Culex individuals were collected using large-volume gravid traps. The proportion of gravid females in collections did not differ appreciably among the more effective trap designs tested. Important determinants of gravid trap performance were infusion container size and type as well as infusion volume, which determined the distance between the suction trap and the infusion surface. Of lesser importance for gravid trap performance were the number of suction traps, method of suction trap mounting, and infusion concentration. Fermentation of infusions between 1 and 4 wk weakly affected total mosquito collections, with Cx. stigmatosoma collections moderately enhanced by comparatively young and organically enriched infusions. A suction trap mounted above 100 liters of organic infusion housed in a 121-liter black plastic container collected the most gravid mosquitoes over the greatest range of experimental conditions, and a 35-liter infusion with side-mounted suction traps was a promising lesser-volume alternative design. PMID:27280347

  4. Large-Eddy Simulation and Multigrid Methods

    SciTech Connect

    Falgout,R D; Naegle,S; Wittum,G

    2001-06-18

    A method to simulate turbulent flows with Large-Eddy Simulation on unstructured grids is presented. Two kinds of dynamic models are used to model the unresolved scales of motion and are compared with each other on different grids. Thereby the behavior of the models is shown and additionally the feature of adaptive grid refinement is investigated. Furthermore the parallelization aspect is addressed.

  5. Large-signal klystron simulations using KLSC

    SciTech Connect

    Carlsten, B.E.; Ferguson, P.

    1997-10-01

    The authors describe large-signal klystron simulations using the particle-in-cell code KLSC. This code uses the induced-current model to describe the steady-state cavity modulations and resulting rf fields, and advances the space-charge fields through Maxwell's equations. In this paper, an eight-cavity, high-power S-band klystron simulation is used to highlight various aspects of this simulation technique. In particular, there are specific issues associated with modeling the input cavity, the gain circuit, and the large-signal circuit (including the output cavities) that have to be treated carefully.

  6. Cosmological moduli problem in large volume scenario and thermal inflation

    SciTech Connect

    Choi, Kiwoon; Park, Wan-Il; Shin, Chang Sub E-mail: wipark@kias.re.kr

    2013-03-01

    We show that in a large volume scenario of type IIB string or F-theory compactifications, single thermal inflation provides only a partial solution to the cosmological problem of the light volume modulus. We then clarify the conditions for double thermal inflation, a simple extension of the usual single thermal inflation scenario, to solve the cosmological moduli problem in the case of relatively light moduli masses. Using a specific example, we demonstrate that double thermal inflation can be realized in the large volume scenario in a natural manner, and that the problem of the light volume modulus can be solved for the whole relevant mass range. We also find that the right amount of baryon asymmetry and dark matter can be obtained via a late-time Affleck-Dine mechanism and the decays of the visible-sector NLSP to the flatino LSP.

  7. New Large Volume Press Beamlines at the Canadian Light Source

    NASA Astrophysics Data System (ADS)

    Mueller, H. J.; Hormes, J.; Lauterjung, J.; Secco, R.; Hallin, E.

    2013-12-01

    The Canadian Light Source, the German Research Centre for Geosciences and Western University recently agreed to establish two new large volume press (LVP) beamlines at the Canadian Light Source. As the first step, a 250-ton DIA-type LVP will be installed at the IDEAS beamline in 2014. Further development is associated with the construction of a superconducting wiggler beamline at the Brockhouse sector, where a 1750-ton DIA LVP will be installed about 2 years later. Until the completion of this wiggler beamline, the big press will be used for offline high-pressure, high-temperature experiments under simulated Earth's mantle conditions. In addition to X-ray diffraction, all up-to-date high-pressure techniques such as ultrasonic interferometry, deformation analysis by X-radiography, X-ray densitometry, falling-sphere viscosimetry, multi-staging, etc. will be available at both beamlines. After the required commissioning, the beamlines will be open to the worldwide user community from the geosciences, general materials science, physics, chemistry, biology, etc., based on the evaluation and ranking of submitted user proposals by an international review panel.

  8. Large scale simulations of bidisperse emulsions and foams

    NASA Astrophysics Data System (ADS)

    Metsi, Efimia

    Emulsions and foams are of fundamental importance in a wide variety of industrial and natural processes. The macroscopic properties of these multiphase systems are determined by the viscous and interfacial interactions on the microscopic level. In previous research efforts, the realism of computer simulations has been limited by the cost of the computational algorithms, which scale as O(N^2), where N is the number of droplets. In our research, we have developed a novel, fast and efficient algorithm which scales as O(N ln N). The algorithm has been implemented to simulate the low Reynolds number flow of large-scale systems of monodisperse and bidisperse droplet suspensions. A comprehensive study has been performed to examine the effective viscosity of these systems as a function of the overall volume fraction, volume fraction of small droplets, Capillary number and droplet size ratio. Monodisperse systems exhibit disorder-order transitions at high volume fractions and low Capillary numbers. Bidisperse systems show a tendency toward cluster formation with small droplets interspersed among large droplets. To determine if the cluster formation leads to phase separation, simulations have been performed with the two droplet species arranged in ordered layers. It is found that the initial layers are destroyed, and the two phases mix, yielding clusters of small and large droplets. The mixing of the two phases and the cluster formation are investigated through linear and radial pairwise distribution functions of the two droplet species.
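The practical payoff of reducing the cost from O(N^2) to O(N ln N) can be sketched by comparing operation counts, up to constant factors (the constants themselves, which the asymptotic notation hides, are an unstated assumption here):

```python
import math

def speedup(n):
    """Ratio of pairwise-interaction cost O(n^2) to the fast algorithm's
    O(n*ln(n)) cost, ignoring constant factors: n / ln(n)."""
    return n / math.log(n)

for n in (1_000, 100_000):
    print(n, round(speedup(n)))
```

So at a hundred thousand droplets the fast algorithm is, asymptotically, thousands of times cheaper per step, which is what makes the large-scale suspensions of the record tractable.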

  9. Indian LSSC (Large Space Simulation Chamber) facility

    NASA Technical Reports Server (NTRS)

    Brar, A. S.; Prasadarao, V. S.; Gambhir, R. D.; Chandramouli, M.

    1988-01-01

    The Indian Space Agency has undertaken a major project to acquire in-house capability for thermal and vacuum testing of large satellites. This Large Space Simulation Chamber (LSSC) facility will be located in Bangalore and is to be operational in 1989. The facility is capable of providing 4 meter diameter solar simulation with provision to expand to 4.5 meter diameter at a later date. With such provisions as controlled variations of shroud temperatures and availability of infrared equipment as alternative sources of thermal radiation, this facility will be amongst the finest anywhere. The major design concept and major aspects of the LSSC facility are presented here.

  10. Molecular dynamics simulations of large macromolecular complexes

    PubMed Central

    Perilla, Juan R.; Goh, Boon Chong; Cassidy, C. Keith; Liu, Bo; Bernardi, Rafael C.; Rudack, Till; Yu, Hang; Wu, Zhe; Schulten, Klaus

    2015-01-01

    Connecting dynamics to structural data from diverse experimental sources, molecular dynamics simulations permit the exploration of biological phenomena in unparalleled detail. Advances in simulations are moving the atomic resolution descriptions of biological systems into the million-to-billion atom regime, in which numerous cell functions reside. In this opinion, we review the progress, driven by large-scale molecular dynamics simulations, in the study of viruses, ribosomes, bioenergetic systems, and other diverse applications. These examples highlight the utility of molecular dynamics simulations in the critical task of relating atomic detail to the function of supramolecular complexes, a task that cannot be achieved by smaller-scale simulations or existing experimental approaches alone. PMID:25845770

  11. REXOR 2 rotorcraft simulation model. Volume 1: Engineering documentation

    NASA Technical Reports Server (NTRS)

    Reaser, J. S.; Kretsinger, P. H.

    1978-01-01

    A rotorcraft nonlinear simulation called REXOR II, divided into three volumes, is described. The first volume is a development of rotorcraft mechanics and aerodynamics. The second is a development and explanation of the computer code required to implement the equations of motion. The third volume is a user's manual, and contains a description of code input/output as well as operating instructions.

  12. Large-Volume High-Pressure Mineral Physics in Japan

    NASA Astrophysics Data System (ADS)

    Liebermann, Robert C.; Prewitt, Charles T.; Weidner, Donald J.

    American high-pressure research with large sample volumes developed rapidly in the 1950s during the race to produce synthetic diamonds. At that time the piston cylinder, girdle (or belt), and tetrahedral anvil devices were invented. However, this development essentially stopped in the late 1950s, and while the diamond anvil cell has been used extensively in the United States with spectacular success for high-pressure experiments in small sample volumes, most of the significant technological advances in large-volume devices have taken place in Japan. Over the past 25 years, these technical advances have enabled a fourfold increase in pressure, with many important investigations of the chemical and physical properties of materials synthesized at high temperatures and pressures that cannot be duplicated with any apparatus currently available in the United States.

  13. A Warm Magnetoactive Plasma in a Large Volume of Space

    NASA Technical Reports Server (NTRS)

    Heiles, C.

    1984-01-01

    A diffuse ionized warm gas fills a large volume of space in the general direction of Radio Loop II. There are three types of observational evidence: Faraday rotation measures (RM's) of extragalactic sources; emission measures (EM's) derived from the H alpha emission line in the diffuse interstellar medium; and magnetic field strengths in HI clouds derived from Zeeman splitting observations.

  14. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  15. Large discharge-volume, silent discharge spark plug

    DOEpatents

    Kang, Michael

    1995-01-01

A large discharge-volume spark plug for providing self-limiting microdischarges. The apparatus includes a generally spark plug-shaped arrangement of a pair of electrodes, where either of the two coaxial electrodes is substantially shielded by a dielectric barrier from a direct discharge from the other electrode, the unshielded electrode and the dielectric barrier forming an annular volume in which self-terminating microdischarges occur when alternating high voltage is applied to the center electrode. The large area over which the discharges occur, and the large number of possible discharges within the period of an engine cycle, make the present silent discharge plasma spark plug suitable for use as an ignition source for engines. In situations where a single discharge is effective in causing ignition of the combustible gases, a conventional single-polarity, single-pulse, spark plug voltage supply may be used.

  16. Large eddy simulation in the ocean

    NASA Astrophysics Data System (ADS)

    Scotti, Alberto

    2010-12-01

    Large eddy simulation (LES) is a relative newcomer to oceanography. In this review, both applications of traditional LES to oceanic flows and new oceanic LES still in an early stage of development are discussed. The survey covers LES applied to boundary layer flows, traditionally an area where LES has provided considerable insight into the physics of the flow, as well as more innovative applications, where new SGS closure schemes need to be developed. The merging of LES with large-scale models is also briefly reviewed.

  17. Spatial considerations during cryopreservation of a large volume sample.

    PubMed

    Kilbride, Peter; Lamb, Stephen; Milne, Stuart; Gibbons, Stephanie; Erro, Eloy; Bundy, James; Selden, Clare; Fuller, Barry; Morris, John

    2016-08-01

There have been relatively few studies on the implications of the physical conditions experienced by cells during large volume (litres) cryopreservation - most studies have focused on the problem of cryopreservation of smaller volumes, typically up to 2 ml. This study explores the effects of ice growth by progressive solidification, generally seen during larger scale cryopreservation, on encapsulated liver hepatocyte spheroids, and it develops a method to reliably sample different regions across the frozen cores of samples experiencing progressive solidification. These issues are examined in the context of a Bioartificial Liver Device which requires cryopreservation of a 2 L volume in a strict cylindrical geometry for optimal clinical delivery. Progressive solidification cannot be avoided in this arrangement. In such a system optimal cryoprotectant concentrations and cooling rates are known. However, applying these parameters to a large volume is challenging due to the thermal mass and subsequent thermal lag, so their specific impact on the cryopreservation outcome must be established. Under conditions of progressive solidification, the spatial location of Encapsulated Liver Spheroids had a strong impact on post-thaw recovery. Cells in areas first and last to solidify demonstrated significantly impaired post-thaw function, whereas areas solidifying through the majority of the process exhibited higher post-thaw outcome. It was also found that samples in which the ice thawed more rapidly had greater viability 24 h post-thaw (75.7 ± 3.9% versus 62.0 ± 7.2%). These findings have implications for the cryopreservation of large volumes with a rigid shape and for the cryopreservation of a Bioartificial Liver Device. PMID:27256662

  18. Ray Casting of Large Multi-Resolution Volume Datasets

    NASA Astrophysics Data System (ADS)

    Lux, C.; Fröhlich, B.

    2009-04-01

High quality volume visualization through ray casting on graphics processing units (GPU) has become an important approach for many application domains. We present a GPU-based, multi-resolution ray casting technique for the interactive visualization of massive volume data sets commonly found in the oil and gas industry. Large volume data sets are represented as a multi-resolution hierarchy based on an octree data structure. The original volume data is decomposed into small bricks of a fixed size acting as the leaf nodes of the octree. These nodes are the highest resolution of the volume. Coarser resolutions are represented through inner nodes of the hierarchy which are generated by downsampling eight neighboring nodes on a finer level. Due to limited memory resources of current desktop workstations and graphics hardware only a limited working set of bricks can be locally maintained for a frame to be displayed. This working set is chosen to represent the whole volume at different local resolution levels depending on the current viewer position, transfer function and distinct areas of interest. During runtime the working set of bricks is maintained in CPU- and GPU memory and is adaptively updated by asynchronously fetching data from external sources like hard drives or a network. The CPU memory hereby acts as a secondary level cache for these sources from which the GPU representation is updated. Our volume ray casting algorithm is based on a 3D texture-atlas in GPU memory. This texture-atlas contains the complete working set of bricks of the current multi-resolution representation of the volume. This enables the volume ray casting algorithm to access the whole working set of bricks through only a single 3D texture. For traversing rays through the volume, information about the locations and resolution levels of visited bricks are required for correct compositing computations. We encode this information into a small 3D index texture which represents the current octree
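The working-set selection described in the abstract (a brick resolution chosen per region from the viewer position) can be sketched as a simple distance-driven octree descent. This is a minimal illustration with hypothetical names (`Node`, `select_working_set`) and a made-up screen-footprint criterion, not the authors' implementation:

```python
import math

class Node:
    """One octree brick covering a cubic region of the volume."""
    def __init__(self, center, size, level, children=None):
        self.center = center          # region center in volume coordinates
        self.size = size              # edge length of the covered region
        self.level = level            # 0 = root (coarsest resolution)
        self.children = children or []

def select_working_set(node, viewer, pixel_tolerance=0.5):
    """Keep a brick when its approximate projected size is small enough;
    otherwise descend into its children (finer resolution near the viewer)."""
    dist = max(1e-6, math.dist(node.center, viewer))
    footprint = node.size / dist      # crude screen-space footprint estimate
    if footprint <= pixel_tolerance or not node.children:
        return [node]
    bricks = []
    for child in node.children:
        bricks.extend(select_working_set(child, viewer, pixel_tolerance))
    return bricks
```

A distant viewer receives the single coarse root brick; a nearby viewer receives the finer child bricks, mirroring the view-dependent multi-resolution working set.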

  19. AdS/CFT and Large-N Volume Independence

    SciTech Connect

    Poppitz, Erich; Unsal, Mithat; /SLAC /Stanford U., Phys. Dept.

    2010-08-26

We study the Eguchi-Kawai reduction in the strong-coupling domain of gauge theories via the gravity dual of N=4 super-Yang-Mills on R³ × S¹. We show that D-branes geometrize volume independence in the center-symmetric vacuum and give supergravity predictions for the range of validity of reduced large-N models at strong coupling.

  20. Large volume multiple-path nuclear pumped laser

    SciTech Connect

    Hohl, F.; Deyoung, R.J.

    1981-11-01

Large volumes of gas are excited by using internal high-reflectance mirrors that are arranged so that the optical path crosses back and forth through the excited gaseous medium. By adjusting the external dielectric mirrors of the laser, the number of paths through the laser cavity can be varied. Output powers were obtained that are substantially higher than those of previous nuclear laser systems.

  1. Staged-volume radiosurgery for large arteriovenous malformations: a review.

    PubMed

    AlKhalili, Kenan; Chalouhi, Nohra; Tjoumakaris, Stavropoula; Rosenwasser, Robert; Jabbour, Pascal

    2014-09-01

Stereotactic radiosurgery is an effective management strategy for properly selected patients with arteriovenous malformations (AVMs). However, the risk of postradiosurgical radiation-related injury is higher in patients with large AVMs. Multistaged volumetric management of large AVMs was undertaken to limit the radiation exposure to the surrounding normal brain. This strategy offers a promising method for obtaining high AVM obliteration rates with minimal normal tissue damage. The use of embolization as an adjunctive method in the treatment of large AVMs remains controversial. Unfortunately, staged-volume radiosurgery (SVR) has a number of potential pitfalls that affect the outcome. The aim of this article is to highlight the role of SVR in the treatment of large AVMs, to compare its outcomes with those of other treatment modalities, and to discuss potential improvements to this method of treatment. PMID:25175440

  2. Population generation for large-scale simulation

    NASA Astrophysics Data System (ADS)

    Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul

    2005-05-01

Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat [1-3]. Multi-agent simulations in particular are now commonplace in many fields [4, 5]. By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed form solutions are difficult to solve or impossible to derive [6]. To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application [7, 8]. Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large scale social simulations [9-11]. One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more

  3. Statistical Ensemble of Large Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large scale velocity fields is used to propose an ensemble averaged version of the dynamic model. This produces local model parameters that only depend on the statistical properties of the flow. An important property of the ensemble averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LES's provides statistics of the large scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time developing plane wake. It is found that the results are almost independent of the number of LES's in the statistical ensemble provided that the ensemble contains at least 16 realizations.
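The core of the ensemble-averaged dynamic procedure can be sketched in a few lines: the numerator and denominator of the dynamic model coefficient are averaged over ensemble members at each grid point, so no spatial averaging (and hence no homogeneous direction) is required. A minimal NumPy sketch under assumed names — `LM` and `MM` stand for the contracted Germano-identity terms L_ij M_ij and M_ij M_ij, and `eps` is a hypothetical regularizer, none of which come from the abstract:

```python
import numpy as np

def ensemble_dynamic_coefficient(LM, MM, eps=1e-12):
    """LM, MM: arrays of shape (n_members, *grid) holding the contracted
    Germano-identity terms for every ensemble realization.
    Returns a purely local coefficient field of shape (*grid,):
    C(x) = <L_ij M_ij>_ensemble / <M_ij M_ij>_ensemble."""
    num = LM.mean(axis=0)   # ensemble average of L_ij M_ij at each point
    den = MM.mean(axis=0)   # ensemble average of M_ij M_ij at each point
    return num / (den + eps)
```

Because the average runs over realizations rather than space, the resulting coefficient depends only on the local statistics of the flow, which is what permits use in fully inhomogeneous flows.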

  4. Parallel Rendering of Large Time-Varying Volume Data

    NASA Technical Reports Server (NTRS)

    Garbutt, Alexander E.

    2005-01-01

Interactive visualization of large time-varying 3D volume datasets has been and still is a great challenge to the modern computational world. It stretches the limits of the memory capacity, the disk space, the network bandwidth and the CPU speed of a conventional computer. In this SURF project, we propose to develop a parallel volume rendering program on SGI's Prism, a cluster computer equipped with state-of-the-art graphic hardware. The proposed program combines both parallel computing and hardware rendering in order to achieve an interactive rendering rate. We use 3D texture mapping and a hardware shader to implement 3D volume rendering on each workstation. We use SGI's VisServer to enable remote rendering using Prism's graphic hardware. And last, we will integrate this new program with ParVox, a parallel distributed visualization system developed at JPL. At the end of the project, we will demonstrate remote interactive visualization using this new hardware volume renderer on JPL's Prism System using a time-varying dataset from selected JPL applications.

  5. Large volume high-pressure cell for inelastic neutron scattering.

    PubMed

    Wang, W; Sokolov, D A; Huxley, A D; Kamenev, K V

    2011-07-01

Inelastic neutron scattering measurements typically require two orders of magnitude longer data collection times and larger sample sizes than neutron diffraction studies. Inelastic neutron scattering measurements on pressurised samples are particularly challenging since standard high-pressure apparatus restricts sample volume, attenuates the incident and scattered beams, and contributes background scattering. Here, we present the design of a large volume two-layered piston-cylinder pressure cell with optimised transmission for inelastic neutron scattering experiments. The design and the materials selected for the construction of the cell enable its safe use to a pressure of 1.8 GPa with a sample volume in excess of 400 mm³. The design of the piston seal eliminates the need for a sample container, thus providing a larger sample volume and reduced absorption. The integrated electrical plug with a manganin pressure gauge offers an accurate measurement of pressure over the whole range of operational temperatures. The performance of the cell is demonstrated by an inelastic neutron scattering study of UGe₂. PMID:21806195

  6. Simulating Pressure Effects of High-Flow Volumes

    NASA Technical Reports Server (NTRS)

    Kaufman, M.

    1985-01-01

Dynamic test stresses realized without high-volume pumps. Assembled in sections in the gas-flow passage, a contoured mandrel restricts the flow rate to a value convenient for testing and spatially varies the pressure on the passage walls to simulate the operating-pressure profile. Realistic test pressures are thereby achieved without extremely high flow volumes.

  7. Large Scale Quantum Simulations of Nuclear Pasta

    NASA Astrophysics Data System (ADS)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

Complex and exotic nuclear geometries collectively referred to as ``nuclear pasta'' are expected to naturally exist in the crust of neutron stars and in supernovae matter. Using a set of self-consistent microscopic nuclear energy density functionals we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 < ρ < 0.10 fm⁻³ and proton fractions from 0.05. These simulations, in particular, allow us to also study the role and impact of the nuclear symmetry energy on these pasta configurations. This work is supported in part by DOE Grants DE-FG02-87ER40365 (Indiana University) and DE-SC0008808 (NUCLEI SciDAC Collaboration).

  8. The Large Area Pulsed Solar Simulator (LAPSS)

    NASA Technical Reports Server (NTRS)

    Mueller, R. L.

    1993-01-01

    A Large Area Pulsed Solar Simulator (LAPSS) has been installed at JPL. It is primarily intended to be used to illuminate and measure the electrical performance of photovoltaic devices. The simulator, originally manufactured by Spectrolab, Sylmar, California, occupies an area measuring about 3 meters wide by 12 meters long. The data acquisition and data processing subsystems have been modernized. Tests on the LAPSS performance resulted in better than +/- 2 percent uniformity of irradiance at the test plane and better than +/- 0.3 percent measurement repeatability after warm-up. Glass absorption filters are used to reduce the level of ultraviolet light emitted from the xenon flash lamps. This provides a close match to standard airmass zero and airmass 1.5 spectral irradiance distributions. The 2 millisecond light pulse prevents heating of the device under test, resulting in more reliable temperature measurements. Overall, excellent electrical performance measurements have been made of many different types and sizes of photovoltaic devices.

  9. Numerical simulation of large fabric filter

    NASA Astrophysics Data System (ADS)

    Sedláček, Jan; Kovařík, Petr

    2012-04-01

    Fabric filters are used in the wide range of industrial technologies for cleaning of incoming or exhaust gases. To achieve maximal efficiency of the discrete phase separation and long lifetime of the filter hoses, it is necessary to ensure uniform load on filter surface and to avoid impacts of heavy particles with high velocities to the filter hoses. The paper deals with numerical simulation of two phase flow field in a large fabric filter. The filter is composed of six chambers with approx. 1600 filter hoses in total. The model was simplified to one half of the filter, the filter hoses walls were substituted by porous zones. The model settings were based on experimental data, especially on the filter pressure drop. Unsteady simulations with different turbulence models were done. Flow field together with particles trajectories were analyzed. The results were compared with experimental observations.

  10. Large eddy simulations in 2030 and beyond.

    PubMed

    Piomelli, U

    2014-08-13

    Since its introduction, in the early 1970s, large eddy simulations (LES) have advanced considerably, and their application is transitioning from the academic environment to industry. Several landmark developments can be identified over the past 40 years, such as the wall-resolved simulations of wall-bounded flows, the development of advanced models for the unresolved scales that adapt to the local flow conditions and the hybridization of LES with the solution of the Reynolds-averaged Navier-Stokes equations. Thanks to these advancements, LES is now in widespread use in the academic community and is an option available in most commercial flow-solvers. This paper will try to predict what algorithmic and modelling advancements are needed to make it even more robust and inexpensive, and which areas show the most promise. PMID:25024415

  12. Colloquium: Large scale simulations on GPU clusters

    NASA Astrophysics Data System (ADS)

    Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano

    2015-06-01

Graphics processing units (GPU) are currently used as a cost-effective platform for computer simulations and big-data processing. Large scale applications require that multiple GPUs work together, but the efficiency obtained with clusters of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve an excellent efficiency for applications in statistical mechanics, particle dynamics and networks analysis by using suitable memory access patterns and mechanisms like CUDA streams, profiling tools, etc. Similar concepts and techniques may be applied also to other problems like the solution of Partial Differential Equations.
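The slab decomposition behind such multi-GPU runs can be illustrated on the CPU: each "device" owns a slab of the lattice plus one-row halos, halos are exchanged with neighbours, and a local stencil update then runs on every slab. This is a NumPy sketch with assumed names (`split_with_halos`, `jacobi_step`), not code from the paper; a real implementation would overlap the halo exchange with the interior update using CUDA streams:

```python
import numpy as np

def split_with_halos(field, n_dev):
    """Split a 2D lattice into horizontal slabs, one per 'device',
    each padded with a one-row ghost (halo) above and below."""
    slabs = np.array_split(field, n_dev, axis=0)
    return [np.pad(s, ((1, 1), (0, 0)), mode="edge") for s in slabs]

def exchange_halos(slabs):
    """Copy each slab's boundary interior rows into the neighbour's ghosts."""
    for i in range(len(slabs) - 1):
        slabs[i][-1, :] = slabs[i + 1][1, :]    # my bottom ghost = neighbour's top row
        slabs[i + 1][0, :] = slabs[i][-2, :]    # neighbour's top ghost = my bottom row

def jacobi_step(slab):
    """4-point Jacobi average on the slab interior (periodic horizontally)."""
    interior = slab[1:-1, :]
    return 0.25 * (slab[:-2, :] + slab[2:, :]
                   + np.roll(interior, 1, axis=1) + np.roll(interior, -1, axis=1))
```

Stitching the per-slab results back together reproduces the single-domain update exactly, which is the correctness test any such decomposition must pass.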

  13. Strategies for Interactive Visualization of Large Scale Climate Simulations

    NASA Astrophysics Data System (ADS)

    Xie, J.; Chen, C.; Ma, K.; Parvis

    2011-12-01

single or a pair of variables. It is desired to create a succinct volume classification that summarizes the connection among all correlation volumes with respect to various reference locations. Since a reference location must correspond to a voxel position, the number of correlation volumes equals the total number of voxels. A brute-force solution takes all correlation volumes as the input and classifies their corresponding voxels according to their correlation volumes' distance. For large-scale time-varying multivariate data, calculating all these correlation volumes on-the-fly and analyzing the relationships among them is not feasible. We have developed a sampling-based approach for volume classification in order to reduce the computation cost of computing the correlation volumes. Users are able to employ their domain knowledge in selecting important samples. The result is a static view that captures the essence of correlation relationships; i.e., for all voxels in the same cluster, their corresponding correlation volumes are similar. This sampling-based approach enables us to obtain an approximation of correlation relations in a cost-effective manner, thus leading to a scalable solution to investigate large-scale data sets. These techniques empower climate scientists to study large data from their simulations.
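The sampling idea can be sketched as: correlate every voxel's time series against only a few sampled reference voxels, then group voxels whose short correlation signatures agree. Function names and the quantization-based grouping rule below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def correlation_signatures(series, sample_idx):
    """series: (T, n_voxels) time series per voxel.
    Returns (n_voxels, n_samples) Pearson correlations with the
    sampled reference voxels, instead of all n_voxels references."""
    z = (series - series.mean(axis=0)) / (series.std(axis=0) + 1e-12)
    T = series.shape[0]
    return (z.T @ z[:, sample_idx]) / T

def classify(signatures, n_bins=4):
    """Quantize each signature into bins and group identical codes:
    voxels in one cluster correlate similarly with every sample."""
    codes = np.clip(((signatures + 1) / 2 * n_bins).astype(int), 0, n_bins - 1)
    _, labels = np.unique(codes, axis=0, return_inverse=True)
    return labels.ravel()
```

With k samples this costs O(k · n_voxels) correlations rather than O(n_voxels²), which is the cost reduction the abstract relies on.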

  14. Geometric Measures of Large Biomolecules: Surface, Volume and Pockets

    PubMed Central

    Mach, Paul; Koehl, Patrice

    2011-01-01

Geometry plays a major role in our attempt to understand the activity of large molecules. For example, surface area and volume are used to quantify the interactions between these molecules and the water surrounding them in implicit solvent models. In addition, the detection of pockets serves as a starting point for predictive studies of biomolecule-ligand interactions. The alpha shape theory provides an exact and robust method for computing these geometric measures. Several implementations of this theory are currently available. We show however that these implementations fail on very large macromolecular systems. We show that these difficulties are not theoretical; rather, they are related to the architecture of current computers that rely on the use of cache memory to speed up calculation. By rewriting the algorithms that implement the different steps of the alpha shape theory such that we enforce locality, we show that we can remediate these cache problems; the corresponding code, UnionBall, has an apparent O(n) behavior over a large range of values of n (up to tens of millions), where n is the number of atoms. As an example, it takes 136 seconds with UnionBall to compute the contribution of each atom to the surface area and volume of a viral capsid with more than five million atoms on a commodity PC. UnionBall includes functions for computing the surface area and volume of the intersection of two, three and four spheres that are fully detailed in an appendix. UnionBall is available as an OpenSource software. PMID:21823134
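The closed-form sphere-intersection primitives such codes build on are standard geometry. As an illustration (not the UnionBall source), the surface area and volume of the union of two overlapping spheres follow from subtracting the buried spherical caps:

```python
import math

def two_sphere_union(r1, r2, d):
    """Surface area and volume of the union of two spheres of radii
    r1, r2 whose centres are distance d apart (assumes d > 0 and
    neither sphere contains the other)."""
    if d >= r1 + r2:  # disjoint spheres: plain sums
        return (4 * math.pi * (r1 ** 2 + r2 ** 2),
                4 / 3 * math.pi * (r1 ** 3 + r2 ** 3))
    # distance from each centre to the plane of the intersection circle
    x1 = (d ** 2 + r1 ** 2 - r2 ** 2) / (2 * d)
    x2 = d - x1
    h1, h2 = r1 - x1, r2 - x2  # heights of the buried spherical caps
    area = (4 * math.pi * (r1 ** 2 + r2 ** 2)
            - 2 * math.pi * (r1 * h1 + r2 * h2))
    lens = (math.pi * h1 ** 2 * (3 * r1 - h1) / 3
            + math.pi * h2 ** 2 * (3 * r2 - h2) / 3)
    volume = 4 / 3 * math.pi * (r1 ** 3 + r2 ** 3) - lens
    return area, volume
```

For two unit spheres at centre distance 1, this gives area 6π and volume 9π/4, the textbook result.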

  15. The Large Area Pulsed Solar Simulator (LAPSS)

    NASA Technical Reports Server (NTRS)

    Mueller, R. L.

    1994-01-01

    The Large Area Pulsed Solar Simulator (LAPSS) has been installed at JPL. It is primarily intended to be used to illuminate and measure the electrical performance of photovoltaic devices. The simulator, originally manufactured by Spectrolab, Sylmar, CA, occupies an area measuring about 3 m wide x 12 m long. The data acquisition and data processing subsystems have been modernized. Tests on the LAPSS performance resulted in better than plus or minus 2 percent uniformity of irradiance at the test plane and better than plus or minus 0.3 percent measurement repeatability after warm-up. Glass absorption filters reduce the ultraviolet light emitted from the xenon flash lamps. This results in a close match to three different standard airmass zero and airmass 1.5 spectral irradiances. The 2-ms light pulse prevents heating of the device under test, resulting in more reliable temperature measurements. Overall, excellent electrical performance measurements have been made of many different types and sizes of photovoltaic devices. Since the original printing of this publication, in 1993, the LAPSS has been operational and new capabilities have been added. This revision includes a new section relating to the installation of a method to measure the I-V curve of a solar cell or array exhibiting a large effective capacitance. Another new section has been added relating to new capabilities for plotting single and multiple I-V curves, and for archiving the I-V data and test parameters. Finally, a section has been added regarding the data acquisition electronics calibration.

  16. Large eddy simulation of longitudinal stationary vortices

    NASA Astrophysics Data System (ADS)

    Sreedhar, Madhu; Ragab, Saad

    1994-07-01

    The response of longitudinal stationary vortices when subjected to random perturbations is investigated using temporal large-eddy simulation. Simulations are obtained for high Reynolds numbers and at a low subsonic Mach number. The subgrid-scale stress tensor is modeled using the dynamic eddy-viscosity model. The generation of large-scale structures due to centrifugal instability and their subsequent breakdown to turbulence is studied. The following events are observed. Initially, ring-shaped structures appear around the vortex core. These structures are counter-rotating vortices similar to the donut-shaped structures observed in a Taylor-Couette flow between rotating cylinders. These structures subsequently interact with the vortex core resulting in a rapid decay of the vortex. The turbulent kinetic energy increases rapidly until saturation, and then a period of slow decay prevails. During the period of maximum turbulent kinetic energy, the normalized mean circulation profile exhibits a logarithmic region, in agreement with the universal inner profile of Hoffman and Joubert [J. Fluid Mech. 16, 395 (1963)].

  17. The Simulation of a Jumbo Jet Transport Aircraft. Volume 2: Modeling Data

    NASA Technical Reports Server (NTRS)

    Hanke, C. R.; Nordwall, D. R.

    1970-01-01

The manned simulation of a large transport aircraft is described. Aircraft and systems data necessary to implement the mathematical model described in Volume I, and a discussion of how these data are used in the model, are presented. The results of the real-time computations in the NASA Ames Research Center Flight Simulator for Advanced Aircraft are shown and compared to flight test data and to the results obtained in a training simulator known to be satisfactory.

  18. Effect of large volume paracentesis on plasma volume--a cause of hypovolemia

    SciTech Connect

    Kao, H.W.; Rakov, N.E.; Savage, E.; Reynolds, T.B.

    1985-05-01

Large volume paracentesis, while effectively relieving symptoms in patients with tense ascites, has been generally avoided due to reports of complications attributed to an acute reduction in intravascular volume. Measurements of plasma volume in these subjects have been by indirect methods and have not uniformly confirmed hypovolemia. We have prospectively evaluated 18 patients (20 paracenteses) with tense ascites and peripheral edema due to chronic liver disease undergoing 5 liter paracentesis for relief of symptoms. Plasma volume pre- and postparacentesis was assessed by a ¹²⁵I-labeled human serum albumin dilution technique as well as by the change in hematocrit and postural blood pressure difference. No significant change in serum sodium, urea nitrogen, hematocrit or postural systolic blood pressure difference was noted at 24 or 48 hr after paracentesis. Serum creatinine at 24 hr after paracentesis was unchanged but a small but statistically significant increase in serum creatinine was noted at 48 hr postparacentesis. Plasma volume changed -2.7% (n = 6, not statistically significant) during the first 24 hr and -2.8% (n = 12, not statistically significant) during the 0- to 48-hr period. No complications from paracentesis were noted. These results suggest that 5 liter paracentesis for relief of symptoms is safe in patients with tense ascites and peripheral edema from chronic liver disease.
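The albumin-dilution measurement rests on the indicator-dilution principle: plasma volume equals the injected tracer dose divided by its equilibrium plasma concentration. A minimal sketch with illustrative numbers and names (not the study's data):

```python
def plasma_volume_ml(injected_dose_counts, plasma_counts_per_ml):
    """Indicator dilution: volume of distribution = dose / concentration."""
    return injected_dose_counts / plasma_counts_per_ml

def percent_change(before_ml, after_ml):
    """Relative plasma-volume change, as reported pre- vs postparacentesis."""
    return 100.0 * (after_ml - before_ml) / before_ml
```

For example, a hypothetical dose of 3,000,000 counts diluted to 1000 counts/ml implies a 3000 ml plasma volume; a fall to 2919 ml would be the -2.7% change of the kind reported.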

  19. Large volume loss during cleavage formation, Hamburg sequence, Pennsylvania

    NASA Astrophysics Data System (ADS)

    Beutner, Edward C.; Charles, Emmanuel G.

    1985-11-01

Green reduction spots in red slate of the Hamburg sequence exposed near Shartlesville, Pennsylvania, have axial ratios of 1.42:1.0:0.28 on the limbs of near-isoclinal folds and 1.0:0.79:0.41 in fold hinge zones. Conodont cusps and denticles within the reduction spots have been brittlely pulled apart and give independent measures of extension in various directions. Comparison of conodont extensions with reduction spot shapes on limbs and hinges indicates that sedimentary compaction of 44% preceded the tectonic strain associated with cleavage formation. This strain, having identical maximum extensions but greater shortening in fold hinges as compared to limbs, was characterized by 41% extension in X, no change in Y, 50% to 59% shortening in Z, and 29% to 42% tectonic volume loss. The general lack of directed overgrowths on grains reflects the large volume loss and contrasts with other slates, where deformation was an almost constant volume process and extension in X compensated for shortening in Z.
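The volume-loss figures can be checked directly: the fractional volume change is the product of the principal stretches (1 + e) along X, Y and Z minus one, so 41% extension in X, no change in Y, and 50% to 59% shortening in Z give a 29.5% to 42.2% volume loss, matching the stated 29% to 42% range. A one-function sketch:

```python
def volume_change(ex, ey, ez):
    """Fractional volume change for principal strains ex, ey, ez,
    e.g. ex = 0.41 for 41% extension, ez = -0.50 for 50% shortening."""
    return (1 + ex) * (1 + ey) * (1 + ez) - 1
```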

  20. Flight Simulation Model Exchange. Volume 2; Appendices

    NASA Technical Reports Server (NTRS)

    Murri, Daniel G.; Jackson, E. Bruce

    2011-01-01

    The NASA Engineering and Safety Center Review Board sponsored an assessment of the draft Standard, Flight Dynamics Model Exchange Standard, BSR/ANSI-S-119-201x (S-119) that was conducted by simulation and guidance, navigation, and control engineers from several NASA Centers. The assessment team reviewed the conventions and formats spelled out in the draft Standard and the actual implementation of two example aerodynamic models (a subsonic F-16 and the HL-20 lifting body) encoded in the Extensible Markup Language grammar. During the implementation, the team kept records of lessons learned and provided feedback to the American Institute of Aeronautics and Astronautics Modeling and Simulation Technical Committee representative. This document contains the appendices to the main report.

  1. Flight Simulation Model Exchange. Volume 1

    NASA Technical Reports Server (NTRS)

    Murri, Daniel G.; Jackson, E. Bruce

    2011-01-01

    The NASA Engineering and Safety Center Review Board sponsored an assessment of the draft Standard, Flight Dynamics Model Exchange Standard, BSR/ANSI-S-119-201x (S-119) that was conducted by simulation and guidance, navigation, and control engineers from several NASA Centers. The assessment team reviewed the conventions and formats spelled out in the draft Standard and the actual implementation of two example aerodynamic models (a subsonic F-16 and the HL-20 lifting body) encoded in the Extensible Markup Language grammar. During the implementation, the team kept records of lessons learned and provided feedback to the American Institute of Aeronautics and Astronautics Modeling and Simulation Technical Committee representative. This document contains the results of the assessment.

  2. Efficient Large Volume Lentiviral Vector Production Using Flow Electroporation

    PubMed Central

    Witting, Scott R.; Li, Lin-Hong; Jasti, Aparna; Allen, Cornell; Cornetta, Kenneth; Brady, James; Shivakumar, Rama

    2012-01-01

    Lentiviral vectors are beginning to emerge as a viable choice for human gene therapy. Here, we describe a method that combines the convenience of a suspension cell line with a scalable, nonchemically based, and GMP-compliant transfection technique known as flow electroporation (EP). Flow EP parameters for serum-free adapted HEK293FT cells were optimized to limit toxicity and maximize titers. Using a third generation, HIV-based, lentiviral vector system pseudotyped with the vesicular stomatitis glycoprotein envelope, both small- and large-volume transfections produced titers over 1×10^8 infectious units/mL. Flow EP of suspension cell lines is therefore an excellent option for implementing large-scale, clinical lentiviral productions. PMID:21933028

  3. Large Eddy Simulation of Cirrus Clouds

    NASA Technical Reports Server (NTRS)

    Wu, Ting; Cotton, William R.

    1999-01-01

    The Regional Atmospheric Modeling System (RAMS) with mesoscale interactive nested-grids and a Large-Eddy Simulation (LES) version of RAMS, coupled to two-moment microphysics and a new two-stream radiative code, were used to investigate the dynamic, microphysical, and radiative aspects of the November 26, 1991 cirrus event. Wu (1998) describes the results of that research in full detail and is enclosed as Appendix 1. The mesoscale nested grid simulation successfully reproduced the large scale circulation as compared to the Mesoscale Analysis and Prediction System's (MAPS) analyses and other observations. Three cloud bands that closely match the three cloud lines identified in an observational study (Mace et al., 1995) are predicted on Grid #2 of the nested grids, even though the mesoscale simulation predicts a larger west-east cloud width than what was observed. Large-eddy simulations (LES) were performed to study the dynamical, microphysical, and radiative processes in the 26 November 1991 FIRE II cirrus event. The LES model is based on the RAMS version 3b developed at Colorado State University. It includes a new radiation scheme developed by Harrington (1997) and a new subgrid scale model developed by Kosovic (1996). The LES model simulated a single cloud layer for Case 1 and a two-layer cloud structure for Case 2. The simulations demonstrated that latent heat release can play a significant role in the formation and development of cirrus clouds. For the thin cirrus in Case 1, the latent heat release was insufficient for the cirrus clouds to become positively buoyant. However, in some special cases such as Case 2, positively buoyant cells can be embedded within the cirrus layers. These cells were so active that the rising updraft induced its own pressure perturbations that affected the cloud evolution. 
Vertical profiles of the total radiative and latent heating rates indicated that for well developed, deep, and active cirrus clouds, radiative cooling and latent

  4. Autonomic Closure for Large Eddy Simulation

    NASA Astrophysics Data System (ADS)

    King, Ryan; Hamlington, Peter; Dahm, Werner J. A.

    2015-11-01

    A new autonomic subgrid-scale closure has been developed for large eddy simulation (LES). The approach poses a supervised learning problem that captures nonlinear, nonlocal, and nonequilibrium turbulence effects without specifying a predefined turbulence model. By solving a regularized optimization problem on test filter scale quantities, the autonomic approach identifies a nonparametric function that represents the best local relation between subgrid stresses and resolved state variables. The optimized function is then applied at the grid scale to determine unknown LES subgrid stresses by invoking scale similarity in the inertial range. A priori tests of the autonomic approach on homogeneous isotropic turbulence show that the new approach is amenable to powerful optimization and machine learning methods and is successful for a wide range of filter scales in the inertial range. In these a priori tests, the autonomic closure substantially improves upon the dynamic Smagorinsky model in capturing the instantaneous, statistical, and energy transfer properties of the subgrid stress field.
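
The regularized optimization step described above can be illustrated with a ridge-regression toy: fit a relation from resolved-state features to a "measured" test-filter-scale stress, then reuse the fitted relation. The data, features, and regularization weight below are synthetic stand-ins, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: rows are samples at the test-filter scale,
# columns are resolved-state features (e.g. velocity gradients and products).
X = rng.standard_normal((500, 10))
true_w = rng.standard_normal(10)
tau = X @ true_w + 0.1 * rng.standard_normal(500)  # "measured" test-scale stress

# Regularized least squares: w = (X^T X + lam I)^-1 X^T tau
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ tau)

# The fitted relation would then be evaluated on grid-scale features,
# invoking scale similarity; here we just check the fit quality.
resid = np.linalg.norm(X @ w - tau) / np.linalg.norm(tau)
print(resid < 0.1)  # True
```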

  5. Large eddy simulation applications in gas turbines.

    PubMed

    Menzies, Kevin

    2009-07-28

    The gas turbine presents significant challenges to any computational fluid dynamics technique. The combination of a wide range of flow phenomena with complex geometry is difficult to model in the context of Reynolds-averaged Navier-Stokes (RANS) solvers. We review the potential for large eddy simulation (LES) in modelling the flow in the different components of the gas turbine during a practical engineering design cycle. We show that while LES has demonstrated considerable promise for reliable prediction of many flows in the engine that are difficult for RANS, it is not a panacea, and considerable application challenges remain. However, for many flows, especially those dominated by shear layer mixing such as in combustion chambers and exhausts, LES has demonstrated a clear superiority over RANS for moderately complex geometries, although at significantly higher cost, which will remain an issue in making the calculations relevant within the design cycle. PMID:19531505

  6. Large eddy simulation of turbulent cavitating flows

    NASA Astrophysics Data System (ADS)

    Gnanaskandan, A.; Mahesh, K.

    2015-12-01

    Large Eddy Simulation is employed to study two turbulent cavitating flows: over a cylinder and a wedge. A homogeneous mixture model is used to treat the mixture of water and water vapor as a compressible fluid. The governing equations are solved using a novel predictor-corrector method. The subgrid terms are modeled using the dynamic Smagorinsky model. Cavitating flow over a cylinder at Reynolds number Re = 3900 and cavitation number σ = 1.0 is simulated and the wake characteristics are compared to the single phase results at the same Reynolds number. It is observed that cavitation suppresses turbulence in the near wake and delays three dimensional breakdown of the vortices. Next, cavitating flow over a wedge at Re = 200,000 and σ = 2.0 is presented. The mean void fraction profiles obtained are compared to experiment and good agreement is obtained. Cavity auto-oscillation is observed, where the sheet cavity breaks up into a cloud cavity periodically. The results suggest LES as an attractive approach for predicting turbulent cavitating flows.
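
Two quantities central to the setup above are the homogeneous-mixture density and the cavitation number σ = (p_inf − p_vap)/(½ρU²). A small sketch (the fluid properties and freestream values are assumptions chosen to land near σ = 1.0, not conditions from the paper):

```python
def mixture_density(alpha_v, rho_l=998.0, rho_v=0.017):
    """Homogeneous-mixture density: vapor volume fraction weighting (SI units,
    water and vapor near 20 C assumed)."""
    return alpha_v * rho_v + (1.0 - alpha_v) * rho_l

def cavitation_number(p_inf, p_vap, rho, u_inf):
    """sigma = (p_inf - p_vap) / (0.5 * rho * U^2)."""
    return (p_inf - p_vap) / (0.5 * rho * u_inf**2)

# Illustrative freestream giving sigma ~ 1.0:
sigma = cavitation_number(p_inf=6.2e3, p_vap=2.34e3, rho=998.0, u_inf=2.78)
print(round(mixture_density(0.5), 1), round(sigma, 2))  # 499.0 1.0
```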

  7. Parallel Optimization with Large Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Talnikar, Chaitanya; Blonigan, Patrick; Bodart, Julien; Wang, Qiqi; Gorodetsky, Alex; Snoek, Jasper

    2014-11-01

    For design optimization results to be useful, the model used must be trustworthy. For turbulent flows, Large Eddy Simulations (LES) can capture separation and other phenomena that traditional models such as RANS struggle with. However, optimization with LES can be challenging because of noisy objective function evaluations. This noise is a consequence of the sampling error of turbulent statistics, or long time averaged quantities of interest, such as the drag of an airfoil or heat transfer to a turbine blade. The sampling error causes the objective function to vary noisily with respect to design parameters for finite time simulations. Furthermore, the noise decays very slowly as computational time increases. Therefore, robustness to noisy objective functions is a crucial prerequisite for any optimization method applied to LES. One way of dealing with noisy objective functions is to filter the noise using a surrogate model. Bayesian optimization, which uses Gaussian processes as surrogates, has shown promise in optimizing expensive objective functions. The following talk presents a new approach for optimization with LES incorporating these ideas. Applications to flow control of a turbulent channel and the design of a turbine blade trailing edge are also discussed.
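
The surrogate idea sketched above can be illustrated with a toy Gaussian-process regression whose noise term absorbs the sampling error; the objective, kernel, and hyperparameters below are all invented for illustration, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def f_noisy(x, noise=0.2):
    """Stand-in for a noisy LES objective (e.g. a finite-time-averaged drag)."""
    return np.sin(3 * x) + x**2 + noise * rng.standard_normal(np.shape(x))

def gp_posterior_mean(X, y, Xs, ell=0.3, sf=1.0, sn=0.2):
    """GP regression mean with an RBF kernel; sn absorbs the sampling noise."""
    def k(a, b):
        return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = k(X, X) + sn**2 * np.eye(len(X))
    return k(Xs, X) @ np.linalg.solve(K, y)

X = np.linspace(-1.0, 1.0, 25)          # noisy evaluations at 25 designs
y = f_noisy(X)
Xs = np.linspace(-1.0, 1.0, 201)
mu = gp_posterior_mean(X, y, Xs)        # smoothed surrogate of the objective
x_best = Xs[np.argmin(mu)]              # candidate design from the surrogate
print(round(float(x_best), 2))
```

Minimizing the surrogate mean rather than the raw samples is what makes the search robust to the sampling error.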

  8. Large eddy simulation of trailing edge noise

    NASA Astrophysics Data System (ADS)

    Keller, Jacob; Nitzkorski, Zane; Mahesh, Krishnan

    2015-11-01

    Noise generation is an important engineering constraint to many marine vehicles. A significant portion of the noise comes from propellers and rotors, specifically due to flow interactions at the trailing edge. Large eddy simulation is used to investigate the noise produced by a turbulent 45 degree beveled trailing edge and a NACA 0012 airfoil. A porous surface Ffowcs-Williams and Hawkings acoustic analogy is combined with a dynamic endcapping method to compute the sound. This methodology allows for the impact of incident flow noise versus the total noise to be assessed. LES results for the 45 degree beveled trailing edge are compared to experiment at M = 0.1 and Re_c = 1.9×10^6. The effect of boundary layer thickness on sound production is investigated by computing using both the experimental boundary layer thickness and a thinner boundary layer. Direct numerical simulation results of the NACA 0012 are compared to available data at M = 0.4 and Re_c = 5.0×10^4 for both the hydrodynamic field and the acoustic field. Sound intensities and directivities are investigated and compared. Finally, some of the physical mechanisms of far-field noise generation, common to the two configurations, are discussed. Supported by the Office of Naval Research.

  9. Simulation of large acceptance LINAC for muons

    SciTech Connect

    Miyadera, H; Kurennoy, S; Jason, A J

    2010-01-01

    There has been a recent need for muon accelerators not only for future Neutrino Factories and Muon Colliders but also for other applications in industry and medicine. We carried out simulations of a large-acceptance muon linac based on a new 'mixed buncher/acceleration' concept. The linac can accept pions/muons from a production target with large acceptance and accelerate muons without any beam cooling, which makes the initial section of the muon-linac system very compact. The linac has a high impact on the Neutrino Factory and Muon Collider (NF/MC) scenario since the 300-m injector section can be replaced by a muon linac of only 10-m length. The current design of the linac consists of the following components: independent 805-MHz cavity structure with 6- or 8-cm-radius aperture window; injection of a broad range of pion/muon energies, 10-100 MeV, and acceleration to 150-200 MeV. Further acceleration of the muon beam is relatively easy since the beam is already bunched.
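
A feel for why bunching such a beam is hard: over the injected 10-100 MeV range the muon velocity varies widely (muon rest energy 105.66 MeV). A short relativistic-kinematics check with standard formulas (the helper is ours, not from the paper):

```python
import math

M_MU = 105.66  # muon rest energy, MeV

def beta_from_kinetic(T_mev):
    """Relativistic speed v/c of a muon with kinetic energy T:
    gamma = 1 + T/(m c^2), beta = sqrt(1 - 1/gamma^2)."""
    gamma = 1.0 + T_mev / M_MU
    return math.sqrt(1.0 - 1.0 / gamma**2)

# Velocity spread across the injected 10-100 MeV range, and at 200 MeV:
print(round(beta_from_kinetic(10.0), 3),
      round(beta_from_kinetic(100.0), 3),
      round(beta_from_kinetic(200.0), 3))  # 0.407 0.858 0.938
```

The factor-of-two velocity spread at injection is what the mixed buncher/acceleration section must absorb; above ~150 MeV the beam is close to fully relativistic.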

  10. Sensitivity technologies for large scale simulation.

    SciTech Connect

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms, and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing codes and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms, and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint-based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. 
The hybrid automatic differentiation method was applied to a first
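
The direct-versus-adjoint distinction above can be seen on a toy steady linear problem: both routes give the same derivative of an output J = g^T x of A(p) x = b(p), but the adjoint solve can be reused across many parameters. Every matrix and vector below is invented for illustration:

```python
import numpy as np

# Toy steady problem A(p) x = b(p); objective J = g^T x.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
g = np.array([1.0, -1.0])
dA_dp = np.array([[1.0, 0.0], [0.0, 0.0]])  # assumed parameter dependence
db_dp = np.array([0.5, 0.0])

x = np.linalg.solve(A, b)
rhs = db_dp - dA_dp @ x  # right-hand side of the sensitivity equation

# Direct sensitivity: one extra solve with A per parameter.
dJ_direct = g @ np.linalg.solve(A, rhs)

# Adjoint sensitivity: one solve with A^T, reused for every parameter.
lam = np.linalg.solve(A.T, g)
dJ_adjoint = lam @ rhs

print(np.isclose(dJ_direct, dJ_adjoint))  # True
```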

  11. Large eddy simulations of laminar separation bubble

    NASA Astrophysics Data System (ADS)

    Cadieux, Francois

    The flow over blades and airfoils at moderate angles of attack and Reynolds numbers ranging from ten thousand to a few hundred thousand undergoes separation due to the adverse pressure gradient generated by surface curvature. In many cases, the separated shear layer then transitions to turbulence and reattaches, closing off a recirculation region -- the laminar separation bubble. To avoid body-fitted mesh generation problems and numerical issues, an equivalent problem for flow over a flat plate is formulated by imposing boundary conditions that lead to a pressure distribution and Reynolds number that are similar to those on airfoils. Spalart & Strelets (2000) tested a number of Reynolds-averaged Navier-Stokes (RANS) turbulence models for a laminar separation bubble flow over a flat plate. Although results with the Spalart-Allmaras turbulence model were encouraging, none of the turbulence models tested reliably recovered time-averaged direct numerical simulation (DNS) results. The purpose of this work is to assess whether large eddy simulation (LES) can more accurately and reliably recover DNS results using drastically reduced resolution -- on the order of 1% of DNS resolution, which is commonly achievable for LES of turbulent channel flows. LES of a laminar separation bubble flow over a flat plate are performed using a compressible sixth-order finite-difference code and two incompressible pseudo-spectral Navier-Stokes solvers at resolutions corresponding to approximately 3% and 1% of the chosen DNS benchmark by Spalart & Strelets (2000). The finite-difference solver is found to be dissipative due to the use of a stability-enhancing filter. Its numerical dissipation is quantified and found to be comparable to the average eddy viscosity of the dynamic Smagorinsky model, making it difficult to separate the effects of filtering versus those of explicit subgrid-scale modeling. The negligible numerical dissipation of the pseudo-spectral solvers allows an unambiguous

  12. Cardiovascular simulator improvement: pressure versus volume loop assessment.

    PubMed

    Fonseca, Jeison; Andrade, Aron; Nicolosi, Denys E C; Biscegli, José F; Leme, Juliana; Legendre, Daniel; Bock, Eduardo; Lucchi, Julio Cesar

    2011-05-01

    This article presents improvement on a physical cardiovascular simulator (PCS) system. Intraventricular pressure versus intraventricular volume (PxV) loop was obtained to evaluate performance of a pulsatile chamber mimicking the human left ventricle. PxV loop shows heart contractility and is normally used to evaluate heart performance. In many heart diseases, the stroke volume decreases because of low heart contractility. This pathological situation must be simulated by the PCS in order to evaluate the assistance provided by a ventricular assist device (VAD). The PCS system is automatically controlled by a computer and is an auxiliary tool for the development of VAD control strategies. The PCS system is based on a Windkessel model, in which lumped parameters are used for cardiovascular system analysis. Peripheral resistance, arterial compliance, and fluid inertance are simulated. The simulator has an actuator with a roller screw and brushless direct current motor, and the stroke volume is regulated by the actuator displacement. Internal pressure and volume measurements are monitored to obtain the PxV loop. Left chamber internal pressure is obtained directly by pressure transducer; internal volume, however, is obtained indirectly using a linear variable differential transformer, which senses the diaphragm displacement. Correlations between the internal volume and diaphragm position are made. LabVIEW integrates these signals and shows the pressure versus internal volume loop. The results obtained from the PCS system show PxV loops at different ventricle elastances, making possible the simulation of pathological situations. A preliminary test with a pulsatile VAD attached to the PCS system was made. PMID:21595711
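
The lumped-parameter (Windkessel) idea behind the PCS can be sketched as a two-element model, C dP/dt = Q(t) − P/R, driven by a half-sine ejection waveform; every parameter value below is an assumption for illustration, not a value from the article:

```python
import math

def windkessel_pressure(R=1.0, C=1.5, dt=1e-3, beats=10, hr=75):
    """Two-element Windkessel: C dP/dt = Q(t) - P/R.
    Assumed units: P in mmHg, Q in mL/s, R in mmHg*s/mL, C in mL/mmHg."""
    period = 60.0 / hr
    systole = 0.3 * period
    p = 80.0                      # assumed initial arterial pressure
    trace = []
    t = 0.0
    for _ in range(int(beats * period / dt)):
        tc = t % period
        # Half-sine ejection during systole, zero inflow in diastole:
        q = 400.0 * math.sin(math.pi * tc / systole) if tc < systole else 0.0
        p += dt * (q - p / R) / C  # forward-Euler step of the ODE
        trace.append(p)
        t += dt
    return trace

trace = windkessel_pressure()
last_beat = trace[-800:]  # one beat = 0.8 s / 1 ms steps
print(min(last_beat) < max(last_beat))  # True: systolic/diastolic swing
```

Adding inertance and a second resistance in the same fashion gives the richer lumped models the article's analysis is based on.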

  13. SUSY’s Ladder: Reframing sequestering at Large Volume

    DOE PAGES Beta

    Reece, Matthew; Xue, Wei

    2016-04-07

    Theories with approximate no-scale structure, such as the Large Volume Scenario, have a distinctive hierarchy of multiple mass scales in between TeV gaugino masses and the Planck scale, which we call SUSY's Ladder. This is a particular realization of Split Supersymmetry in which the same small parameter suppresses gaugino masses relative to scalar soft masses, scalar soft masses relative to the gravitino mass, and the UV cutoff or string scale relative to the Planck scale. This scenario has many phenomenologically interesting properties, and can avoid dangers including the gravitino problem, flavor problems, and the moduli-induced LSP problem that plague other supersymmetric theories. We study SUSY's Ladder using a superspace formalism that makes the mysterious cancelations in previous computations manifest. This opens the possibility of a consistent effective field theory understanding of the phenomenology of these scenarios, based on power-counting in the small ratio of string to Planck scales. We also show that four-dimensional theories with approximate no-scale structure enforced by a single volume modulus arise only from two special higher-dimensional theories: five-dimensional supergravity and ten-dimensional type IIB supergravity. As a result, this gives a phenomenological argument in favor of ten dimensional ultraviolet physics which is different from standard arguments based on the consistency of superstring theory.

  14. Large volume water sprays for dispersing warm fogs

    NASA Astrophysics Data System (ADS)

    Keller, V. W.; Anderson, B. J.; Burns, R. A.; Lala, G. G.; Meyer, M. B.

    A new method for dispersing warm fogs that impede visibility and alter schedules is described. The method uses large volume recycled water sprays to create curtains of falling drops through which the fog is processed by the ambient wind and spray-induced air flow; the fog droplets are removed by coalescence/rainout. The efficiency of this fog droplet removal process depends on the size spectra of the spray drops, and the optimum spray drop size is calculated to be 0.3-1.0 mm in diameter. Water spray tests were conducted to determine the drop size spectra and temperature response of sprays produced by commercially available fire-fighting nozzles, and nozzle array tests were used to study air flow patterns and the thermal properties of the overall system. The initial test data show that the fog-dispersal procedure is effective.

  15. Large space telescope, phase A. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Phase A study of the Large Space Telescope (LST) is reported. The study defines an LST concept based on the broad mission guidelines provided by the Office of Space Science (OSS), the scientific requirements developed by OSS with the scientific community, and an understanding of long range NASA planning current at the time the study was performed. The LST is an unmanned astronomical observatory facility, consisting of an optical telescope assembly (OTA), scientific instrument package (SIP), and a support systems module (SSM). The report consists of five volumes. The report describes the constraints and trade off analyses that were performed to arrive at a reference design for each system and for the overall LST configuration. A low cost design approach was followed in the Phase A study. This resulted in the use of standard spacecraft hardware, the provision for maintenance at the black box level, growth potential in systems designs, and the sharing of shuttle maintenance flights with other payloads.

  16. Large volume water sprays for dispersing warm fogs

    NASA Technical Reports Server (NTRS)

    Keller, V. W.; Anderson, B. J.; Burns, R. A.; Lala, G. G.; Meyer, M. B.

    1986-01-01

    A new method for dispersing warm fogs that impede visibility and alter schedules is described. The method uses large volume recycled water sprays to create curtains of falling drops through which the fog is processed by the ambient wind and spray-induced air flow; the fog droplets are removed by coalescence/rainout. The efficiency of this fog droplet removal process depends on the size spectra of the spray drops, and the optimum spray drop size is calculated to be 0.3-1.0 mm in diameter. Water spray tests were conducted to determine the drop size spectra and temperature response of sprays produced by commercially available fire-fighting nozzles, and nozzle array tests were used to study air flow patterns and the thermal properties of the overall system. The initial test data show that the fog-dispersal procedure is effective.

  17. Striped Bass, Morone saxatilis, egg incubation in large volume jars

    USGS Publications Warehouse

    Harper, C.J.; Wrege, B.M.; Isely, J. Jeffery

    2010-01-01

    The standard McDonald jar was compared with a large volume jar for striped bass, Morone saxatilis, egg incubation. The McDonald jar measured 16 cm in diameter by 45 cm in height and had a volume of 6 L. The experimental jar measured 0.4 m in diameter by 1.3 m in height and had a volume of 200 L. The hypothesis is that there is no difference in percent survival of fry hatched in experimental jars compared with McDonald jars. Striped bass brood fish were collected from the Coosa River and spawned using the dry spawn method of fertilization. Four McDonald jars were stocked with approximately 150 g of eggs each. Post-hatch survival was estimated at 48, 96, and 144 h. Stocking rates resulted in an average egg loading rate (±1 SE) in McDonald jars of 21.9 ± 0.03 eggs/mL and in experimental jars of 10.9 ± 0.57 eggs/mL. The major finding of this study was that average fry survival was 37.3 ± 4.49% for McDonald jars and 34.2 ± 3.80% for experimental jars. Although survival in experimental jars was slightly less than in McDonald jars, the effect of container volume on survival to 48 h (F = 6.57; df = 1, 5; P > 0.05), 96 h (F = 0.02; df = 1, 4; P > 0.89), and 144 h (F = 3.50; df = 1, 4; P > 0.13) was not statistically significant. Mean survival between replicates ranged from 14.7 to 60.1% in McDonald jars and from 10.1 to 54.4% in experimental jars. No effect of initial stocking rate on survival (t = 0.06; df = 10; P > 0.95) was detected. Experimental jars allowed for incubation of a greater number of eggs in less than half the floor space of McDonald jars. As hatchery production is often limited by space or water supply, experimental jars offer an alternative to extending spawning activities, thereby reducing labor and operations cost. As survival was similar to McDonald jars, the experimental jar is suitable for striped bass egg incubation. © Copyright by the World Aquaculture Society 2010.

  18. Large eddy simulation of powered Fontan hemodynamics.

    PubMed

    Delorme, Y; Anupindi, K; Kerlo, A E; Shetty, D; Rodefeld, M; Chen, J; Frankel, S

    2013-01-18

    Children born with univentricular heart disease typically must undergo three open heart surgeries within the first 2-3 years of life to eventually establish the Fontan circulation. In that case the single working ventricle pumps oxygenated blood to the body and blood returns to the lungs flowing passively through the Total Cavopulmonary Connection (TCPC) rather than being actively pumped by a subpulmonary ventricle. The TCPC is a direct surgical connection between the superior and inferior vena cava and the left and right pulmonary arteries. We have postulated that a mechanical pump inserted into this circulation providing a 3-5 mmHg pressure augmentation will reestablish bi-ventricular physiology serving as a bridge-to-recovery, bridge-to-transplant or destination therapy as a "biventricular Fontan" circulation. The Viscous Impeller Pump (VIP) has been proposed by our group as such an assist device. It is situated in the center of the 4-way TCPC intersection and spins pulling blood from the vena cavae and pushing it into the pulmonary arteries. We hypothesized that Large Eddy Simulation (LES) using high-order numerical methods is needed to capture unsteady powered and unpowered Fontan hemodynamics. Inclusion of a mechanical pump into the CFD further complicates matters due to the need to account for rotating machinery. In this study, we focus on predictions from an in-house high-order LES code (WenoHemo(TM)) for unpowered and VIP-powered idealized TCPC hemodynamics with quantitative comparisons to Stereoscopic Particle Imaging Velocimetry (SPIV) measurements. Results are presented for both instantaneous flow structures and statistical data. Simulations show good qualitative and quantitative agreement with measured data. PMID:23177085

  19. Large Eddy Simulation of Powered Fontan Hemodynamics

    PubMed Central

    Delorme, Y.; Anupindi, K.; Kerlo, A.E.; Shetty, D.; Rodefeld, M.; Chen, J.; Frankel, S.

    2012-01-01

    Children born with univentricular heart disease typically must undergo three open heart surgeries within the first 2–3 years of life to eventually establish the Fontan circulation. In that case the single working ventricle pumps oxygenated blood to the body and blood returns to the lungs flowing passively through the Total Cavopulmonary Connection (TCPC) rather than being actively pumped by a subpulmonary ventricle. The TCPC is a direct surgical connection between the superior and inferior vena cava and the left and right pulmonary arteries. We have postulated that a mechanical pump inserted into this circulation providing a 3–5 mmHg pressure augmentation will reestablish bi-ventricular physiology serving as a bridge-to-recovery, bridge-to-transplant or destination therapy as a “biventricular Fontan” circulation. The Viscous Impeller Pump (VIP) has been proposed by our group as such an assist device. It is situated in the center of the 4-way TCPC intersection and spins pulling blood from the vena cavae and pushing it into the pulmonary arteries. We hypothesized that Large Eddy Simulation (LES) using high-order numerical methods is needed to capture unsteady powered and unpowered Fontan hemodynamics. Inclusion of a mechanical pump into the CFD further complicates matters due to the need to account for rotating machinery. In this study, we focus on predictions from an in-house high-order LES code (WenoHemo™) for unpowered and VIP-powered idealized TCPC hemodynamics with quantitative comparisons to Stereoscopic Particle Imaging Velocimetry (SPIV) measurements. Results are presented for both instantaneous flow structures and statistical data. Simulations show good qualitative and quantitative agreement with measured data. PMID:23177085

  20. Simulation of preburner sprays, volume 1

    NASA Technical Reports Server (NTRS)

    1993-01-01

    nozzles were compared with that of three identical nozzles with their axes a small distance from each other. This study simulates the sprays in the preburner of the SSME, where there are around 260 elements on the faceplate of the combustion chamber. Lastly, an experimental facility was designed to study the characteristics of sprays at high pressure conditions, at supercritical pressure and temperature for the gas but supercritical pressure and subcritical temperature for the liquid.

  1. Simulation of preburner sprays, volume 1

    NASA Astrophysics Data System (ADS)

    1993-05-01

    nozzles were compared with that of three identical nozzles with their axes a small distance from each other. This study simulates the sprays in the preburner of the SSME, where there are around 260 elements on the faceplate of the combustion chamber. Lastly, an experimental facility was designed to study the characteristics of sprays at high pressure conditions, at supercritical pressure and temperature for the gas but supercritical pressure and subcritical temperature for the liquid.

  2. Large Eddy Simulation of Transitional Boundary Layer

    NASA Astrophysics Data System (ADS)

    Sayadi, Taraneh; Moin, Parviz

    2009-11-01

    A sixth order compact finite difference code is employed to investigate compressible Large Eddy Simulation (LES) of subharmonic transition of a spatially developing zero pressure gradient boundary layer, at Ma = 0.2. The computational domain extends from Re_x = 10^5, where laminar blowing and suction excites the most unstable fundamental and sub-harmonic modes, to the fully turbulent stage at Re_x = 10.1×10^5. Numerical sponges are used in the neighborhood of external boundaries to provide non-reflective conditions. Our interest lies in the performance of the dynamic subgrid scale (SGS) model [1] in the transition process. It is observed that in early stages of transition the eddy viscosity is much smaller than the physical viscosity. As a result the amplitudes of selected harmonics are in very good agreement with the experimental data [2]. The model's contribution gradually increases during the last stages of the transition process and the dynamic eddy viscosity becomes fully active and dominant in the turbulent region. Consistent with this trend the skin friction coefficient versus Re_x diverges from its laminar profile and converges to the turbulent profile after an overshoot. 1. Moin P. et al. Phys Fluids A, 3(11), 2746-2757, 1991. 2. Kachanov Yu. S. et al. JFM, 138, 209-247, 1983.
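
The laminar and turbulent skin-friction profiles that the Cf curve moves between can be estimated from standard flat-plate correlations (the Blasius and one-seventh-power-law fits below are textbook formulas, not results from this study):

```python
def cf_laminar(re_x):
    """Blasius flat-plate local skin friction: 0.664 / sqrt(Re_x)."""
    return 0.664 / re_x**0.5

def cf_turbulent(re_x):
    """One-seventh-power-law estimate: 0.0576 * Re_x^(-1/5)."""
    return 0.0576 / re_x**0.2

# Bracketing values across the simulated range Re_x = 1e5 .. 1.01e6:
for re in (1e5, 5e5, 1e6):
    print(f"Re_x={re:.0e}  Cf_lam={cf_laminar(re):.2e}  Cf_turb={cf_turbulent(re):.2e}")
```

In the transition region the computed Cf leaves the lower (laminar) curve, overshoots, and relaxes onto the upper (turbulent) one, which is the behaviour described in the abstract.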

  3. Large-Eddy Simulation of Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Pruett, C. David; Sochacki, James S.

    1999-01-01

    This report summarizes work accomplished under a one-year NASA grant from NASA Langley Research Center (LaRC). The effort culminates three years of NASA-supported research under three consecutive one-year grants. The period of support was April 6, 1998, through April 5, 1999. By request, the grant period was extended at no-cost until October 6, 1999. Its predecessors have been directed toward adapting the numerical tool of large-eddy simulation (LES) to aeroacoustic applications, with particular focus on noise suppression in subsonic round jets. In LES, the filtered Navier-Stokes equations are solved numerically on a relatively coarse computational grid. Residual stresses, generated by scales of motion too small to be resolved on the coarse grid, are modeled. Although most LES incorporate spatial filtering, time-domain filtering affords certain conceptual and computational advantages, particularly for aeroacoustic applications. Consequently, this work has focused on the development of subgrid-scale (SGS) models that incorporate time-domain filters.
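
A time-domain SGS filter of the kind discussed above can be sketched as a causal exponential filter, with ubar obeying d(ubar)/dt = (u - ubar)/Delta; implemented recursively, it needs only one extra state per field variable. This is a minimal sketch of the general idea, not the grant's actual formulation:

```python
import numpy as np

def causal_time_filter(u, dt, delta):
    """First-order recursive approximation of the causal exponential filter
    ubar(t) = (1/delta) * int_{-inf}^{t} exp(-(t-s)/delta) u(s) ds."""
    alpha = dt / delta
    ubar = np.empty_like(u)
    ubar[0] = u[0]
    for n in range(1, len(u)):
        ubar[n] = ubar[n - 1] + alpha * (u[n] - ubar[n - 1])
    return ubar

t = np.linspace(0.0, 1.0, 1001)
signal = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 80 * t)  # slow + fast
filtered = causal_time_filter(signal, dt=t[1] - t[0], delta=0.02)      # fast part damped
```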

  4. Large Scale Computer Simulation of Erythrocyte Membranes

    NASA Astrophysics Data System (ADS)

    Harvey, Cameron; Revalee, Joel; Laradji, Mohamed

    2007-11-01

    The cell membrane is crucial to the life of the cell. Apart from partitioning the inner and outer environments of the cell, it also acts as a support for the complex and specialized molecular machinery important both for the mechanical integrity of the cell and for its multitude of physiological functions. Due to its relative simplicity, the red blood cell has been a favorite experimental prototype for investigations of the structural and functional properties of the cell membrane. The erythrocyte membrane is a composite quasi-two-dimensional structure composed essentially of a self-assembled fluid lipid bilayer and a polymerized protein meshwork, referred to as the cytoskeleton or membrane skeleton. In the case of the erythrocyte, the polymer meshwork is mainly composed of spectrin, anchored to the bilayer through specialized proteins. Using a coarse-grained model of self-assembled lipid membranes, recently developed by us, with implicit solvent and soft-core potentials, we simulated large-scale red-blood-cell bilayers with dimensions of ~10^-1 μm^2, with an explicit cytoskeleton. Our aim is to investigate the renormalization of the elastic properties of the bilayer due to the underlying spectrin meshwork.

  5. Volume visualization of multiple alignment of large genomicDNA

    SciTech Connect

    Shah, Nameeta; Dillard, Scott E.; Weber, Gunther H.; Hamann, Bernd

    2005-07-25

    Genomes of hundreds of species have been sequenced to date, and many more are being sequenced. As more and more sequence data sets become available, and as the challenge of comparing these massive "billion-basepair DNA sequences" grows, so does the need for more powerful tools supporting the exploration of these data sets. Similarity score data used to compare aligned DNA sequences are inherently one-dimensional. One-dimensional (1D) representations of these data sets do not effectively utilize screen real estate. As a result, tools using 1D representations cannot provide an informative overview of extremely large data sets. We present a technique to arrange 1D data in 3D space, allowing us to apply state-of-the-art interactive volume visualization techniques for data exploration. We demonstrate our technique using multi-million-basepair aligned DNA sequence data and compare it with traditional 1D line plots. The results show that our technique is superior in providing an overview of entire data sets. Our technique, coupled with 1D line plots, results in effective multi-resolution visualization of very large aligned sequence data sets.
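
The core idea of arranging a 1D track in 3D space can be sketched by wrapping the score array into a volume in serpentine (boustrophedon) order, so that consecutive basepair windows stay spatially adjacent within each row and the result can be handed to a volume renderer. This is a simplified illustration, not the paper's actual layout scheme:

```python
import numpy as np

def wrap_1d_to_3d(scores, nx, ny, nz):
    """Arrange a 1D similarity-score track into an nz x ny x nx volume,
    reversing every other row so neighbors in 1D stay adjacent in 3D rows."""
    vol = scores[:nx * ny * nz].reshape(nz, ny, nx).copy()
    for z in range(nz):
        for y in range(ny):
            if (z * ny + y) % 2 == 1:           # reverse every other row
                vol[z, y] = vol[z, y, ::-1].copy()
    return vol

scores = np.random.rand(64 ** 3)   # stand-in for per-window similarity scores
volume = wrap_1d_to_3d(scores, 64, 64, 64)
```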

  6. Monte Carlo Simulations for Dosimetry in Prostate Radiotherapy with Different Intravesical Volumes and Planning Target Volume Margins

    PubMed Central

    Lv, Wei; Yu, Dong; He, Hengda; Liu, Qian

    2016-01-01

    In prostate radiotherapy, the influence of bladder volume variation on the dose absorbed by the target volume and organs at risk is significant and difficult to predict. In addition, the resolution of a typical medical image is insufficient for visualizing the bladder wall, which makes it more difficult to precisely evaluate the dose to the bladder wall. This simulation study aimed to quantitatively investigate the relationship between the dose received by organs at risk and the intravesical volume in prostate radiotherapy. The high-resolution Visible Chinese Human phantom and the finite element method were used to construct 10 pelvic models with specific intravesical volumes ranging from 100 ml to 700 ml to represent bladders of patients with different bladder filling capacities during radiotherapy. This series of models was utilized in six-field coplanar 3D conformal radiotherapy simulations with different planning target volume (PTV) margins. Each organ’s absorbed dose was calculated using the Monte Carlo method. The obtained bladder wall displacements during bladder filling were consistent with reported clinical measurements. The radiotherapy simulation revealed a linear relationship between the dose to non-targeted organs and the intravesical volume and indicated that a 10-mm PTV margin for a large bladder and a 5-mm PTV margin for a small bladder reduce the effective dose to the bladder wall to similar degrees. However, larger bladders were associated with evident protection of the intestines. Detailed dosimetry results can be used by radiation oncologists to create more accurate, individual water preload protocols according to the patient’s anatomy and bladder capacity. PMID:27441944

  7. Successful pregnancies with directional freezing of large volume buck semen.

    PubMed

    Gacitua, H; Arav, A

    2005-02-01

    Artificial insemination with frozen-thawed buck semen shows variable results which depend on many factors related to semen quality and the cryopreservation processing. We conducted experiments based on a new freezing method, directional freezing, of large volumes (8 ml). In the first experiment, semen from three Saanen bucks, aged 1-2 years and genetically selected for milk improvement, was frozen individually. Saanen females aged 2-3 years (n = 164) were synchronized with controlled internal drug release (CIDR), pregnant mare serum gonadotrophin (PMSG) and prostaglandin. Double cervical inseminations were performed with frozen-thawed semen, with fresh semen as control. In the second experiment we used pooled, washed frozen semen to examine the effect of washing out the seminal plasma. Motility was 80-90% after washing and 55-65% after thawing for all bucks. The sperm concentration increased with successive collections and the advance of the breeding season, from 1.9 x 10^9 to 4.4 x 10^9 cells/ml on average. Two inseminations were carried out at 8 h intervals. The first insemination was performed at 32 h after CIDR withdrawal with fresh and frozen-thawed semen. Pregnancy rates were assessed by ultrasonography conducted 40 and 90 days post-insemination (from three bucks). Results were 58, 67 and 50% with fresh semen and 33, 37 and 53% with frozen semen; the difference was significant for one of the three bucks (P < 0.005). In the second experiment, with pooled, washed semen, the pregnancy rate was 41.6%, not significantly different from the 38.9% average obtained with frozen semen in the first experiment. We conclude that freezing buck semen in large volumes (8 ml) is possible. Cryobanking of buck semen will facilitate genetic breeding programs in goats and the preservation of biodiversity. Washed semen did not improve fertility when the Andromed bull extender is used. PMID:15629809

  8. Computer simulation of preflight blood volume reduction as a countermeasure to fluid shifts in space flight

    NASA Technical Reports Server (NTRS)

    Simanonok, K. E.; Srinivasan, R.; Charles, J. B.

    1992-01-01

    Fluid shifts in weightlessness may cause a central volume expansion, activating reflexes to reduce the blood volume. Computer simulation was used to test the hypothesis that preadaptation of the blood volume prior to exposure to weightlessness could counteract the central volume expansion due to fluid shifts and thereby attenuate the circulatory and renal responses that result in large losses of fluid from body water compartments. The Guyton Model of Fluid, Electrolyte, and Circulatory Regulation was modified to simulate the six-degree head-down tilt that is frequently used as an experimental analog of weightlessness in bedrest studies. Simulation results show that preadaptation of the blood volume by a procedure resembling a blood donation immediately before head-down bedrest is beneficial in damping the physiologic responses to fluid shifts and reducing body fluid losses. After ten hours of head-down tilt, blood volume after preadaptation is higher than control for 20 to 30 days of bedrest. Preadaptation also produces potentially beneficial higher extracellular volume and total body water for 20 to 30 days of bedrest.

  9. Parallel runway requirement analysis study. Volume 2: Simulation manual

    NASA Technical Reports Server (NTRS)

    Ebrahimi, Yaghoob S.; Chun, Ken S.

    1993-01-01

    This document is a user manual for operating the PLAND_BLUNDER (PLB) simulation program. The simulation is based on two aircraft approaching parallel runways independently, using parallel Instrument Landing System (ILS) equipment during Instrument Meteorological Conditions (IMC). If an aircraft deviates from its assigned localizer course toward the opposite runway, this constitutes a blunder that could endanger the aircraft on the adjacent path. The worst-case scenario is a blundering aircraft that is unable to recover and continues toward the adjacent runway. PLAND_BLUNDER is a Monte Carlo-type simulation that models the events and aircraft positions during such a blunder situation. The model simulates two aircraft performing parallel ILS approaches using Instrument Flight Rules (IFR) or visual procedures. PLB uses a simple movement model and control law in three dimensions (X, Y, Z). The parameters of the simulation inputs and outputs are defined in this document along with a sample of the statistical analysis. This document is the second volume of a two-volume set. Volume 1 is a description of the application of the PLB to the analysis of close parallel runway operations.
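
The Monte Carlo structure of such a blunder study can be sketched as follows: sample a random detection-plus-response delay per run, convert it into lateral closure toward the adjacent path, and count runs below a separation threshold. Every number here is an illustrative assumption, not a PLB parameter:

```python
import numpy as np

rng = np.random.default_rng(0)

def blunder_miss_distances(n_runs, runway_sep=1035.0, speed=70.0,
                           blunder_deg=30.0, delay_mean=8.0, delay_sd=2.0):
    """Crude Monte Carlo sketch of a blunder toward the adjacent localizer.
    The blunderer crosses at a fixed angle; the evader reacts after a
    normally distributed delay (all values illustrative assumptions)."""
    delay = np.clip(rng.normal(delay_mean, delay_sd, n_runs), 0.0, None)
    closure = speed * np.sin(np.radians(blunder_deg))   # lateral closure, m/s
    return runway_sep - closure * delay                 # separation at reaction

miss = blunder_miss_distances(100_000)
p_conflict = float(np.mean(miss < 150.0))   # fraction closer than a 150 m threshold
```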

  10. Interactive stereoscopic visualization of large-scale astrophysical simulations

    NASA Astrophysics Data System (ADS)

    Kaehler, Ralf; Abel, Tom

    2012-03-01

    In recent decades, three-dimensional, time-dependent numerical simulations have become a standard tool in astrophysics and cosmology. This gave rise to a growing demand for analysis methods tailored to this type of simulation data, for example high-quality visualization approaches such as direct volume rendering and the display of streamlines. The modelled phenomena in numerical astrophysics usually involve complex spatial and temporal structures, and stereoscopic display techniques have proven particularly beneficial for clarifying the spatial relationships of the relevant features. In this paper we present a flexible software framework for interactive stereoscopic visualization of large time-dependent, three-dimensional astrophysical and cosmological simulation datasets. It is designed to enable fast and intuitive creation of complete rendering workflows, from importing datasets and defining various parameters, including camera paths and stereoscopic settings, to storing the final images in various output formats. It leverages the power of modern graphics processing units (GPUs) and supports floating-point precision throughout the whole rendering pipeline. All functionality is scriptable through JavaScript. We give several application examples, including sequences produced for a number of planetarium shows.

  11. A large volume flat coil probe for oriented membrane proteins.

    PubMed

    Gor'kov, Peter L; Chekmenev, Eduard Y; Fu, Riqiang; Hu, Jun; Cross, Timothy A; Cotten, Myriam; Brey, William W

    2006-07-01

    15N detection of mechanically aligned membrane proteins benefits from large sample volumes that compensate for the low sensitivity of the observed nuclei, dilute sample preparations, and the poor filling factor arising from the presence of alignment plates. Use of larger multi-tuned solenoids, however, is limited by wavelength effects that lead to inhomogeneous RF fields across the sample, complicating cross-polarization experiments. We describe a 600 MHz 15N-1H solid-state NMR probe with a large (580 mm^3) RF solenoid for high-power, multi-pulse sequence experiments, such as polarization inversion spin exchange at the magic angle (PISEMA). In order to provide efficient detection for 15N, a 4-turn solenoidal sample coil is used that exceeds 0.27 lambda at the 600 MHz 1H resonance. A balanced tuning-matching circuit is employed to preserve RF homogeneity across the sample for adequate magnetization transfer from 1H to 15N. We describe a procedure for optimization of the shorted lambda/4 coaxial trap that allows sufficiently strong RF fields in both the 1H and 15N channels to be achieved within the power limits of 300 W 1H and 1 kW 15N amplifiers. The 8 x 6 x 12 mm solenoid sustains simultaneous B1 irradiation of 100 kHz at the 1H frequency and 51 kHz at the 15N frequency for at least 5 ms with 265 and 700 W of input power in the respective channels. The probe functionality is demonstrated by 2D 15N-1H PISEMA spectroscopy for two applications at 600 MHz. PMID:16580852

  12. Novel multi-slit large-volume air sampler.

    PubMed

    Buchanan, L M; Decker, H M; Frisque, D E; Phillips, C R; Dahlgren, C M

    1968-08-01

    Scientific investigators who are interested in the various facets of airborne transmission of disease in research laboratories and hospitals need a simple, continuous, high-volume sampling device that will recover a high percentage of viable microorganisms from the atmosphere. Such a device must sample a large quantity of air. It should effect direct transfer of the air into an all-purpose liquid medium in order to collect bacteria, viruses, rickettsia, and fungi, and it should be easy to use. A simple multi-slit impinger sampler that fulfills these requirements has been developed. It operates at an air-sampling rate of 500 liters/min, has a high collection efficiency, functions at a low pressure drop, and, in contrast to some earlier instruments, does not depend upon electrostatic precipitation at high voltages. When compared to the all-glass impinger, the multi-slit impinger sampler collected microbial aerosols of Serratia marcescens at 82% efficiency, and aerosols of Bacillus subtilis var. niger at 78% efficiency. PMID:4970892

  13. Simulation of Large-Scale HPC Architectures

    SciTech Connect

    Jones, Ian S; Engelmann, Christian

    2011-01-01

    The Extreme-scale Simulator (xSim) is a recently developed performance investigation toolkit that permits running high-performance computing (HPC) applications in a controlled environment with millions of concurrent execution threads. It allows observing parallel application performance properties in a simulated extreme-scale HPC system to further assist in HPC hardware and application software co-design on the road toward multi-petascale and exascale computing. This paper presents a newly implemented network model for the xSim performance investigation toolkit that is capable of providing simulation support for a variety of HPC network architectures with an appropriate trade-off between simulation scalability and accuracy. The approach focuses on a scalable distributed solution with latency and bandwidth restrictions for the simulated network. Different network architectures, such as star, ring, mesh, torus, twisted torus and tree, as well as hierarchical combinations, such as those used to simulate network-on-chip and network-on-node designs, are supported. Network traffic congestion modeling is omitted to gain simulation scalability at the cost of reduced simulation accuracy.
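
A latency/bandwidth network model of the kind described above can be sketched in a few lines: message cost is a fixed latency plus a per-hop term plus size over bandwidth, with the hop count taken from the topology (here a k-ary torus with wrap-around links). The cost constants are placeholders, not xSim defaults:

```python
def torus_hops(src, dst, dims):
    """Minimal hop count between two nodes of a torus; wrap-around links
    cap each dimension's contribution at dims[i] // 2."""
    return sum(min(abs(s - d), k - abs(s - d))
               for s, d, k in zip(src, dst, dims))

def message_time(n_bytes, hops, latency=1e-6, bandwidth=5e9, per_hop=50e-9):
    """Latency/bandwidth cost model (illustrative constants)."""
    return latency + hops * per_hop + n_bytes / bandwidth

# 16^3 torus: the farthest node is 8 hops away in each of the 3 dimensions.
t = message_time(1 << 20, torus_hops((0, 0, 0), (8, 8, 8), (16, 16, 16)))
```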

  14. Large-volume flux closure during plasmoid-mediated reconnection in coaxial helicity injection

    DOE PAGES

    Ebrahimi, F.; Raman, R.

    2016-03-23

    A large-volume flux closure during transient coaxial helicity injection (CHI) in NSTX-U is demonstrated through resistive magnetohydrodynamics (MHD) simulations. Several major improvements, including the improved positioning of the divertor poloidal field coils, are projected to improve the CHI start-up phase in NSTX-U. Simulations in the NSTX-U configuration with coil currents held constant in time show that with strong flux shaping the injected open field lines (injector flux) rapidly reconnect and form a large volume of closed flux surfaces. This is achieved by driving parallel current in the injector flux coil and oppositely directed currents in the flux shaping coils to form a narrow injector flux footprint and push the injector flux into the vessel. As the helicity and plasma are injected into the device, the oppositely directed field lines in the injector region are forced to reconnect through a local Sweet-Parker-type reconnection, or to spontaneously reconnect when the elongated current sheet becomes MHD unstable and forms plasmoids. In these simulations, for the first time, it is found that the closed flux is over 70% of the initial injector flux used to initiate the discharge. Furthermore, these results could work well for the application of transient CHI in devices that employ superconducting coils to generate and sustain the plasma equilibrium.

  16. Simulation of large systems with neural networks

    SciTech Connect

    Paez, T.L.

    1994-09-01

    Artificial neural networks (ANNs) have been shown to be capable of simulating the behavior of complex, nonlinear systems, including structural systems. Under certain circumstances, it is desirable to simulate structures that are analyzed with the finite element method. For example, when we perform a probabilistic analysis with the Monte Carlo method, we usually perform numerous (hundreds or thousands of) repetitions of a response simulation with different input and system parameters to estimate the chance of specific response behaviors. In such applications, efficiency in the computation of response is critical, and response simulation with ANNs can be valuable. However, finite element analyses of complex systems involve models with tens or hundreds of thousands of degrees of freedom, and ANNs are practically limited to simulations that involve far fewer variables. This paper develops a technique for reducing the amount of information required to characterize the response of a general structure. We show how the reduced information can be used to train a recurrent ANN. The trained ANN can then be used to simulate the reduced behavior of the original system, and the reduction transformation can be inverted to provide a simulation of the original system. A numerical example is presented.
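
The reduce-train-invert idea above can be sketched with an SVD-based reduction: project the high-dimensional response history onto a few leading modes (the variables an ANN would actually learn), then invert the projection to recover the full field. This is a sketch of the reduction step only, not the paper's specific transformation or network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a finite element response history: 2000 DOFs over
# 200 time steps, but driven by only 3 underlying modes.
modes = rng.standard_normal((2000, 3))
coords = rng.standard_normal((3, 200))
response = modes @ coords

# Reduction transformation: keep the leading left singular vectors.
U, s, Vt = np.linalg.svd(response, full_matrices=False)
basis = U[:, :3]                 # 2000 variables -> 3
reduced = basis.T @ response     # low-dimensional history for the ANN
reconstructed = basis @ reduced  # invert the reduction transformation
err = np.linalg.norm(response - reconstructed) / np.linalg.norm(response)
```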

  17. Description and characterization of a novel method for partial volume simulation in software breast phantoms.

    PubMed

    Chen, Feiyu; Bakic, Predrag R; Maidment, Andrew D A; Jensen, Shane T; Shi, Xiquan; Pokrajac, David D

    2015-10-01

    A modification to our previous simulation of breast anatomy is proposed to improve the quality of simulated x-ray projection images. The image quality is affected by the voxel size of the simulation. Large voxels can cause notable spatial quantization artifacts; small voxels extend the generation time and increase the memory requirements. An improvement in image quality is achievable without reducing voxel size by simulating partial volume averaging, in which voxels containing more than one simulated tissue type are allowed. The linear x-ray attenuation coefficient of a voxel is, thus, the sum of the linear attenuation coefficients weighted by the voxel subvolume occupied by each tissue type. A local planar approximation of the boundary surface is employed. In the two-material case, the partial volume in each voxel is computed by decomposition into up to four simple geometric shapes. In the three-material case, by application of the Gauss-Ostrogradsky theorem, the 3D partial volume problem is converted into a few simpler 2D surface area problems. We illustrate the benefits of the proposed methodology on simulated x-ray projections. An efficient encoding scheme is proposed for the type and proportion of simulated tissues in each voxel. Monte Carlo simulation was used to evaluate the quantitative error of our approximation algorithms. PMID:25910056
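
The subvolume-weighted attenuation rule stated above is simple enough to write down directly. The attenuation coefficients below are placeholder values for illustration, not the phantom's calibrated data:

```python
def voxel_mu(fractions, mu_by_tissue):
    """Effective linear attenuation coefficient of a mixed voxel:
    the subvolume-weighted sum over the tissue types it contains."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    return sum(frac * mu_by_tissue[t] for t, frac in fractions.items())

# Illustrative coefficients (1/cm); placeholder values, not measured data.
mu = {"adipose": 0.46, "glandular": 0.80}
mu_mixed = voxel_mu({"adipose": 0.7, "glandular": 0.3}, mu)  # 0.562
```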

  18. Fluorescence volume imaging with an axicon: simulation study based on scalar diffraction method.

    PubMed

    Zheng, Juanjuan; Yang, Yanlong; Lei, Ming; Yao, Baoli; Gao, Peng; Ye, Tong

    2012-10-20

    In a two-photon excitation fluorescence volume imaging (TPFVI) system, an axicon is used to generate a Bessel beam and, at the same time, to collect the generated fluorescence to achieve a large depth of field. A slice-by-slice diffraction propagation model in the frame of the angular spectrum method is proposed to simulate the whole imaging process of TPFVI. The simulation reveals that the Bessel beam can penetrate deep into scattering media due to its self-reconstruction ability. The simulation also demonstrates that TPFVI can image a volume of interest in a single raster scan. Two-photon excitation is crucial to eliminate the signals that are generated by the side lobes of Bessel beams; the unwanted signals may be further suppressed by placing a spatial filter in front of the detector. The simulation method will guide system design in improving the performance of a TPFVI system. PMID:23089777
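
A single slice of the angular spectrum propagation the model is built on can be sketched as: Fourier-transform the field, multiply each plane-wave component by exp(i*kz*dz) (with evanescent components decaying), and transform back. A minimal sketch of the standard method, not the paper's full slice-by-slice scattering model:

```python
import numpy as np

def angular_spectrum_step(field, dx, wavelength, dz):
    """Propagate a complex scalar field by dz with the angular spectrum method.
    kz is imaginary for evanescent components, so they decay as exp(-|kz|*dz)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                       # spatial frequencies, cycles/unit
    fx2 = fx[:, None] ** 2 + fx[None, :] ** 2
    kz2 = (2 * np.pi / wavelength) ** 2 - (2 * np.pi) ** 2 * fx2
    kz = np.sqrt(kz2.astype(complex))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

# Gaussian beam, 0.5 um light, 1 um sampling: every component propagates,
# so the total power is conserved from slice to slice.
x = (np.arange(128) - 64) * 1.0
field = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * 10.0 ** 2))
out = angular_spectrum_step(field, dx=1.0, wavelength=0.5, dz=50.0)
```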

  19. Large-scale mass distribution in the Illustris simulation

    NASA Astrophysics Data System (ADS)

    Haider, M.; Steinhauser, D.; Vogelsberger, M.; Genel, S.; Springel, V.; Torrey, P.; Hernquist, L.

    2016-04-01

    Observations at low redshifts thus far fail to account for all of the baryons expected in the Universe according to cosmological constraints. A large fraction of the baryons presumably resides in a thin and warm-hot medium between the galaxies, where they are difficult to observe due to their low densities and high temperatures. Cosmological simulations of structure formation can be used to verify this picture and provide quantitative predictions for the distribution of mass in different large-scale structure components. Here we study the distribution of baryons and dark matter at different epochs using data from the Illustris simulation. We identify regions of different dark matter density with the primary constituents of large-scale structure, allowing us to measure mass and volume of haloes, filaments and voids. At redshift zero, we find that 49 per cent of the dark matter and 23 per cent of the baryons are within haloes more massive than the resolution limit of 2 × 10^8 M⊙. The filaments of the cosmic web host a further 45 per cent of the dark matter and 46 per cent of the baryons. The remaining 31 per cent of the baryons reside in voids. The majority of these baryons have been transported there through active galactic nuclei feedback. We note that the feedback model of Illustris is too strong for heavy haloes, therefore it is likely that we are overestimating this amount. Categorizing the baryons according to their density and temperature, we find that 17.8 per cent of them are in a condensed state, 21.6 per cent are present as cold, diffuse gas, and 53.9 per cent are found in the state of a warm-hot intergalactic medium.
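
Classifying mass by local dark-matter density, as done above for haloes, filaments and voids, can be sketched with density thresholds on a field. The field and the thresholds below are arbitrary placeholders, not the Illustris data or the paper's cuts:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in density field (the paper uses the Illustris dark-matter density).
density = rng.lognormal(mean=0.0, sigma=1.5, size=128 ** 3)

# Split cells into three density regimes (thresholds are illustrative).
halo = density > 10.0
void = density < 0.3
filament = ~halo & ~void

total = density.sum()
fractions = {name: density[mask].sum() / total
             for name, mask in [("haloes", halo), ("filaments", filament),
                                ("voids", void)]}
```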

  20. Aeronautical facilities catalogue. Volume 2: Airbreathing propulsion and flight simulators

    NASA Technical Reports Server (NTRS)

    Penaranda, F. E.; Freda, M. S.

    1985-01-01

    Volume two of the facilities catalogue deals with Airbreathing Propulsion and Flight Simulation Facilities. Data pertinent to managers and engineers are presented. Each facility is described on a data sheet that shows the facility's technical parameters on a chart and more detailed information in narratives. Facilities judged comparable in testing capability are noted and grouped together. Several comprehensive cross-indexes and charts are included.

  1. Surface identification, meshing and analysis during large molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Dupuy, Laurent M.; Rudd, Robert E.

    2006-03-01

    Techniques are presented for the identification and analysis of surfaces and interfaces in atomistic simulations of solids. Atomistic and other particle-based simulations have no inherent notion of a surface, only atomic positions and interactions. The algorithms we develop here provide an unambiguous means to determine which atoms constitute the surface, and the list of surface atoms and a tessellation (meshing) of the surface are determined simultaneously. The tessellation is then used to calculate various surface integrals such as volume, area and shape (multipole moments). The principle of surface identification and tessellation is closely related to that used in the generation of the r-reduced surface, a step in the visualization of molecular surfaces used in biology. The algorithms have been implemented and demonstrated to run automatically (on the fly) in a large-scale parallel molecular dynamics (MD) code on a supercomputer. We demonstrate the validity of the method in three applications in which the surfaces and interfaces evolve: void surfaces in ductile fracture, the surface morphology due to significant plastic deformation of a nanoscale metal plate, and the interfaces (grain boundaries) and void surfaces in a nanoscale polycrystalline system undergoing ductile failure. The technique is found to be quite robust, even when the topology of the surfaces changes as in the case of void coalescence where two surfaces merge into one. It is found to add negligible computational overhead to an MD code.
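
Once a closed, outward-oriented triangle tessellation is available, surface integrals such as volume and area reduce to sums over faces (volume via the divergence theorem). A minimal sketch on a unit right tetrahedron, independent of the paper's surface-identification algorithm:

```python
import numpy as np

def mesh_volume_area(vertices, faces):
    """Volume and area of a closed triangle mesh with outward-oriented faces.
    Volume = (1/6) * sum over faces of dot(a, cross(b, c)) by the divergence
    theorem; area = half the cross-product norm per face, summed."""
    a = vertices[faces[:, 0]]
    b = vertices[faces[:, 1]]
    c = vertices[faces[:, 2]]
    volume = np.einsum('ij,ij->', a, np.cross(b, c)) / 6.0
    area = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()
    return volume, area

# Unit right tetrahedron, faces wound so all normals point outward.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
faces = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
vol, area = mesh_volume_area(verts, faces)   # vol = 1/6
```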

  2. The UPSCALE project: a large simulation campaign

    NASA Astrophysics Data System (ADS)

    Mizielinski, Matthew; Roberts, Malcolm; Vidale, Pier Luigi; Schiemann, Reinhard; Demory, Marie-Estelle; Strachan, Jane

    2014-05-01

    The development of a traceable hierarchy of HadGEM3 global climate models, based upon the Met Office Unified Model, at resolutions from 135 km to 25 km now allows the impact of resolution on the mean state, variability and extremes of climate to be studied in a robust fashion. In 2011 we successfully obtained a single-year grant of 144 million core hours of supercomputing time from the PRACE organization to run ensembles of 27-year atmosphere-only (HadGEM3-A GA3.0) climate simulations at 25 km resolution, as used in present global weather forecasting, on HERMIT at HLRS. Through 2012 the UPSCALE project (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) ran over 650 years of simulation at resolutions of 25 km (N512), 60 km (N216) and 135 km (N96) to look at the value of high-resolution climate models in the study of both the present climate and a potential future climate scenario based on RCP8.5. Over 400 TB of data were produced using HERMIT, with additional simulations run on HECToR (UK supercomputer) and MONSooN (Met Office NERC Supercomputing Node). The data generated were transferred to the JASMIN super-data cluster, hosted by STFC CEDA in the UK, where analysis facilities are allowing rapid scientific exploitation of the data set. Many groups across the UK and Europe are already taking advantage of these facilities and we welcome approaches from other interested scientists. This presentation will briefly cover: the purpose and requirements of the UPSCALE project and the facilities used; technical implementation and hurdles (model porting and optimisation, automation, numerical failures, data transfer); the ensemble specification; and current analysis projects and access to the data set. A full description of UPSCALE and the data set generated has been submitted to Geoscientific Model Development, with overview information available from http://proj.badc.rl.ac.uk/upscale .

  3. Entropic effects in large-scale Monte Carlo simulations.

    PubMed

    Predescu, Cristian

    2007-07-01

    The efficiency of Monte Carlo samplers is dictated not only by energetic effects, such as large barriers, but also by entropic effects that are due to the sheer volume that is sampled. The latter effects appear in the form of an entropic mismatch or divergence between the direct and reverse trial moves. We provide lower and upper bounds for the average acceptance probability in terms of the Rényi divergence of order 1/2. We show that the asymptotic finitude of the entropic divergence is the necessary and sufficient condition for nonvanishing acceptance probabilities in the limit of large dimension. Furthermore, we demonstrate that the upper bound is reasonably tight by showing that the exponent is asymptotically exact for systems made up of a large number of independent and identically distributed subsystems. For the last statement, we provide an alternative proof that relies on the reformulation of the acceptance probability as a large deviation problem. The reformulation also leads to a class of low-variance estimators for strongly asymmetric distributions. We show that the entropic divergence causes a decay in the average displacements with the number of dimensions n that are simultaneously updated. For systems that have a well-defined thermodynamic limit, the decay is demonstrated to be n^(-1/2) for random-walk Monte Carlo and n^(-1/6) for smart Monte Carlo (SMC). Numerical simulations of the Lennard-Jones 38 (LJ38) cluster show that SMC is virtually as efficient as the Markov chain implementation of the Gibbs sampler, which is normally utilized for Lennard-Jones clusters. An application of the entropic inequalities to the parallel tempering method demonstrates that the number of replicas increases as the square root of the heat capacity of the system. PMID:17677591
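
The entropic decay of acceptance with the number of simultaneously updated dimensions is easy to observe empirically: for a standard normal target and a fixed per-coordinate random-walk step, the Metropolis acceptance rate drops as more coordinates move at once. A small sketch (Gaussian target and step size are assumptions for illustration, not the paper's systems):

```python
import numpy as np

rng = np.random.default_rng(3)

def rw_acceptance(n_dim, step=0.5, n_steps=20_000):
    """Average Metropolis acceptance for a standard normal target when all
    n_dim coordinates are updated at once by uniform random-walk moves."""
    x = np.zeros(n_dim)
    e = 0.0                       # energy = |x|^2 / 2
    accepted = 0
    for _ in range(n_steps):
        y = x + rng.uniform(-step, step, n_dim)
        e_new = 0.5 * np.dot(y, y)
        if rng.random() < np.exp(min(0.0, e - e_new)):
            x, e = y, e_new
            accepted += 1
    return accepted / n_steps

acc_small, acc_large = rw_acceptance(2), rw_acceptance(50)
# acceptance drops as more coordinates are updated simultaneously
```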

  4. Simulating stochastic dynamics using large time steps.

    PubMed

    Corradini, O; Faccioli, P; Orland, H

    2009-12-01

    We present an approach to investigate the long-time stochastic dynamics of multidimensional classical systems in contact with a heat bath. When the potential energy landscape is rugged, the kinetics displays a decoupling of short and long time scales, and both molecular dynamics and Monte Carlo (MC) simulations are generally inefficient. Using a field-theoretic approach, we perform analytically the average over the short-time stochastic fluctuations. This way, we obtain an effective theory which generates the same long-time dynamics as the original theory but has a lower time-resolution power. This approach is used to develop an improved version of the MC algorithm, which is particularly suitable for investigating the dynamics of rare conformational transitions. In the specific case of molecular systems at room temperature, we show that the elementary integration time steps used to simulate the effective theory can be chosen a factor of approximately 100 larger than those used in the original theory. Our results are illustrated and tested on a simple system characterized by a rugged energy landscape. PMID:20365123
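    The kind of baseline dynamics being coarse-grained here can be illustrated with a plain Euler-Maruyama integration of overdamped Langevin dynamics on a rugged 1-D potential. All parameters below are illustrative, and this is the ordinary small-time-step integrator, not the paper's effective-theory algorithm:

```python
import math, random

def dVdx(x):
    # Rugged landscape: harmonic well plus small fast oscillations (assumed form).
    return x + 2.0 * math.cos(20.0 * x)

def euler_maruyama(x0, dt, nsteps, temperature=1.0, seed=0):
    """Integrate overdamped Langevin dynamics dx = -V'(x) dt + sqrt(2 T dt) * eta."""
    rng = random.Random(seed)
    x = x0
    for _ in range(nsteps):
        noise = rng.gauss(0.0, math.sqrt(2.0 * temperature * dt))
        x += -dVdx(x) * dt + noise
    return x

# The fast oscillations in dVdx force dt to stay small; coarse-graining them away
# is what lets the effective theory take much larger steps.
x_end = euler_maruyama(0.0, dt=1e-3, nsteps=10_000)
print(x_end)
```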

  5. Resonant RF network antennas for large-area and large-volume inductively coupled plasma sources

    NASA Astrophysics Data System (ADS)

    Hollenstein, Ch; Guittienne, Ph; Howling, A. A.

    2013-10-01

    Large-area and large-volume radio frequency (RF) plasmas are produced by different arrangements of an elementary electrical mesh consisting of two conductors interconnected by a capacitor at each end. The obtained cylindrical and planar RF networks are resonant and generate very high RF currents. The input impedance of such RF networks shows the behaviour of an RLC parallel resonance equivalent circuit. The real impedance at the resonance frequency is of great advantage for power matching compared with conventional inductive devices. Changes in the RLC equivalent circuit during the observed E-H transition will allow future interpretation of the plasma-antenna coupling. Furthermore, high power transfer efficiencies are found during inductively coupled plasma (ICP) operation. For the planar RF antenna network it is shown that the E-H transition occurs simultaneously over the entire antenna. The underlying physics of these discharges induced by the resonant RF network antenna is found to be identical to that of the conventional ICP devices described in the literature. The resonant RF network antenna is a new versatile plasma source, which can be adapted to applications in industry and research.
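    The quoted RLC parallel-resonance behaviour of the antenna input impedance is easy to check numerically: at the resonance frequency the inductive and capacitive branch admittances cancel and the impedance becomes purely real. A sketch with hypothetical component values, not measured antenna parameters:

```python
import math

def z_parallel_rlc(f, R, L, C):
    """Input impedance of a parallel RLC circuit at frequency f (Hz)."""
    w = 2 * math.pi * f
    Y = 1 / R + 1j * w * C + 1 / (1j * w * L)  # admittance of the three parallel branches
    return 1 / Y

# Hypothetical component values for illustration (not from the paper).
R, L, C = 2e3, 1e-6, 1e-9                    # ohms, henries, farads
f0 = 1 / (2 * math.pi * math.sqrt(L * C))    # resonance frequency, ~5 MHz here
Z0 = z_parallel_rlc(f0, R, L, C)
print(f0, Z0)  # at resonance the impedance is real and equals R, easing power matching
```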

  6. The 1980 Large space systems technology. Volume 2: Base technology

    NASA Technical Reports Server (NTRS)

    Kopriver, F., III (Compiler)

    1981-01-01

    Technology pertinent to large antenna systems, technology related to large space platform systems, and base technology applicable to both antenna and platform systems are discussed. Design studies, structural testing results, and theoretical applications are presented with accompanying validation data. A total systems approach including controls, platforms, and antennas is presented as a cohesive, programmatic plan for large space systems.

  7. Large-Eddy Simulation of Wind-Plant Aerodynamics: Preprint

    SciTech Connect

    Churchfield, M. J.; Lee, S.; Moriarty, P. J.; Martinez, L. A.; Leonardi, S.; Vijayakumar, G.; Brasseur, J. G.

    2012-01-01

    In this work, we present results of a large-eddy simulation of the 48 multi-megawatt turbines composing the Lillgrund wind plant. Turbulent inflow wind is created by performing an atmospheric boundary layer precursor simulation and turbines are modeled using a rotating, variable-speed actuator line representation. The motivation for this work is that few others have done wind plant large-eddy simulations with a substantial number of turbines, and the methods for carrying out the simulations are varied. We wish to draw upon the strengths of the existing simulations and our growing atmospheric large-eddy simulation capability to create a sound methodology for performing this type of simulation. We have used the OpenFOAM CFD toolbox to create our solver.

  8. An Ultrascalable Solution to Large-scale Neural Tissue Simulation

    PubMed Central

    Kozloski, James; Wagner, John

    2011-01-01

    Neural tissue simulation extends requirements and constraints of previous neuronal and neural circuit simulation methods, creating a tissue coordinate system. We have developed a novel tissue volume decomposition, and a hybrid branched cable equation solver. The decomposition divides the simulation into regular tissue blocks and distributes them on a parallel multithreaded machine. The solver computes neurons that have been divided arbitrarily across blocks. We demonstrate thread, strong, and weak scaling of our approach on a machine with more than 4000 nodes and up to four threads per node. Scaling synapses to physiological numbers had little effect on performance, since our decomposition approach generates synapses that are almost always computed locally. The largest simulation included in our scaling results comprised 1 million neurons, 1 billion compartments, and 10 billion conductance-based synapses and gap junctions. We discuss the implications of our ultrascalable Neural Tissue Simulator, and with our results estimate requirements for a simulation at the scale of a human brain. PMID:21954383
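    The regular tissue-block decomposition described above can be sketched as a coordinate-to-block mapping plus a block-to-rank rule. Block size, rank count, and the hash-based assignment here are assumptions for illustration, not the Neural Tissue Simulator's actual scheme:

```python
# Minimal sketch of a regular tissue-volume decomposition.

BLOCK = 100.0   # block edge length in micrometres (assumed)
NRANKS = 16     # number of parallel processes (assumed)

def block_id(x, y, z):
    """Index of the regular tissue block containing point (x, y, z)."""
    return (int(x // BLOCK), int(y // BLOCK), int(z // BLOCK))

def rank_of(block):
    """Deterministic block-to-rank mapping (hash-based, hypothetical)."""
    return hash(block) % NRANKS

# Two nearby compartments of one neuron may land in different blocks;
# the solver must then compute that neuron across ranks.
a, b = block_id(95.0, 10.0, 10.0), block_id(105.0, 10.0, 10.0)
print(a, b, rank_of(a), rank_of(b))
```

Because most synapses connect nearby points, a decomposition like this keeps almost all synapse computation local to one block, consistent with the scaling result quoted above.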

  10. Large Eddy Simulation of a Sooting Jet Diffusion Flame

    NASA Astrophysics Data System (ADS)

    Blanquart, Guillaume; Pitsch, Heinz

    2007-11-01

    The understanding of soot particle dynamics in combustion systems is a key issue in the development of low emission engines. Of particular importance are the processes shaping the soot particle size distribution function (PSDF). However, it is not always necessary to represent exactly the full distribution, and often information about its moments only is sufficient. The Direct Quadrature Method of Moments (DQMOM) allows for an efficient and accurate prediction of the moments of the soot PSDF. This method has been validated for laminar premixed and diffusion flames with detailed chemistry and is now implemented in a semi-implicit low Mach-number Navier-Stokes solver. A Large Eddy Simulation (LES) of a piloted sooting jet diffusion flame (Delft flame) is performed to study the dynamics of soot particles in a turbulent environment. The profiles of temperature and major species are compared with the experimental measurements. Soot volume fraction profiles are compared with the recent data of Qamar et al. (2007). Aggregate properties such as the diameter and the fractal shape are studied in the scope of DQMOM.
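    The moment representation underlying DQMOM can be illustrated in a few lines: the PSDF is carried as a small set of weighted quadrature nodes, and any moment is then a finite sum over them. Weights and abscissas below are hypothetical:

```python
# Quadrature representation of a particle size distribution function (PSDF).

weights = [0.6, 0.3, 0.1]   # number densities of the quadrature nodes (assumed)
sizes = [1.0, 2.0, 4.0]     # representative particle sizes, arbitrary units (assumed)

def moment(k):
    """k-th moment of the PSDF under the quadrature representation:
    m_k = sum_i w_i * x_i^k."""
    return sum(w * x**k for w, x in zip(weights, sizes))

m0 = moment(0)   # total number density
m1 = moment(1)   # mean-size-weighted density; soot volume fraction scales with m3
print(m0, m1, moment(3))
```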

  11. Large Eddy Simulation of Crashback in Marine Propulsors

    NASA Astrophysics Data System (ADS)

    Jang, Hyunchul

    Crashback is an operating condition used to quickly stop a propelled vehicle, in which the propeller is rotated in the reverse direction to yield negative thrust. The crashback condition is dominated by the interaction of the free-stream flow with the strong reverse flow. This interaction forms a highly unsteady vortex ring, which is a very prominent feature of crashback. Crashback causes highly unsteady loads and flow separation on the blade surface. The unsteady loads can cause propulsor blade damage and also affect vehicle maneuverability. Crashback is therefore well known as one of the most challenging propeller states to analyze. This dissertation uses Large-Eddy Simulation (LES) to predict the highly unsteady flow field in crashback. A non-dissipative and robust finite volume method developed by Mahesh et al. (2004) for unstructured grids is applied to the flow around marine propulsors. The LES equations are written in a rotating frame of reference. The objectives of this dissertation are: (1) to understand the flow physics of crashback in marine propulsors with and without a duct, (2) to develop a finite volume method for the highly skewed meshes that commonly occur in complex propulsor geometries, and (3) to develop a sliding interface method for simulations of rotor-stator propulsors on parallel platforms. LES is performed for an open propulsor in crashback and validated against experiments performed by Jessup et al. (2004). The LES results show good agreement with experiments. Effective pressures for thrust and side-force are introduced to more clearly understand the physical sources of thrust and side-force. Both thrust and side-force are seen to be generated mainly at the leading edge of the suction side of the propeller, implying that thrust and side-force have the same source: the highly unsteady leading-edge separation. Conditional averaging is performed to obtain quantitative information about the complex flow physics of high- and low-amplitude events.

  12. Performance of large electron energy filter in large volume plasma device

    SciTech Connect

    Singh, S. K.; Srivastava, P. K.; Awasthi, L. M.; Mattoo, S. K.; Sanyasi, A. K.; Kaw, P. K.; Singh, R.

    2014-03-15

    This paper describes an in-house-designed large Electron Energy Filter (EEF) utilized in the Large Volume Plasma Device (LVPD) [S. K. Mattoo, V. P. Anita, L. M. Awasthi, and G. Ravi, Rev. Sci. Instrum. 72, 3864 (2001)] to (a) remove remnant primary ionizing energetic electrons and non-thermal electrons, (b) introduce a radial gradient in plasma electron temperature without greatly affecting the radial profile of plasma density, and (c) provide control over the scale length of the electron temperature gradient. A set of 19 independent coils forms a variable-aspect-ratio rectangular solenoid producing a magnetic field (B_x) of 100 G along the EEF axis, transverse to the ambient axial field (B_z ∼ 6.2 G) of the LVPD, when all coils are used. Outside the EEF, the magnetic field falls rapidly to 1 G at a distance of 20 cm from the center of the solenoid on either the target or the source side. The EEF divides the LVPD plasma into three distinct regions: source, EEF and target plasma. We report that the target plasma (n_e ∼ 2 × 10^11 cm^-3 and T_e ∼ 2 eV) has no detectable energetic electrons, and that radial gradients in its electron temperature can be established with scale lengths between 50 and 600 cm by controlling the EEF magnetic field. Our observations reveal that the role of the EEF magnetic field is manifested through the energy dependence of transverse electron transport and the enhanced transport caused by plasma turbulence in the EEF plasma.

  13. WEST-3 wind turbine simulator development: Volume 3, Software

    SciTech Connect

    Hoffman, J.A.; Sridhar, S.

    1985-07-01

    This report deals with the software developed for WEST-3, a new, all-digital, fully programmable wind turbine simulator developed by Paragon Pacific Inc. The process of wind turbine simulation on WEST-3 is described in detail. The major steps are the processing of the mathematical models, the preparation of the constant data, and the use of system software to generate executable code for running on WEST-3. The mechanics of reformulation, normalization, and scaling of the mathematical models are discussed in detail, in particular the significance of reformulation, which leads to accurate simulations. Descriptions are given of the preprocessor computer programs used to prepare the constant data needed in the simulation. These programs, in addition to scaling and normalizing all the constants, relieve the user from having to generate a large number of constants used in the simulation. Also given in the report are brief descriptions of the components of the WEST-3 system software: Translator, Assembler, Linker, and Loader. Also included are details of the aeroelastic rotor analysis, which is the centerpiece of a wind turbine simulation model; an analysis of the gimbal subsystem; and listings of the variables, constants, and equations used in the simulation.

  14. Large-Eddy Simulation of Wind-Plant Aerodynamics

    SciTech Connect

    Churchfield, M. J.; Lee, S.; Moriarty, P. J.; Martinez, L. A.; Leonardi, S.; Vijayakumar, G.; Brasseur, J. G.

    2012-01-01

    In this work, we present results of a large-eddy simulation of the 48 multi-megawatt turbines composing the Lillgrund wind plant. Turbulent inflow wind is created by performing an atmospheric boundary layer precursor simulation, and turbines are modeled using a rotating, variable-speed actuator line representation. The motivation for this work is that few others have done large-eddy simulations of wind plants with a substantial number of turbines, and the methods for carrying out the simulations are varied. We wish to draw upon the strengths of the existing simulations and our growing atmospheric large-eddy simulation capability to create a sound methodology for performing this type of simulation. We used the OpenFOAM CFD toolbox to create our solver. The simulated time-averaged power production of the turbines in the plant agrees well with field observations, except for the sixth turbine and beyond in each wind-aligned column. The power produced by each of those turbines is overpredicted by 25-40%. A direct comparison between simulated and field data is difficult because we simulate one wind direction with a speed and turbulence intensity characteristic of Lillgrund, whereas the field observations were taken over a year of varying conditions. The simulation shows the significant 60-70% decrease in the performance of the turbines behind the front row in this plant, which has a spacing of 4.3 rotor diameters in this direction. The overall plant efficiency is well predicted. This work shows the importance of using local grid refinement to simultaneously capture the meter-scale details of the turbine wake and the kilometer-scale turbulent atmospheric structures. Although this work illustrates the power of large-eddy simulation in producing a time-accurate solution, it required about one million processor-hours, showing the significant cost of large-eddy simulation.

  15. Climate Simulations with an Isentropic Finite Volume Dynamical Core

    SciTech Connect

    Chen, Chih-Chieh; Rasch, Philip J.

    2012-04-15

    This paper discusses the impact of changing the vertical coordinate from a hybrid pressure to a hybrid-isentropic coordinate within the finite volume dynamical core of the Community Atmosphere Model (CAM). Results from a 20-year climate simulation using the new coordinate configuration are compared to control simulations produced by the Eulerian spectral and finite volume dynamical cores of CAM, which both use a pressure-based (σ-p) coordinate. The same physical parameterization package is employed in all three dynamical cores. The isentropic modeling framework significantly alters the simulated climatology and has several desirable features. The revised model produces a better representation of heat transport processes in the atmosphere, leading to much improved atmospheric temperatures. We show that the isentropic model is very effective in reducing the long-standing cold temperature bias in the upper troposphere and lower stratosphere, a deficiency shared among most climate models. The warmer upper troposphere and stratosphere seen in the isentropic model reduces the global coverage of high clouds, which is in better agreement with observations. The isentropic model also shows improvements in the simulated wintertime mean sea-level pressure field in the northern hemisphere.

  16. Numerical simulation of the decay of swirling flow in a constant volume engine simulator

    SciTech Connect

    Cloutman, L.D.

    1986-05-01

    The KIVA and COYOTE computer programs were used to simulate the decay of turbulent swirling flow in a constant-volume combustion bomb. The results are in satisfactory agreement with measurements of both swirl velocity and temperature. Predictions of secondary flows and suggestions for future research are also presented. 14 refs., 15 figs.

  17. Numerical simulation of the decay of swirling flow in a constant volume engine simulator

    NASA Astrophysics Data System (ADS)

    Cloutman, Lawrence D.

    1986-05-01

    The KIVA and COYOTE computer programs were used to simulate the decay of turbulent swirling flow in a constant-volume combustion bomb. The results are in satisfactory agreement with measurements of both swirl velocity and temperature. Predictions of secondary flows and suggestions for future research are also presented.

  18. Development of large volume double ring penning plasma discharge source for efficient light emissions

    SciTech Connect

    Prakash, Ram; Vyas, Gheesa Lal; Jain, Jalaj; Prajapati, Jitendra; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana

    2012-12-15

    In this paper, the development of a large-volume double-ring Penning plasma discharge source for efficient light emissions is reported. The developed Penning discharge source consists of two cylindrical stainless steel end cathodes with a radius of 6 cm and a gap of 5.5 cm between them, fitted in the top and bottom flanges of the vacuum chamber. Two stainless steel anode rings with a thickness of 0.4 cm and inner diameters of 6.45 cm, separated by 2 cm, are kept at the discharge centre. Neodymium (Nd2Fe14B) permanent magnets are physically inserted behind the cathodes, producing a nearly uniform magnetic field of ∼0.1 T at the center. Experiments and simulations have been performed for single and double anode ring configurations using a helium gas discharge, which infer that the double-ring configuration gives better light emissions in the large-volume Penning plasma discharge arrangement. Optical emission spectroscopy measurements are used to complement the observations. The spectral line-ratio technique is utilized to determine the electron plasma density. The estimated electron plasma density in the double-ring configuration is ∼2 × 10^11 cm^-3, around one order of magnitude larger than that of the single-ring arrangement.
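    The line-ratio technique mentioned above inverts a measured intensity ratio of two emission lines through a density-dependent calibration curve. A sketch with a purely hypothetical calibration table (a real curve comes from a collisional-radiative model):

```python
# Hypothetical (ratio, log10 n_e) calibration pairs, monotone in density.
calib = [(0.5, 10.0), (0.8, 10.5), (1.2, 11.0), (1.8, 11.5), (2.6, 12.0)]

def density_from_ratio(r):
    """Linearly interpolate log10(n_e / cm^-3) from a measured line-intensity ratio,
    then return n_e itself."""
    for (r0, d0), (r1, d1) in zip(calib, calib[1:]):
        if r0 <= r <= r1:
            t = (r - r0) / (r1 - r0)
            return 10 ** (d0 + t * (d1 - d0))
    raise ValueError("ratio outside calibration range")

# A ratio of 1.2 on this (made-up) curve maps to n_e = 1e11 cm^-3.
print(density_from_ratio(1.2))
```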

  19. Testbed for large volume surveillance through distributed fusion and resource management

    NASA Astrophysics Data System (ADS)

    Valin, Pierre; Guitouni, Adel; Bossé, Éloi; Wehn, Hans; Yates, Richard; Zwick, Harold

    2007-04-01

    DRDC Valcartier has initiated, through a PRECARN partnership project, the development of an advanced simulation testbed for evaluating the effectiveness of Network Enabled Operations in a coastal large-volume surveillance situation. The main focus of this testbed is to study concepts like distributed information fusion, dynamic resource and network configuration management, and self-synchronising units and agents. This article presents the requirements, design and first implementation builds, and reports on some preliminary results. The testbed models distributed nodes performing information fusion, dynamic resource management planning and scheduling, as well as configuration management, given multiple constraints on the resources and their communications networks. Two situations are simulated: cooperative and non-cooperative target search. A cooperative surface target behaves in ways that favour being detected (and rescued), while an elusive target attempts to avoid detection. The current simulation consists of a networked set of surveillance assets including aircraft (UAVs, helicopters, maritime patrol aircraft) and ships. These assets have electro-optical and infrared sensors, and scanning and imaging radar capabilities. Since full data sharing over datalinks is not feasible, own-platform data fusion must be simulated to evaluate the implementation and performance of distributed information fusion. Special emphasis is placed on higher-level fusion concepts using knowledge-based rules, with level 1 fusion already providing tracks. Surveillance platform behavior is also simulated in order to evaluate different dynamic resource management algorithms. Additionally, communication networks are modeled to simulate different information exchange concepts. The testbed allows the evaluation of a range of control strategies, from independent platform search, through various levels of platform collaboration, up to centralized control of search platforms.

  20. Large volume liquid helium relief device verification apparatus for the alpha magnetic spectrometer

    NASA Astrophysics Data System (ADS)

    Klimas, Richard John; McIntyre, P.; Colvin, John; Zeigler, John; Van Sciver, Steven; Ting, Samual

    2012-06-01

    Here we present details of an experiment for verifying the liquid helium vessel relief device for the Alpha Magnetic Spectrometer-02 (AMS-02). The relief device utilizes a series of rupture discs designed to open in the event of a vacuum failure of the AMS-02 cryogenic system. A failure of this type is classified as a catastrophic loss-of-insulating-vacuum accident. This apparatus differs from other approaches in the size of the test volumes used. The verification apparatus consists of a 250-liter vessel for the test quantity of liquid helium, located inside a vacuum-insulated vessel. A large-diameter valve is suddenly opened to simulate the loss of insulating vacuum in a repeatable manner. Pressure and temperature vs. time data are presented and discussed in the context of the AMS-02 hardware configuration.

  1. Large space telescope, phase A. Volume 3: Optical telescope assembly

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The development and characteristics of the optical telescope assembly for the Large Space Telescope are discussed. The systems considerations are based on mission-related parameters and optical equipment requirements. Information is included on: (1) structural design and analysis, (2) thermal design, (3) stabilization and control, (4) alignment, focus, and figure control, (5) electronic subsystem, and (6) scientific instrument design.

  2. Large space telescope, phase A. Volume 4: Scientific instrument package

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The design and characteristics of the scientific instrument package for the Large Space Telescope are discussed. The subjects include: (1) general scientific objectives, (2) package system analysis, (3) scientific instrumentation, (4) imaging photoelectric sensors, (5) environmental considerations, and (6) reliability and maintainability.

  3. RADON DIAGNOSTIC MEASUREMENT GUIDANCE FOR LARGE BUILDINGS - VOLUME 2. APPENDICES

    EPA Science Inventory

    The report discusses the development of radon diagnostic procedures and mitigation strategies applicable to a variety of large non-residential buildings commonly found in Florida. The investigations document and evaluate the nature of radon occurrence and entry mechanisms for rad...

  4. Large space telescope, phase A. Volume 5: Support systems module

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The development and characteristics of the support systems module for the Large Space Telescope are discussed. The following systems are described: (1) thermal control, (2) electrical, (3) communication and data handling, (4) attitude control system, and (5) structural features. Analyses of maintainability and reliability considerations are included.

  5. Exact-Differential Large-Scale Traffic Simulation

    SciTech Connect

    Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios; Perumalla, Kalyan S

    2015-01-01

    Analyzing large-scale traffic by simulation requires executing runs many times with various patterns of scenarios or parameters. Such repeated execution introduces substantial redundancy, because the change from one scenario to the next is usually minor, for example, blocking only one road or changing the speed limit of a few roads. In this paper, we propose a new redundancy-reduction technique, called exact-differential simulation, which makes it possible to simulate only the changed portions of a scenario in later executions while producing exactly the same results as a full simulation. The paper consists of two main efforts: (i) the key idea and algorithm of exact-differential simulation, and (ii) a method to build a large-scale traffic simulation on top of it. In experiments with a Tokyo traffic simulation, exact-differential simulation achieves a 7.26-fold improvement in elapsed time on average, and a 2.26-fold improvement even in the worst case, compared with whole simulation.
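    The exact-differential idea can be caricatured in a few lines: cache per-element results from a baseline run and recompute only the elements a scenario changes, reproducing the full simulation's result exactly. This toy treats road times as independent and so ignores the hard part the paper addresses (propagating changes through interacting traffic):

```python
def road_time(length_km, speed_kmh):
    """Traversal time of one road in seconds."""
    return length_km / speed_kmh * 3600.0

# Hypothetical network: road -> (length_km, speed_kmh).
baseline = {"A": (5.0, 50.0), "B": (3.0, 30.0), "C": (8.0, 60.0)}
cache = {r: road_time(*p) for r, p in baseline.items()}   # full baseline "simulation"

def differential_total(changes):
    """Total travel time for a scenario given as {road: (length, speed)} overrides,
    recomputing only the changed roads and reusing cached results for the rest."""
    return sum(road_time(*changes[r]) if r in changes else cache[r] for r in baseline)

scenario = {"B": (3.0, 15.0)}   # halve one road's speed limit
full = sum(road_time(*scenario.get(r, p)) for r, p in baseline.items())  # whole re-run
assert differential_total(scenario) == full   # exactly the same result, less work
print(differential_total(scenario))
```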

  6. Real-time visualization of large volume datasets on standard PC hardware.

    PubMed

    Xie, Kai; Yang, Jie; Zhu, Y M

    2008-05-01

    In the medical field, interactive three-dimensional visualization of large volume datasets is a challenging task. One of the major challenges in graphics processing unit (GPU)-based volume rendering algorithms is the limited size of texture memory imposed by current GPU architectures. We attempt to overcome this limitation by rendering only the visible parts of large CT datasets. In this paper, we present an efficient, high-quality volume rendering algorithm using GPUs for rendering large CT datasets at interactive frame rates on standard PC hardware. We subdivide the volume dataset into uniformly sized blocks and take advantage of combinations of early ray termination, empty-space skipping and visibility culling to accelerate the whole rendering process and render only the visible parts of the volume data. We have implemented our volume rendering algorithm for a large volume dataset of 512 x 304 x 1878 dimensions (visible female) and achieved real-time performance (i.e., 3-4 frames per second) on a Pentium 4 2.4 GHz PC equipped with an NVIDIA GeForce 6600 graphics card (256 MB video memory). This method can be used as a 3D visualization tool for large CT datasets for doctors or radiologists. PMID:18243401
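    Two of the accelerations named above, empty-space skipping and early ray termination, can be shown in a minimal CPU ray-marching sketch along a single ray (hypothetical opacity data; real implementations do this per block on the GPU):

```python
# Front-to-back compositing along one ray through a row of voxels.

BLOCK = 4
voxels = [0.0] * 8 + [0.6, 0.7, 0.8, 0.9] + [0.5] * 4   # per-voxel opacity (assumed data)

# Precomputed per-block emptiness: a block whose max opacity is 0 can be skipped.
blocks_empty = [max(voxels[i:i + BLOCK]) == 0.0 for i in range(0, len(voxels), BLOCK)]

def march(threshold=0.95):
    """Return (accumulated opacity, voxels actually sampled) for one ray."""
    alpha, steps, i = 0.0, 0, 0
    while i < len(voxels):
        if blocks_empty[i // BLOCK]:          # empty-space skipping
            i = (i // BLOCK + 1) * BLOCK
            continue
        alpha += (1.0 - alpha) * voxels[i]    # front-to-back alpha compositing
        steps += 1
        if alpha >= threshold:                # early ray termination
            break
        i += 1
    return alpha, steps

print(march())   # only 3 of 16 voxels are sampled: 8 skipped as empty, rest terminated
```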

  7. Special Properties of Coherence Scanning Interferometers for large Measurement Volumes

    NASA Astrophysics Data System (ADS)

    Bauer, W.

    2011-08-01

    In contrast to many other optical methods, the uncertainty of a Coherence Scanning Interferometer (CSI) in the vertical direction is independent of the field of view. CSIs are therefore ideal instruments for measuring 3D profiles of larger areas (36 × 28 mm^2, e.g.) with high precision. This is of advantage for the determination of form parameters like flatness, parallelism and step heights within a short time. In addition, a telecentric beam path allows measurements of deep-lying surfaces (<70 mm) and the determination of form parameters with large step heights. The lateral and spatial resolution, however, are reduced. In this presentation, different metrological characteristics, together with their potential errors, are analyzed for large-scale measuring CSIs. These instruments are therefore ideal tools for good/bad selection in quality control. The consequences for practical use in industry and for standardization are discussed using examples of workpieces from automotive suppliers and the steel industry.

  8. Modelling and simulation of large solid state laser systems

    SciTech Connect

    Simmons, W.W.; Warren, W.E.

    1986-01-01

    The role of numerical methods to simulate the several physical processes (e.g., diffraction, self-focusing, gain saturation) that are involved in coherent beam propagation through large laser systems is discussed. A comprehensive simulation code for modeling the pertinent physical phenomena observed in laser operations (growth of small-scale modulation, spatial filter, imaging, gain saturation and beam-induced damage) is described in some detail. Comparisons between code results and solid state laser output performance data are presented. Design and performance estimation of the large Nova laser system at LLNL are given. Finally, a global design rule for large, solid state laser systems is discussed.

  9. Probing the Earth’s interior with a large-volume liquid scintillator detector

    NASA Astrophysics Data System (ADS)

    Hochmuth, Kathrin A.; Feilitzsch, Franz V.; Fields, Brian D.; Undagoitia, Teresa Marrodán; Oberauer, Lothar; Potzel, Walter; Raffelt, Georg G.; Wurm, Michael

    2007-02-01

    A future large-volume liquid scintillator detector would provide a high-statistics measurement of terrestrial antineutrinos originating from β-decays of the uranium and thorium chains. In addition, the forward displacement of the neutron in the detection reaction ν̄ + p → n + e⁺ provides directional information. We investigate the requirements on such detectors to distinguish between certain geophysical models on the basis of the angular dependence of the geoneutrino flux. Our analysis is based on a Monte-Carlo simulation with different levels of light yield, considering both unloaded and gadolinium-loaded scintillators. We find that a 50 kt detector such as the proposed LENA (Low Energy Neutrino Astronomy) will detect deviations from isotropy of the geoneutrino flux significantly. However, with an unloaded scintillator the time needed for a useful discrimination between different geophysical models is too large if one uses the directional information alone. A Gd-loaded scintillator improves the situation considerably, although a 50 kt detector would still need several decades to distinguish between a geophysical reference model and one with a large neutrino source in the Earth’s core. However, a high-statistics measurement of the total geoneutrino flux and its spectrum still provides an extremely useful glance at the Earth’s interior.

  10. Evaluation of Large Volume SrI2(Eu) Scintillator Detectors

    SciTech Connect

    Sturm, B W; Cherepy, N J; Drury, O B; Thelin, P A; Fisher, S E; Magyar, A F; Payne, S A; Burger, A; Boatner, L A; Ramey, J O; Shah, K S; Hawrami, R

    2010-11-18

    There is an ever-increasing demand for gamma-ray detectors which can achieve good energy resolution, high detection efficiency, and room-temperature operation. We are working to address each of these requirements through the development of large-volume SrI2(Eu) scintillator detectors. In this work, we have evaluated a variety of SrI2 crystals with volumes >10 cm^3. The goal of this research was to examine the causes of energy resolution degradation for larger detectors and to determine what can be done to mitigate these effects. Testing both packaged and unpackaged detectors, we have consistently achieved better resolution with the packaged detectors. Using a collimated gamma-ray source, it was determined that better energy resolution for the packaged detectors is correlated with better light collection uniformity. A number of packaged detectors were fabricated and tested, and the best spectroscopic performance was achieved for a 3% Eu-doped crystal with an energy resolution of 2.93% FWHM at 662 keV. Simulations of SrI2(Eu) crystals were also performed to better understand the light transport physics in scintillators and are reported. This study has important implications for the development of SrI2(Eu) detectors for national security purposes.
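    The quoted figure of merit, percent FWHM at 662 keV, follows directly from a Gaussian fit of the photopeak, since FWHM = 2·√(2 ln 2)·σ ≈ 2.355·σ. A quick check of the arithmetic:

```python
import math

FWHM_PER_SIGMA = 2.0 * math.sqrt(2.0 * math.log(2.0))   # ≈ 2.355 for a Gaussian peak

def resolution_percent(sigma_keV, centroid_keV):
    """Percent FWHM energy resolution from a fitted Gaussian width and centroid."""
    return 100.0 * FWHM_PER_SIGMA * sigma_keV / centroid_keV

# The reported 2.93% FWHM at the 662 keV 137Cs line corresponds to a fitted
# Gaussian sigma of about 8.2 keV (FWHM about 19.4 keV).
sigma = 0.0293 * 662.0 / FWHM_PER_SIGMA
print(sigma, resolution_percent(sigma, 662.0))
```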

  11. Large-volume protein crystal growth for neutron macromolecular crystallography

    SciTech Connect

    Ng, Joseph D.; Baird, James K.; Coates, Leighton; Garcia-Ruiz, Juan M.; Hodge, Teresa A.; Huang, Sijay

    2015-03-30

    Neutron macromolecular crystallography (NMC) is the prevailing method for the accurate determination of the positions of H atoms in macromolecules. As neutron sources are becoming more available to general users, finding means to optimize the growth of protein crystals to sizes suitable for NMC is extremely important. Historically, much has been learned about growing crystals for X-ray diffraction. However, owing to new-generation synchrotron X-ray facilities and sensitive detectors, protein crystal sizes as small as in the nano-range have become adequate for structure determination, lessening the necessity to grow large crystals. Here, some of the approaches, techniques and considerations for the growth of crystals to significant dimensions that are now relevant to NMC are revisited. We report that these include experimental strategies utilizing solubility diagrams, ripening effects, classical crystallization techniques, microgravity and theoretical considerations.

  12. Large-volume protein crystal growth for neutron macromolecular crystallography.

    PubMed

    Ng, Joseph D; Baird, James K; Coates, Leighton; Garcia-Ruiz, Juan M; Hodge, Teresa A; Huang, Sijay

    2015-04-01

    Neutron macromolecular crystallography (NMC) is the prevailing method for the accurate determination of the positions of H atoms in macromolecules. As neutron sources are becoming more available to general users, finding means to optimize the growth of protein crystals to sizes suitable for NMC is extremely important. Historically, much has been learned about growing crystals for X-ray diffraction. However, owing to new-generation synchrotron X-ray facilities and sensitive detectors, protein crystal sizes as small as in the nano-range have become adequate for structure determination, lessening the necessity to grow large crystals. Here, some of the approaches, techniques and considerations for the growth of crystals to significant dimensions that are now relevant to NMC are revisited. These include experimental strategies utilizing solubility diagrams, ripening effects, classical crystallization techniques, microgravity and theoretical considerations. PMID:25849493

  13. Large-volume protein crystal growth for neutron macromolecular crystallography

    DOE PAGESBeta

    Ng, Joseph D.; Baird, James K.; Coates, Leighton; Garcia-Ruiz, Juan M.; Hodge, Teresa A.; Huang, Sijay

    2015-03-30

    Neutron macromolecular crystallography (NMC) is the prevailing method for the accurate determination of the positions of H atoms in macromolecules. As neutron sources are becoming more available to general users, finding means to optimize the growth of protein crystals to sizes suitable for NMC is extremely important. Historically, much has been learned about growing crystals for X-ray diffraction. However, owing to new-generation synchrotron X-ray facilities and sensitive detectors, protein crystal sizes as small as in the nano-range have become adequate for structure determination, lessening the necessity to grow large crystals. Here, some of the approaches, techniques and considerations for the growth of crystals to significant dimensions that are now relevant to NMC are revisited. We report that these include experimental strategies utilizing solubility diagrams, ripening effects, classical crystallization techniques, microgravity and theoretical considerations.

  14. New material model for simulating large impacts on rocky bodies

    NASA Astrophysics Data System (ADS)

    Tonge, A.; Barnouin, O.; Ramesh, K.

    2014-07-01

    Large impact craters on an asteroid can provide insights into its internal structure. These craters can expose material from the interior of the body at the impact site [e.g., 1]; additionally, the impact sends stress waves throughout the body, which interrogate the asteroid's interior. Through a complex interplay of processes, such impacts can result in a variety of motions, the consequence of which may appear as lineaments that are exposed over all or portions of the asteroid's surface [e.g., 2,3]. While analytic, scaling, and heuristic arguments can provide some insight into general phenomena on asteroids, interpreting the results of a specific impact event, or series of events, on a specific asteroid geometry generally necessitates the use of computational approaches that can solve for the stress and displacement history resulting from an impact event. These computational approaches require a constitutive model for the material, which relates the deformation history of a small material volume to the average force on the boundary of that material volume. In this work, we present a new material model that is suitable for simulating the failure of rocky materials during impact events. This material model is similar to the model discussed in [4]. The new material model incorporates dynamic sub-scale crack interactions through a micro-mechanics-based damage model, thermodynamic effects through the use of a Mie-Gruneisen equation of state, and granular flow of the fully damaged material. The granular flow model includes dilatation resulting from the mutual interaction of small fragments of material (grains) as they are forced to slide and roll over each other and includes a P-α type porosity model to account for compaction of the granular material in a subsequent impact event. The micro-mechanics-based damage model provides a direct connection between the flaw (crack) distribution in the material and the rate-dependent strength. By connecting the rate
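
The P-α porosity model referenced above is commonly written (in Herrmann's formulation, with the quadratic crush curve often used in practice; symbols are generic, not taken from this paper) in terms of the distension ratio α = v/v_s, which relaxes from its elastic value α_e toward full compaction (α = 1) between an elastic threshold pressure p_e and the full-compaction pressure p_s:

```latex
\alpha(p) = 1 + (\alpha_e - 1)\left(\frac{p_s - p}{p_s - p_e}\right)^2,
\qquad p_e \le p \le p_s .
```

The equation of state of the solid is then evaluated at the compressed density ρα, which is how compaction of the granular material in a subsequent impact is accounted for.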

  15. Earthquake Source Simulations: A Coupled Numerical Method and Large Scale Simulations

    NASA Astrophysics Data System (ADS)

    Ely, G. P.; Xin, Q.; Faerman, M.; Day, S.; Minster, B.; Kremenek, G.; Moore, R.

    2003-12-01

    We investigate a scheme for interfacing Finite-Difference (FD) and Finite-Element (FE) models in order to simulate dynamic earthquake rupture. The more powerful but slower FE method allows for (1) unusual geometries (e.g. dipping and curved faults), (2) nonlinear physics, and (3) finite displacements. These capabilities are computationally expensive and limit the useful size of the problem that can be solved. Large efficiencies are gained by employing FE only where necessary in the near-source region and coupling this with an efficient FD solution for the surrounding medium. Coupling is achieved through setting up an overlapping buffer zone between the domains modeled by the two methods. The buffer zone is handled numerically as a set of mutual offset boundary conditions. This scheme eliminates the effect of the artificial boundaries at the interface and allows energy to propagate in both directions across the boundary. In general it is necessary to interpolate variables between the meshes and time discretizations used for each model, and this can create artifacts that must be controlled. A modular approach has been used in which either of the two component codes can be substituted with another code. We have successfully demonstrated coupling for a simulation between a second-order FD rupture dynamics code and a fourth-order staggered-grid FD code. To be useful, earthquake source models must capture a large range of length and time scales, which is very computationally demanding. This requires that (for current computer technology) codes must utilize parallel processing. Additionally, if large quantities of output data are to be saved, a high performance data management system is desirable. We show results from a large scale rupture dynamics simulation designed to test these capabilities. We use second-order FD with dimensions of 400 x 800 x 800 nodes, run for 3000 time steps. Data were saved for the entire volume for three components of velocity at every time
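
The paper's FD/FE buffer coupling is not reproduced here, but its essential idea, an overlap through which each subdomain supplies the other's boundary values, can be sketched in 1D for a scalar advection equation (scheme and sizes illustrative). With a one-node overlap and identical upwind discretizations on both sides, the coupled solution matches a single-domain run exactly, i.e. the artificial interface introduces no error:

```python
import numpy as np

# 1D advection u_t + c u_x = 0 split into two subdomains with a one-node
# overlapping buffer; each step, the downstream domain's inflow boundary is
# refreshed from the upstream domain (a stand-in for the FD/FE buffer zone).
nx, cfl, steps = 200, 0.5, 150
x = np.linspace(0.0, 1.0, nx)
u0 = np.exp(-((x - 0.25) / 0.05) ** 2)           # Gaussian pulse
mid = nx // 2

left, right = u0[: mid + 1].copy(), u0[mid:].copy()
for _ in range(steps):
    left[1:] -= cfl * (left[1:] - left[:-1])      # upwind update, left domain
    right[1:] -= cfl * (right[1:] - right[:-1])   # upwind update, right domain
    right[0] = left[-1]                           # buffer exchange
u_coupled = np.concatenate([left[:-1], right])

# Reference: the same scheme on the undivided domain.
u_single = u0.copy()
for _ in range(steps):
    u_single[1:] -= cfl * (u_single[1:] - u_single[:-1])
```

In the paper's setting the two sides use different methods and discretizations, which is why interpolation between meshes (and the artifacts it can create) becomes the central difficulty.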

  16. Large eddy simulations of a forced semiconfined circular impinging jet

    NASA Astrophysics Data System (ADS)

    Olsson, M.; Fuchs, L.

    1998-02-01

    Large eddy simulations (LES) of a forced semiconfined circular impinging jet were carried out. The Reynolds number was 10⁴ and the inflow was forced at a Strouhal number of 0.27. The separation between the jet inlet and the opposing wall was four jet inlet diameters. Four different simulations were made. Two simulations were performed without any explicit sub-grid-scale (SGS) model using 128³ and 96³ grid points, respectively. Two simulations were performed with two different SGS-models using 96³ grid points; one with a dynamic Smagorinsky based model and one with a stress-similarity model. The simulations were performed to study the mean velocity, the turbulence statistics, the SGS-model effects, and the dynamic behavior of the jet, with a focus on the near-wall region. The existence of separation vortices in the wall jet region was confirmed. These secondary vortices were found to be related to the radially deflected primary vortices generated by the circular shear layer of the jet. It was also shown that the primary vortex structures that reach the wall were helical and not axisymmetric. A quantitative gain was found in the simulations with SGS-models. The stress-similarity model simulation correlated slightly better with the higher resolution simulation than the other coarse grid simulations did. The variations in the results predicted by the different simulations were larger for the turbulence statistics than for the mean velocity. However, the variation among the different simulations in terms of the turbulence intensity was less than 10%.
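
The static Smagorinsky closure underlying the dynamic model mentioned above computes an eddy viscosity from the resolved strain rate, ν_t = (C_s Δ)² |S| with |S| = √(2 S_ij S_ij). A minimal 2D evaluation sketch (axis 0 taken as x; C_s held constant, whereas the dynamic procedure computes it on the fly from a test filter):

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, cs=0.17):
    """Static Smagorinsky eddy viscosity on a uniform 2D grid (axis 0 = x)."""
    dudx, dudy = np.gradient(u, dx, dx)   # derivatives along axis 0, axis 1
    dvdx, dvdy = np.gradient(v, dx, dx)
    s11, s22 = dudx, dvdy                 # diagonal strain-rate components
    s12 = 0.5 * (dudy + dvdx)             # off-diagonal component
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * dx) ** 2 * s_mag

# Uniform strain du/dx = 1 gives |S| = sqrt(2) everywhere.
n = 8
u = np.tile(np.arange(n, dtype=float)[:, None], (1, n))
v = np.zeros((n, n))
nu_t = smagorinsky_nu_t(u, v, 1.0)
```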

  17. Random forest classification of large volume structures for visuo-haptic rendering in CT images

    NASA Astrophysics Data System (ADS)

    Mastmeyer, Andre; Fortmeier, Dirk; Handels, Heinz

    2016-03-01

    For patient-specific voxel-based visuo-haptic rendering of CT scans of the liver area, the fully automatic segmentation of large volume structures such as skin, soft tissue, lungs and intestine (risk structures) is important. Using a machine learning based approach, several existing segmentations from 10 segmented gold-standard patients are learned by random decision forests, individually and collectively. The core of this paper is feature selection and the application of the learned classifiers to a new patient data set. In a leave-some-out cross-validation, the obtained full volume segmentations are compared to the gold-standard segmentations of the untrained patients. The proposed classifiers use a multi-dimensional feature space to estimate the hidden truth, instead of relying on clinical standard threshold and connectivity based methods. The results of our efficient whole-body section classification are multi-label maps of the considered tissues. For visuo-haptic simulation, other small volume structures would have to be segmented additionally. We also take a look at these structures (liver vessels). In an experimental leave-some-out study of 10 patients, the proposed method performs much more efficiently than state-of-the-art methods. In two variants of leave-some-out experiments we obtain best mean DICE ratios of 0.79, 0.97, 0.63 and 0.83 for skin, soft tissue, hard bone and risk structures. Liver structures are segmented with DICE 0.93 for the liver, 0.43 for blood vessels and 0.39 for bile vessels.
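
The DICE ratios quoted above are Dice similarity coefficients between a predicted binary mask and the gold-standard mask; for concreteness, a minimal implementation of the metric:

```python
import numpy as np

def dice(pred, gold):
    """Dice coefficient 2|A∩B| / (|A|+|B|) between two binary masks."""
    pred, gold = pred.astype(bool), gold.astype(bool)
    inter = np.logical_and(pred, gold).sum()
    total = pred.sum() + gold.sum()
    return 2.0 * inter / total if total else 1.0
```

A value of 1.0 means perfect overlap and 0.0 no overlap, so the 0.97 soft-tissue score above indicates near-perfect agreement while the ~0.4 vessel scores reflect the difficulty of small structures.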

  18. Large Eddy Simulation of Pollen Transport in the Atmospheric Boundary Layer

    NASA Astrophysics Data System (ADS)

    Chamecki, Marcelo; Meneveau, Charles; Parlange, Marc B.

    2007-11-01

    The development of genetically modified crops and questions about cross-pollination and contamination of natural plant populations enhanced the importance of understanding wind dispersion of airborne pollen. The main objective of this work is to simulate the dispersal of pollen grains in the atmospheric surface layer using large eddy simulation. Pollen concentrations are simulated by an advection-diffusion equation including gravitational settling. Of great importance is the specification of the bottom boundary conditions characterizing the pollen source over the canopy and the deposition process everywhere else. The velocity field is discretized using a pseudospectral approach. However, the application of the same discretization scheme to the pollen equation generates unphysical solutions (i.e., negative concentrations). The finite-volume bounded scheme SMART is used for the pollen equation instead. A conservative interpolation scheme to determine the velocity field on the finite volume surfaces was developed. The implementation is validated against field experiments of point source and area field releases of pollen.
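
SMART itself is not reimplemented here, but the boundedness problem it solves is easy to demonstrate: a non-bounded scheme advecting a sharp concentration front produces negative values, while a bounded (if more diffusive) first-order upwind scheme does not. A 1D sketch with illustrative sizes (SMART is a higher-order bounded scheme; upwind stands in only to show boundedness):

```python
import numpy as np

def step_central(c, cfl):
    """One explicit step of (unbounded) central-difference advection."""
    out = c.copy()
    out[1:-1] -= 0.5 * cfl * (c[2:] - c[:-2])
    return out

def step_upwind(c, cfl):
    """One explicit step of (bounded) first-order upwind advection."""
    out = c.copy()
    out[1:] -= cfl * (c[1:] - c[:-1])
    return out

c0 = np.zeros(50)
c0[25:] = 1.0                    # sharp pollen concentration front
central = step_central(c0, 0.5)  # undershoots below zero at the front
upwind = step_upwind(c0, 0.5)    # stays within [0, 1]
```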

  19. Constrained Large Eddy Simulation of Separated Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Xia, Zhenhua; Shi, Yipeng; Wang, Jianchun; Xiao, Zuoli; Yang, Yantao; Chen, Shiyi

    2011-11-01

    Constrained Large-eddy Simulation (CLES) has been recently proposed to simulate turbulent flows with massive separation. Different from traditional large eddy simulation (LES) and hybrid RANS/LES approaches, the CLES simulates the whole flow domain by large eddy simulation while enforcing a RANS Reynolds stress constraint on the subgrid-scale (SGS) stress models in the near-wall region. Algebraic eddy-viscosity models and one-equation Spalart-Allmaras (S-A) model have been used to constrain the Reynolds stress. The CLES approach is validated a posteriori through simulation of flow past a circular cylinder and periodic hill flow at high Reynolds numbers. The simulation results are compared with those from RANS, DES, DDES and other available hybrid RANS/LES methods. It is shown that the capability of the CLES method in predicting separated flows is comparable to that of DES. Detailed discussions are also presented about the effects of the RANS models as constraint in the near-wall layers. Our results demonstrate that the CLES method is a promising alternative towards engineering applications.

  20. Sand tank experiment of a large volume biodiesel spill

    NASA Astrophysics Data System (ADS)

    Scully, K.; Mayer, K. U.

    2015-12-01

    Although petroleum hydrocarbon releases in the subsurface have been well studied, the impacts of subsurface releases of highly degradable alternative fuels, including biodiesel, are not as well understood. One concern is the generation of CH4, which may lead to explosive conditions in underground structures. In addition, the biodegradation of biodiesel consumes O2 that would otherwise be available for the degradation of petroleum hydrocarbons that may be present at a site. Until now, biodiesel biodegradation in the vadose zone has not been examined in detail, despite being critical to understanding the full impact of a release. This research involves a detailed study of a laboratory release of 80 L of biodiesel applied at the surface of a large sandtank to examine the progress of biodegradation reactions. The experiment will monitor the onset and temporal evolution of CH4 generation to provide guidance for site monitoring needs following a biodiesel release to the subsurface. Three CO2 and CH4 flux chambers have been deployed for long-term monitoring of gas emissions. CO2 fluxes have increased in all chambers over the 126 days since the start of the experiment. The highest CO2 effluxes are found directly above the spill and have increased from < 0.5 μmol m⁻² s⁻¹ to ~3.8 μmol m⁻² s⁻¹, indicating an increase in microbial activity. There were no measurable CH4 fluxes 126 days into the experiment. Sensors were emplaced to continuously measure O2, CO2, moisture content, matric potential, EC, and temperature. In response to the release, CO2 levels have increased across all sensors, from an average value of 0.1% to 0.6% 126 days after the start of the experiment, indicating the rapid onset of biodegradation. The highest CO2 values observed from samples taken in the gas ports were 2.5%. Average O2 concentrations have decreased from 21% to 17% 126 days after the start of the experiment. O2 levels in the bottom central region of the sandtank declined to approximately 12%.

  1. New material model for simulating large impacts on rocky bodies

    NASA Astrophysics Data System (ADS)

    Tonge, A.; Barnouin, O.; Ramesh, K.

    2014-07-01

    Large impact craters on an asteroid can provide insights into its internal structure. These craters can expose material from the interior of the body at the impact site [e.g., 1]; additionally, the impact sends stress waves throughout the body, which interrogate the asteroid's interior. Through a complex interplay of processes, such impacts can result in a variety of motions, the consequence of which may appear as lineaments that are exposed over all or portions of the asteroid's surface [e.g., 2,3]. While analytic, scaling, and heuristic arguments can provide some insight into general phenomena on asteroids, interpreting the results of a specific impact event, or series of events, on a specific asteroid geometry generally necessitates the use of computational approaches that can solve for the stress and displacement history resulting from an impact event. These computational approaches require a constitutive model for the material, which relates the deformation history of a small material volume to the average force on the boundary of that material volume. In this work, we present a new material model that is suitable for simulating the failure of rocky materials during impact events. This material model is similar to the model discussed in [4]. The new material model incorporates dynamic sub-scale crack interactions through a micro-mechanics-based damage model, thermodynamic effects through the use of a Mie-Gruneisen equation of state, and granular flow of the fully damaged material. The granular flow model includes dilatation resulting from the mutual interaction of small fragments of material (grains) as they are forced to slide and roll over each other and includes a P-α type porosity model to account for compaction of the granular material in a subsequent impact event. The micro-mechanics-based damage model provides a direct connection between the flaw (crack) distribution in the material and the rate-dependent strength. By connecting the rate

  2. Kinetic MHD simulation of large Δ′ tearing mode

    NASA Astrophysics Data System (ADS)

    Cheng, Jianhua; Chen, Yang; Parker, Scott; Uzdensky, Dmitri

    2012-03-01

    We have developed a second-order accurate semi-implicit δf method for kinetic MHD simulation with Lorentz force ions and fluid electrons. The model has been used to study the resistive tearing mode instability, which involves multiple spatial scales. In small Δ′ cases, the linear growth rate and eigenmode structure are consistent with resistive MHD analysis. The Rutherford stage and saturation are demonstrated, but the simulation exhibits different saturation island widths compared with previous MHD simulations. In large Δ′ cases, nonlinear simulations show multiple islands forming, followed by the islands coalescing at later times. The competition between these two processes strongly influences the reconnection rates and eventually leads to a steady state reconnection. We will present various parameter studies and show that our hybrid results agree with fluid analysis in certain limits (e.g., relatively large resistivities).

  3. Mathematical simulation of power conditioning systems. Volume 1: Simulation of elementary units. Report on simulation methodology

    NASA Technical Reports Server (NTRS)

    Prajous, R.; Mazankine, J.; Ippolito, J. C.

    1978-01-01

    Methods and algorithms used for the simulation of elementary power conditioning units (buck, boost, and buck-boost, as well as shunt PWM) are described. Definitions are given of similar converters and reduced parameters. The various parts of the simulation to be carried out are dealt with: local stability, corrective network, measurements of input-output impedance, and global stability. A simulation example is given.
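
The report's own simulation methodology is not reproduced here, but the buck unit it covers can be sketched with a standard state-space averaged model, dI_L/dt = (d·V_in − V_out)/L and dV_out/dt = (I_L − V_out/R)/C, integrated with forward Euler (component values and duty cycle are illustrative, not from the report):

```python
# Averaged-model simulation of a buck converter: the switching action is
# replaced by its duty-cycle average, so the state settles at Vout ≈ d * Vin.
L, C, R = 100e-6, 100e-6, 10.0       # H, F, ohm (illustrative values)
vin, duty = 12.0, 0.5
dt, steps = 1e-6, 200_000            # 0.2 s of simulated time

il, vout = 0.0, 0.0                  # inductor current, output voltage
for _ in range(steps):
    dil = (duty * vin - vout) / L
    dvout = (il - vout / R) / C
    il += dil * dt
    vout += dvout * dt
```

The steady state recovers the textbook buck relation V_out = d·V_in (here 6 V) with load current V_out/R.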

  4. Controlled multibody dynamics simulation for large space structures

    NASA Technical Reports Server (NTRS)

    Housner, J. M.; Wu, S. C.; Chang, C. W.

    1989-01-01

    The multibody dynamics discipline and dynamic simulation in control-structure interaction (CSI) design are discussed. The use, capabilities, and architecture of the Large Angle Transient Dynamics (LATDYN) code as a simulation tool are explained. A generic joint body with various types of hinge connections; finite element and element coordinate systems; results of a flexible beam spin-up on a plane; mini-mast deployment; space crane and robotic slewing manipulations; a potential CSI test article; and multibody benchmark experiments are also described.

  5. High Fidelity Simulations of Large-Scale Wireless Networks

    SciTech Connect

    Onunkwo, Uzoma; Benz, Zachary

    2015-11-01

    The worldwide proliferation of wireless connected devices continues to accelerate. There are 10s of billions of wireless links across the planet with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies do not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround time. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES approaches to poor scaling (as seen, e.g., in the OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating communication overhead associated with synchronizations. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia’s simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia’s current highly-regarded capabilities in large-scale emulations have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static and (b) the nodes have fixed locations.
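
The discrete event simulation core that such tools share is compact: a priority queue of timestamped events processed in order, where handlers may schedule further events. A minimal sketch (names illustrative, unrelated to OPNET or ns-3 internals):

```python
import heapq

class Simulator:
    """Minimal DES kernel: events are (time, seq, name) tuples; the seq
    counter breaks ties so simultaneous events pop in scheduling order."""

    def __init__(self):
        self.queue, self.now, self._seq = [], 0.0, 0
        self.log = []

    def schedule(self, delay, name):
        heapq.heappush(self.queue, (self.now + delay, self._seq, name))
        self._seq += 1

    def run(self):
        while self.queue:
            self.now, _, name = heapq.heappop(self.queue)
            self.log.append((self.now, name))

sim = Simulator()
sim.schedule(3.0, "rx")
sim.schedule(1.0, "tx")
sim.schedule(2.0, "ack")
sim.run()
```

PDES distributes this queue across processors, which is exactly where mobile, proximity-based connectivity causes the synchronization overhead the abstract targets.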

  6. Applications of large eddy simulation methods to gyrokinetic turbulence

    SciTech Connect

    Bañón Navarro, A.; Happel, T.; Teaca, B.; Jenko, F.; Hammett, G. W.; ASDEX Upgrade Team

    2014-03-15

    The large eddy simulation (LES) approach—solving numerically the large scales of a turbulent system and accounting for the small-scale influence through a model—is applied to nonlinear gyrokinetic systems that are driven by a number of different microinstabilities. Comparisons between modeled, lower resolution, and higher resolution simulations are performed for an experimental measurable quantity, the electron density fluctuation spectrum. Moreover, the validation and applicability of LES is demonstrated through a series of diagnostics based on the free energetics of the system.

  7. Large Eddy Simulations and Turbulence Modeling for Film Cooling

    NASA Technical Reports Server (NTRS)

    Acharya, Sumanta

    1999-01-01

    The objective of the research is to perform Direct Numerical Simulations (DNS) and Large Eddy Simulations (LES) for film cooling process, and to evaluate and improve advanced forms of the two equation turbulence models for turbine blade surface flow analysis. The DNS/LES were used to resolve the large eddies within the flow field near the coolant jet location. The work involved code development and applications of the codes developed to the film cooling problems. Five different codes were developed and utilized to perform this research. This report presented a summary of the development of the codes and their applications to analyze the turbulence properties at locations near coolant injection holes.

  8. Modeling and Dynamic Simulation of a Large Scale Helium Refrigerator

    NASA Astrophysics Data System (ADS)

    Lv, C.; Qiu, T. N.; Wu, J. H.; Xie, X. J.; Li, Q.

    In order to simulate the transient behaviors of a newly developed 2 kW helium refrigerator, a numerical model of the critical equipment, including a screw compressor with variable-frequency drive, plate-fin heat exchangers, a turbine expander, and pneumatic valves, was developed. In the simulation, the calculation of the helium thermodynamic properties is based on the 32-parameter modified Benedict-Webb-Rubin (MBWR) equation of state. The start-up process of the warm compressor station with the gas management subsystem, and the cool-down process of the cold box in actual operation, were dynamically simulated. The developed model was verified by comparing the simulated results with the experimental data. In addition, system responses to increasing heat load were simulated. This model can also be used to design and optimize other large scale helium refrigerators.

  9. Wall Modeled Large Eddy Simulation of Airfoil Trailing Edge Noise

    NASA Astrophysics Data System (ADS)

    Kocheemoolayil, Joseph; Lele, Sanjiva

    2014-11-01

    Large eddy simulation (LES) of airfoil trailing edge noise has largely been restricted to low Reynolds numbers due to prohibitive computational cost. Wall modeled LES (WMLES) is a computationally cheaper alternative that makes full-scale Reynolds numbers relevant to large wind turbines accessible. A systematic investigation of trailing edge noise prediction using WMLES is conducted. Detailed comparisons are made with experimental data. The stress boundary condition from a wall model does not constrain the fluctuating velocity to vanish at the wall. This limitation has profound implications for trailing edge noise prediction. The simulation over-predicts the intensity of fluctuating wall pressure and far-field noise. An improved wall model formulation that minimizes the over-prediction of fluctuating wall pressure is proposed and carefully validated. The flow configurations chosen for the study are from the workshop on benchmark problems for airframe noise computations. The large eddy simulation database is used to examine the adequacy of scaling laws that quantify the dependence of trailing edge noise on Mach number, Reynolds number and angle of attack. Simplifying assumptions invoked in engineering approaches towards predicting trailing edge noise are critically evaluated. We gratefully acknowledge financial support from GE Global Research and thank Cascade Technologies Inc. for providing access to their massively-parallel large eddy simulation framework.

  10. Large Eddy Simulation of Multiple Turbulent Round Jets

    NASA Astrophysics Data System (ADS)

    Balajee, G. K.; Panchapakesan, Nagangudy

    2015-11-01

    Turbulent round jet flow was simulated as a large eddy simulation with the OpenFOAM software package for a jet Reynolds number of 11000. The intensity of the fluctuating motion in the incoming nozzle flow was adjusted so that the initial shear layer development compares well with available experimental data. The far field development of averages of higher order moments up to fourth order were compared with experiments. The agreement is good, indicating that the large eddy motions were being computed satisfactorily by the simulation. The turbulent kinetic energy budget as well as the quality of the LES simulations were also evaluated. These conditions were then used to perform a multiple turbulent round jets simulation with the same initial momentum flux. The far field of the flow was compared with the single jet simulation and experiments to test the approach to self similarity. The evolution of the higher order moments in the development region where the multiple jets interact were studied. We will also present FTLE fields computed from the simulation to educe structures and compare them with those educed by other scalar measures. Support of AR&DB CIFAAR, and VIRGO cluster at IIT Madras is gratefully acknowledged.

  11. Simulating the large-scale structure of HI intensity maps

    NASA Astrophysics Data System (ADS)

    Seehars, Sebastian; Paranjape, Aseem; Witzemann, Amadeus; Refregier, Alexandre; Amara, Adam; Akeret, Joel

    2016-03-01

    Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide field simulations of HI intensity maps based on N-body simulations of a 2.6 Gpc/h box with 2048³ particles (particle mass 1.6 × 10¹¹ Msolar/h). Using a conditional mass function to populate the simulated dark matter density field with halos below the mass resolution of the simulation (10⁸ Msolar/h < Mhalo < 10¹³ Msolar/h), we assign HI to those halos according to a phenomenological halo to HI mass relation. The simulations span a redshift range of 0.35 ≲ z ≲ 0.9 in redshift bins of width Δz ≈ 0.05 and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects and redshift space distortions on the angular clustering of HI. Focusing on the autocorrelations of the maps, we apply and compare several estimators for the angular power spectrum and its covariance. We verify that these estimators agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.
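
The paper's estimators operate on curved-sky maps; a simplified flat-sky analogue of a binned power spectrum estimator (all sizes illustrative, and a white-noise field standing in for an HI map) is:

```python
import numpy as np

# Binned isotropic power spectrum of a flat 2D map: FFT, per-mode power,
# then average |F(k)|^2 in annular bins of |k|.
rng = np.random.default_rng(1)
n = 128
m = rng.standard_normal((n, n))          # white-noise "intensity map"

f = np.fft.fft2(m)
power = np.abs(f) ** 2 / n**2            # per-mode power (Parseval-normalized)
k = np.sqrt(np.add.outer(np.fft.fftfreq(n) ** 2, np.fft.fftfreq(n) ** 2))
bins = np.linspace(0.0, k.max(), 16)
which = np.digitize(k.ravel(), bins)
spectrum = np.array(
    [power.ravel()[which == i].mean() for i in range(1, len(bins))]
)
```

For white noise the binned spectrum is flat; the scatter per bin, set by the number of modes in each annulus, is the Gaussian-random-field covariance approximation the paper examines.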

  12. Toward Improved Support for Loosely Coupled Large Scale Simulation Workflows

    SciTech Connect

    Boehm, Swen; Elwasif, Wael R; Naughton, III, Thomas J; Vallee, Geoffroy R

    2014-01-01

    High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations, and where time-to-solution is an untenable pacing issue. This paper describes the challenges related to the support of fine-grained launch capabilities that are necessary for the execution of loosely coupled large scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report on initial results from early testing on systems at Oak Ridge National Laboratory.

  13. Plasma volume losses during simulated weightlessness in women

    SciTech Connect

    Drew, H.; Fortney, S.; La France, N.; Wagner, H.N. Jr.

    1985-05-01

    Six healthy women not using oral contraceptives underwent two 11-day intervals of complete bedrest (BR), with the BR periods separated by 4 weeks of ambulatory control. Change in plasma volume (PV) was monitored during BR to test the hypothesis that these women would show a smaller decrease in PV than the values reported in similarly stressed men, due to the water-retaining effects of the female hormones. Bedrest periods were timed to coincide with opposing stages of the menstrual cycle in each woman. The menstrual cycle was divided into 4 separate stages: early follicular, ovulatory, early luteal, and late luteal phases. PV showed a consistent percent decrease for each woman who began BR while in stage 1, 3 or 4 of the menstrual cycle. However, the women who began in stage 2 showed a transient attenuation in PV loss. Overall, PV changes seen in women during BR were similar to those reported for men. The water-retaining effects of menstrual hormones were evident only during the high-estrogen ovulatory stage. The authors conclude that the protective effects of menstrual hormones on PV losses during simulated weightless conditions appear to be only small and transient.

  14. Simulation of SMC compression molding: Filling, curing, and volume changes

    SciTech Connect

    Hill, R.R. Jr.

    1992-01-01

    Sheet molding compound (SMC) is a composite material made from polyester resin, styrene, fiberglass reinforcement, and other additives. It is widely recognized that SMC is a good candidate for replacing sheet metals of automotive body exteriors because SMC is relatively inexpensive, has a high strength-to-density ratio, and has good corrosion resistance. The focus of this research was to develop computer models to simulate the important features of SMC compression molding (i.e., material flow, heat transfer, curing, material expansion, and shrinkage), and to characterize these features experimentally. A control volume/finite element approach was used to obtain the pressure and velocity fields and to compute the flow progression during compression mold filling. The energy equation and a kinetic model were solved simultaneously for the temperature and conversion profiles. A series of molding experiments was conducted to record the flow-front location and material temperature. Predictions obtained from the model were compared to experimental results which incorporated a non-isothermal temperature profile, and reasonable agreement was obtained.

  15. Science and engineering of large scale socio-technical simulations.

    SciTech Connect

    Barrett, C. L.; Eubank, S. G.; Marathe, M. V.; Mortveit, H. S.; Reidys, C. M.

    2001-01-01

    Computer simulation is a computational approach whereby global system properties are produced as dynamics by direct computation of interactions among representations of local system elements. A mathematical theory of simulation consists of an account of the formal properties of sequential evaluation and composition of interdependent local mappings. When certain local mappings and their interdependencies can be related to particular real-world objects and interdependencies, it is common to compute the interactions to derive a symbolic model of the global system made up of the corresponding interdependent objects. The formal mathematical and computational account of the simulation provides a particular kind of theoretical explanation of the global system properties and, therefore, insight into how to engineer a complex system to exhibit those properties. This paper considers the mathematical foundations and engineering principles necessary for building large scale simulations of socio-technical systems. Examples of such systems are urban regional transportation systems, the national electrical power markets and grids, the world-wide Internet, vaccine design and deployment, theater war, etc. These systems are composed of large numbers of interacting human, physical and technological components. Some components adapt and learn, exhibit perception, interpretation, reasoning, deception, cooperation and noncooperation, and have economic motives as well as the usual physical properties of interaction. The systems themselves are large, and the behavior of socio-technical systems is tremendously complex. The state of affairs for these kinds of systems is characterized by very little satisfactory formal theory, a good deal of very specialized knowledge of subsystems, and a dependence on experience-based practitioners' art. However, these systems are vital and require policy, control, design, implementation and investment. Thus there is motivation to improve the ability to

  16. Computational fluid dynamics simulations of particle deposition in large-scale, multigenerational lung models.

    PubMed

    Walters, D Keith; Luke, William H

    2011-01-01

    Computational fluid dynamics (CFD) has emerged as a useful tool for the prediction of airflow and particle transport within the human lung airway. Several published studies have demonstrated the use of Eulerian finite-volume CFD simulations coupled with Lagrangian particle tracking methods to determine local and regional particle deposition rates in small subsections of the bronchopulmonary tree. However, the simulation of particle transport and deposition in large-scale models encompassing more than a few generations is less common, due in part to the sheer size and complexity of the human lung airway. Highly resolved, fully coupled flowfield solution and particle tracking in the entire lung, for example, is currently an intractable problem and will remain so for the foreseeable future. This paper adopts a previously reported methodology for simulating large-scale regions of the lung airway (Walters, D. K., and Luke, W. H., 2010, "A Method for Three-Dimensional Navier-Stokes Simulations of Large-Scale Regions of the Human Lung Airway," ASME J. Fluids Eng., 132(5), p. 051101), which was shown to produce results similar to fully resolved geometries using approximate, reduced geometry models. The methodology is extended here to particle transport and deposition simulations. Lagrangian particle tracking simulations are performed in combination with Eulerian simulations of the airflow in an idealized representation of the human lung airway tree. Results using the reduced models are compared with those using the fully resolved models for an eight-generation region of the conducting zone. The agreement between fully resolved and reduced geometry simulations indicates that the new method can provide an accurate alternative for large-scale CFD simulations while potentially reducing the computational cost of these simulations by several orders of magnitude. PMID:21186893
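The Lagrangian particle tracking the abstract couples to the Eulerian airflow can be illustrated with a minimal one-way-coupled sketch. This is not the paper's scheme: the Stokes-drag model, the prescribed velocity field, the parameter values, and the explicit-Euler update are all illustrative assumptions.

```python
import math

# Illustrative one-way-coupled Lagrangian tracking under Stokes drag
# in a prescribed (Eulerian) fluid velocity field. All parameters and
# the explicit-Euler integrator are assumptions for illustration only.

def stokes_relaxation_time(rho_p, d_p, mu):
    # tau_p = rho_p * d_p^2 / (18 * mu), the particle response time
    return rho_p * d_p ** 2 / (18.0 * mu)

def track(x0, v0, u_fluid, tau_p, dt, steps):
    """Explicit-Euler update of dv/dt = (u(x) - v)/tau_p, dx/dt = v."""
    x, v = x0, v0
    for _ in range(steps):
        v += (u_fluid(x) - v) / tau_p * dt
        x += v * dt
    return x, v

# A 5-micron water-like particle in a uniform 1 m/s flow:
tau = stokes_relaxation_time(rho_p=1000.0, d_p=5e-6, mu=1.8e-5)
x, v = track(0.0, 0.0, lambda x: 1.0, tau, dt=tau / 10.0, steps=200)
# after ~20 relaxation times, v has relaxed toward the fluid velocity
```

In deposition studies, a particle would be flagged as deposited once its trajectory intersects an airway wall; here only the drag response is shown.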

  17. Time simulation of flutter with large stiffness changes

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay; Wieseman, Carol D.

    1992-01-01

    Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.

  18. Time simulation of flutter with large stiffness changes

    NASA Technical Reports Server (NTRS)

    Karpel, M.; Wieseman, C. D.

    1992-01-01

    Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.

  19. Toward the large-eddy simulation of compressible turbulent flows

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Hussaini, M. Y.; Speziale, C. G.; Zang, T. A.

    1990-01-01

    New subgrid-scale models for the large-eddy simulation of compressible turbulent flows are developed and tested based on the Favre-filtered equations of motion for an ideal gas. A compressible generalization of the linear combination of the Smagorinsky model and scale-similarity model, in terms of Favre-filtered fields, is obtained for the subgrid-scale stress tensor. An analogous thermal linear combination model is also developed for the subgrid-scale heat flux vector. The two dimensionless constants associated with these subgrid-scale models are obtained by correlating with the results of direct numerical simulations of compressible isotropic turbulence performed on a 96(exp 3) grid using Fourier collocation methods. Extensive comparisons between the direct and modeled subgrid-scale fields are provided in order to validate the models. A large-eddy simulation of the decay of compressible isotropic turbulence (conducted on a coarse 32(exp 3) grid) is shown to yield results that are in excellent agreement with the fine grid direct simulation. Future applications of these compressible subgrid-scale models to the large-eddy simulation of more complex supersonic flows are discussed briefly.
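The linear-combination form described above can be sketched schematically. This is written in incompressible notation for brevity; the paper's actual model uses Favre-filtered variables, and the two coefficients C_b and C_s are the dimensionless constants fit against the DNS data, so the expression below is a generic mixed-model template, not the paper's exact formula.

```latex
% Schematic mixed (linear combination) SGS stress model:
% scale-similarity term plus Smagorinsky eddy-viscosity term.
\tau_{ij}
  \;=\;
  \underbrace{C_b \big( \overline{\bar{u}_i \bar{u}_j}
      - \bar{\bar{u}}_i \, \bar{\bar{u}}_j \big)}_{\text{scale similarity}}
  \;-\;
  \underbrace{2\, C_s\, \bar{\Delta}^{2} \lvert \bar{S} \rvert \, \bar{S}_{ij}}_{\text{Smagorinsky}},
\qquad
\bar{S}_{ij} = \tfrac{1}{2}\big( \partial_j \bar{u}_i + \partial_i \bar{u}_j \big)
```

The analogous thermal model replaces the stress tensor with a subgrid-scale heat flux vector built from the filtered temperature field.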

  20. NASA's Large-Eddy Simulation Research for Jet Noise Applications

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2009-01-01

    Research into large-eddy simulation (LES) for application to jet noise is described. The LES efforts include in-house code development and application at NASA Glenn along with NASA Research Announcement sponsored work at Stanford University and Florida State University. Details of the computational methods used and sample results for jet flows are provided.

  1. Center-stabilized Yang-Mills Theory: Confinement and Large N Volume Independence

    SciTech Connect

    Unsal, Mithat; Yaffe, Laurence G.; /Washington U., Seattle

    2008-03-21

    We examine a double trace deformation of SU(N) Yang-Mills theory which, for large N and large volume, is equivalent to unmodified Yang-Mills theory up to O(1/N^2) corrections. In contrast to the unmodified theory, large N volume independence is valid in the deformed theory down to arbitrarily small volumes. The double trace deformation prevents the spontaneous breaking of center symmetry which would otherwise disrupt large N volume independence in small volumes. For small values of N, if the theory is formulated on R^3 x S^1 with a sufficiently small compactification size L, then an analytic treatment of the non-perturbative dynamics of the deformed theory is possible. In this regime, we show that the deformed Yang-Mills theory has a mass gap and exhibits linear confinement. Increasing the circumference L or number of colors N decreases the separation of scales on which the analytic treatment relies. However, there are no order parameters which distinguish the small and large radius regimes. Consequently, for small N the deformed theory provides a novel example of a locally four-dimensional pure gauge theory in which one has analytic control over confinement, while for large N it provides a simple fully reduced model for Yang-Mills theory. The construction is easily generalized to QCD and other QCD-like theories.

  2. Large scale simulations of the great 1906 San Francisco earthquake

    NASA Astrophysics Data System (ADS)

    Nilsson, S.; Petersson, A.; Rodgers, A.; Sjogreen, B.; McCandless, K.

    2006-12-01

    As part of a multi-institutional simulation effort, we present large scale computations of the ground motion during the great 1906 San Francisco earthquake using a new finite difference code called WPP. The material data base for northern California provided by USGS together with the rupture model by Song et al. is demonstrated to lead to a reasonable match with historical data. In our simulations, the computational domain covered 550 km by 250 km of northern California down to 40 km depth, so a 125 m grid size corresponds to about 2.2 billion grid points. To accommodate these large grids, the simulations were run on 512-1024 processors on one of the supercomputers at Lawrence Livermore National Lab. A wavelet compression algorithm enabled storage of time-dependent volumetric data. Nevertheless, the first 45 seconds of the earthquake still generated 1.2 TByte of disk space, and the 3-D post-processing was done in parallel.

  3. Effect of Bra Use during Radiotherapy for Large-Breasted Women: Acute Toxicity and Treated Heart and Lung Volumes

    PubMed Central

    Keller, Lanea; Cohen, Randi; Sopka, Dennis M; Li, Tianyu; Li, Linna; Anderson, Penny R; Fowble, Barbara L.; Freedman, Gary M

    2012-01-01

    Purpose Large breast size presents special problems during radiation simulation, planning and patient treatment, including increased skin toxicity, in women undergoing breast-conserving surgery and radiotherapy (BCT). We report our experience using a bra during radiation in large-breasted women and its effect on acute toxicity and heart and lung dosimetry. Materials and methods From 2001 to 2006, 246 consecutive large-breasted women (bra size ≥ 38 and/or ≥ D cup) were treated with BCT using either 3D conformal (3D-CRT) or Intensity Modulated Radiation (IMRT). In 58 cases, at the physicians’ discretion, a custom-fit bra was used during simulation and treatment. Endpoints were acute radiation dermatitis, and dosimetric comparison of heart and lung volumes in a subgroup of 12 left-sided breast cancer patients planned with and without a bra. Results The majority of acute skin toxicities were grade 2 and were experienced by 90% of patients in a bra compared to 70% of patients not in a bra (p=0.003). On multivariate analysis significant predictors of grade 2/3 skin toxicity included 3D-CRT instead of IMRT (OR=3.9, 95% CI:1.8-8.5) and the use of a bra (OR=5.5, 95% CI:1.6-18.8). For left-sided patients, use of a bra was associated with a volume of heart in the treatment fields decreased by 63.4% (p=0.002), a volume of left lung decreased by 18.5% (p=0.25), and chest wall separation decreased by a mean of 1 cm (p=0.03). Conclusions The use of a bra to augment breast shape and position in large-breasted women is an alternative to prone positioning and associated with reduced chest wall separation and reduced heart volume within the treatment field. PMID:23459714

  4. Necessary conditions on Calabi-Yau manifolds for large volume vacua

    NASA Astrophysics Data System (ADS)

    Gray, James; He, Yang-Hui; Jejjala, Vishnu; Jurke, Benjamin; Nelson, Brent; Simón, Joan

    2012-11-01

    We describe an efficient, construction independent, algorithmic test to determine whether Calabi-Yau threefolds admit a structure compatible with the large volume moduli stabilization scenario of type IIB superstring theory. Using the algorithm, we scan complete intersection and toric hypersurface Calabi-Yau threefolds with 2 ≤ h^{1,1} ≤ 4 and deduce that 418 among 4434 manifolds have a large volume limit with a single large four-cycle. We describe major extensions to this survey, which are currently underway.

  5. Statistical Modeling of Large-Scale Scientific Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Baldwin, C; Abdulla, G; Critchlow, T

    2003-11-15

    With the advent of massively parallel computer systems, scientists are now able to simulate complex phenomena (e.g., explosions of stars). Such scientific simulations typically generate large-scale data sets over the spatio-temporal space. Unfortunately, the sheer sizes of the generated data sets make efficient exploration of them impossible. Constructing queriable statistical models is an essential step in helping scientists glean new insight from their computer simulations. We define queriable statistical models to be descriptive statistics that (1) summarize and describe the data within a user-defined modeling error, and (2) are able to answer complex range-based queries over the spatio-temporal dimensions. In this chapter, we describe systems that build queriable statistical models for large-scale scientific simulation data sets. In particular, we present our Ad-hoc Queries for Simulation (AQSim) infrastructure, which reduces the data storage requirements and query access times by (1) creating and storing queriable statistical models of the data at multiple resolutions, and (2) evaluating queries on these models of the data instead of the entire data set. Within AQSim, we focus on three simple but effective statistical modeling techniques. AQSim's first modeling technique (called univariate mean modeler) computes the "true" (unbiased) mean of systematic partitions of the data. AQSim's second statistical modeling technique (called univariate goodness-of-fit modeler) uses the Anderson-Darling goodness-of-fit method on systematic partitions of the data. Finally, AQSim's third statistical modeling technique (called multivariate clusterer) utilizes the cosine similarity measure to cluster the data into similar groups. Our experimental evaluations on several scientific simulation data sets illustrate the value of using these statistical models on large-scale simulation data sets.
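The core idea of a queriable statistical model — summarize partitions once, then answer range queries from the summaries instead of the raw data — can be sketched in a few lines. This is a toy illustration, not AQSim's actual API: the partitioning scheme, function names, and query semantics are all assumptions.

```python
import math

# Toy sketch of partition-based queriable statistical models.
# Names (partition, mean_model, range_query_mean) are hypothetical,
# not AQSim's real interface.

def partition(data, size):
    # Systematic fixed-size partitions of a 1-D field.
    return [data[i:i + size] for i in range(0, len(data), size)]

def mean_model(partitions):
    # "Univariate mean modeler": unbiased mean of each partition.
    return [sum(p) / len(p) for p in partitions]

def cosine_similarity(a, b):
    # Similarity measure used by the "multivariate clusterer".
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def range_query_mean(means, size, lo, hi):
    # Answer a range query [lo, hi) from the stored summaries only.
    covered = [m for i, m in enumerate(means)
               if i * size < hi and (i + 1) * size > lo]
    return sum(covered) / len(covered)

data = [1.0, 2.0, 3.0, 4.0, 10.0, 12.0, 11.0, 13.0]
parts = partition(data, 4)
means = mean_model(parts)                  # one mean per partition
approx = range_query_mean(means, 4, 0, 4)  # query touches the raw data never
```

The accuracy/storage trade-off is controlled by the partition size; multiple resolutions of such summaries give the multi-resolution models the abstract describes.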

  6. Large-eddy simulation of free-surface decaying turbulence with dynamic subgrid-scale models

    NASA Astrophysics Data System (ADS)

    Salvetti, M. V.; Zang, Y.; Street, R. L.; Banerjee, S.

    1997-08-01

    This paper describes large-eddy simulations of decaying turbulence in an open channel, using different dynamic subgrid-scale models, viz. the dynamic model of Germano et al. [Phys. Fluids A 3, 1790 (1991)] (DSM), the dynamic mixed model in Zang et al. [Phys. Fluids A 5, 3186 (1993)] (DMM), and the dynamic two-parameter model of Salvetti and Banerjee [Phys. Fluids 7, 2831 (1995)] (DTM). These models are incorporated in a finite-volume solver of the Navier-Stokes equations. A direct numerical simulation of this flow conducted by Pan and Banerjee [Phys. Fluids 7, 1649 (1995)] showed that near the free surface turbulence has a quasi-two-dimensional behavior. Moreover, the quasi-two-dimensional region increases in thickness with the decay time, although the structure remains three-dimensional in the central regions of the flow. The results of the large-eddy simulations show that both the DMM and the DTM are able to reproduce the features of the decay process observed in the direct simulation and to handle the anisotropic nature of the flow. Nevertheless, the addition of the second model coefficient in the DTM improves the agreement with the direct simulation. When the DSM is used, significant discrepancies are observed between the large-eddy and the direct simulations during the decay process at the free surface.

  7. Nuclear Engine System Simulation (NESS). Volume 1: Program user's guide

    NASA Astrophysics Data System (ADS)

    Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.

    1993-03-01

    A Nuclear Thermal Propulsion (NTP) engine system design analysis tool is required to support current and future Space Exploration Initiative (SEI) propulsion and vehicle design studies. Currently available NTP engine design models are those developed during the NERVA program in the 1960's and early 1970's and are highly unique to that design or are modifications of current liquid propulsion system design models. To date, NTP engine-based liquid design models lack integrated design of key NTP engine design features in the areas of reactor, shielding, multi-propellant capability, and multi-redundant pump feed fuel systems. Additionally, since the SEI effort is in the initial development stage, a robust, verified NTP analysis design tool could be of great use to the community. This effort developed an NTP engine system design analysis program (tool), known as the Nuclear Engine System Simulation (NESS) program, to support ongoing and future engine system and stage design study efforts. In this effort, Science Applications International Corporation's (SAIC) NTP version of the Expanded Liquid Engine Simulation (ELES) program was modified extensively to include Westinghouse Electric Corporation's near-term solid-core reactor design model. The ELES program has extensive capability to conduct preliminary system design analysis of liquid rocket systems and vehicles. The program is modular in nature and is versatile in terms of modeling state-of-the-art component and system options as discussed. The Westinghouse reactor design model, which was integrated in the NESS program, is based on the near-term solid-core ENABLER NTP reactor design concept. This program is now capable of accurately modeling (characterizing) a complete near-term solid-core NTP engine system in great detail, for a number of design options, in an efficient manner. The following discussion summarizes the overall analysis methodology, key assumptions, and capabilities associated with the NESS presents an

  8. Nuclear Engine System Simulation (NESS). Volume 1: Program user's guide

    NASA Technical Reports Server (NTRS)

    Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.

    1993-01-01

    A Nuclear Thermal Propulsion (NTP) engine system design analysis tool is required to support current and future Space Exploration Initiative (SEI) propulsion and vehicle design studies. Currently available NTP engine design models are those developed during the NERVA program in the 1960's and early 1970's and are highly unique to that design or are modifications of current liquid propulsion system design models. To date, NTP engine-based liquid design models lack integrated design of key NTP engine design features in the areas of reactor, shielding, multi-propellant capability, and multi-redundant pump feed fuel systems. Additionally, since the SEI effort is in the initial development stage, a robust, verified NTP analysis design tool could be of great use to the community. This effort developed an NTP engine system design analysis program (tool), known as the Nuclear Engine System Simulation (NESS) program, to support ongoing and future engine system and stage design study efforts. In this effort, Science Applications International Corporation's (SAIC) NTP version of the Expanded Liquid Engine Simulation (ELES) program was modified extensively to include Westinghouse Electric Corporation's near-term solid-core reactor design model. The ELES program has extensive capability to conduct preliminary system design analysis of liquid rocket systems and vehicles. The program is modular in nature and is versatile in terms of modeling state-of-the-art component and system options as discussed. The Westinghouse reactor design model, which was integrated in the NESS program, is based on the near-term solid-core ENABLER NTP reactor design concept. This program is now capable of accurately modeling (characterizing) a complete near-term solid-core NTP engine system in great detail, for a number of design options, in an efficient manner. The following discussion summarizes the overall analysis methodology, key assumptions, and capabilities associated with the NESS presents an

  9. A high resolution finite volume method for efficient parallel simulation of casting processes on unstructured meshes

    SciTech Connect

    Kothe, D.B.; Turner, J.A.; Mosso, S.J.; Ferrell, R.C.

    1997-03-01

    We discuss selected aspects of a new parallel three-dimensional (3-D) computational tool for the unstructured mesh simulation of Los Alamos National Laboratory (LANL) casting processes. This tool, known as Telluride, draws upon robust, high resolution finite volume solutions of metal alloy mass, momentum, and enthalpy conservation equations to model the filling, cooling, and solidification of LANL castings. We briefly describe the current Telluride physical models and solution methods, then detail our parallelization strategy as implemented with Fortran 90 (F90). This strategy has yielded straightforward and efficient parallelization on distributed and shared memory architectures, aided in large part by the new parallel libraries JTpack9O for Krylov-subspace iterative solution methods and PGSLib for efficient gather/scatter operations. We illustrate our methodology and current capabilities with source code examples and parallel efficiency results for a LANL casting simulation.

  10. Evaluation of Cloud, Grid and HPC resources for big volume and variety of RCM simulations

    NASA Astrophysics Data System (ADS)

    Blanco, Carlos; Cofino, Antonio S.; Fernández, Valvanuz; Fernández, Jesús

    2016-04-01

    Cloud, Grid and High Performance Computing have changed the accessibility and availability of computing resources for Earth Science research communities, especially for the Regional Climate Model (RCM) community. These paradigms are changing the way RCM applications are executed. By using these technologies, the number, variety and complexity of experiments and resources used by RCM simulations are increasing substantially. But although computational capacity is increasing, the traditional applications and tools used by the community cannot adequately manage this large volume and variety of experiments and computing resources. In this contribution, we evaluate the challenges of executing RCMs on Grid, Cloud and HPC resources and how to tackle them. For this purpose, the WRF model will be used as a well-known representative application for RCM simulations. Grid and Cloud infrastructures provided by EGI's VOs (esr, earth.vo.ibergrid and fedcloud.egi.eu) will be evaluated, as well as HPC resources from the PRACE infrastructure and institutional clusters. As a solution to those challenges we will use the WRF4G framework, which is well suited to managing a large volume and variety of computing resources for climate simulation experiments. This work is partially funded by the "Programa de Personal Investigador en Formación Predoctoral" from the Universidad de Cantabria, co-funded by the Regional Government of Cantabria.

  11. Maestro: an orchestration framework for large-scale WSN simulations.

    PubMed

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  12. Maestro: An Orchestration Framework for Large-Scale WSN Simulations

    PubMed Central

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  13. Domain nesting for multi-scale large eddy simulation

    NASA Astrophysics Data System (ADS)

    Fuka, Vladimir; Xie, Zheng-Tong

    2016-04-01

    The need to simulate city-scale areas (O(10 km)) with high resolution within street canyons in certain areas of interest necessitates different grid resolutions in different parts of the simulated area. General purpose computational fluid dynamics codes typically employ unstructured refined grids, while mesoscale meteorological models more often employ nesting of computational domains. ELMM is a large eddy simulation model for the atmospheric boundary layer. It employs orthogonal uniform grids, and for this reason domain nesting was chosen as the approach for simulations at multiple scales. Domains are implemented as sets of MPI processes which communicate with each other as in a normal non-nested run, but also with processes from another (outer/inner) domain. It should be stressed that the solution of time-steps in the outer and in the inner domain must be synchronized, so that the processes do not have to wait for the completion of their boundary conditions. This can be achieved by assigning an appropriate number of CPUs to each domain, which also yields high efficiency. When nesting is applied for large eddy simulation, the inner domain receives inflow boundary conditions which lack the turbulent motions not represented by the outer grid. ELMM remedies this by optionally adding turbulent fluctuations to the inflow using the efficient method of Xie and Castro (2008). The spatial scale of these fluctuations is in the subgrid-scale of the outer grid, and their intensity is estimated from the subgrid turbulent kinetic energy in the outer grid.
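The intensity scaling described above can be illustrated with a toy sketch: perturb each inflow velocity sample with zero-mean noise whose RMS follows from the outer grid's subgrid turbulent kinetic energy k (sqrt(2k/3) per component under an isotropy assumption). Note this is only an amplitude-scaling illustration; the actual Xie and Castro (2008) method generates space- and time-correlated fluctuations via digital filtering, which this uncorrelated white-noise toy does not attempt.

```python
import math
import random

# Toy illustration (NOT the Xie & Castro 2008 method): add uncorrelated
# zero-mean fluctuations to inflow samples, with the per-component RMS
# sqrt(2*k/3) derived from the outer grid's subgrid TKE k, assuming
# isotropy. Function name and interface are hypothetical.

def add_inflow_fluctuations(u_mean, k_sgs, rng=random.Random(0)):
    """Return inflow velocities with a Gaussian fluctuation of
    RMS sqrt(2*k/3) added to each mean-velocity sample."""
    out = []
    for u, k in zip(u_mean, k_sgs):
        rms = math.sqrt(2.0 * k / 3.0)
        out.append(u + rng.gauss(0.0, rms))
    return out

# Three inflow samples with uniform mean wind and uniform SGS TKE:
inflow = add_inflow_fluctuations([5.0, 5.0, 5.0], [0.3, 0.3, 0.3])
```

Where the outer grid resolves no turbulence (k = 0), the inflow reduces to the mean profile, which is the degenerate case the real method also respects.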

  14. Finecasting for renewable energy with large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Jonker, Harmen; Verzijlbergh, Remco

    2016-04-01

    We present results of a single, continuous Large-Eddy Simulation of actual weather conditions during the timespan of a full year, made possible through recent computational developments (Schalkwijk et al., MWR, 2015). The simulation is coupled to a regional weather model in order to provide an LES dataset that is representative of the daily weather of the year 2012 around Cabauw, the Netherlands. This location is chosen such that LES results can be compared with both the regional weather model and observations from the Cabauw observational supersite. The run was made possible by porting our Large-Eddy Simulation program to run completely on the GPU (Schalkwijk et al., BAMS, 2012). GPU adaptation allows us to reach much improved time-to-solution ratios (i.e. simulation speedup versus real time). As a result, one can perform runs with a much longer timespan than previously feasible. The dataset resulting from the LES run provides many avenues for further study. First, it can provide a more statistical approach to boundary-layer turbulence than the more common case-studies by simulating a diverse but representative set of situations, as well as the transition between situations. This has advantages in designing and evaluating parameterizations. In addition, we discuss the opportunities of high-resolution forecasts for the renewable energy sector, e.g. wind and solar energy production.

  15. Publicly Releasing a Large Simulation Dataset with NDS Labs

    NASA Astrophysics Data System (ADS)

    Goldbaum, Nathan

    2016-03-01

    Optimally, all publicly funded research should be accompanied by the tools, code, and data necessary to fully reproduce the analysis performed in journal articles describing the research. This ideal can be difficult to attain, particularly when dealing with large (>10 TB) simulation datasets. In this lightning talk, we describe the process of publicly releasing a large simulation dataset to accompany the submission of a journal article. The simulation was performed using Enzo, an open source, community-developed N-body/hydrodynamics code and was analyzed using a wide range of community-developed tools in the scientific Python ecosystem. Although the simulation was performed and analyzed using an ecosystem of sustainably developed tools, we enable sustainable science using our data by making it publicly available. Combining the data release with the NDS Labs infrastructure allows a substantial amount of added value, including web-based access to analysis and visualization using the yt analysis package through an IPython notebook interface. In addition, we are able to accompany the paper submission to the arXiv preprint server with links to the raw simulation data as well as interactive real-time data visualizations that readers can explore on their own or share with colleagues during journal club discussions. It is our hope that the value added by these services will substantially increase the impact and readership of the paper.

  16. Endoclips vs large or small-volume epinephrine in peptic ulcer recurrent bleeding

    PubMed Central

    Ljubicic, Neven; Budimir, Ivan; Biscanin, Alen; Nikolic, Marko; Supanc, Vladimir; Hrabar, Davor; Pavic, Tajana

    2012-01-01

    AIM: To compare the recurrent bleeding after endoscopic injection of different epinephrine volumes with hemoclips in patients with bleeding peptic ulcer. METHODS: Between January 2005 and December 2009, 150 patients with gastric or duodenal bleeding ulcer with major stigmata of hemorrhage and nonbleeding visible vessel in an ulcer bed (Forrest IIa) were included in the study. Patients were randomized to receive a small-volume epinephrine group (15 to 25 mL injection group; Group 1, n = 50), a large-volume epinephrine group (30 to 40 mL injection group; Group 2, n = 50) and a hemoclip group (Group 3, n = 50). The rate of recurrent bleeding, as the primary outcome, was compared between the groups of patients included in the study. Secondary outcomes compared between the groups were primary hemostasis rate, permanent hemostasis, need for emergency surgery, 30 d mortality, bleeding-related deaths, length of hospital stay and transfusion requirements. RESULTS: Initial hemostasis was obtained in all patients. The rate of early recurrent bleeding was 30% (15/50) in the small-volume epinephrine group (Group 1) and 16% (8/50) in the large-volume epinephrine group (Group 2) (P = 0.09). The rate of recurrent bleeding was 4% (2/50) in the hemoclip group (Group 3); the difference was statistically significant with regard to patients treated with either small-volume or large-volume epinephrine solution (P = 0.0005 and P = 0.045, respectively). Duration of hospital stay was significantly shorter among patients treated with hemoclips than among patients treated with epinephrine whereas there were no differences in transfusion requirement or even 30 d mortality between the groups. CONCLUSION: Endoclip is superior to both small and large volume injection of epinephrine in the prevention of recurrent bleeding in patients with peptic ulcer. PMID:22611315

  17. Toward large eddy simulation of turbulent flow over an airfoil

    NASA Technical Reports Server (NTRS)

    Choi, Haecheon

    1993-01-01

    The flow field over an airfoil contains several distinct flow characteristics, e.g., laminar, transitional, and turbulent boundary layer flow, flow separation, unstable free shear layers, and a wake. This diversity of flow regimes taxes the presently available Reynolds-averaged turbulence models. Such models are generally tuned to predict a particular flow regime, and adjustments are necessary for the prediction of a different flow regime. Similar difficulties are likely to emerge when the large eddy simulation technique is applied with the widely used Smagorinsky model. This model has not been successful in correctly representing different turbulent flow fields with a single universal constant and has incorrect near-wall behavior. Germano et al. (1991) and Ghosal, Lund & Moin have developed a new subgrid-scale model, the dynamic model, which is very promising in alleviating many of the persistent inadequacies of the Smagorinsky model: the model coefficient is computed dynamically as the calculation progresses rather than being input a priori. The model has been remarkably successful in the prediction of several turbulent and transitional flows. We plan to simulate turbulent flow over a '2D' airfoil using the large eddy simulation technique. Our primary objective is to assess the performance of the newly developed dynamic subgrid-scale model for computation of complex flows about aircraft components and to compare the results with those obtained using the Reynolds-averaged approach and with experiments. The present computation represents the first application of large eddy simulation to a flow of aeronautical interest and a key demonstration of the capabilities of the large eddy simulation technique.
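The contrast between a fixed and a dynamically computed coefficient is easiest to see in the Smagorinsky eddy-viscosity formula itself. Below is a minimal sketch with a fixed coefficient on a 1D grid (function name, grid, and values are illustrative); the dynamic model of Germano et al. would instead compute `cs` from the resolved field as the calculation progresses:

```python
def smagorinsky_nu_t(u, dx, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (cs*Delta)^2 * |dU/dx| on a 1D
    grid with filter width Delta = dx, using central differences for the
    strain rate.  In the dynamic model, cs would be recomputed locally
    from a test-filtered velocity field rather than fixed a priori."""
    nu_t = []
    for i in range(1, len(u) - 1):
        strain = abs((u[i + 1] - u[i - 1]) / (2.0 * dx))
        nu_t.append((cs * dx) ** 2 * strain)
    return nu_t

u = [0.0, 0.5, 1.2, 2.0, 2.5]   # hypothetical resolved velocities
nus = smagorinsky_nu_t(u, dx=0.1)
```

The fixed `cs` is exactly what fails near walls and across different flow regimes, which is the motivation for the dynamic procedure described in the abstract.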

  18. Commonalities and Contrasts in Location, Morphology and Emplacement of Large-volume Evolved Lava Flows

    NASA Astrophysics Data System (ADS)

    Domagall, A. S.; Gregg, T. K.

    2008-12-01

    Observations of active dacite domes and evolved (SiO2 wt.% >65) plinian-style eruptions are considered to reveal the typical behaviors of Si-rich volcanic systems. However, despite their lack of mention in modern volcanology textbooks, large-volume (>4 km3) evolved lava flows exist globally. These large-volume evolved lava flows have many characteristics in common regardless of location and precise tectonic setting: they are associated with other large-volume deposits (both lava flow units and ignimbrites); they are commonly found with large silicic systems; regionally, they are associated with bimodal volcanism; and eruption of these large-volume evolved flows does not generate a caldera. Large-volume evolved lava flows have low aspect ratios, tend to be uniform in thickness from the vent to the distal margins, and abruptly decrease in thickness at the flow front, where they may form enormous pahoehoe-like lobes. A lack of pyroclastic textures such as bubble-wall shards, pumice fragments, broken phenocrysts and lithics is taken as evidence for their lava flow origin rather than an ignimbrite origin despite their high SiO2 contents. The presence of a pervasive basal breccia and lobate distal margins also suggests a lava flow emplacement origin that only the most intensely rheomorphic ignimbrite could potentially mimic. Our own studies and those from the literature suggest that high eruption temperatures and peralkaline chemistries may be responsible for producing the unusually low viscosities required to account for the large lateral extents; emplacement via fissure vents and insulation of the flow may also be key in attaining great volumes.

  19. Refurbishment of the Jet Propulsion Laboratory's Large Space Simulator

    NASA Technical Reports Server (NTRS)

    Harrell, J.; Johnson, K.

    1993-01-01

    The JPL large space simulator has recently undergone a major refurbishment to restore and enhance its capabilities to provide high fidelity space simulation. The nearly completed refurbishment has included upgrading the vacuum pumping system by replacing old oil diffusion pumps with new cryogenic and turbomolecular pumps; modernizing the entire control system to utilize computerized, distributed control technology; replacing the Xenon arc lamp power supplies with new upgraded units; refinishing the primary collimating mirror; and replacing the existing integrating lens unit and the fused quartz penetration window.

  20. Large-eddy simulation of trans- and supercritical injection

    NASA Astrophysics Data System (ADS)

    Müller, H.; Niedermeier, C. A.; Jarczyk, M.; Pfitzner, M.; Hickel, S.; Adams, N. A.

    2016-07-01

    In a joint effort to develop a robust numerical tool for the simulation of injection, mixing, and combustion in liquid rocket engines at high pressure, a real-gas thermodynamics model has been implemented into two computational fluid dynamics (CFD) codes, the density-based INCA and a pressure-based version of OpenFOAM. As a part of the validation process, both codes have been used to perform large-eddy simulations (LES) of trans- and supercritical nitrogen injection. Despite the different code architecture and the different subgrid scale turbulence modeling strategy, both codes yield similar results. The agreement with the available experimental data is good.
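A real-gas thermodynamics model replaces the ideal-gas law with an equation of state valid near and above the critical point. As a hedged illustration (not the model actually implemented in INCA or the OpenFOAM variant), the Peng-Robinson equation evaluated for nitrogen near its critical point shows how strongly the real-gas pressure departs from the ideal-gas value:

```python
import math

# Peng-Robinson constants for nitrogen (standard critical data; this is
# an illustrative sketch, not the implementation used in the paper)
R = 8.314462618            # universal gas constant, J/(mol K)
TC, PC, OMEGA = 126.19, 3.3958e6, 0.0372   # Tc [K], Pc [Pa], acentric factor

def pr_pressure(T, v):
    """Pressure p(T, v) from the Peng-Robinson equation of state,
    with v the molar volume in m^3/mol."""
    a = 0.45724 * R**2 * TC**2 / PC
    b = 0.07780 * R * TC / PC
    kappa = 0.37464 + 1.54226 * OMEGA - 0.26992 * OMEGA**2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / TC))) ** 2
    return R * T / (v - b) - a * alpha / (v * v + 2.0 * b * v - b * b)

# Near-critical nitrogen: the real-gas pressure falls well below the
# ideal-gas value at the same temperature and molar volume.
T, v = 130.0, 1.0e-4
p_real = pr_pressure(T, v)
p_ideal = R * T / v
```

Capturing this departure (and the accompanying steep property gradients) is precisely what makes trans- and supercritical injection simulations demanding for both codes.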

  1. A large-volume microwave plasma source based on parallel rectangular waveguides at low pressures

    NASA Astrophysics Data System (ADS)

    Zhang, Qing; Zhang, Guixin; Wang, Shumin; Wang, Liming

    2011-02-01

    A large-volume microwave plasma with good stability, uniformity and high density is directly generated and sustained. A microwave cavity is assembled by upper and lower metal plates and two adjacently parallel rectangular waveguides with axial slots regularly positioned on their inner wide side. Microwave energy is coupled into the plasma chamber shaped by quartz glass to enclose the space of working gas at low pressures. The geometrical properties of the source and the existing modes of the electric field are determined and optimized by a numerical simulation without a plasma. The calculated field patterns are in agreement with the observed experimental results. Argon, helium, nitrogen and air are used to produce a plasma for pressures ranging from 1000 to 2000 Pa and microwave powers above 800 W. The electron density is measured with a Mach-Zehnder interferometer to be on the order of 10^14 cm^-3 and the electron temperature is obtained using atomic emission spectrometry to be in the range 2222-2264 K at a pressure of 2000 Pa at different microwave powers. It can be seen from the interferograms at different microwave powers that the distribution of the plasma electron density is stable and uniform.

  2. Simulation of large-scale rule-based models

    SciTech Connect

    Hlavacek, William S; Monnie, Michael I; Colvin, Joshua; Faseder, James

    2008-01-01

    Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogeneous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine whether a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language (BNGL), which is useful for modeling protein-protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of STOCHSIM. DYNSTOC differs from STOCHSIM by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at .
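The null-event idea can be illustrated with a toy rule set: at each time step a move and its participants are drawn at random, and the step passes as a "null event" whenever the chosen components do not satisfy the rule. The following is a deliberately simplified sketch of that strategy, not DYNSTOC's actual algorithm:

```python
import random

def null_event_binding(n_a, n_b, p_bind, p_unbind, steps, seed=1):
    """Toy null-event simulation of reversible A-B binding.  Each step
    proposes either a bind or an unbind move; the move is executed only
    if molecules in the required state exist and an acceptance draw
    succeeds, otherwise a null event occurs and time simply advances.
    This avoids enumerating the full reaction network in advance."""
    rng = random.Random(seed)
    free_a, free_b, bound = n_a, n_b, 0
    for _ in range(steps):
        if rng.random() < 0.5:                    # propose: A + B -> AB
            if free_a > 0 and free_b > 0 and rng.random() < p_bind:
                free_a -= 1; free_b -= 1; bound += 1
        else:                                     # propose: AB -> A + B
            if bound > 0 and rng.random() < p_unbind:
                free_a += 1; free_b += 1; bound -= 1
    return free_a, free_b, bound

fa, fb, ab = null_event_binding(100, 100, p_bind=0.8, p_unbind=0.2, steps=5000)
```

The price of never building the network is wasted null steps; the benefit, as the abstract notes, is that rule sets implying combinatorially huge networks remain simulable.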

  3. Micro Blowing Simulations Using a Coupled Finite-Volume Lattice-Boltzmann LES Approach

    NASA Technical Reports Server (NTRS)

    Menon, S.; Feiz, H.

    1990-01-01

    Three-dimensional large-eddy simulations (LES) of single and multiple jets in cross-flow (JICF) are conducted using the 19-bit Lattice Boltzmann Equation (LBE) method coupled with a conventional finite-volume (FV) scheme. In this coupled LBE-FV approach, the LBE-LES is employed to simulate the flow inside the jet nozzles while the FV-LES is used to simulate the crossflow. The key application of this technique is the study of the micro blowing technique (MBT) for drag control, similar to recent experiments at NASA/GRC. It is necessary to resolve the flow inside the micro-blowing and suction holes with high resolution without being limited by the FV time-step restriction. The coupled LBE-FV-LES approach achieves this objective in a computationally efficient manner. A single jet in crossflow case is used for validation purposes, and the results are compared with experimental data and a full LBE-LES simulation. Good agreement with the data is obtained. Subsequently, MBT over a flat plate with a porosity of 25% is simulated using 9 jets in a compressible cross flow at a Mach number of 0.4. It is shown that MBT suppresses the near-wall vortices and reduces the skin friction by up to 50 percent, in good agreement with experimental data.
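The Lattice Boltzmann method advances particle distribution functions through alternating collision and streaming steps. A minimal 2D, 9-velocity (D2Q9) BGK sketch of one such step follows; it is a low-dimensional analogue of the 19-velocity 3D lattice used in the paper, with an illustrative grid and relaxation time:

```python
import random

# D2Q9 lattice: discrete velocities and their quadrature weights
C = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]
W = [4/9] + [1/9]*4 + [1/36]*4

def equilibrium(rho, ux, uy):
    """Second-order Maxwellian equilibrium distributions for one cell."""
    usq = ux*ux + uy*uy
    return [w * rho * (1 + 3*(cx*ux + cy*uy)
                       + 4.5*(cx*ux + cy*uy)**2 - 1.5*usq)
            for (cx, cy), w in zip(C, W)]

def lbm_step(f, nx, ny, tau=0.8):
    """One BGK collision + periodic streaming step; f[x][y] holds the
    nine distributions of a cell.  Collision relaxes toward equilibrium,
    streaming shifts each population along its lattice velocity."""
    for x in range(nx):
        for y in range(ny):
            rho = sum(f[x][y])
            ux = sum(fi*c[0] for fi, c in zip(f[x][y], C)) / rho
            uy = sum(fi*c[1] for fi, c in zip(f[x][y], C)) / rho
            feq = equilibrium(rho, ux, uy)
            f[x][y] = [fi - (fi - fe)/tau for fi, fe in zip(f[x][y], feq)]
    g = [[[0.0]*9 for _ in range(ny)] for _ in range(nx)]
    for x in range(nx):
        for y in range(ny):
            for i, (cx, cy) in enumerate(C):
                g[(x+cx) % nx][(y+cy) % ny][i] = f[x][y][i]
    return g

nx = ny = 8
rng = random.Random(0)
f = [[equilibrium(1.0 + 0.01*rng.random(), 0.0, 0.0) for _ in range(ny)]
     for _ in range(nx)]
mass0 = sum(sum(sum(cell) for cell in col) for col in f)
f = lbm_step(f, nx, ny)
mass1 = sum(sum(sum(cell) for cell in col) for col in f)
```

Because the LBE time step is tied to the lattice rather than to a CFL condition on the FV mesh, coupling LBE inside the holes to FV outside sidesteps the FV time-step restriction the abstract mentions.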

  4. Lifetime of metastable states in a Ginzburg-Landau system: Numerical simulations at large driving forces.

    PubMed

    Umantsev, A

    2016-04-01

    We developed a "brute-force" simulation method and conducted numerical "experiments" on homogeneous nucleation in an isotropic system at large driving forces (not small supersaturations) using the stochastic Ginzburg-Landau approach. Interactions in the system are described by the asymmetric (no external field), athermal (temperature-independent driving force), tangential (simple phase diagram) Hamiltonian, which has two independent "drivers" of the phase transition: supersaturation and thermal noise. We obtained the probability distribution function of the lifetime of the metastable state and analyzed its mean value as a function of the supersaturation, noise strength, and volume. We also proved the nucleation theorem in the mean-field approximation. The results allowed us to find the thermodynamic properties of the barrier state and conclude that at large driving forces the fluctuating volumes are not independent. PMID:27176373
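The stochastic Ginzburg-Landau approach amounts to integrating a Langevin equation for the order parameter, with the driving force and thermal noise entering as the two independent "drivers" of the transition. A 1D Euler-Maruyama sketch (all parameters illustrative, not from the paper):

```python
import math
import random

def stochastic_gl(n, dt, steps, eps, noise, seed=2):
    """Euler-Maruyama integration of a 1D stochastic Ginzburg-Landau
    field:  d(phi)/dt = laplacian(phi) + phi - phi**3 + eps + noise.
    Here eps plays the role of the driving force (supersaturation) that
    tilts the double well, and the Gaussian noise term is the thermal
    driver; the field starts in the metastable well near phi = -1."""
    rng = random.Random(seed)
    phi = [-1.0] * n
    amp = noise * math.sqrt(dt)           # noise scaling for Euler-Maruyama
    for _ in range(steps):
        new = []
        for i in range(n):
            lap = phi[(i - 1) % n] - 2*phi[i] + phi[(i + 1) % n]
            drift = lap + phi[i] - phi[i]**3 + eps
            new.append(phi[i] + dt*drift + amp*rng.gauss(0.0, 1.0))
        phi = new
    return phi

phi = stochastic_gl(n=64, dt=0.05, steps=200, eps=0.3, noise=0.1)
```

A "brute-force" lifetime measurement would repeat such runs many times and record, for each, the first time the field escapes the metastable well, building up the lifetime distribution the abstract analyzes.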

  5. Large-eddy simulation of sand dune morphodynamics

    NASA Astrophysics Data System (ADS)

    Khosronejad, Ali; Sotiropoulos, Fotis; St. Anthony Falls Laboratory, University of Minnesota Team

    2015-11-01

    Sand dunes are natural features that form under complex interaction between turbulent flow and bed morphodynamics. We employ a fully-coupled 3D numerical model (Khosronejad and Sotiropoulos, 2014, Journal of Fluid Mechanics, 753:150-216) to perform high-resolution large-eddy simulations of turbulence and bed morphodynamics in a laboratory-scale mobile-bed channel to investigate the initiation, evolution and quasi-equilibrium of sand dunes (Venditti and Church, 2005, J. Geophysical Research, 110:F01009). We employ a curvilinear immersed boundary method along with convection-diffusion and bed-morphodynamics modules to simulate the suspended sediment and bed-load transport, respectively. The coupled simulations were carried out on a grid with more than 100 million grid nodes and covered about 3 hours of physical time of dune evolution. The simulations provide the first complete description of sand dune formation and long-term evolution. The geometric characteristics of the simulated dunes are shown to be in excellent agreement with observed data obtained across a broad range of scales. This work was supported by NSF Grant EAR-0120914 (as part of the National Center for Earth-Surface Dynamics). Computational resources were provided by the University of Minnesota Supercomputing Institute.

  6. Process control of large-scale finite element simulation software

    SciTech Connect

    Spence, P.A.; Weingarten, L.I.; Schroder, K.; Tung, D.M.; Sheaffer, D.A.

    1996-02-01

    We have developed a methodology for coupling large-scale numerical codes with process control algorithms. Closed-loop simulations were demonstrated using the Sandia-developed finite element thermal code TACO and the commercially available finite element thermal-mechanical code ABAQUS. This new capability enables us to use computational simulations for designing and prototyping advanced process-control systems. By testing control algorithms on simulators before building and testing hardware, enormous time and cost savings can be realized. The need for a closed-loop simulation capability was demonstrated in a detailed design study of a rapid-thermal-processing reactor under development by CVC Products Inc. Using a thermal model of the RTP system as a surrogate for the actual hardware, we were able to generate response data needed for controller design. We then evaluated the performance of both the controller design and the hardware design by using the controller to drive the finite element model. The controlled simulations provided data on wafer temperature uniformity as a function of ramp rate, temperature sensor locations, and controller gain. This information, which is critical to reactor design, cannot be obtained from typical open-loop simulations.
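The closed-loop idea (a controller driving a simulated plant instead of hardware) can be sketched with a PI controller and a first-order thermal surrogate standing in for the finite element model. All gains and constants below are illustrative, not values from the CVC study:

```python
def closed_loop_rtp(setpoint, steps, dt=0.1, tau=5.0, gain=2.0,
                    kp=4.0, ki=1.5):
    """Closed-loop simulation sketch: a PI controller drives a
    first-order thermal model T' = (gain*power - T)/tau, which stands in
    for the finite element reactor model (the role TACO/ABAQUS played as
    a surrogate for the real RTP hardware).  Returns the temperature
    history, from which ramp tracking and overshoot can be judged."""
    T, integral, history = 0.0, 0.0, []
    for _ in range(steps):
        error = setpoint - T
        integral += error * dt
        power = kp * error + ki * integral      # PI control law
        T += dt * (gain * power - T) / tau      # surrogate plant response
        history.append(T)
    return history

temps = closed_loop_rtp(setpoint=100.0, steps=600)
```

Exactly as in the abstract, one can sweep controller gains or sensor models in such a loop and read off performance (settling, uniformity) before any hardware exists.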

  7. 3-D dynamic rupture simulations by a finite volume method

    NASA Astrophysics Data System (ADS)

    Benjemaa, M.; Glinsky-Olivier, N.; Cruz-Atienza, V. M.; Virieux, J.

    2009-07-01

    Dynamic rupture of a 3-D spontaneous crack of arbitrary shape is investigated using a finite volume (FV) approach. The full domain is decomposed into tetrahedra, whereas the surface on which the rupture takes place is discretized with triangles that are faces of tetrahedra. First of all, the elastodynamic equations are recast in a pseudo-conservative form for an easy application of the FV discretization. Explicit boundary conditions are given using criteria based on the conservation of discrete energy through the crack surface. Using a stress-threshold criterion, these conditions specify fluxes through those triangles that have suffered rupture. On these broken surfaces, stress follows a linear slip-weakening law, although other friction laws can be implemented. For the Problem Version 3 of the dynamic-rupture code verification exercise conducted by the SCEC/USGS, numerical solutions on a planar fault exhibit a very high convergence rate and are in good agreement with the reference solution provided by a finite difference (FD) technique. For a non-planar fault of parabolic shape, numerical solutions agree satisfactorily with those obtained with a semi-analytical boundary integral method in terms of shear stress amplitudes, stopping-phase arrival times and stress overshoots. Differences between solutions are attributed to the low-order interpolation of the FV approach, whose results are particularly sensitive to the mesh regularity (structured/unstructured). We expect this method, which is well adapted for multiprocessor parallel computing, to be competitive with others for solving large-scale dynamic rupture scenarios of seismic sources in the near future.
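The linear slip-weakening law used on the broken surfaces is simple to state: fault strength drops linearly with slip from its static to its dynamic level over a critical slip distance, then stays at the dynamic level. A sketch with illustrative parameter values (not the benchmark's actual parameters):

```python
def slip_weakening_strength(slip, tau_s, tau_d, d_c):
    """Linear slip-weakening friction law: strength decreases linearly
    from the static level tau_s to the dynamic level tau_d as slip grows
    from 0 to the critical slip distance d_c, and stays at tau_d beyond."""
    if slip >= d_c:
        return tau_d
    return tau_s - (tau_s - tau_d) * slip / d_c

# Strength at zero slip, at mid-weakening, and beyond d_c (Pa; illustrative)
s0 = slip_weakening_strength(0.0, tau_s=81.6e6, tau_d=63.0e6, d_c=0.40)
s1 = slip_weakening_strength(0.2, tau_s=81.6e6, tau_d=63.0e6, d_c=0.40)
s2 = slip_weakening_strength(1.0, tau_s=81.6e6, tau_d=63.0e6, d_c=0.40)
```

In the FV scheme, this strength caps the shear traction on each ruptured triangle, which in turn sets the fluxes through those faces.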

  8. Large-eddy simulation using the finite element method

    SciTech Connect

    McCallen, R.C.; Gresho, P.M.; Leone, J.M. Jr.; Kollmann, W.

    1993-10-01

    In a large-eddy simulation (LES) of turbulent flows, the large-scale motion is calculated explicitly, while the small-scale motion is modeled with semi-empirical relations. Typically, finite difference or spectral numerical schemes are used to generate an LES; the use of finite element methods (FEM) has been far less prominent. In this study, we demonstrate that FEM in combination with LES provides a viable tool for the study of turbulent, separating channel flows, specifically the flow over a two-dimensional backward-facing step. The combination of these methodologies brings together the advantages of each: LES provides a high degree of accuracy with a minimum of empiricism for turbulence modeling, and FEM provides a robust way to simulate flow in very complex domains of practical interest. Such a combination should prove very valuable to the engineering community.

  9. Large Eddy Simulation of Cryogenic Injection Processes at Supercritical Pressure

    NASA Technical Reports Server (NTRS)

    Oefelein, Joseph C.; Garcia, Roberto (Technical Monitor)

    2002-01-01

    This paper highlights results from the first of a series of hierarchical simulations aimed at assessing the modeling requirements for application of the large eddy simulation technique to cryogenic injection and combustion processes in liquid rocket engines. The focus is on liquid-oxygen-hydrogen coaxial injectors at a condition where the liquid-oxygen is injected at a subcritical temperature into a supercritical environment. For this situation a diffusion dominated mode of combustion occurs in the presence of exceedingly large thermophysical property gradients. Though continuous, these gradients approach the behavior of a contact discontinuity. Significant real gas effects and transport anomalies coexist locally in colder regions of the flow, with ideal gas and transport characteristics occurring within the flame zone. The current focal point is on the interfacial region between the liquid-oxygen core and the coaxial hydrogen jet where the flame anchors itself.

  10. Time-Domain Filtering for Spatial Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1997-01-01

    An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.
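A causal time-domain filter uses only the past history at each point, which is what makes it natural for localized (point) sources. A minimal sketch using a first-order exponential filter, one common realization of time-domain filtering (the paper's actual filter kernel may differ):

```python
import math
import random

def time_filtered(signal, dt, t_filter):
    """Causal exponential time-domain filter:
    u_bar[n] = a*u_bar[n-1] + (1-a)*u[n] with a = exp(-dt/t_filter).
    Unlike a spatial filter, it needs no neighboring points, only the
    past history at a single location."""
    a = math.exp(-dt / t_filter)
    out, acc = [], signal[0]
    for u in signal:
        acc = a * acc + (1.0 - a) * u
        out.append(acc)
    return out

# A slow "resolved" signal buried in fast fluctuations
rng = random.Random(3)
raw = [math.sin(0.01 * n) + 0.3 * rng.gauss(0.0, 1.0) for n in range(2000)]
smooth = time_filtered(raw, dt=1.0, t_filter=20.0)
```

The filtered field retains the slow, resolvable motion while damping fluctuations faster than `t_filter`; in the LES context, those damped scales are what the subgrid-scale model must represent.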

  11. Large Eddy Simulations of Severe Convection Induced Turbulence

    NASA Technical Reports Server (NTRS)

    Ahmad, Nash'at; Proctor, Fred

    2011-01-01

    Convective storms can pose a serious risk to aviation operations since they are often accompanied by turbulence, heavy rain, hail, icing, lightning, strong winds, and poor visibility. They can cause major delays in air traffic due to the re-routing of flights, and by disrupting operations at the airports in the vicinity of the storm system. In this study, the Terminal Area Simulation System is used to simulate five different convective events ranging from a mesoscale convective complex to isolated storms. The occurrence of convection induced turbulence is analyzed from these simulations. The validation of model results with the radar data and other observations is reported and an aircraft-centric turbulence hazard metric calculated for each case is discussed. The turbulence analysis showed that large pockets of significant turbulence hazard can be found in regions of low radar reflectivity. Moderate and severe turbulence was often found in building cumulus turrets and overshooting tops.

  12. Reconstructing a Large-Scale Population for Social Simulation

    NASA Astrophysics Data System (ADS)

    Fan, Zongchen; Meng, Rongqing; Ge, Yuanzheng; Qiu, Xiaogang

    The advent of social simulation has provided an opportunity for research on social systems, and researchers increasingly describe the components of social systems at a more detailed level. Any simulation needs the support of population data to initialize and implement the simulation systems. However, it is impossible to obtain data that provide full information about individuals and households. We propose a two-step method to reconstruct a large-scale population for a Chinese city according to Chinese culture. Firstly, a baseline population is generated by gathering individuals into households one by one; secondly, social relationships such as friendship are assigned to the baseline population. Through a case study, a population of 3,112,559 individuals gathered into 1,133,835 households is reconstructed for Urumqi city, and the results show that the generated data match the real data quite well. The generated data can be applied to support the modeling of social phenomena.
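The two-step method can be sketched directly: first gather individuals into households one by one, then overlay social relationships on the baseline population. The household-size distribution and friendship rule below are illustrative stand-ins, not the actual data or rules used for Urumqi:

```python
import random

def build_population(n_households, seed=4):
    """Two-step synthetic population sketch.  Step 1: draw a size for
    each household and assign consecutively numbered individuals to it.
    Step 2: wire random friendship links between individuals.  The size
    weights are hypothetical, not census figures."""
    rng = random.Random(seed)
    sizes = [1, 2, 3, 4, 5]
    weights = [0.15, 0.25, 0.35, 0.18, 0.07]
    households, person_id = [], 0
    for _ in range(n_households):
        size = rng.choices(sizes, weights)[0]
        households.append(list(range(person_id, person_id + size)))
        person_id += size
    # step 2: random friendships across the whole baseline population
    friends = []
    for _ in range(person_id * 2):
        a, b = rng.sample(range(person_id), 2)
        friends.append((a, b))
    return households, friends

households, friends = build_population(1000)
n_people = sum(len(h) for h in households)
```

A realistic generator would condition the draws on marginal statistics (age, household composition) so the synthetic totals reproduce the published aggregates, which is how the generated data can "match the real data" without individual-level records.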

  13. Microwave holography of large reflector antennas - Simulation algorithms

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.

    1985-01-01

    The performance of large reflector antennas can be improved by identifying the location and amount of their surface distortions and correcting them. To determine the accuracy of the constructed surface profiles, simulation studies are used to incorporate both the effects of systematic and random distortions, particularly the effects of the displaced surface panels. In this paper, different simulation models are investigated, emphasizing a model based on the vector diffraction analysis of a curved reflector with displaced panels. The simulated far-field patterns are then used to reconstruct the location and amount of displacement of the surface panels by employing a fast Fourier transform/iterative procedure. The sensitivity of the microwave holography technique based on the number of far-field sampled points, level of distortions, polarizations, illumination tapers, etc., is also examined.
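The core relation exploited here is that the far-field pattern and the aperture field are (approximately) a Fourier-transform pair, so phase errors introduced by displaced panels can be recovered by back-transforming the sampled far field. A much-simplified 1D sketch of that relation using a naive DFT (the actual technique works from measured patterns and iterates; names and values here are illustrative):

```python
import cmath
import math

def dft(x, inverse=False):
    """Naive O(N^2) discrete Fourier transform; an FFT would be used in
    practice for realistically sized far-field sample grids."""
    n = len(x)
    sign = 1.0 if inverse else -1.0
    out = []
    for k in range(n):
        s = sum(x[j] * cmath.exp(sign * 2j * math.pi * k * j / n)
                for j in range(n))
        out.append(s / n if inverse else s)
    return out

# Aperture field of a reflector whose panels add small phase errors (rad)
panel_err = [0.0, 0.1, -0.2, 0.05, 0.0, 0.15, -0.1, 0.0]
aperture = [cmath.exp(1j * e) for e in panel_err]

far_field = dft(aperture)                   # the "measured" far-field pattern
recovered = dft(far_field, inverse=True)    # back-transform to the aperture
rec_err = [cmath.phase(a) for a in recovered]
```

The recovered aperture phase maps directly to panel displacement (a half-wavelength of path per phase cycle); the FFT/iterative procedure in the abstract refines this picture when only pattern amplitudes or sparse samples are available.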

  14. Lightweight computational steering of very large scale molecular dynamics simulations

    SciTech Connect

    Beazley, D.M.; Lomdahl, P.S.

    1996-09-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy-to-use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.
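The steering idea, hooks through which a scripting layer can inspect or modify a running simulation, can be sketched with a toy MD loop that invokes user-registered callbacks each step. This is a schematic stand-in for the scripting-language integration the abstract describes, not the authors' system:

```python
def run_md(steps, dt, callbacks):
    """Tiny MD loop (one particle in a harmonic well, velocity Verlet)
    with user-registered callback hooks.  In a steering system, such
    hooks are where scripted analysis or control code attaches to the
    running C simulation without modifying its core loop."""
    x, v = 1.0, 0.0
    force = lambda y: -y                  # harmonic force, spring k = 1
    f = force(x)
    log = []
    for step in range(steps):
        v += 0.5 * dt * f                 # velocity Verlet half-kick
        x += dt * v                       # drift
        f = force(x)
        v += 0.5 * dt * f                 # second half-kick
        for cb in callbacks:              # steering / analysis hooks
            cb(step, x, v, log)
    return x, v, log

# Example scripted "analysis module": record total energy every 100 steps
def energy_probe(step, x, v, log):
    if step % 100 == 0:
        log.append(0.5 * v * v + 0.5 * x * x)

x, v, log = run_md(steps=1000, dt=0.01, callbacks=[energy_probe])
```

The same pattern scales conceptually: the heavy C/parallel code owns the atoms, while lightweight scripted callbacks decide what to sample, visualize, or change mid-run.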

  15. Production of large resonant plasma volumes in microwave electron cyclotron resonance ion sources

    DOEpatents

    Alton, G.D.

    1998-11-24

    Microwave injection methods are disclosed for enhancing the performance of existing electron cyclotron resonance (ECR) ion sources. The methods are based on the use of high-power diverse frequency microwaves, including variable-frequency, multiple-discrete-frequency, and broadband microwaves. The methods effect large resonant "volume" ECR regions in the ion sources. The creation of these large ECR plasma volumes permits coupling of more microwave power into the plasma, resulting in the heating of a much larger electron population to higher energies, the effect of which is to produce higher charge state distributions and much higher intensities within a particular charge state than possible in present ECR ion sources. 5 figs.

  16. Production of large resonant plasma volumes in microwave electron cyclotron resonance ion sources

    DOEpatents

    Alton, Gerald D.

    1998-01-01

    Microwave injection methods for enhancing the performance of existing electron cyclotron resonance (ECR) ion sources. The methods are based on the use of high-power diverse frequency microwaves, including variable-frequency, multiple-discrete-frequency, and broadband microwaves. The methods effect large resonant "volume" ECR regions in the ion sources. The creation of these large ECR plasma volumes permits coupling of more microwave power into the plasma, resulting in the heating of a much larger electron population to higher energies, the effect of which is to produce higher charge state distributions and much higher intensities within a particular charge state than possible in present ECR ion sources.

  17. Large-volume en-bloc staining for electron microscopy-based connectomics

    PubMed Central

    Hua, Yunfeng; Laserstein, Philip; Helmstaedter, Moritz

    2015-01-01

    Large-scale connectomics requires dense staining of neuronal tissue blocks for electron microscopy (EM). Here we report a large-volume dense en-bloc EM staining protocol that overcomes the staining gradients, which so far substantially limited the reconstructable volumes in three-dimensional (3D) EM. Our protocol provides densely reconstructable tissue blocks from mouse neocortex sized at least 1 mm in diameter. By relaxing the constraints on precise topographic sample targeting, it makes the correlated functional and structural analysis of neuronal circuits realistic. PMID:26235643

  18. A Framework for End to End Simulations of the Large Synoptic Survey Telescope

    NASA Astrophysics Data System (ADS)

    Gibson, R. R.; Ahmad, Z.; Bankert, J.; Bard, D.; Connolly, A. J.; Chang, C.; Gilmore, K.; Grace, E.; Hannel, M.; Jernigan, J. G.; Jones, L.; Kahn, S. M.; Krughoff, K. S.; Lorenz, S.; Marshall, S.; Nagarajan, S.; Peterson, J. R.; Pizagno, J.; Rasmussen, A. P.; Shmakova, M.; Silvestri, N.; Todd, N.; Young, M.

    2011-07-01

    As observatories get bigger and more complicated to operate, risk mitigation techniques become increasingly important. Additionally, the size and complexity of data coming from the next generation of surveys will present enormous challenges in how we process, store, and analyze these data. End-to-end simulations of telescopes with the scope of LSST are essential to correct problems and verify science capabilities as early as possible. A simulator can also determine how defects and trade-offs in individual subsystems impact the overall design requirements. Here, we present the architecture, implementation, and results of the source simulation framework for the Large Synoptic Survey Telescope (LSST). The framework creates time-based realizations of astronomical objects and formats the output for use in many different survey contexts (i.e., image simulation, reference catalogs, calibration catalogs, and simulated science outputs). The simulations include Milky Way, cosmological, and solar system models as well as transient and variable objects. All model objects can be sampled with the LSST cadence from any operations simulator run. The result is a representative, full-sky simulation of LSST data that can be used to determine telescope performance, the feasibility of science goals, and strategies for processing LSST-scale data volumes.

  19. Simulation requirements for the Large Deployable Reflector (LDR)

    NASA Technical Reports Server (NTRS)

    Soosaar, K.

    1984-01-01

Simulation tools for the large deployable reflector (LDR) are discussed. These tools are often transfer-function equations; however, transfer functions are inadequate to represent time-varying systems with multiple control systems of overlapping bandwidths and multi-input, multi-output features. Frequency-domain approaches are useful design tools, but a full-up simulation is needed. Because the high-frequency, multi-degree-of-freedom components encountered would require a dedicated computer, non-real-time simulation is preferred. Large numerical analysis software programs are useful only to receive inputs and provide outputs to the next block, and should be kept out of the direct simulation loop. The simulation comprises the following blocks. The thermal model block is a classical, non-steady-state heat transfer program. The quasistatic block deals with problems associated with rigid-body control of reflector segments. The steady state block assembles data into equations of motion and dynamics. A differential ray trace is obtained to establish the change in wave aberrations. The observation scene is described. The focal plane module converts the photon intensity impinging on it into electron streams or into permanent film records.

  20. High Speed Networking and Large-scale Simulation in Geodynamics

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Gary, Patrick; Seablom, Michael; Truszkowski, Walt; Odubiyi, Jide; Jiang, Weiyuan; Liu, Dong

    2004-01-01

Large-scale numerical simulation has been one of the most important approaches for understanding global geodynamical processes. In this approach, peta-scale floating point operations (pflops) are often required to carry out a single physically-meaningful numerical experiment. For example, to model convective flow in the Earth's core and generation of the geomagnetic field (geodynamo), simulation for one magnetic free-decay time (approximately 15000 years) with a modest resolution of 150 in each of the three spatial dimensions would require approximately 0.2 pflops. If such a numerical model is used to predict geomagnetic secular variation over decades and longer, with e.g. an ensemble Kalman filter assimilation approach, approximately 30 (and perhaps more) independent simulations of similar scales would be needed for one data assimilation analysis. Obviously, such a simulation would require an enormous computing resource that exceeds the capacity of any single facility currently at our disposal. One solution is to utilize a very fast network (e.g. 10Gb optical networks) and available middleware (e.g. Globus Toolkit) to allocate available but often heterogeneous resources for such large-scale computing efforts. At NASA GSFC, we are experimenting with such an approach by networking several clusters for geomagnetic data assimilation research. We shall present our initial testing results at the meeting.

  1. Simulating subsurface heterogeneity improves large-scale water resources predictions

    NASA Astrophysics Data System (ADS)

    Hartmann, A. J.; Gleeson, T.; Wagener, T.; Wada, Y.

    2014-12-01

Heterogeneity is abundant everywhere across the hydrosphere. It exists in the soil, the vadose zone and the groundwater. In large-scale hydrological models, subsurface heterogeneity is usually not considered. Instead, average or representative values are chosen for each of the simulated grid cells, not incorporating any sub-grid variability. This may lead to unreliable predictions when the models are used for assessing future water resources availability, floods or droughts, or when they are used for recommendations for more sustainable water management. In this study we use a novel, large-scale model that takes into account sub-grid heterogeneity for the simulation of groundwater recharge by using statistical distribution functions. We choose all regions over Europe that are composed of carbonate rock (~35% of the total area) because the well understood dissolvability of carbonate rocks (karstification) allows for assessing the strength of subsurface heterogeneity. Applying the model with historic data and future climate projections, we show that subsurface heterogeneity lowers the vulnerability of groundwater recharge to hydro-climatic extremes and future climate change. Comparing our simulations with the PCR-GLOBWB model, we can quantify the deviations of simulations for different sub-regions in Europe.
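The idea of replacing a single representative value with a sub-grid statistical distribution can be sketched as follows. The lognormal capacity distribution, the threshold recharge rule, and all parameter values here are illustrative assumptions, not the model described in the abstract:

```python
import numpy as np

def cell_recharge(precip, soil_capacity_mean, n_subgrid=1000, cv=0.5, seed=0):
    """Toy sub-grid recharge: soil storage capacity varies within a grid cell
    following an assumed lognormal distribution; recharge is the precipitation
    exceeding capacity, averaged over the sub-grid sample."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(np.log(1 + cv**2))              # lognormal shape from CV
    mu = np.log(soil_capacity_mean) - 0.5 * sigma**2
    capacity = rng.lognormal(mu, sigma, n_subgrid)  # sub-grid capacity sample
    recharge = np.maximum(precip - capacity, 0.0)   # threshold-excess rule
    return recharge.mean()

# A nearly homogeneous cell produces no recharge when precipitation is below
# the mean capacity, but heterogeneity lets some sub-areas saturate:
het = cell_recharge(precip=50.0, soil_capacity_mean=80.0, cv=1.0)
hom = cell_recharge(precip=50.0, soil_capacity_mean=80.0, cv=0.01)
```

This is the qualitative effect the abstract exploits: sub-grid variability changes the grid-cell-mean response even when the mean forcing and mean property are identical.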

  2. Large-eddy simulation of flow past a circular cylinder

    NASA Technical Reports Server (NTRS)

    Mittal, R.

    1995-01-01

Some of the most challenging applications of large-eddy simulation are those in complex geometries where spectral methods are of limited use. For such applications more conventional methods such as finite difference or finite element have to be used. However, it has become clear in recent years that dissipative numerical schemes which are routinely used in viscous flow simulations are not good candidates for use in LES of turbulent flows. Except in cases where the flow is extremely well resolved, it has been found that upwind schemes tend to damp out a significant portion of the small scales that can be resolved on the grid. Furthermore, it has been found that even specially designed higher-order upwind schemes that have been used successfully in the direct numerical simulation of turbulent flows produce too much dissipation when used in conjunction with large-eddy simulation. The objective of the current study is to perform an LES of incompressible flow past a circular cylinder at a Reynolds number of 3900 using a solver which employs an energy-conservative second-order central difference scheme for spatial discretization, and to compare the results obtained with those of Beaudan & Moin (1994) and with experiments in order to assess the performance of the central scheme for this relatively complex geometry.
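The dissipation argument above can be demonstrated on the simplest possible case: linear advection of a sine wave, comparing a first-order upwind difference with a second-order central difference. The grid size, CFL number, and RK4 time stepping are arbitrary choices for this sketch, not the solver used in the study:

```python
import numpy as np

def advect(scheme, n=64, steps=200, cfl=0.2):
    """Advect a sine wave (speed c = 1) on a periodic grid and return the
    ratio of final to initial mean-square 'energy'. Upwind differencing is
    dissipative; central differencing is (nearly) energy-conserving."""
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    u = np.sin(x)
    e0 = np.mean(u**2)
    dx = x[1] - x[0]
    dt = cfl * dx

    def dudx(u):
        if scheme == "central":   # 2nd-order central, non-dissipative
            return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
        else:                     # 1st-order upwind, dissipative
            return (u - np.roll(u, 1)) / dx

    for _ in range(steps):        # classical RK4 time integration
        k1 = -dudx(u)
        k2 = -dudx(u + 0.5 * dt * k1)
        k3 = -dudx(u + 0.5 * dt * k2)
        k4 = -dudx(u + dt * k3)
        u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return np.mean(u**2) / e0

e_central = advect("central")   # energy nearly preserved
e_upwind = advect("upwind")     # energy noticeably damped
```

Even for this single well-resolved mode, the upwind scheme visibly drains energy, which is exactly the behavior that makes such schemes problematic for LES.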

  3. Accelerating large cardiac bidomain simulations by arnoldi preconditioning.

    PubMed

    Deo, Makarand; Bauer, Steffen; Plank, Gernot; Vigmond, Edward

    2006-01-01

Bidomain simulations of cardiac systems often involve solving large, sparse, linear systems of the form Ax=b. These simulations are computationally very expensive in terms of run time and memory requirements. Therefore, efficient solvers are essential to keep simulations tractable. In this paper, an efficient preconditioner for the conjugate gradient (CG) method based on system order reduction using the Arnoldi method (A-PCG) is explained. Large order systems generated during cardiac bidomain simulations using a finite element method formulation are solved using the A-PCG method. Its performance is compared with incomplete LU (ILU) preconditioning. Results indicate that the A-PCG estimates an approximate solution considerably faster than the ILU, often within a single iteration. To reduce the computational demands in terms of memory and run time, the use of a cascaded preconditioner is suggested: the A-PCG can be applied to quickly obtain an approximate solution, and subsequently a cheap iterative method such as successive overrelaxation (SOR) is applied to refine the solution to the desired accuracy. The memory requirements are lower than those of a direct LU factorization but higher than those of the ILU method. The proposed scheme is shown to yield significant speedups when solving time-evolving systems. PMID:17946209
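The core order-reduction idea, projecting Ax=b onto a small Krylov subspace built by the Arnoldi process and solving the projected system, can be sketched as follows. This is a minimal full-orthogonalization illustration on an assumed well-conditioned SPD test matrix, not the authors' A-PCG implementation:

```python
import numpy as np

def arnoldi_approx_solve(A, b, m=30):
    """Build an m-dimensional Krylov basis for (A, b) with the Arnoldi
    process (modified Gram-Schmidt) and solve the small projected system,
    giving a cheap approximate solution of Ax = b."""
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):              # orthogonalize against basis
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-12:             # (lucky) breakdown: subspace found
            m = j + 1
            break
        Q[:, j + 1] = v / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = beta
    y = np.linalg.solve(H[:m, :m], e1)      # small m-by-m projected system
    return Q[:, :m] @ y

# Assumed sparse SPD test matrix (diagonally dominant tridiagonal):
n = 200
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x_approx = arnoldi_approx_solve(A, b, m=30)
res = np.linalg.norm(b - A @ x_approx) / np.linalg.norm(b)
```

In a cascaded scheme of the kind the abstract describes, an approximation like `x_approx` would then be handed to a cheap smoother (e.g., SOR) for final refinement.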

  4. Upscaling of elastic properties for large scale geomechanical simulations

    NASA Astrophysics Data System (ADS)

    Chalon, F.; Mainguy, M.; Longuemare, P.; Lemonnier, P.

    2004-09-01

Large scale geomechanical simulations are being increasingly used to model the compaction of stress-dependent reservoirs, predict the long term integrity of underground radioactive waste disposals, and analyse the viability of hot-dry-rock geothermal sites. These large scale simulations require the definition of homogeneous mechanical properties for each geomechanical cell, whereas the rock properties are expected to vary at a smaller scale. Therefore, this paper proposes a new methodology that makes it possible to define the equivalent mechanical properties of the geomechanical cells using the fine scale information given in the geological model. This methodology is implemented on a synthetic reservoir case and two upscaling procedures providing the effective elastic properties of Hooke's law are tested. The first upscaling procedure is an analytical method for a perfectly stratified rock mass, whereas the second procedure computes lower and upper bounds of the equivalent properties with no assumption on the small scale heterogeneity distribution. Both procedures are applied to one geomechanical cell extracted from the reservoir structure. The results show that the analytical and numerical upscaling procedures provide accurate estimations of the effective parameters. Furthermore, a large scale simulation using the homogenized properties of each geomechanical cell calculated with the analytical method demonstrates that the overall behaviour of the reservoir structure is well reproduced for two different loading cases.
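A minimal sketch of the bounding idea is the classical Voigt (arithmetic) and Reuss (harmonic) averages, which bracket the effective modulus of a heterogeneous cell. The two-layer moduli and volume fractions below are invented for illustration; this is not the paper's specific procedure:

```python
import numpy as np

def voigt_reuss_bounds(moduli, fractions):
    """Upper (Voigt, uniform-strain) and lower (Reuss, uniform-stress)
    bounds on the effective elastic modulus of a cell made of constituents
    with the given moduli and volume fractions."""
    f = np.asarray(fractions, dtype=float)
    M = np.asarray(moduli, dtype=float)
    assert abs(f.sum() - 1.0) < 1e-12          # fractions must sum to one
    upper = np.sum(f * M)                      # Voigt: arithmetic average
    lower = 1.0 / np.sum(f / M)                # Reuss: harmonic average
    return lower, upper

# Two-layer cell: 60% stiff rock (30 GPa), 40% soft rock (5 GPa)
lo_, up_ = voigt_reuss_bounds([30.0, 5.0], [0.6, 0.4])   # -> (10.0, 20.0) GPa
```

For a perfectly stratified medium loaded parallel or perpendicular to the layering, these two averages are in fact the exact effective moduli, which is why analytical layered-media methods and bound-based numerical methods can be compared as in the abstract.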

  5. Toxicity Profile With a Large Prostate Volume After External Beam Radiotherapy for Localized Prostate Cancer

    SciTech Connect

    Pinkawa, Michael Fischedick, Karin; Asadpour, Branka; Gagel, Bernd; Piroth, Marc D.; Nussen, Sandra; Eble, Michael J.

    2008-01-01

Purpose: To assess the impact of prostate volume on health-related quality of life (HRQOL) before and at different intervals after radiotherapy for prostate cancer. Methods and Materials: A group of 204 patients was surveyed prospectively before (Time A), at the last day (Time B), 2 months after (Time C), and 16 months (median) after (Time D) radiotherapy, with a validated questionnaire (Expanded Prostate Cancer Index Composite). The group was divided into subgroups with a small (11-43 cm³) and a large (44-151 cm³) prostate volume. Results: Patients with large prostates presented with lower urinary bother scores (median 79 vs. 89; p = 0.01) before treatment. Urinary function/bother scores for patients with large prostates decreased significantly compared to patients with small prostates due to irritative/obstructive symptoms only at Time B (pain with urination more than once daily in 48% vs. 18%; p < 0.01). Health-related quality of life did not differ significantly between the two patient groups at Times C and D. In contrast to a large prostate, a small initial bladder volume (with associated higher dose-volume load) was predictive for lower urinary bother scores both in the acute and late phase; at Time B it predisposed for pollakiuria but not for pain. Patients with neoadjuvant hormonal therapy reached significantly lower HRQOL scores in several domains (affecting only incontinence in the urinary domain), despite a smaller prostate volume (34 cm³ vs. 47 cm³; p < 0.01). Conclusions: Patients with a large prostate volume have a greater risk of irritative/obstructive symptoms (particularly dysuria) in the acute radiotherapy phase. These symptoms recover rapidly and do not influence long-term HRQOL.

  6. Large eddy simulation and its implementation in the COMMIX code.

    SciTech Connect

    Sun, J.; Yu, D.-H.

    1999-02-15

    Large eddy simulation (LES) is a numerical simulation method for turbulent flows and is derived by spatial averaging of the Navier-Stokes equations. In contrast with the Reynolds-averaged Navier-Stokes equations (RANS) method, LES is capable of calculating transient turbulent flows with greater accuracy. Application of LES to differing flows has given very encouraging results, as reported in the literature. In recent years, a dynamic LES model that presented even better results was proposed and applied to several flows. This report reviews the LES method and its implementation in the COMMIX code, which was developed at Argonne National Laboratory. As an example of the application of LES, the flow around a square prism is simulated, and some numerical results are presented. These results include a three-dimensional simulation that uses a code developed by one of the authors at the University of Notre Dame, and a two-dimensional simulation that uses the COMMIX code. The numerical results are compared with experimental data from the literature and are found to be in very good agreement.

  7. Towards Large Eddy Simulation of gas turbine compressors

    NASA Astrophysics Data System (ADS)

    McMullan, W. A.; Page, G. J.

    2012-07-01

    With increasing computing power, Large Eddy Simulation could be a useful simulation tool for gas turbine axial compressor design. This paper outlines a series of simulations performed on compressor geometries, ranging from a Controlled Diffusion Cascade stator blade to the periodic sector of a stage in a 3.5 stage axial compressor. The simulation results show that LES may offer advantages over traditional RANS methods when off-design conditions are considered - flow regimes where RANS models often fail to converge. The time-dependent nature of LES permits the resolution of transient flow structures, and can elucidate new mechanisms of vorticity generation on blade surfaces. It is shown that accurate LES is heavily reliant on both the near-wall mesh fidelity and the ability of the imposed inflow condition to recreate the conditions found in the reference experiment. For components embedded in a compressor this requires the generation of turbulence fluctuations at the inlet plane. A recycling method is developed that improves the quality of the flow in a single stage calculation of an axial compressor, and indicates that future developments in both the recycling technique and computing power will bring simulations of axial compressors within reach of industry in the coming years.

  8. Large Dynamic Range Simulations of Galaxies Hosting Supermassive Black Holes

    NASA Astrophysics Data System (ADS)

    Levine, Robyn

    2011-08-01

The co-evolution of supermassive black holes (SMBHs) and their host galaxies is a rich problem, spanning a large dynamic range and depending on many physical processes. Simulating the transport of gas and angular momentum from super-galactic scales all the way down to the outer edge of the black hole's accretion disk requires sophisticated numerical techniques with extensive treatment of baryonic physics. We use a hydrodynamic adaptive mesh refinement simulation to follow the growth and evolution of a typical disk galaxy hosting an SMBH, in a cosmological context (covering a dynamical range of 10 million!). We have adopted a piecemeal approach, focusing our attention on the gas dynamics in the central few hundred parsecs of the simulated galaxy (with boundary conditions provided by the larger cosmological simulation), and beginning with a simplified picture (no mergers or feedback). In this scenario, we find that the circumnuclear disk remains marginally stable against catastrophic fragmentation, allowing stochastic fueling of gas into the vicinity of the SMBH. I will discuss the successes and the limitations of these simulations, and their future direction.

  9. Large Eddy Simulations using Lattice Boltzmann algorithms. Final report

    SciTech Connect

    Serling, J.D.

    1993-09-28

    This report contains the results of a study performed to implement eddy-viscosity models for Large-Eddy-Simulations (LES) into Lattice Boltzmann (LB) algorithms for simulating fluid flows. This implementation requires modification of the LB method of simulating the incompressible Navier-Stokes equations to allow simulation of the filtered Navier-Stokes equations with some subgrid model for the Reynolds stress term. We demonstrate that the LB method can indeed be used for LES by simply locally adjusting the value of the BGK relaxation time to obtain the desired eddy-viscosity. Thus, many forms of eddy-viscosity models including the standard Smagorinsky model or the Dynamic model may be implemented using LB algorithms. Since underresolved LB simulations often lead to instability, the LES model actually serves to stabilize the method. An alternative method of ensuring stability is presented which requires that entropy increase during the collision step of the LB method. Thus, an alternative collision operator is locally applied if the entropy becomes too low. This stable LB method then acts as an LES scheme that effectively introduces its own eddy viscosity to damp short wavelength oscillations.
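The local adjustment of the BGK relaxation time can be sketched as follows, assuming lattice units with cs² = 1/3 and a Smagorinsky eddy viscosity added to the molecular one. The Smagorinsky constant (0.17) and the input values are illustrative assumptions:

```python
def les_relaxation_time(tau0, strain_rate_mag, smagorinsky_c=0.17, delta=1.0):
    """Locally adjusted BGK relaxation time for a lattice Boltzmann LES step
    (lattice units, cs^2 = 1/3, dt = 1): the Smagorinsky eddy viscosity
    (C_s * delta)^2 * |S| is added to the molecular viscosity, and the total
    viscosity is mapped back to a relaxation time via nu = cs^2 * (tau - 1/2)."""
    cs2 = 1.0 / 3.0
    nu0 = cs2 * (tau0 - 0.5)                               # molecular viscosity
    nu_t = (smagorinsky_c * delta) ** 2 * strain_rate_mag  # eddy viscosity
    return (nu0 + nu_t) / cs2 + 0.5                        # effective tau

# In a strained region the effective relaxation time rises above tau0,
# adding dissipation exactly where the subgrid model calls for it:
tau = les_relaxation_time(tau0=0.51, strain_rate_mag=0.05)
```

Because `tau` is pushed further from the stability-critical value 0.5 wherever the resolved strain is large, the same mechanism that supplies the eddy viscosity also stabilizes under-resolved simulations, as the abstract notes.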

  10. Rapid estimate of solid volume in large tuff cores using a gas pycnometer

    SciTech Connect

    Thies, C.; Geddis, A.M.; Guzman, A.G.

    1996-09-01

A thermally insulated, rigid-volume gas pycnometer system has been developed. The pycnometer chambers have been machined from solid PVC cylinders. Two chambers confine dry high-purity helium at different pressures. A thick-walled design ensures minimal heat exchange with the surrounding environment and a constant volume system, while expansion takes place between the chambers. The internal energy of the gas is assumed constant over the expansion. The ideal gas law is used to estimate the volume of solid material sealed in one of the chambers. Temperature is monitored continuously and incorporated into the calculation of solid volume. Temperature variation between measurements is less than 0.1 °C. The data are used to compute grain density for oven-dried Apache Leap tuff core samples. The measured volume of solid and the sample bulk volume are used to estimate porosity and bulk density. Intrinsic permeability was estimated from the porosity and measured pore surface area and is compared to in-situ measurements by the air permeability method. The gas pycnometer accommodates large core samples (0.25 m length × 0.11 m diameter) and can measure solid volume greater than 2.20 cm³ with less than 1% error.
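The ideal-gas-law volume estimate described above can be sketched as a mole balance between the two chambers at constant temperature. The chamber volumes and pressures below are invented for illustration, not the instrument's actual values:

```python
def solid_volume(v_ref, v_cell, p1, p2, p_ambient=0.0):
    """Solid volume from a two-chamber gas pycnometer at constant temperature.
    Gas at pressure p1 in the reference chamber (volume v_ref) expands into
    the sample chamber (volume v_cell, initially at p_ambient) containing the
    solid; p2 is the equilibrium pressure. Mole balance:
        p1*v_ref + p_ambient*(v_cell - v_s) = p2*(v_ref + v_cell - v_s)
    solved for the solid volume v_s. Gauge or absolute pressures may be used,
    as long as they are consistent."""
    return v_cell - v_ref * (p1 - p2) / (p2 - p_ambient)

# Example with gauge pressures (p_ambient = 0), arbitrary units:
vs = solid_volume(v_ref=1000.0, v_cell=1200.0, p1=200.0, p2=100.0)  # -> 200.0
```

With the solid volume in hand, porosity follows as `1 - vs / bulk_volume`, which is the chain of estimates the abstract describes; a temperature correction would enter through the gas law if the two states were not isothermal.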

  11. Optimization of the electric field distribution in a large volume tissue-equivalent proportional counter.

    PubMed

    Verma, P K; Waker, A J

    1992-10-01

    Large volume tissue-equivalent proportional counters are of interest in radiation protection metrology, as the sensitivity in terms of counts per unit absorbed dose in these devices increases as the square of the counter diameter. Conventional solutions to the problem of maintaining a uniform electric field within a counter result in sensitive volume to total volume ratios which are unacceptably low when counter dimensions of the order of 15 cm diameter are considered and when overall compactness is an important design criterion. This work describes the design and optimization of an arrangement of field discs set at different potentials which enable sensitive volume to total volume ratios to approach unity. The method has been used to construct a 12.7 cm diameter right-cylindrical tissue-equivalent proportional counter in which the sensitive volume accounts for over 95% of the total device volume and the gas gain uniformity is maintained to within 3% along the entire length of the anode wire. PMID:1438550
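One way to grade field-disc potentials is to make each disc match the logarithmic potential of an ideal coaxial cylindrical counter at its radius, so the field near the anode stays uniform along the counter's length. The radii, voltage, and electrode dimensions below are assumed values for illustration, not the published design:

```python
import math

def disc_potentials(radii, v_anode, r_anode, r_cathode):
    """Potentials for field-shaping discs at the given radii, chosen to
    reproduce the ideal coaxial-counter potential
        V(r) = V_a * ln(r / r_cathode) / ln(r_anode / r_cathode)
    with the cathode at 0 V (illustrative sketch of the design idea)."""
    return [v_anode * math.log(r / r_cathode) / math.log(r_anode / r_cathode)
            for r in radii]

# Assumed geometry: 5 um anode radius, 6.35 cm cathode radius, 700 V anode.
pots = disc_potentials([0.001, 0.01, 0.0635], v_anode=700.0,
                       r_anode=5e-6, r_cathode=0.0635)
```

The potentials decrease monotonically from anode to cathode and vanish at the cathode wall; matching this profile is what keeps the gas gain uniform along the anode wire while letting the sensitive volume fill nearly the whole device.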

  12. Large Surveys & Simulations -- The Real and the Virtual Universe

    NASA Astrophysics Data System (ADS)

    von Berlepsch, Regina

    2012-06-01

    The current issue of AN is Volume 24 of the Reviews in Modern Astronomy and presents selected papers given at the International Scientific Conference of the Society on ``Surveys & Simulations -- The Real and the Virtual Universe'' held in Heidelberg, Germany, September 19-23, 2011. The ``Astronomische Gesellschaft'' was actually founded in Heidelberg in 1863 and there has been a close connection between the AG and the colleagues in Heidelberg ever since. It was the sixth time that Heidelberg hosted a meeting of the AG. In 2011 the meeting took place at the Ruprecht-Karls-University, Germany's oldest university, celebrating its 625th anniversary. The meeting was attended by more than 400 participants from around the world.

  13. Large Eddy Simulation in the Computation of Jet Noise

    NASA Technical Reports Server (NTRS)

    Mankbadi, R. R.; Goldstein, M. E.; Povinelli, L. A.; Hayder, M. E.; Turkel, E.

    1999-01-01

The fluctuating near field of a jet produces propagating pressure waves that radiate far-field sound, so the fluctuating flow field as a function of time is needed in order to calculate sound from first principles. In principle, noise can be predicted by solving the full (time-dependent) compressible Navier-Stokes equations (FCNSE) with the computational domain extended to the far field, but this is not computationally feasible. At the high Reynolds numbers of technological interest, turbulence has a large range of scales, and direct numerical simulation (DNS) cannot capture the small scales. Because the large scales are more efficient than the small scales in radiating sound, the emphasis is thus on calculating the sound radiated by the large scales.

  14. Exposing earth surface process model simulations to a large audience

    NASA Astrophysics Data System (ADS)

    Overeem, I.; Kettner, A. J.; Borkowski, L.; Russell, E. L.; Peddicord, H.

    2015-12-01

The Community Surface Dynamics Modeling System (CSDMS) represents a diverse group of >1300 scientists who develop and apply numerical models to better understand the Earth's surface. CSDMS has a mandate to make the public more aware of model capabilities and therefore started sharing state-of-the-art surface process modeling results with large audiences. One platform to reach audiences outside the science community is through museum displays on 'Science on a Sphere' (SOS). Developed by NOAA, SOS is a giant globe, linked with computers and multiple projectors, that can display data and animations on a sphere. CSDMS has developed and contributed model simulation datasets for the SOS system since 2014, including hydrological processes, coastal processes, and human interactions with the environment. Model simulations of a hydrological and sediment transport model (WBM-SED) illustrate global river discharge patterns. WAVEWATCH III simulations have been specifically processed to show the impacts of hurricanes on ocean waves, with focus on hurricane Katrina and superstorm Sandy. A large world dataset of dams built over the last two centuries gives an impression of the profound influence of humans on water management. Given the exposure of SOS, CSDMS aims to contribute at least 2 model datasets a year, and will soon provide displays of global river sediment fluxes and changes of the sea-ice-free season along the Arctic coast. Over 100 facilities worldwide show these numerical model displays to an estimated 33 million people every year. Dataset storyboards and teacher follow-up materials associated with the simulations are developed to address common core science K-12 standards. CSDMS dataset documentation aims to make people aware of the fact that they look at numerical model results, that underlying models have inherent assumptions and simplifications, and that limitations are known. CSDMS contributions aim to familiarize large audiences with the use of numerical

  15. Mechanically Cooled Large-Volume Germanium Detector Systems for Nuclear Explosion Monitoring DOENA27323-1

    SciTech Connect

    Hull, E.L.

    2006-07-28

Compact, maintenance-free mechanical cooling systems are being developed to operate large volume germanium detectors for field applications. To accomplish this, we are utilizing a newly available generation of Stirling-cycle mechanical coolers to operate the very largest volume germanium detectors with no maintenance. The user will be able to leave these systems unplugged on the shelf until needed. The flip of a switch will bring a system to life in ~1 hour for measurements. The maintenance-free operating lifetime of these detector systems will exceed 5 years. These features are necessary for remote, long-duration, liquid-nitrogen-free deployment of large-volume germanium gamma-ray detector systems for Nuclear Explosion Monitoring. The Radionuclide Aerosol Sampler/Analyzer (RASA) will greatly benefit from the availability of such detectors by eliminating the need for liquid nitrogen at RASA sites while still allowing the very largest available germanium detectors to be reliably utilized.

  16. Large-scale simulations of layered double hydroxide nanocomposite materials

    NASA Astrophysics Data System (ADS)

    Thyveetil, Mary-Ann

Layered double hydroxides (LDHs) have the ability to intercalate a multitude of anionic species. Atomistic simulation techniques such as molecular dynamics have provided considerable insight into the behaviour of these materials. We review these techniques and recent algorithmic advances which considerably improve the performance of MD applications. In particular, we discuss how the advent of high performance computing and computational grids has allowed us to explore large scale models with considerable ease. Our simulations have been heavily reliant on computational resources on the UK's NGS (National Grid Service), the US TeraGrid and the Distributed European Infrastructure for Supercomputing Applications (DEISA). In order to utilise computational grids we rely on grid middleware to launch, computationally steer and visualise our simulations. We have integrated the RealityGrid steering library into the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), which has enabled us to perform remote computational steering and visualisation of molecular dynamics simulations on grid infrastructures. We also use the Application Hosting Environment (AHE) in order to launch simulations on remote supercomputing resources, and we show that data transfer rates between local clusters and supercomputing resources can be considerably enhanced by using optically switched networks. We perform large scale molecular dynamics simulations of MgAl-LDHs intercalated with either chloride ions or a mixture of DNA and chloride ions. The systems exhibit undulatory modes, which are suppressed in smaller scale simulations, caused by the collective thermal motion of atoms in the LDH layers. Thermal undulations provide elastic properties of the system including the bending modulus, Young's moduli and Poisson's ratios. To explore the interaction between LDHs and DNA, we use molecular dynamics techniques to perform simulations of double stranded, linear and plasmid DNA up

  17. A system for the disposal of large volumes of air containing oxygen-15

    NASA Astrophysics Data System (ADS)

    Peters, J. M.; Quaglia, L.; del Fiore, G.; Hannay, J.; Fissore, A.

    1991-01-01

    A method is described which permits large volumes of air containing the radionuclide 15O to be vented into the atmosphere. The short half-life of this isotope (124 s) enables use to be made of a large number of small vessels connected in series. Such a device has the effect of increasing the mean transit time. The system as installed results in a reduction of the radioactive concentration in the vented air to levels below the maximum permitted values.
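The benefit of many small vessels in series can be illustrated with the standard tanks-in-series residence-time model: for decay constant λ and total mean transit time T, the surviving activity fraction at the outlet of N well-mixed vessels is (1 + λT/N)^(−N), which approaches the plug-flow limit e^(−λT) as N grows. The transit time and vessel count below are invented for illustration, not the installed system's parameters:

```python
import math

def surviving_fraction(n_vessels, mean_transit_s, half_life_s=124.0):
    """Fraction of radioactivity remaining at the outlet of n_vessels
    well-mixed vessels in series with a total mean residence time of
    mean_transit_s seconds, for a nuclide with the given half-life.
    Tanks-in-series model: (1 + lambda*T/n)^(-n)."""
    lam = math.log(2) / half_life_s            # decay constant of O-15
    return (1 + lam * mean_transit_s / n_vessels) ** (-n_vessels)

one_big = surviving_fraction(1, 1200.0)        # a single 20-minute tank
many = surviving_fraction(50, 1200.0)          # fifty small vessels in series
plug = math.exp(-math.log(2) / 124.0 * 1200.0) # ideal plug-flow limit
```

A single large tank lets a sizable fraction of undecayed gas short-circuit straight to the outlet; subdividing the same total volume narrows the residence-time distribution toward plug flow, so far more of the 124-second ¹⁵O decays before release.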

  18. Sampling artifact in volume weighted velocity measurement. II. Detection in simulations and comparison with theoretical modeling

    NASA Astrophysics Data System (ADS)

    Zheng, Yi; Zhang, Pengjie; Jing, Yipeng

    2015-02-01

Measuring the volume weighted velocity power spectrum suffers from a severe systematic error due to imperfect sampling of the velocity field from the inhomogeneous distribution of dark matter particles/halos in simulations or galaxies with velocity measurement. This "sampling artifact" depends on both the mean particle number density n̄_P and the intrinsic large scale structure (LSS) fluctuation in the particle distribution. (1) We report robust detection of this sampling artifact in N-body simulations. It causes ~12% underestimation of the velocity power spectrum at k = 0.1 h/Mpc for samples with n̄_P = 6×10⁻³ (Mpc/h)⁻³. This systematic underestimation increases with decreasing n̄_P and increasing k. Its dependence on the intrinsic LSS fluctuations is also robustly detected. (2) All of these findings are expected based upon our theoretical modeling in paper I [P. Zhang, Y. Zheng, and Y. Jing, Sampling artifact in volume weighted velocity measurement. I. Theoretical modeling, arXiv:1405.7125]. In particular, the leading order theoretical approximation agrees quantitatively well with the simulation result for n̄_P ≳ 6×10⁻⁴ (Mpc/h)⁻³. Furthermore, we provide an ansatz to take high order terms into account. It improves the model accuracy to ≲1% at k ≲ 0.1 h/Mpc over 3 orders of magnitude in n̄_P and over typical LSS clustering from z = 0 to z = 2. (3) The sampling artifact is determined by the deflection D field, which is straightforwardly available in both simulations and data of galaxy velocity. Hence the sampling artifact in the velocity power spectrum measurement can be self-calibrated within our framework. By applying such self-calibration in simulations, it is promising to determine the real large scale velocity bias of 10¹³ M⊙ halos with ~1% accuracy, and that of lower mass halos with better accuracy. (4) In contrast to suppressing the velocity power spectrum at large scale, the sampling artifact causes an overestimation of the velocity

  19. Perspective volume rendering of cross-sectional images for simulated endoscopy and intraparenchymal viewing

    NASA Astrophysics Data System (ADS)

    Napel, Sandy; Rubin, Geoffrey D.; Beaulieu, Christopher F.; Jeffrey, R. Brooke, Jr.; Argiro, Vincent

    1996-04-01

The capability of today's clinical scanners to create large quantities of high resolution and near isotropically sampled volume data, coupled with a rapidly improving performance/price ratio of computers, has created the challenge and feasibility of creating new ways to explore cross-sectional medical imagery. Perspective volume rendering (PVR) allows an observer to 'fly through' image data and view its contents from within for diagnostic and treatment planning purposes. We simulated flights through 14 data sets and, where possible, these were compared to conventional endoscopy. We demonstrated colonic masses and polyps as small as 5 mm, tracheal obstructions and precise positioning of endoluminal stent-grafts. Simulated endoscopy was capable of generating views not possible with conventional endoscopy due to its restrictions on camera location and orientation. Interactive adjustment of tissue opacities permitted views beyond the interior of lumina to reveal other structures such as masses, thrombus, and calcifications. We conclude that PVR is an exciting new technique with the potential to supplement and/or replace some conventional diagnostic imaging procedures. It has further utility for treatment planning and communication with colleagues, and the potential to reduce the number of normal people who would otherwise undergo more invasive procedures without benefit.

  20. Parallel continuous simulated tempering and its applications in large-scale molecular simulations

    SciTech Connect

    Zang, Tianwu; Yu, Linglin; Zhang, Chong; Ma, Jianpeng

    2014-07-28

In this paper, we introduce a parallel continuous simulated tempering (PCST) method for enhanced sampling in studying large complex systems. It mainly inherits the continuous simulated tempering (CST) method in our previous studies [C. Zhang and J. Ma, J. Chem. Phys. 130, 194112 (2009); C. Zhang and J. Ma, J. Chem. Phys. 132, 244101 (2010)], while adopting the spirit of parallel tempering (PT), or the replica exchange method, by employing multiple copies with different temperature distributions. Differing from conventional PT methods, despite the large stride of the total temperature range, the PCST method requires very few copies of simulations, typically 2-3 copies, yet it is still capable of maintaining a high rate of exchange between neighboring copies. Furthermore, in the PCST method, the size of the system does not dramatically affect the number of copies needed, because the exchange rate is independent of the total potential energy, thus providing an enormous advantage over conventional PT methods in studying very large systems. The sampling efficiency of PCST was tested in the two-dimensional Ising model, a Lennard-Jones liquid, and an all-atom folding simulation of the small globular protein trp-cage in explicit solvent. The results demonstrate that the PCST method significantly improves sampling efficiency compared with other methods and is particularly effective in simulating systems with long relaxation or correlation times. We expect the PCST method to be a good alternative to parallel tempering methods in simulating large systems such as phase transitions and the dynamics of macromolecules in explicit solvent.
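For contrast with the PCST exchange scheme, the conventional parallel-tempering swap that the abstract refers to can be sketched as the standard Metropolis acceptance rule between two replicas (this is the textbook PT criterion, whose energy dependence PCST is designed to remove; it is not the PCST rule itself):

```python
import math
import random

def swap_accept(beta_i, beta_j, e_i, e_j, rng=random.random):
    """Metropolis acceptance for exchanging configurations between two
    replicas at inverse temperatures beta_i and beta_j with potential
    energies e_i and e_j: accept with probability
        min(1, exp((beta_i - beta_j) * (e_i - e_j)))."""
    delta = (beta_i - beta_j) * (e_i - e_j)
    return delta >= 0 or rng() < math.exp(delta)

# A colder replica (larger beta) caught at a HIGHER energy than a hotter
# one is always swapped; the reverse is accepted only probabilistically.
always = swap_accept(1.0, 0.5, e_i=-100.0, e_j=-120.0)   # deterministically True
```

Because `delta` scales with the energy difference, the acceptance rate in conventional PT collapses for large systems unless many closely spaced temperatures are used, which is exactly the cost that the energy-independent exchange rate of PCST avoids.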

  1. Parallel continuous simulated tempering and its applications in large-scale molecular simulations

    PubMed Central

    Zang, Tianwu; Yu, Linglin; Zhang, Chong; Ma, Jianpeng

    2014-01-01

    In this paper, we introduce a parallel continuous simulated tempering (PCST) method for enhanced sampling in studies of large complex systems. It inherits the continuous simulated tempering (CST) method from our previous studies [C. Zhang and J. Ma, J. Chem. Phys. 130, 194112 (2009); C. Zhang and J. Ma, J. Chem. Phys. 132, 244101 (2010)], while adopting the spirit of parallel tempering (PT), or the replica exchange method, by employing multiple copies with different temperature distributions. Unlike conventional PT methods, the PCST method requires very few copies of the simulation, typically 2-3, even for a large total temperature range, yet it is still capable of maintaining a high exchange rate between neighboring copies. Furthermore, in the PCST method, the size of the system does not dramatically affect the number of copies needed because the exchange rate is independent of the total potential energy, providing an enormous advantage over conventional PT methods for very large systems. The sampling efficiency of PCST was tested on the two-dimensional Ising model, a Lennard-Jones liquid, and an all-atom folding simulation of the small globular protein trp-cage in explicit solvent. The results demonstrate that the PCST method significantly improves sampling efficiency compared with other methods and is particularly effective for systems with long relaxation or correlation times. We expect the PCST method to be a good alternative to parallel tempering in simulating large systems such as phase transitions and the dynamics of macromolecules in explicit solvent. PMID:25084887

  2. Parallel continuous simulated tempering and its applications in large-scale molecular simulations

    NASA Astrophysics Data System (ADS)

    Zang, Tianwu; Yu, Linglin; Zhang, Chong; Ma, Jianpeng

    2014-07-01

    In this paper, we introduce a parallel continuous simulated tempering (PCST) method for enhanced sampling in studies of large complex systems. It inherits the continuous simulated tempering (CST) method from our previous studies [C. Zhang and J. Ma, J. Chem. Phys. 130, 194112 (2009); C. Zhang and J. Ma, J. Chem. Phys. 132, 244101 (2010)], while adopting the spirit of parallel tempering (PT), or the replica exchange method, by employing multiple copies with different temperature distributions. Unlike conventional PT methods, the PCST method requires very few copies of the simulation, typically 2-3, even for a large total temperature range, yet it is still capable of maintaining a high exchange rate between neighboring copies. Furthermore, in the PCST method, the size of the system does not dramatically affect the number of copies needed because the exchange rate is independent of the total potential energy, providing an enormous advantage over conventional PT methods for very large systems. The sampling efficiency of PCST was tested on the two-dimensional Ising model, a Lennard-Jones liquid, and an all-atom folding simulation of the small globular protein trp-cage in explicit solvent. The results demonstrate that the PCST method significantly improves sampling efficiency compared with other methods and is particularly effective for systems with long relaxation or correlation times. We expect the PCST method to be a good alternative to parallel tempering in simulating large systems such as phase transitions and the dynamics of macromolecules in explicit solvent.

  3. A New Electropositive Filter for Concentrating Enterovirus and Norovirus from Large Volumes of Water - MCEARD

    EPA Science Inventory

    The detection of enteric viruses in environmental water usually requires the concentration of viruses from large volumes of water. The 1MDS electropositive filter is commonly used for concentrating enteric viruses from water but unfortunately these filters are not cost-effective...

  4. Development of a Solid Phase Extraction Method for Agricultural Pesticides in Large-Volume Water Samples

    EPA Science Inventory

    An analytical method using solid phase extraction (SPE) and analysis by gas chromatography/mass spectrometry (GC/MS) was developed for the trace determination of a variety of agricultural pesticides and selected transformation products in large-volume high-elevation lake water sa...

  5. An efficient out-of-core volume ray casting method for the visualization of large medical data sets

    NASA Astrophysics Data System (ADS)

    Xue, Jian; Tian, Jie; Chen, Jian; Dai, Yakang

    2007-03-01

    The volume ray casting algorithm is widely recognized for high-quality volume visualization. However, when rendering very large volume data sets, the original ray casting algorithm leads to very inefficient random disk accesses and makes rendering the whole volume data set very slow. To solve this problem, this paper proposes an efficient out-of-core volume ray casting method with a new out-of-core framework for processing large volume data sets on consumer PC hardware. The new framework gives transparent and efficient access to the volume data set cached on disk, while the new volume ray casting method minimizes the data exchange between hard disk and physical memory and performs comparatively fast, high-quality volume rendering. The experimental results indicate that the new method and framework are effective and efficient for the visualization of very large medical data sets.
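
    The core idea of out-of-core access is to keep only recently used sub-blocks ("bricks") of the volume in memory while the full data set stays on disk. A minimal LRU brick cache sketches this; the `load` callable and brick indexing are hypothetical stand-ins for a real bricked file layout, not the paper's framework:

```python
from collections import OrderedDict

class BrickCache:
    """Minimal LRU cache of volume bricks: recently touched bricks stay
    in memory, everything else remains on disk until requested."""
    def __init__(self, load, capacity=64):
        self.load = load            # callable: brick index -> voxel data
        self.capacity = capacity    # max bricks held in memory
        self.cache = OrderedDict()
        self.misses = 0
    def get(self, idx):
        if idx in self.cache:
            self.cache.move_to_end(idx)      # mark as most recently used
            return self.cache[idx]
        self.misses += 1
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        data = self.cache[idx] = self.load(idx)
        return data
```

    Rays marching coherently through the volume revisit the same bricks, so casting rays in spatial order turns most brick requests into cache hits and minimizes disk traffic, which is the effect the abstract describes.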

  6. Hanford tank waste operation simulator operational waste volume projection verification and validation procedure

    SciTech Connect

    HARMSEN, R.W.

    1999-10-28

    The Hanford Tank Waste Operation Simulator is tested to determine whether it can replace the FORTRAN-based Operational Waste Volume Projection computer simulation that has traditionally served to project double-shell tank utilization. Three test cases are used to compare the results of the two simulators; one incorporates the cleanup schedule of the Tri-Party Agreement.

  7. Simulations of Large-Area Electron Beam Diodes

    NASA Astrophysics Data System (ADS)

    Swanekamp, S. B.; Friedman, M.; Ludeking, L.; Smithe, D.; Obenschain, S. P.

    1999-11-01

    Large-area electron beam diodes are typically used to pump the amplifiers of KrF lasers. Simulations of large-area electron beam diodes using the particle-in-cell code MAGIC3D have shown the electron flow in the diode to be unstable. Since this instability can potentially produce a non-uniform current and energy distribution in the hibachi structure and lasing medium, it can be detrimental to laser efficiency. These results are similar to simulations performed using the ISIS code [M. E. Jones and V. A. Thomas, Proceedings of the 8th International Conference on High-Power Particle Beams, 665 (1990)]. We have identified the instability as the so-called "transit-time" instability [C. K. Birdsall and W. B. Bridges, Electrodynamics of Diode Regions (Academic Press, New York, 1966); T. M. Antonsen, W. H. Miner, E. Ott, and A. T. Drobot, Phys. Fluids 27, 1257 (1984)] and have investigated the role of the applied magnetic field and diode geometry. Experiments are underway to characterize the instability on the Nike KrF laser system and will be compared to simulation. Some possible ways to mitigate the instability will also be presented.

  8. Large-scale Molecular Dynamics Simulations of Glancing Angle Deposition

    NASA Astrophysics Data System (ADS)

    Hubartt, Bradley; Liu, Xuejing; Amar, Jacques

    2013-03-01

    While a variety of methods have been developed to carry out atomistic simulations of thin-film growth at small deposition angles with respect to the substrate normal, realistically modeling the deposition process at large deposition angles can be quite challenging due to the complex morphology and the multiple scattering of depositing atoms by the growing thin film. Accordingly, we have developed a computationally efficient method based on a single graphical processing unit (GPU) to carry out molecular dynamics (MD) simulations of the deposition and growth of thin films via glancing angle deposition. Using this method we have carried out large-scale MD simulations, based on an embedded-atom-method potential, of Cu/Cu(100) growth up to 20 monolayers for deposition angles ranging from 50° to 85° and for both random and fixed azimuthal angles. Our results for the thin-film porosity, roughness, lateral correlation length, and density vs height will be presented and compared with experiments. Results for the dependence of the microstructure, grain-size distribution, surface texture, and defect concentration on deposition angle will also be presented. Supported by NSF DMR-0907399.

  9. Large-scale lattice-Boltzmann simulations over lambda networks

    NASA Astrophysics Data System (ADS)

    Saksena, R.; Coveney, P. V.; Pinning, R.; Booth, S.

    Amphiphilic molecules are of immense industrial importance, mainly due to their tendency to align at interfaces in a solution of immiscible species, e.g., oil and water, thereby reducing surface tension. Depending on the concentration of amphiphiles in the solution, they may assemble into a variety of morphologies, such as lamellae, micelles, sponge and cubic bicontinuous structures exhibiting non-trivial rheological properties. The main objective of this work is to study the rheological properties of very large, defect-containing gyroidal systems (of up to 1024³ lattice sites) using the lattice-Boltzmann method. Memory requirements for the simulation of such large lattices exceed those available to us on most supercomputers, so we use MPICH-G2/MPIg to investigate geographically distributed domain decomposition simulations across HPCx in the UK and TeraGrid in the US. Use of MPICH-G2/MPIg requires the port-forwarder to work with the grid middleware on HPCx. Data from the simulations are streamed to a high-performance visualisation resource at UCL (London) for rendering and visualisation. Presented at Lighting the Blue Touchpaper for UK e-Science - Closing Conference of the ESLEA Project, March 26-28 2007, The George Hotel, Edinburgh, UK.
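
    A back-of-the-envelope estimate makes the memory claim concrete. The D3Q19 velocity set, double precision, and a two-copy streaming layout are assumed here for illustration; the abstract does not state the actual storage scheme:

```python
def lb_memory_gib(n, q=19, bytes_per=8, buffers=2):
    """Rough memory footprint (GiB) of an n^3 lattice-Boltzmann run with
    q distribution functions per site (D3Q19), double precision, and two
    lattice copies for the streaming step."""
    return n ** 3 * q * bytes_per * buffers / 2 ** 30
```

    Under these assumptions a 1024³ lattice needs roughly 304 GiB for the distribution functions alone, which is why the run must be decomposed across geographically distributed machines.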

  10. Tool Support for Parametric Analysis of Large Software Simulation Systems

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony

    2008-01-01

    The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space, and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test-cases, automatically generated from models (e.g., UML, Simulink, Stateflow) improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
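
    The n-factor combinatorial idea mentioned above can be sketched for the 2-factor (pairwise) case: cover every value pair of every parameter pair with far fewer cases than the full cartesian product. This greedy sketch illustrates the principle only; it is not the generation scheme used in the NASA tool:

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedily build a 2-factor covering suite: every pair of values of
    every pair of parameters appears in at least one test case."""
    names = list(params)
    def pairs(case):
        return {(a, b, case[a], case[b]) for a, b in combinations(names, 2)}
    # All (parameter-pair, value-pair) combinations still needing coverage.
    uncovered = {(a, b, va, vb)
                 for a, b in combinations(names, 2)
                 for va in params[a] for vb in params[b]}
    candidates = [dict(zip(names, vals))
                  for vals in product(*(params[n] for n in names))]
    suite = []
    while uncovered:
        # Pick the candidate covering the most still-uncovered pairs.
        best = max(candidates, key=lambda c: len(pairs(c) & uncovered))
        suite.append(best)
        uncovered -= pairs(best)
    return suite
```

    For three parameters with three values each, the full product has 27 cases while a pairwise suite needs far fewer, which is how combinatorial variation limits case counts yet still probes parameter interactions.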

  11. Large Eddy Simulation of Mixing within a Hypervelocity Scramjet Combustor

    NASA Astrophysics Data System (ADS)

    Petty, David; Wheatley, Vincent; Pantano, Carlos; Smart, Michael

    2013-11-01

    The turbulent mixing of parallel hypervelocity (U = 3230 m/sec, M = 3.86) air-streams with a sonic stream of gaseous hydrogen is simulated using large eddy simulation. The resultant mixing layers are characterized by a convective Mach number of 1.20. This configuration represents parallel slot injection of hydrogen via an intrusive centerbody within a constant area rectangular combustor. A hybrid shock-capturing/zero numerical dissipation (WENO/TCD) switch method designed for simulations of compressible turbulent flows was utilized. Sub-grid scale turbulence was modeled using the stretched vortex model. Visualizations of the three dimensional turbulent structures generated behind the centerbody will be presented. It has been observed that a span-wise instability of the wake behind the centerbody is initially dominant. Further downstream, the shear-layers coalesce into a mixing wake and develop the expected large-scale coherent span-wise vortices. Ph.D. Candidate, School of Mechanical and Mining Engineering, Centre for Hypersonics.

  12. Statistical Modeling of Large-Scale Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Critchlow, T; Abdulla, G

    2002-02-22

    With the advent of fast computer systems, scientists are now able to generate terabytes of simulation data. Unfortunately, the sheer size of these data sets has made efficient exploration of them impossible. To aid scientists in gathering knowledge from their simulation data, we have developed an ad-hoc query infrastructure. Our system, called AQSim (short for Ad-hoc Queries for Simulation), reduces data storage requirements and access times in two stages. First, it creates and stores mathematical and statistical models of the data. Second, it evaluates queries on the models of the data instead of on the entire data set. In this paper, we present two simple but highly effective statistical modeling techniques for simulation data. Our first modeling technique computes the true mean of systematic partitions of the data. It makes no assumptions about the distribution of the data and uses a variant of the root mean square error to evaluate a model. In our second statistical modeling technique, we use the Anderson-Darling goodness-of-fit method on systematic partitions of the data. This second method evaluates a model by how well the data pass the normality test. Both of our statistical models summarize the data so as to answer range queries in the most effective way. We calculate the precision of an answer to a query by scaling the one-sided Chebyshev inequalities with the original mesh's topology. Our experimental evaluations on two scientific simulation data sets illustrate the value of using these statistical modeling techniques on large simulation data sets.
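
    The one-sided Chebyshev (Cantelli) inequality used for query precision bounds the fraction of values above a threshold from a partition's mean and standard deviation alone, without touching the raw data. A minimal sketch of the bare inequality (the mesh-topology scaling in the abstract is not reproduced here):

```python
def cantelli_bound(mu, sigma, threshold):
    """One-sided Chebyshev (Cantelli) inequality: upper bound on the
    fraction of values exceeding `threshold`, given only the mean mu and
    standard deviation sigma of a data partition:
        P(X >= mu + k*sigma) <= 1 / (1 + k^2),  k > 0."""
    if threshold <= mu:
        return 1.0              # bound is vacuous at or below the mean
    k = (threshold - mu) / sigma
    return 1.0 / (1.0 + k * k)
```

    For example, a partition with mean 0 and standard deviation 1 can have at most 20% of its values above 2, so a range query answered from the model alone carries a distribution-free precision guarantee.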

  13. Large meteoroid's impact damage: review of available impact hazard simulators

    NASA Astrophysics Data System (ADS)

    Moreno-Ibáñez, M.; Gritsevich, M.; Trigo-Rodríguez, J. M.

    2016-01-01

    The damage caused by meter-sized meteoroids encountering the Earth is expected to be severe. Meter-sized objects in heliocentric orbits can release energies higher than 10⁸ J, either in the upper atmosphere through an energetic airblast or, if they reach the surface, by creating a crater, provoking an earthquake or triggering a tsunami. A limited variety of cases has been observed in the recent past (e.g. Tunguska, Carancas or Chelyabinsk). Hence, our knowledge has to be constrained with the help of theoretical studies and numerical simulations. Several simulation programs aim to forecast the impact consequences of such events. We have tested them using the recent case of the Chelyabinsk superbolide. In particular, Chelyabinsk belongs to the ten- to hundred-meter-sized objects which constitute the main source of risk to Earth, given the current difficulty in detecting them in advance. Furthermore, it was a well-documented case, allowing us to properly check the accuracy of the studied simulators. As we show, these open simulators provide a first approximation of the impact consequences; however, all of them fail to accurately determine the damage caused. We explain the observed discrepancies between the observed and simulated consequences as follows: the large number of unknown properties of the potential impacting meteoroid, the atmospheric conditions, the flight dynamics and the uncertainty in the impact point itself hinder any modelling task. This difficulty can be partially overcome by reducing the number of unknowns using dimensional analysis and scaling laws. Although the description of the physical processes associated with atmospheric entry could still be further improved, we conclude that such an approach would significantly improve the efficiency of the simulators.
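
    The 10⁸ J figure follows directly from the kinetic energy E = ½mv². A quick sketch, assuming a spherical body and a stony density of 3000 kg/m³ (illustrative values, not taken from the paper):

```python
import math

def impact_energy_joules(diameter_m, velocity_ms, density=3000.0):
    """Kinetic energy E = 1/2 m v^2 of a spherical meteoroid.
    density is an assumed stony value in kg/m^3."""
    volume = math.pi / 6.0 * diameter_m ** 3   # sphere volume (pi/6) d^3
    mass = density * volume
    return 0.5 * mass * velocity_ms ** 2
```

    A 1 m body entering at a typical 17 km/s carries on the order of 10¹¹ J, three orders of magnitude above the 10⁸ J threshold quoted in the abstract, which is why even meter-sized impactors are hazardous.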

  14. Evaluation of Bacillus oleronius as a Biological Indicator for Terminal Sterilization of Large-Volume Parenterals.

    PubMed

    Izumi, Masamitsu; Fujifuru, Masato; Okada, Aki; Takai, Katsuya; Takahashi, Kazuhiro; Udagawa, Takeshi; Miyake, Makoto; Naruyama, Shintaro; Tokuda, Hiroshi; Nishioka, Goro; Yoden, Hikaru; Aoki, Mitsuo

    2016-01-01

    In the production of large-volume parenterals in Japan, equipment and devices such as tanks, pipework, and filters used in production processes are exhaustively cleaned and sterilized, and the cleanliness of water for injection, drug materials, packaging materials, and manufacturing areas is well controlled. In this environment, the bioburden is relatively low and less heat resistant than microorganisms frequently used as biological indicators, such as Geobacillus stearothermophilus (ATCC 7953) and Bacillus subtilis 5230 (ATCC 35021). Consequently, the majority of large-volume parenteral solutions in Japan are manufactured under low-heat sterilization conditions of F0 < 2 min, so that loss of clarity of solutions and formation of degradation products of constituents are minimized. Bacillus oleronius (ATCC 700005) is listed as a biological indicator in "Guidance on the Manufacture of Sterile Pharmaceutical Products Produced by Terminal Sterilization" (guidance in Japan, issued in 2012). In this study, we investigated whether B. oleronius is an appropriate biological indicator of the efficacy of low-heat, moist-heat sterilization of large-volume parenterals. Specifically, we investigated the spore-forming ability of this microorganism in various cultivation media and measured the D-values and z-values as parameters of heat resistance. The D-values and z-values changed depending on the constituents of the large-volume parenteral products. Also, the spores of B. oleronius showed a moist-heat resistance similar to or greater than that of many spore-forming organisms isolated from Japanese parenteral manufacturing processes. Taken together, these results indicate that B. oleronius is suitable as a biological indicator for sterility assurance of large-volume parenteral solutions subjected to low-heat, moist-heat terminal sterilization. PMID:26889054
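
    The heat-resistance parameters above obey standard relations: the z-value scales a D-value between temperatures, and F0 expresses a time-temperature profile as equivalent minutes at 121.1 °C. A minimal sketch of these textbook definitions (the numbers in the example are illustrative, not the study's measured values):

```python
def d_value(d_ref, t_ref, t, z):
    """Scale a D-value (minutes for a 1-log kill) from reference
    temperature t_ref to temperature t via the z-value:
        D(t) = D_ref * 10**((t_ref - t) / z)."""
    return d_ref * 10 ** ((t_ref - t) / z)

def f0(temps_c, dt_min, z=10.0, t_ref=121.1):
    """F0: equivalent sterilization minutes at 121.1 degC accumulated
    over a temperature profile sampled every dt_min minutes."""
    return sum(10 ** ((t - t_ref) / z) for t in temps_c) * dt_min
```

    For instance, with z = 10 °C a D-value of 1 min at 121.1 °C becomes 10 min at 111.1 °C, which is why low-heat cycles with F0 < 2 min demand indicator organisms matched to the actual bioburden.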

  15. Shuttle mission simulator baseline definition report, volume 1

    NASA Technical Reports Server (NTRS)

    Burke, J. F.; Small, D. E.

    1973-01-01

    A baseline definition of the space shuttle mission simulator is presented. The subjects discussed are: (1) physical arrangement of the complete simulator system in the appropriate facility, with a definition of the required facility modifications, (2) functional descriptions of all hardware units, including the operational features, data demands, and facility interfaces, (3) hardware features necessary to integrate the items into a baseline simulator system to include the rationale for selecting the chosen implementation, and (4) operating, maintenance, and configuration updating characteristics of the simulator hardware.

  16. Development of a hip joint model for finite volume simulations.

    PubMed

    Cardiff, P; Karač, A; FitzPatrick, D; Ivanković, A

    2014-01-01

    This paper establishes a procedure for numerical analysis of a hip joint using the finite volume method. Patient-specific hip joint geometry is segmented directly from computed tomography and magnetic resonance imaging datasets, and the resulting bone surfaces are processed into a form suitable for volume meshing. A high-resolution continuum tetrahedral mesh has been generated, where a sandwich model approach is adopted: the bones are represented as stiffer cortical shells surrounding more flexible cancellous cores. Cartilage is included as a uniform-thickness extruded layer and the effect of layer thickness is investigated. To realistically position the bones, gait analysis has been performed, giving the 3D positions of the bones for the full gait cycle. Three phases of the gait cycle are examined using a finite volume based custom structural contact solver implemented in the open-source software OpenFOAM. PMID:24141555

  17. Cryogenic Linear Ion Trap for Large-Scale Quantum Simulations

    NASA Astrophysics Data System (ADS)

    Pagano, Guido; Hess, Paul; Kaplan, Harvey; Birckelbaw, Eric; Hernanez, Micah; Lee, Aaron; Smith, Jake; Zhang, Jiehang; Monroe, Christopher

    2016-05-01

    Ions confined in RF Paul traps are a useful tool for quantum simulation of long-range spin-spin interaction models. As the system size increases, classical simulation methods become incapable of modeling the exponentially growing Hilbert space, necessitating quantum simulation for precise predictions. Current experiments are limited to fewer than 30 qubits due to collisions with background gas that regularly destroy the ion crystal. We present progress toward the construction of a cryogenic ion trap apparatus, which uses differential cryopumping to reduce vacuum pressure to a level where collisions do not occur. This should allow robust trapping of about 100 ions/qubits in a single chain with long lifetimes. Such a long chain will provide a platform to investigate simultaneous cooling of various vibrational modes and will enable quantum simulations that outperform their classical counterparts. Our apparatus will provide a powerful test-bed to investigate a large variety of Hamiltonians, including spin 1 and spin 1/2 systems with Ising or XY interactions. This work is supported by the ARO Atomic Physics Program, the AFOSR MURI on Quantum Measurement and Verification, the IC Fellowship Program and the NSF Physics Frontier Center at JQI.

  18. Pulsar Simulations for the Fermi Large Area Telescope

    NASA Technical Reports Server (NTRS)

    Razzano, M.; Harding, A. K.; Baldini, L.; Bellazzini, R.; Bregeon, J.; Burnett, T.; Chiang, J.; Digel, S. W.; Dubois, R.; Kuss, M. W.; Latronico, L.; McEnery, J. E.; Omodei, N.; Pesce-Rollins, M.; Sgro, C.; Spandre, G.; Thompson, D. J.

    2009-01-01

    Pulsars are among the prime targets for the Large Area Telescope (LAT) aboard the recently launched Fermi observatory. The LAT will study the gamma-ray Universe between 20 MeV and 300 GeV with unprecedented detail. Increasing numbers of gamma-ray pulsars are being firmly identified, yet their emission mechanisms are far from being understood. To better investigate and exploit the LAT capabilities for pulsar science, a set of new detailed pulsar simulation tools has been developed within the LAT collaboration. The structure of the pulsar simulator package (PulsarSpectrum) is presented here. Starting from photon distributions in energy and phase obtained from theoretical calculations or phenomenological considerations, gamma rays are generated and their arrival times at the spacecraft are determined by taking into account effects such as barycentric effects and timing noise. Pulsars in binary systems can also be simulated given orbital parameters. We present how simulations can be used to generate a realistic set of gamma rays as observed by the LAT, focusing on some case studies that show the performance of the LAT for pulsar observations.
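
    The first step described, generating photons from a phase distribution, can be sketched with simple rejection sampling against a tabulated light curve. This is a generic illustration of the idea, not PulsarSpectrum itself, and the profile and bin layout are assumptions:

```python
import random

def sample_phases(profile, n, rng=random):
    """Rejection-sample n pulse phases in [0, 1) from a tabulated light
    curve `profile` (relative intensity per equal-width phase bin)."""
    peak = max(profile)
    nbins = len(profile)
    out = []
    while len(out) < n:
        phase = rng.random()
        # Accept the candidate phase with probability proportional to
        # the profile intensity in its bin.
        if rng.random() * peak < profile[int(phase * nbins)]:
            out.append(phase)
    return out
```

    Sampled phases would then be converted to spacecraft arrival times by applying the pulsar ephemeris plus the barycentric and timing-noise corrections the abstract mentions.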

  19. Cryogenic Linear Ion Trap for Large-Scale Quantum Simulations

    NASA Astrophysics Data System (ADS)

    Kaplan, H. B.; Hess, P. W.; Pagano, G.; Birckelbaw, E. J.; Hernandez, M.; Lee, A. C.; Smith, J.; Zhang, J.; Monroe, C.

    2016-05-01

    Ions confined in RF Paul traps are a useful tool for quantum simulation of long-range spin-spin interaction models. As the system size increases, classical simulation methods become incapable of modeling the exponentially growing Hilbert space, necessitating quantum simulation for precise predictions. Current experiments are limited to fewer than 30 qubits due to collisions with background gas that regularly destroy the ion crystal. We present progress toward the construction of a cryogenic ion trap apparatus, which uses differential cryopumping to reduce vacuum pressure to a level where collisions do not occur. This should allow robust trapping of about 100 ions/qubits in a single chain with long lifetimes. Such a long chain will provide a platform to investigate simultaneous cooling of various vibrational modes and will enable quantum simulations that outperform their classical counterparts. Our apparatus will provide a powerful test-bed to investigate a large variety of Hamiltonians, including spin 1 and spin 1/2 systems with Ising or XY interactions. This work is supported by the ARO Atomic Physics Program, the AFOSR MURI on Quantum Measurement and Verification, and the NSF Physics Frontier Center at JQI.

  20. Inviscid Wall-Modeled Large Eddy Simulations for Improved Efficiency

    NASA Astrophysics Data System (ADS)

    Aikens, Kurt; Craft, Kyle; Redman, Andrew

    2015-11-01

    The accuracy of an inviscid flow assumption for wall-modeled large eddy simulations (LES) is examined because of its ability to reduce simulation costs. This assumption is not generally applicable to wall-bounded flows due to the high velocity gradients found near walls. In wall-modeled LES, however, neither the viscous near-wall region nor the viscous length scales in the outer flow are resolved. Therefore, the viscous terms in the Navier-Stokes equations have little impact on the resolved flowfield. Zero-pressure-gradient flat plate boundary layer results are presented for both viscous and inviscid simulations using a wall model developed previously. The results are very similar and compare favorably to those from another wall model methodology and to experimental data. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively. Future research directions are discussed, as are preliminary efforts to extend the wall model to include the effects of unresolved wall roughness. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  1. Large Eddy Simulation of a Cavitating Multiphase Flow for Liquid Injection

    NASA Astrophysics Data System (ADS)

    Cailloux, M.; Helie, J.; Reveillon, J.; Demoulin, F. X.

    2015-12-01

    This paper presents a numerical method for modelling a compressible multiphase flow that involves phase transition between liquid and vapour in the context of gasoline injection. A discontinuous compressible two-fluid mixture based on a Volume of Fluid (VOF) implementation is employed to represent the liquid, vapour and air phases. The mass transfer between phases is modelled by standard models such as Kunz or Schnerr-Sauer, extended to include the presence of air in the gas phase. Turbulence is modelled using a Large Eddy Simulation (LES) approach to capture unsteady features and coherent structures. The modelling approach matches experimental data favourably concerning the effect of cavitation on the atomisation process.

  2. A convex complementarity approach for simulating large granular flows.

    SciTech Connect

    Tasora, A.; Anitescu, M.; Mathematics and Computer Science; Univ. degli Studi di Parma

    2010-07-01

    Aiming at the simulation of dense granular flows, we propose and test a numerical method based on successive convex complementarity problems. This approach originates from a multibody description of the granular flow: all the particles are simulated as rigid bodies with arbitrary shapes and frictional contacts. Unlike the discrete element method (DEM), the proposed approach does not require the small integration time steps typical of stiff particle interactions; this fact, together with the development of optimized algorithms that can also run on parallel computing architectures, allows an efficient application of the proposed methodology to granular flows with a large number of particles. We present an application to the analysis of the refueling flow in pebble-bed nuclear reactors. Extensive validation of our method against both DEM and physical experiment results indicates that essential collective characteristics of dense granular flow are accurately predicted.
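
    The frictionless core of such contact problems is a linear complementarity problem (LCP) relating contact impulses and normal velocities, commonly solved by projected Gauss-Seidel iteration. A textbook sketch of that solver, not the authors' cone-complementarity method, with a dense matrix for clarity:

```python
def projected_gauss_seidel(N, r, iters=200):
    """Projected Gauss-Seidel for the LCP
        0 <= lam  complements  N @ lam + r >= 0,
    i.e. contact impulses lam are non-negative and push only while the
    contact is closing. N is a list of rows with positive diagonal."""
    n = len(r)
    lam = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # Solve row i for lam[i] holding the others fixed, then
            # project onto the non-negativity constraint.
            s = sum(N[i][j] * lam[j] for j in range(n) if j != i)
            lam[i] = max(0.0, -(r[i] + s) / N[i][i])
    return lam
```

    Each sweep costs one matrix-vector pass and needs no stiff-spring time step, which is the property that lets complementarity formulations take much larger steps than DEM.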

  3. Large-eddy simulation of turbulent circular jet flows

    SciTech Connect

    Jones, S. C.; Sotiropoulos, F.; Sale, M. J.

    2002-07-01

    This report presents a numerical method for carrying out large-eddy simulations (LES) of turbulent free shear flows and an application of the method to simulate the flow generated by a nozzle discharging into a stagnant reservoir. The objective of the study was to elucidate the complex features of the instantaneous flow field to help interpret the results of recent biological experiments in which live fish were exposed to the jet shear zone. The fish-jet experiments were conducted at the Pacific Northwest National Laboratory (PNNL) under the auspices of the U.S. Department of Energy's Advanced Hydropower Turbine Systems program. The experiments were designed to establish critical thresholds of shear- and turbulence-induced loads to guide the development of innovative, fish-friendly hydropower turbine designs.

  4. Optical simulation of large aperture spatial heterodyne imaging spectrometer

    NASA Astrophysics Data System (ADS)

    Cai, Qisheng; Xiangli, Bin; Fang, Yu

    2016-05-01

    The large aperture spatial heterodyne imaging spectrometer (LASHIS) is a new pushbroom Fourier transform ultraspectral imager with no moving parts. It is based on a Sagnac interferometer combined with a pair of gratings. In this paper, the basic principle of LASHIS is reviewed and an optical model of LASHIS is set up in ZEMAX. Three interference images are presented: one calculated according to the basic theory, one simulated using the optical model in ZEMAX, and one generated by the experimental device set up in our laboratory. The three interference images agree well with one another, demonstrating the correctness of the optical model. Using this model, we can simulate the interference image quickly. This image gives a visual evaluation of the system performance and will be convenient for system design or tolerance analysis of LASHIS.

  5. Implicit large eddy simulation of shock-driven material mixing.

    PubMed

    Grinstein, F F; Gowardhan, A A; Ristorcelli, J R

    2013-11-28

    Under-resolved computer simulations are typically unavoidable in practical turbulent flow applications exhibiting extreme geometrical complexity and a broad range of length and time scales. An important unsettled issue is whether filtered-out and subgrid spatial scales can significantly alter the evolution of resolved larger scales of motion and practical flow integral measures. Predictability issues in implicit large eddy simulation of under-resolved mixing of material scalars driven by under-resolved velocity fields and initial conditions are discussed in the context of shock-driven turbulent mixing. The particular focus is on effects of resolved spectral content and interfacial morphology of initial conditions on transitional and late-time turbulent mixing in the fundamental planar shock-tube configuration. PMID:24146010

  6. Large eddy simulation using the general circulation model ICON

    NASA Astrophysics Data System (ADS)

    Dipankar, Anurag; Stevens, Bjorn; Heinze, Rieke; Moseley, Christopher; Zängl, Günther; Giorgetta, Marco; Brdar, Slavko

    2015-09-01

    ICON (ICOsahedral Nonhydrostatic) is a unified modeling system for global numerical weather prediction (NWP) and climate studies. Validation of its dynamical core against a test suite for numerical weather forecasting has been recently published by Zängl et al. (2014). In the present work, an extension of ICON is presented that enables it to perform as a large eddy simulation (LES) model. The details of the implementation of the LES turbulence scheme in ICON are explained and test cases are performed to validate it against two standard LES models. Despite the limitations that ICON inherits from being a unified modeling system, it performs well in capturing the mean flow characteristics and the turbulent statistics of two simulated flow configurations—one being a dry convective boundary layer and the other a cumulus-topped planetary boundary layer.

  7. Coalescent simulation in continuous space: algorithms for large neighbourhood size.

    PubMed

    Kelleher, J; Etheridge, A M; Barton, N H

    2014-08-01

    Many species have an essentially continuous distribution in space, in which there are no natural divisions between randomly mating subpopulations. Yet, the standard approach to modelling these populations is to impose an arbitrary grid of demes, adjusting deme sizes and migration rates in an attempt to capture the important features of the population. Such indirect methods are required because of the failure of the classical models of isolation by distance, which have been shown to have major technical flaws. A recently introduced model of extinction and recolonisation in two dimensions solves these technical problems, and provides a rigorous technical foundation for the study of populations evolving in a spatial continuum. The coalescent process for this model is simply stated, but direct simulation is very inefficient for large neighbourhood sizes. We present efficient and exact algorithms to simulate this coalescent process for arbitrary sample sizes and numbers of loci, and analyse these algorithms in detail. PMID:24910324
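    The spatial extinction/recolonisation coalescent of the paper is beyond a short sketch, but the standard (non-spatial) Kingman coalescent that such algorithms generalize can be simulated in a few lines; when k lineages remain, the waiting time to the next coalescence is exponential with rate k(k-1)/2. All names and the parameterization below are illustrative:

    ```python
    import random

    def kingman_coalescent_times(n, pop_size=1.0, rng=None):
        """Waiting times between successive coalescence events for a
        sample of n lineages under the standard Kingman coalescent.
        Time is measured in units of pop_size generations."""
        rng = rng or random.Random()
        times = []
        k = n
        while k > 1:
            rate = k * (k - 1) / 2.0 / pop_size  # pairwise coalescence rate
            times.append(rng.expovariate(rate))
            k -= 1  # one pair merges per event
        return times
    ```

    A sample of n lineages always coalesces in exactly n - 1 events, so the returned list has length n - 1.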

  8. Immersive 4-D Interactive Visualization of Large-Scale Simulations

    NASA Astrophysics Data System (ADS)

    Teuben, P. J.; Hut, P.; Levy, S.; Makino, J.; McMillan, S.; Portegies Zwart, S.; Shara, M.; Emmart, C.

    In dense clusters a bewildering variety of interactions between stars can be observed, ranging from simple encounters to collisions and other mass-transfer encounters. With faster and special-purpose computers like GRAPE, the amount of data per simulation now exceeds 1 TB. Visualization of such data has become a complex 4-D data-mining problem: combining space and time, and finding interesting events in these large datasets. We have recently started using the virtual reality simulator installed in the Hayden Planetarium at the American Museum of Natural History to tackle some of these problems. This work reports on our first ``observations,'' the modifications needed for our specific experiments, and ideas for other fields of science that could benefit from such immersion. We also discuss how our normal analysis programs can be interfaced with this kind of visualization.

  9. Molecular Dynamics Simulations from SNL's Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)

    DOE Data Explorer

    Plimpton, Steve; Thompson, Aidan; Crozier, Paul

    LAMMPS (http://lammps.sandia.gov/index.html) stands for Large-scale Atomic/Molecular Massively Parallel Simulator and is a code that can be used to model atoms or, as the LAMMPS website says, to serve as a parallel particle simulator at the atomic, meso, or continuum scale. This Sandia-based website provides a long list of animations from large simulations. These were created using different visualization packages to read LAMMPS output, and each one provides the name of the PI and a brief description of the work done or visualization package used. See also the static images produced from simulations at http://lammps.sandia.gov/pictures.html. The foundation paper for LAMMPS is: S. Plimpton, Fast Parallel Algorithms for Short-Range Molecular Dynamics, J Comp Phys, 117, 1-19 (1995), but the website also lists other papers describing contributions to LAMMPS over the years.

  10. Understanding Subcutaneous Tissue Pressure for Engineering Injection Devices for Large-Volume Protein Delivery.

    PubMed

    Doughty, Diane V; Clawson, Corbin Z; Lambert, William; Subramony, J Anand

    2016-07-01

    Subcutaneous injection allows for self-administration of monoclonal antibodies using prefilled syringes, autoinjectors, and on-body injector devices. However, subcutaneous injections are typically limited to 1 mL due to concerns of injection pain from volume, viscosity, and formulation characteristics. Back pressure can serve as an indicator for changes in subcutaneous mechanical properties leading to pain during injection. The purpose of this study was to investigate subcutaneous pressures and injection site reactions as a function of injection volume and flow rate. A pressure sensor in the fluid path recorded subcutaneous pressures in the abdomen of Yorkshire swine. The subcutaneous tissue accommodates large-volume injections with little back pressure as long as low flow rates are used. A 1 mL injection in 10 seconds (360 mL/h flow rate) generated a pressure of 24.0 ± 3.4 kPa, whereas 10 mL delivered in 10 minutes (60 mL/h flow rate) generated a pressure of 7.4 ± 7.8 kPa. After the injection, the pressure decays to 0 over several seconds. The subcutaneous pressures and mechanical strain increased with increasing flow rate but not with increasing dose volume. These data are useful for the design of injection devices to mitigate back pressure and pain during subcutaneous large-volume injection. PMID:27287520

  11. Large Eddy Simulation of Vertical Axis Wind Turbine Wakes

    NASA Astrophysics Data System (ADS)

    Shamsoddin, Sina; Porté-Agel, Fernando

    2014-05-01

    In this study, large-eddy simulation (LES) is combined with a turbine model to investigate the wake behind a vertical-axis wind turbine (VAWT) in a three-dimensional turbulent flow. Two methods are used to model the subgrid-scale (SGS) stresses: (a) the Smagorinsky model, and (b) the modulated gradient model. To parameterize the effects of the VAWT on the flow, two VAWT models are developed: (a) the actuator surface model (ASM), in which the time-averaged turbine-induced forces are distributed on a surface swept by the turbine blades, i.e. the actuator surface, and (b) the actuator line model (ALM), in which the instantaneous blade forces are only spatially distributed on lines representing the blades, i.e. the actuator lines. This is the first time that LES is applied and validated for simulation of VAWT wakes by using either the ASM or the ALM techniques. In both models, blade-element theory is used to calculate the lift and drag forces on the blades. The results are compared with flow measurements in the wake of a model straight-bladed VAWT, carried out in the Institut de Mécanique Statistique de la Turbulence (IMST) water channel. Different combinations of SGS models with VAWT models are studied and a fairly good overall agreement between simulation results and measurement data is observed. In general, the ALM is found to better capture the unsteady-periodic nature of the wake and shows a better agreement with the experimental data compared with the ASM. The modulated gradient model is also found to be a more reliable SGS stress modeling technique, compared with the Smagorinsky model, and it yields reasonable predictions of the mean flow and turbulence characteristics of a VAWT wake using its theoretically-determined model coefficient. Keywords: Vertical-axis wind turbines (VAWTs); VAWT wake; Large-eddy simulation; Actuator surface model; Actuator line model; Smagorinsky model; Modulated gradient model
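    The blade-element force calculation referred to above can be sketched as follows: lift and drag on each blade segment are computed from the local relative velocity and then resolved into tangential and normal components via the local flow angle. This is textbook blade-element theory; the function and argument names are illustrative, not the authors' code:

    ```python
    import math

    def blade_element_force(rho, w_rel, chord, dr, cl, cd, phi):
        """Lift and drag on one blade element of span dr, resolved into
        the (tangential, normal) frame; phi is the local flow angle."""
        q = 0.5 * rho * w_rel ** 2 * chord * dr  # dynamic pressure * element area
        lift = q * cl
        drag = q * cd
        # conventional projection onto tangential (driving) and
        # normal (thrust-like) directions
        f_tan = lift * math.sin(phi) - drag * math.cos(phi)
        f_norm = lift * math.cos(phi) + drag * math.sin(phi)
        return f_tan, f_norm
    ```

    In an actuator line model these per-element forces are then spatially distributed onto the grid along lines that follow the instantaneous blade positions.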

  12. Hydrothermal fluid flow and deformation in large calderas: Inferences from numerical simulations

    USGS Publications Warehouse

    Hurwitz, S.; Christiansen, L.B.; Hsieh, P.A.

    2007-01-01

    Inflation and deflation of large calderas is traditionally interpreted as being induced by volume change of a discrete source embedded in an elastic or viscoelastic half-space, though it has also been suggested that hydrothermal fluids may play a role. To test the latter hypothesis, we carry out numerical simulations of hydrothermal fluid flow and poroelastic deformation in calderas by coupling two numerical codes: (1) TOUGH2 [Pruess et al., 1999], which simulates flow in porous or fractured media, and (2) BIOT2 [Hsieh, 1996], which simulates fluid flow and deformation in a linearly elastic porous medium. In the simulations, high-temperature water (350 °C) is injected at variable rates into a cylinder (radius 50 km, height 3-5 km). A sensitivity analysis indicates that small differences in the values of permeability and its anisotropy, the depth and rate of hydrothermal injection, and the values of the shear modulus may lead to significant variations in the magnitude, rate, and geometry of ground surface displacement, or uplift. Some of the simulated uplift rates are similar to observed uplift rates in large calderas, suggesting that the injection of aqueous fluids into the shallow crust may explain some of the deformation observed in calderas.

  13. Program to Optimize Simulated Trajectories (POST). Volume 1: Formulation manual

    NASA Technical Reports Server (NTRS)

    Brauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.

    1975-01-01

    A general purpose FORTRAN program for simulating and optimizing point mass trajectories (POST) of aerospace vehicles is described. The equations and the numerical techniques used in the program are documented. Topics discussed include: coordinate systems, planet model, trajectory simulation, auxiliary calculations, and targeting and optimization.
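    A point-mass trajectory simulation of the kind POST formalizes can be sketched minimally: integrate the equations of motion for a body subject to gravity and aerodynamic drag until impact. The sketch below assumes a flat Earth, constant atmospheric density, and semi-implicit Euler integration; all names and default values are illustrative, not POST's formulation:

    ```python
    import math

    def simulate_point_mass(v0, gamma0, mass, cd_area, rho=1.225, g=9.81,
                            dt=0.01, t_max=200.0):
        """Flat-Earth point-mass trajectory with quadratic drag.
        v0: launch speed, gamma0: flight-path angle, cd_area: Cd * area.
        Returns (downrange distance, flight time) at impact."""
        vx = v0 * math.cos(gamma0)
        vy = v0 * math.sin(gamma0)
        x = y = t = 0.0
        while t < t_max:
            v = math.hypot(vx, vy)
            d = 0.5 * rho * v * cd_area / mass  # drag accel per unit velocity
            vx += -d * vx * dt
            vy += (-g - d * vy) * dt
            x += vx * dt
            y += vy * dt
            t += dt
            if y < 0.0:  # ground impact
                break
        return x, t
    ```

    With drag switched off (cd_area = 0) the computed range approaches the vacuum result v0^2 sin(2 gamma0)/g, which is a convenient check on the integrator.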

  14. The terminal area simulation system. Volume 2: Verification cases

    NASA Technical Reports Server (NTRS)

    Proctor, F. H.

    1987-01-01

    Numerical simulations of five case studies are presented and compared with available data in order to verify the three-dimensional version of the Terminal Area Simulation System (TASS). A spectrum of convective storm types is selected for the case studies. Included are: a High-Plains supercell hailstorm, a small and relatively short-lived High-Plains cumulonimbus, a convective storm which produced the 2 August 1985 DFW microburst, a South Florida convective complex, and a tornadic Oklahoma thunderstorm. For each of the cases the model results compared reasonably well with observed data. In the simulations of the supercell storms many of their characteristic features were modeled, such as the hook echo, BWER, mesocyclone, gust fronts, giant persistent updraft, wall cloud, flanking-line towers, anvil and radar reflectivity overhang, and rightward veering in the storm propagation. In the simulation of the tornadic storm a horseshoe-shaped updraft configuration and cyclic changes in storm intensity and structure were noted. The simulation of the DFW microburst agreed remarkably well with sparse observed data. The simulated outflow rapidly expanded in a nearly symmetrical pattern and was associated with a ring vortex. A South Florida convective complex was simulated and contained updrafts and downdrafts in the form of discrete bubbles. The numerical simulations, in all cases, always remained stable and bounded with no anomalous trends.

  15. The large volume radiometric calorimeter system: A transportable device to measure scrap category plutonium

    SciTech Connect

    Duff, M.F.; Wetzel, J.R.; Breakall, K.L.; Lemming, J.F.

    1987-01-01

    An innovative design concept has been used to design a large volume calorimeter system. The new design permits two measuring cells to fit in a compact, nonevaporative environmental bath. The system is mounted on a cart for transportability. Samples in the power range of 0.50 to 12.0 W can be measured. The calorimeters will receive samples as large as 22.0 cm in diameter by 43.2 cm high, and smaller samples can be measured without lengthening measurement time or increasing measurement error by using specially designed sleeve adapters. This paper describes the design considerations, construction, theory, applications, and performance of the large volume calorimeter system. 2 refs., 5 figs., 1 tab.

  16. Contrail Formation in Aircraft Wakes Using Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Paoli, R.; Helie, J.; Poinsot, T. J.; Ghosal, S.

    2002-01-01

    In this work we analyze the formation of condensation trails ("contrails") in the near-field of an aircraft wake. The basic configuration consists of an exhaust engine jet interacting with a wing-tip trailing vortex. The procedure adopted relies on a mixed Eulerian/Lagrangian two-phase flow approach; a simple micro-physics model for ice growth is used to couple the ice and vapor phases. Large eddy simulations have been carried out at a realistic flight Reynolds number to evaluate the effects of turbulent mixing and wake vortex dynamics on ice-growth characteristics and vapor thermodynamic properties.
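    A common simple closure for ice micro-physics of this kind is diffusion-limited growth, dm/dt = 4 pi r D (rho_v - rho_sat): the particle gains mass when the local vapor density exceeds saturation over ice and loses it otherwise. The sketch below is an assumption for illustration; the paper's exact micro-physics model is not specified in the abstract:

    ```python
    import math

    def ice_mass_growth(r, rho_vapor, rho_sat, diffusivity=2.2e-5):
        """Diffusion-limited growth rate dm/dt [kg/s] of a spherical ice
        particle of radius r [m]; rho_vapor and rho_sat are the ambient
        and saturation vapor densities [kg/m^3]. The default diffusivity
        is a representative value for water vapor in air."""
        return 4.0 * math.pi * r * diffusivity * (rho_vapor - rho_sat)
    ```

    Coupling works both ways: the vapor consumed by growing particles is removed from the Eulerian vapor field, which is what ties the two phases together in a mixed Eulerian/Lagrangian approach.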

  17. Large perturbation flow field analysis and simulation for supersonic inlets

    NASA Technical Reports Server (NTRS)

    Varner, M. O.; Martindale, W. R.; Phares, W. J.; Kneile, K. R.; Adams, J. C., Jr.

    1984-01-01

    An analysis technique for simulation of supersonic mixed compression inlets with large flow field perturbations is presented. The approach is based upon a quasi-one-dimensional inviscid unsteady formulation which includes engineering models of unstart/restart, bleed, bypass, and geometry effects. Numerical solution of the governing time dependent equations of motion is accomplished through a shock capturing finite difference algorithm, of which five separate approaches are evaluated. Comparison with experimental supersonic wind tunnel data is presented to verify the present approach for a wide range of transient inlet flow conditions.

  18. Large-eddy simulation of a turbulent mixing layer

    NASA Technical Reports Server (NTRS)

    Mansour, N. N.; Ferziger, J. H.; Reynolds, W. C.

    1978-01-01

    The three dimensional, time dependent (incompressible) vorticity equations were used to simulate numerically the decay of isotropic box turbulence and time developing mixing layers. The vorticity equations were spatially filtered to define the large scale turbulence field, and the subgrid scale turbulence was modeled. A general method was developed to show numerical conservation of momentum, vorticity, and energy. The terms that arise from filtering the equations were treated (for both periodic boundary conditions and no stress boundary conditions) in a fast and accurate way by using fast Fourier transforms. Use of vorticity as the principal variable is shown to produce results equivalent to those obtained by use of the primitive variable equations.

  19. BASIC Simulation Programs; Volumes III and IV. Mathematics, Physics.

    ERIC Educational Resources Information Center

    Digital Equipment Corp., Maynard, MA.

    The computer programs presented here were developed as a part of the Huntington Computer Project. They were tested on a Digital Equipment Corporation TSS-8 time-shared computer and run in a version of BASIC. Mathematics and physics programs are presented in this volume. The 20 mathematics programs include ones which review multiplication skills;…

  20. Minimum-dissipation models for large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Rozema, Wybe; Bae, Hyun J.; Moin, Parviz; Verstappen, Roel

    2015-08-01

    Minimum-dissipation eddy-viscosity models are a class of sub-filter models for large-eddy simulation that give the minimum eddy dissipation required to dissipate the energy of sub-filter scales. A previously derived minimum-dissipation model is the QR model. This model is based on the invariants of the resolved rate-of-strain tensor and has many desirable properties. It appropriately switches off for laminar and transitional flows, has low computational complexity, and is consistent with the exact sub-filter tensor on isotropic grids. However, the QR model proposed in the literature gives insufficient eddy dissipation. It is demonstrated that this can be corrected by increasing the model constant. The corrected QR model gives good results in simulations of decaying grid turbulence on an isotropic grid. On anisotropic grids the QR model is not consistent with the exact sub-filter tensor and requires an approximation of the filter width. It is demonstrated that the results of the QR model on anisotropic grids are primarily determined by the used filter width approximation, and that no approximation gives satisfactory results in simulations of both a temporal mixing layer and turbulent channel flow. A new minimum-dissipation model for anisotropic grids is proposed. This anisotropic minimum-dissipation (AMD) model generalizes the desirable practical and theoretical properties of the QR model to anisotropic grids and does not require an approximation of the filter width. The AMD model is successfully applied in simulations of decaying grid turbulence on an isotropic grid and in simulations of a temporal mixing layer and turbulent channel flow on anisotropic grids.
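    One common statement of the QR model builds the eddy viscosity from two invariants of the resolved rate-of-strain tensor S, q = (1/2) tr(S^2) and r = -det(S), as nu_e = C * delta^2 * max(r, 0) / q. The sketch below assumes this form; the model constant value is purely illustrative (the paper's point is precisely that the literature constant gives insufficient dissipation and needs correction):

    ```python
    def _matmul3(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    def _det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    def qr_eddy_viscosity(s, delta, c=0.024):
        """QR minimum-dissipation eddy viscosity (sketch).
        s: 3x3 resolved rate-of-strain tensor, delta: filter width,
        c: model constant (illustrative value)."""
        ss = _matmul3(s, s)
        q = 0.5 * sum(ss[i][i] for i in range(3))  # q = 0.5 tr(S^2) >= 0
        r = -_det3(s)                               # r = -det(S)
        if q <= 0.0:
            return 0.0  # zero strain: model switches off
        return c * delta ** 2 * max(r, 0.0) / q
    ```

    The max(r, 0) clipping is what lets the model switch off in laminar and transitional regions, one of the desirable properties mentioned above.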

  1. Parallel finite element simulation of large ram-air parachutes

    NASA Astrophysics Data System (ADS)

    Kalro, V.; Aliabadi, S.; Garrard, W.; Tezduyar, T.; Mittal, S.; Stein, K.

    1997-06-01

    In the near future, large ram-air parachutes are expected to provide the capability of delivering 21 ton payloads from altitudes as high as 25,000 ft. In the development, test, and evaluation of these parachutes, the size of the parachute needed and the deployment stages involved make high-performance computing (HPC) simulations a desirable alternative to costly airdrop tests. Although computational simulations based on realistic, 3D, time-dependent models will continue to be a major computational challenge, advanced finite element simulation techniques recently developed for this purpose and the execution of these techniques on HPC platforms are significant steps in the direction to meet this challenge. In this paper, two approaches for analysis of the inflation and gliding of ram-air parachutes are presented. In one of the approaches the point mass flight mechanics equations are solved with the time-varying drag and lift areas obtained from empirical data. This approach is limited to parachutes with configurations similar to those for which data are available. The other approach is 3D finite element computations based on the Navier-Stokes equations governing the airflow around the parachute canopy and Newton's law of motion governing the 3D dynamics of the canopy, with the forces acting on the canopy calculated from the simulated flow field. At the earlier stages of canopy inflation the parachute is modelled as an expanding box, whereas at the later stages, as it expands, the box transforms to a parafoil and glides. These finite element computations are carried out on the massively parallel supercomputers CRAY T3D and Thinking Machines CM-5, typically with millions of coupled, non-linear finite element equations solved simultaneously at every time step or pseudo-time step of the simulation.

  2. A Large Motion Suspension System for Simulation of Orbital Deployment

    NASA Technical Reports Server (NTRS)

    Straube, T. M.; Peterson, L. D.

    1994-01-01

    This paper describes the design and implementation of a vertical degree of freedom suspension system which provides a constant force off-load condition to counter gravity over large displacements. By accommodating motions up to one meter for structures weighing up to 100 pounds, the system is useful for experiments which simulate the on-orbit deployment of spacecraft components. A unique aspect of this system is the combination of a large stroke passive off-load device augmented by electromotive torque actuated force feedback. The active force feedback has the effect of reducing breakaway friction by an order of magnitude over the passive system alone. The paper describes the development of the suspension hardware and the feedback control algorithm. Experiments were performed to verify the suspension system's ability to provide a gravity off-load as well as its effect on the modal characteristics of a test article.

  3. Large eddy simulation of turbulent channel flow: ILLIAC 4 calculation

    NASA Technical Reports Server (NTRS)

    Kim, J.; Moin, P.

    1979-01-01

    The three-dimensional time dependent equations of motion were numerically integrated for fully-developed turbulent channel flow. A large scale flow field was obtained directly from the solution of these equations, and small scale field motions were simulated through an eddy viscosity model. The calculations were carried out on the ILLIAC 4 computer. The computed flow patterns show that the wall layer consists of coherent structures of low speed and high speed streaks alternating in the spanwise direction. These structures were absent in the regions away from the wall. Hot spots, small localized regions of very large turbulent shear stress, were frequently observed. The profiles of the pressure velocity-gradient correlations show a significant transfer of energy from the normal to the spanwise component of turbulent kinetic energy in the immediate neighborhood of the wall ('the splatting effect').

  4. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, Cyrus K.; Steinberger, Craig J.

    1990-01-01

    This research involves the implementation of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program to extend the present capabilities of this method was initiated for the treatment of chemically reacting flows. In the DNS efforts, the focus is on detailed investigations of the effects of compressibility, heat release, and non-equilibrium kinetics modeling in high speed reacting flows. Emphasis was on the simulations of simple flows, namely homogeneous compressible flows, and temporally developing high speed mixing layers.

  5. Opto-electrical characterization and X-ray mapping of large-volume cadmium zinc telluride radiation detectors

    SciTech Connect

    Yang, G.; Bolotnikov, A.E.; Camarda, G.S.; Cui, Y.; Hossain, A.; Yao, H.W.; Kim, K.; and James, R.B.

    2009-04-13

    Large-volume cadmium zinc telluride (CZT) radiation detectors would greatly improve radiation detection capabilities and, therefore, attract extensive scientific and commercial interest. CZT crystals with volumes as large as hundreds of cubic centimeters can be achieved today due to improvements in crystal growth technology. However, the poor performance of large-volume CZT detectors is still a challenging problem affecting the commercialization of CZT detectors and imaging arrays. We have employed Pockels effect measurements and synchrotron X-ray mapping techniques to investigate the performance-limiting factors for large-volume CZT detectors. Experimental results with the above characterization methods reveal the non-uniform distribution of the internal electric field in large-volume CZT detectors, which helps us better understand the mechanisms responsible for insufficient carrier collection in large-volume CZT detectors.

  6. WEST-3 wind turbine simulator development. Volume 2: Verification

    NASA Technical Reports Server (NTRS)

    Sridhar, S.

    1985-01-01

    The details of a study to validate WEST-3, a new real-time wind turbine simulator developed by Paragon Pacific Inc., are presented in this report. For the validation, the MOD-0 wind turbine was simulated on WEST-3. The simulation results were compared with those obtained from previous MOD-0 simulations, and with test data measured during MOD-0 operations. The study was successful in achieving the major objective of proving that WEST-3 yields results which can be used to support a wind turbine development process. The blade bending moments, peak and cyclic, from the WEST-3 simulation correlated reasonably well with the available MOD-0 data. The simulation was also able to predict the resonance phenomena observed during MOD-0 operations. Also presented in the report is a description and solution of a serious numerical instability problem encountered during the study. The problem was caused by the coupling of the rotor and the power train models. The results of the study indicate that some parts of the existing WEST-3 simulation model may have to be refined for future work; specifically, the aerodynamics and the procedure used to couple the rotor model with the tower and the power train models.

  7. Feasibility study for a numerical aerodynamic simulation facility. Volume 1

    NASA Technical Reports Server (NTRS)

    Lincoln, N. R.; Bergman, R. O.; Bonstrom, D. B.; Brinkman, T. W.; Chiu, S. H. J.; Green, S. S.; Hansen, S. D.; Klein, D. L.; Krohn, H. E.; Prow, R. P.

    1979-01-01

    A Numerical Aerodynamic Simulation Facility (NASF) was designed for the simulation of fluid flow around three-dimensional bodies, both in wind tunnel environments and in free space. The application of numerical simulation to this field of endeavor promised to yield economies in aerodynamic and aircraft body designs. A model for a NASF/FMP (Flow Model Processor) ensemble using a possible approach to meeting NASF goals is presented. The computer hardware and software are presented, along with the entire design and performance analysis and evaluation.

  8. Shuttle mission simulator requirements report, volume 1, revision C

    NASA Technical Reports Server (NTRS)

    Burke, J. F.

    1973-01-01

    The contractor tasks required to produce a shuttle mission simulator for training crew members and ground personnel are discussed. The tasks will consist of the design, development, production, installation, checkout, and field support of a simulator with two separate crew stations. The tasks include the following: (1) review of spacecraft changes and incorporation of appropriate changes in simulator hardware and software design, and (2) the generation of documentation of design, configuration management, and training used by maintenance and instructor personnel after acceptance for each of the crew stations.

  9. A survey of electric and hybrid vehicles simulation programs. Volume 2: Questionnaire responses

    NASA Technical Reports Server (NTRS)

    Bevan, J.; Heimburger, D. A.; Metcalfe, M. A.

    1978-01-01

    The data received in a survey conducted within the United States to determine the extent of development and capabilities of automotive performance simulation programs suitable for electric and hybrid vehicle studies are presented. The survey was conducted for the Department of Energy by NASA's Jet Propulsion Laboratory. Volume 1 of this report summarizes and discusses the results contained in Volume 2.

  10. Shuttle mission simulator requirement report, volume 2, revision A

    NASA Technical Reports Server (NTRS)

    Burke, J. F.

    1973-01-01

    The training requirements of all mission phases for crews and ground support personnel are presented. The specifications are given for the design and development of the simulator, data processing systems, engine control, software, and systems integration.

  11. Large eddy simulation of incompressible turbulent channel flow

    NASA Technical Reports Server (NTRS)

    Moin, P.; Reynolds, W. C.; Ferziger, J. H.

    1978-01-01

    The three-dimensional, time-dependent primitive equations of motion were numerically integrated for the case of turbulent channel flow. A partially implicit numerical method was developed. An important feature of this scheme is that the equation of continuity is solved directly. The residual field motions were simulated through an eddy viscosity model, while the large-scale field was obtained directly from the solution of the governing equations. An important portion of the initial velocity field was obtained from the solution of the linearized Navier-Stokes equations. The pseudospectral method was used for numerical differentiation in the horizontal directions, and second-order finite-difference schemes were used in the direction normal to the walls. The large eddy simulation technique is capable of reproducing some of the important features of wall-bounded turbulent flows. The resolvable portions of the root-mean square wall pressure fluctuations, pressure velocity-gradient correlations, and velocity pressure-gradient correlations are documented.
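    The pseudospectral differentiation used in the horizontal directions can be illustrated in one dimension: transform the periodic field, multiply each mode by i*k using signed wavenumbers, and transform back. The sketch below uses a direct DFT for clarity (production codes use FFTs); it is a generic illustration of the technique, not the authors' implementation:

    ```python
    import cmath
    import math

    def spectral_derivative(u):
        """Derivative of a periodic sample u on [0, 2*pi) via a direct DFT."""
        n = len(u)
        # forward DFT (normalized)
        uhat = [sum(u[j] * cmath.exp(-2j * math.pi * k * j / n)
                    for j in range(n)) / n
                for k in range(n)]
        dudx = []
        for j in range(n):
            s = 0.0 + 0.0j
            for k in range(n):
                kk = k if k <= n // 2 else k - n  # signed wavenumber
                if n % 2 == 0 and k == n // 2:
                    kk = 0  # drop the unpaired Nyquist mode
                s += 1j * kk * uhat[k] * cmath.exp(2j * math.pi * k * j / n)
            dudx.append(s.real)
        return dudx
    ```

    For smooth periodic data the result is accurate to rounding error, which is the key advantage of spectral over finite-difference differentiation.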

  12. Simulating aperture masking at the Large Binocular Telescope

    NASA Astrophysics Data System (ADS)

    Stürmer, Julian; Quirrenbach, Andreas

    2012-07-01

    Preliminary investigations for an Aperture Masking Experiment at the Large Binocular Telescope (LBT) and its application to stellar surface imaging are presented. An algorithm is implemented which generates non-redundant aperture masks for the LBT. These masks are adapted to the special geometrical conditions at the LBT. At the same time, they are optimized to provide a uniform UV-coverage. It is also possible to favor certain baselines to adapt the UV-coverage to observational requirements. The optimization is done by selecting appropriate masks among a large number (of order 10^9) of randomized realizations of non-redundant (NR) masks. Using results of numerical simulations of the surface of red supergiants, interferometric data is generated as it would be available with these masks at the LBT while observing Betelgeuse. An image reconstruction algorithm is used to reconstruct images from squared visibility and closure phase data. It is shown that about 15 holes per mask are sufficient to retrieve detailed images. Additionally, noise is added to the data in order to simulate the influence of measurement errors, e.g. photon noise. Both the position and the shape of surface structures are hardly influenced by this noise. However, the flux of these details changes significantly.
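    The randomized mask search can be sketched as: draw random hole placements and keep those whose pairwise baseline vectors are all distinct (treating b and -b as the same baseline). The toy integer-grid version below illustrates the idea only; the real masks must also respect the LBT pupil geometry, and all names are illustrative:

    ```python
    import itertools
    import random

    def is_non_redundant(holes):
        """True if every pair of holes gives a distinct baseline vector."""
        seen = set()
        for (x1, y1), (x2, y2) in itertools.combinations(holes, 2):
            b = (x2 - x1, y2 - y1)
            if b[0] < 0 or (b[0] == 0 and b[1] < 0):
                b = (-b[0], -b[1])  # canonical sign: b and -b are the same
            if b in seen:
                return False
            seen.add(b)
        return True

    def random_nr_mask(n_holes, grid=20, tries=10000, rng=None):
        """Randomized search for a non-redundant mask on an integer grid
        (a toy version of screening many random mask realizations)."""
        rng = rng or random.Random()
        cells = [(i, j) for i in range(grid) for j in range(grid)]
        for _ in range(tries):
            holes = rng.sample(cells, n_holes)
            if is_non_redundant(holes):
                return holes
        return None
    ```

    Among the accepted masks, one would then rank candidates by a UV-coverage uniformity score, which is the optimization step described in the abstract.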

  13. Large eddy simulation of flame flashback in a turbulent channel

    NASA Astrophysics Data System (ADS)

    Hassanaly, Malik; Lietz, Christopher; Raman, Venkat; Kolla, Hemanth; Chen, Jacqueline; Gruber, Andrea; Computational Flow Physics Group Team

    2014-11-01

    In high-hydrogen content gas turbines, the propagation of a premixed flame along boundary layers on the combustor walls is a source of failure, whereby the flame can enter the fuel-air premixing region that is not designed to hold high-temperature fluid. In order to develop models for predicting this phenomenon, a large eddy simulation (LES) based study is carried out here. The flow configuration is based on a direct numerical simulation (DNS) of a turbulent channel, where an initial planar flame is allowed to propagate upstream in a non-periodic channel. The LES approach uses a flamelet-based combustion model along with standard models for the unresolved subfilter flux terms. The LES is found to be very accurate in predicting the structure of the turbulent flame front. However, there was a large discrepancy in the transient evolution of the flame, indicating that the flame-boundary layer interaction modulates flame propagation significantly, and the near-wall flame behavior may be non-flamelet-like due to the anisotropy of the flow in this region.

  14. Scale-Similar Models for Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Sarghini, F.

    1999-01-01

    Scale-similar models employ multiple filtering operations to identify the smallest resolved scales, which have been shown to be the most active in the interaction with the unresolved subgrid scales. They do not assume that the principal axes of the strain-rate tensor are aligned with those of the subgrid-scale (SGS) stress tensor, and they allow the explicit calculation of the SGS energy. They can provide backscatter in a numerically stable and physically realistic manner, and predict SGS stresses in regions that are well correlated with the locations where large Reynolds stresses occur. In this paper, eddy-viscosity and mixed models, which include an eddy-viscosity part as well as a scale-similar contribution, are applied to the simulation of two flows: a high-Reynolds-number plane channel flow and a three-dimensional, nonequilibrium flow. The results show that simulations without models or with the Smagorinsky model are unable to predict nonequilibrium effects. Dynamic models provide an improvement of the results: the adjustment of the coefficient results in more accurate prediction of the perturbation from equilibrium. The Lagrangian-ensemble approach [Meneveau et al., J. Fluid Mech. 319, 353 (1996)] is found to be very beneficial. Models that include a scale-similar term and a dissipative one, as well as Lagrangian ensemble averaging, gave results in the best agreement with the direct simulation and experimental data.

  15. Large eddy simulation of a pumped-storage reservoir

    NASA Astrophysics Data System (ADS)

    Launay, Marina; Leite Ribeiro, Marcelo; Roman, Federico; Armenio, Vincenzo

    2016-04-01

    The last decades have seen an increasing number of pumped-storage hydropower projects all over the world. Pumped-storage schemes move water between two reservoirs located at different elevations to store energy and to generate electricity following the electricity demand. The reservoirs can therefore be subject to significant water-level variations at the daily scale. These new cycles lead to changes in the hydraulic behaviour of the reservoirs: sediment dynamics and sediment budgets are modified, sometimes inducing problems of erosion and deposition within the reservoirs. With advances in computing performance, numerical techniques have become popular for the study of environmental processes. Among these techniques, Large Eddy Simulation (LES) has emerged as an alternative tool for problems characterized by complex physics and geometries. This work uses the LES-COAST code, an LES model under development in the framework of the Seditrans Project, for the simulation of an upper Alpine reservoir of a pumped-storage scheme. Simulations consider the filling (pump mode) and emptying (turbine mode) of the reservoir. The hydraulic results give a better understanding of the processes occurring within the reservoir and are used to assess the sediment transport processes and their consequences.

  16. Unsteady RANS and Large Eddy simulations of multiphase diesel injection

    NASA Astrophysics Data System (ADS)

    Philipp, Jenna; Green, Melissa; Akih-Kumgeh, Benjamin

    2015-11-01

    Unsteady Reynolds-Averaged Navier-Stokes (URANS) and large eddy simulations (LES) of two-phase flow and evaporation of high-pressure diesel injection into a quiescent, high-temperature environment are investigated. Unsteady RANS and LES are turbulent flow simulation approaches used to determine complex flow fields; the latter allows for more accurate predictions of complex phenomena such as turbulent mixing and the physico-chemical processes associated with diesel combustion. In this work we investigate a high-pressure diesel injection using the Euler-Lagrange method for multiphase flows as implemented in the Star-CCM+ CFD code. A dispersed liquid phase is represented by Lagrangian particles, while the multi-component gas phase is solved using an Eulerian method. Results obtained from the two approaches are compared with respect to spray penetration depth and air entrainment. They are also compared with experimental data taken from the Sandia Engine Combustion Network for ``Spray A''. Characteristics of primary and secondary atomization are qualitatively evaluated for both simulation approaches.

  17. Reduced-order simulation of large accelerator structures

    NASA Astrophysics Data System (ADS)

    Cooke, S. J.

    2008-05-01

    Simulating electromagnetic waves inside finite periodic or almost periodic three-dimensional structures is important to research in linear particle acceleration, high power microwave generation, and photonic band gap structures. While eigenmodes of periodic structures can be determined from analysis of a single unit cell, based on Floquet theory, the general case of aperiodic structures, with defects or nonuniform properties, typically requires 3D electromagnetic simulation of the entire structure. When the structure is large and high accuracy is necessary, this can require high-performance computing techniques to obtain even a few eigenmodes [Z. Li et al., Nucl. Instrum. Methods Phys. Res., Sect. A 558, 168 (2006)]. To confront this problem, we describe an efficient, field-based algorithm that can accurately determine the complete eigenmode spectrum for extended aperiodic structures, up to some chosen frequency limit. The new method combines domain decomposition with a nontraditional, dual eigenmode representation of the fields local to each cell of the structure. Two related boundary value eigenproblems are solved numerically in each cell, with (a) electrically shielded, and (b) magnetically shielded interfaces, to determine a combined set of basis fields. By using the dual solutions in our field representation we accurately represent both the electric and magnetic surface currents that mediate coupling at the interfaces between adjacent cells. The solution is uniformly convergent, so that typically only a few modes are used in each cell. We present results from 2D and 3D simulations that demonstrate the speed and low computational needs of the algorithm.

  18. Three regularization models as large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Graham, Jonathan; Holm, Darryl; Mininni, Pablo; Pouquet, Annick

    2006-11-01

    We test three regularizations, the α-model, Leray-α, and Clark-α, as sub-grid models for LES by comparison with a 1024^3 direct numerical simulation (DNS) at Rλ ≈ 800 with Taylor-Green forcing. Both the α-model and Clark-α are able to reproduce the large-scale anisotropy of the flow as well as the time scale of developing turbulence; Leray-α fails in both these regards. We study intermittency corrections through pdfs and the anomalous scaling of the velocity-increment structure functions. Leray-α is somewhat less intermittent than the DNS and produces an energy spectrum that is too shallow in the inertial range, while Clark-α produces a broad k^(-5/3) spectrum and stronger intermittency corrections. Finally, the agreement of the DNS and α-model spectra, in contrast to results for lower Reynolds number simulations, is worse than in the Clark-α model. We conjecture that this enhanced intermittency in the α-model is related to the steeper-than-k^(-5/3) spectrum now reported for the very highest Reynolds number simulations and atmospheric observations.
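
    The velocity-increment structure functions used in the anomalous-scaling analysis above are straightforward to compute for a one-dimensional signal. A minimal sketch, assuming a periodic 1-D velocity series and integer separations in grid units (the helper name is hypothetical):

    ```python
    import numpy as np

    def structure_function(u, p, rs):
        """S_p(r) = <|u(x + r) - u(x)|^p> for a periodic 1-D velocity
        signal u, evaluated at each separation r (in samples)."""
        u = np.asarray(u, dtype=float)
        return np.array([np.mean(np.abs(np.roll(u, -r) - u) ** p) for r in rs])
    ```

    Anomalous scaling shows up as exponents of S_p(r) versus r departing from the Kolmogorov prediction p/3 in the inertial range.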

  19. Large-timestep mover for particle simulations of arbitrarily magnetized species

    SciTech Connect

    Cohen, R.H.; Friedman, A.; Grote, D.P.; Vay, J-L.

    2007-03-26

    For self-consistent ion-beam simulations including electron motion, it is desirable to be able to follow electron dynamics accurately without being constrained by the electron cyclotron timescale. To this end, we have developed a particle-advance scheme that interpolates between full particle dynamics and drift motion. By making a proper choice of interpolation parameter, simulation particles experience physically correct parallel dynamics, drift motion, and gyroradius when the timestep is large compared to the cyclotron period, though the effective gyrofrequency is artificially low; in the opposite timestep limit, the method approaches a conventional Boris particle push. By combining this scheme with a Poisson solver that includes an interpolated form of the polarization drift in the dielectric response, the mover's utility can be extended to higher-density problems where the plasma frequency of the species being advanced exceeds its cyclotron frequency. We describe a series of tests of the mover and its application to the simulation of electron clouds in heavy-ion accelerators.
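
    The conventional Boris push, which the interpolated mover approaches in the small-timestep limit, can be sketched as follows. This is the standard textbook algorithm, not the paper's interpolated scheme:

    ```python
    import numpy as np

    def boris_push(x, v, E, B, q_over_m, dt):
        """One step of the standard Boris particle push: a half electric
        kick, an exact magnetic rotation, and a second half kick."""
        v_minus = v + 0.5 * q_over_m * dt * E          # half electric kick
        t = 0.5 * q_over_m * dt * B                    # rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)        # magnetic rotation
        v_new = v_plus + 0.5 * q_over_m * dt * E       # second half kick
        return x + v_new * dt, v_new
    ```

    With E = 0 the rotation step conserves |v| exactly, which is why the Boris push tracks gyromotion stably; the constraint the paper removes is the need to resolve the cyclotron period with dt.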

  20. Large Eddy Simulations of Colorless Distributed Combustion Systems

    NASA Astrophysics Data System (ADS)

    Abdulrahman, Husam F.; Jaberi, Farhad; Gupta, Ashwani

    2014-11-01

    Development of efficient and low-emission colorless distributed combustion (CDC) systems for gas turbine applications requires careful examination of the role of various flow and combustion parameters. Numerical simulations of CDC in a laboratory-scale combustor have been conducted to carefully examine the effects of these parameters on the CDC. The computational model is based on a hybrid approach combining large eddy simulation (LES) with the filtered mass density function (FMDF) equations, solved with high-order numerical methods and complex chemical kinetics. The simulated combustor operates on the principle of high-temperature air combustion (HiTAC) and has been shown to significantly reduce NOx and CO emissions while improving the reaction pattern factor and stability, without using any flame stabilizer and with low pressure drop and noise. The focus of the current work is to investigate the mixing of air and hydrocarbon fuels and the non-premixed and premixed reactions within the combustor by the LES/FMDF with reduced chemical kinetic mechanisms for the same flow conditions and configurations investigated experimentally. The main goal is to develop better CDC with higher mixing and efficiency, ultra-low emission levels, and optimum residence time. The computational results establish the consistency and reliability of LES/FMDF and its Lagrangian-Eulerian numerical methodology.

  1. Computer simulation of reflective volume grating holographic data storage.

    PubMed

    Gombkötő, Balázs; Koppa, Pál; Sütő, Attila; Lőrincz, Emőke

    2007-07-01

    The shift selectivity of a reflective-type spherical-reference-wave volume hologram is investigated using nonparaxial numerical modeling based on a multiple-thin-layer implementation of a volume integral equation. The method can be easily parallelized on multiple computers. According to the results, the falloff of the diffraction efficiency due to the readout shift shows neither Bragg zeros nor oscillation with our parameter set. This agrees with our earlier study of smaller, transmissive holograms. Interhologram cross talk of shift-multiplexed holograms is also modeled using the same method, together with sparse-modulation block coding and correlation decoding of data. Signal-to-noise ratio and raw bit-error-rate values are calculated. PMID:17728833

  2. Pressure, relaxation volume, and elastic interactions in charged simulation cells

    NASA Astrophysics Data System (ADS)

    Bruneval, Fabien; Varvenne, Céline; Crocombette, Jean-Paul; Clouet, Emmanuel

    2015-01-01

    The ab initio calculation of charged supercells within density-functional theory is a necessary step to access several important properties of matter. The relaxation volume of charged point defects and the partial molar volume of ions in solution are two such examples. However, the total energy, and therefore the pressure, of charged systems is not uniquely defined when periodic boundary conditions are employed. This problem is closely related to the origin of the electrostatic potential in periodic systems. The effect can be easily observed by modifying the electrostatic convention or the details of the local ionic potential. We propose an approach to uniquely define the pressure in charged supercells with the use of the absolute deformation potentials. Only with such a definition can ab initio calculations provide meaningful values for the relaxation volumes and the elastic interactions of charged defects in semiconductors or ions in solution. The proposed scheme allows one to calculate sensible data even when charge neutrality is not enforced, thus going beyond classical force-field-based approaches.

  3. Large-Eddy Simulation Code Developed for Propulsion Applications

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2003-01-01

    A large-eddy simulation (LES) code was developed at the NASA Glenn Research Center to provide more accurate and detailed computational analyses of propulsion flow fields. The accuracy of current computational fluid dynamics (CFD) methods is limited primarily by their inability to properly account for the turbulent motion present in virtually all propulsion flows. Because the efficiency and performance of a propulsion system are highly dependent on the details of this turbulent motion, it is critical for CFD to model it accurately. The LES code promises to give new CFD simulations an advantage over older methods by directly computing the large turbulent eddies, to correctly predict their effect on a propulsion system. Turbulent motion is a random, unsteady process whose behavior is difficult to predict through computer simulations. Current methods are based on Reynolds-Averaged Navier-Stokes (RANS) analyses that rely on models to represent the effect of turbulence within a flow field. The quality of the results depends on the quality of the model and its applicability to the type of flow field being studied. LES promises to be more accurate because it drastically reduces the amount of modeling necessary; it is the logical step toward improving turbulent flow predictions. In LES, the large-scale dominant turbulent motion is computed directly, leaving only the less significant small turbulent scales to be modeled. As part of the prediction, the LES method generates detailed information on the turbulence itself, providing important information for other applications, such as aeroacoustics. The LES code developed at Glenn for propulsion flow fields is being used both to analyze propulsion system components and to test improved LES algorithms (subgrid-scale models, filters, and numerical schemes). The code solves the compressible Favre-filtered Navier-Stokes equations using an explicit fourth-order accurate numerical scheme; it incorporates a compressible form of

  4. Constitutive modeling of large inelastic deformation of amorphous polymers: Free volume and shear transformation zone dynamics

    NASA Astrophysics Data System (ADS)

    Voyiadjis, George Z.; Samadi-Dooki, Aref

    2016-06-01

    Due to the lack of long-range order in their molecular structure, amorphous polymers possess a considerable free volume content in their inter-molecular space. During finite deformation, these free volume holes serve as potential sites for localized permanent plastic deformation inclusions, which are called shear transformation zones (STZs). While the free volume content has been experimentally shown to increase during the course of plastic straining in glassy polymers, thermal analysis of the stored energy of deformation shows that the STZ nucleation energy decreases at large plastic strains. The evolution of the free volume, and of the STZ number density and nucleation energy, during finite straining is formulated in this paper in order to investigate the uniaxial post-yield softening-hardening behavior of glassy polymers. This study shows that the reduction of the STZ nucleation energy, which is correlated with the free volume increase, brings about the post-yield primary softening of amorphous polymers up to the steady-state strain value, while the secondary hardening is a result of the increased number density of STZs, which is required for large plastic strains, their nucleation energy being stabilized beyond the steady-state strain. The evolutions of the free volume content and STZ nucleation energy are also used to demonstrate the effects of strain rate, temperature, and the thermal history of the sample on its post-yield behavior. The results obtained from the model are compared with experimental observations on poly(methyl methacrylate) and show satisfactory agreement.

  5. Calibration of a new very large eddy simulation (VLES) methodology for turbulent flow simulation

    NASA Astrophysics Data System (ADS)

    Han, XingSi; Ye, TaoHong; Chen, YiLiang

    2012-10-01

    Following the idea of Speziale's Very Large Eddy Simulation (VLES) method, a new unified hybrid simulation approach was proposed which can change seamlessly from RANS (Reynolds-Averaged Navier-Stokes) to LES (Large Eddy Simulation) depending on the numerical resolution. The model constants were calibrated in accordance with other hybrid methods. Besides being able to approach the two limits of RANS and LES, the new model also provides a proper VLES mode between the two limits, and thus can be used for a wide range of mesh resolutions. A RANS simulation can also be recovered near the wall, similar to the Detached Eddy Simulation (DES) concept. This new methodology was implemented in Wilcox's k-ω model, and applications were conducted for fully developed turbulent channel flow at Re_τ = 395 and turbulent flow past a square cylinder at Re = 22000. Results were compared with LES predictions and other studies. The new method is found to be quite efficient in resolving large flow structures, and can predict satisfactory results on relatively coarse meshes.
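
    A resolution-control function of the kind used to blend RANS and LES can be illustrated with a common textbook form: near zero on fine (LES-resolving) meshes and approaching one on coarse meshes where RANS is recovered. The exponent and the cutoff/integral length-scale arguments here are illustrative assumptions, not the calibrated model of the paper.

    ```python
    import numpy as np

    def resolution_control(l_cutoff, l_integral):
        """Illustrative VLES-style resolution-control function
        Fr = min(1, (l_cutoff / l_integral)^(4/3)): scales the modeled
        stress between the LES limit (Fr -> 0) and the RANS limit
        (Fr = 1). Not the paper's calibrated function."""
        return np.minimum(1.0, (l_cutoff / l_integral) ** (4.0 / 3.0))
    ```

    The 4/3 exponent is one conventional choice motivated by inertial-range scaling of the unresolved kinetic energy; the paper calibrates its own constants against other hybrid methods.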

  6. Numerical simulation of a closed rotor-stator system using Large Eddy Simulation

    NASA Astrophysics Data System (ADS)

    Amouyal, Solal Abraham Teva

    A large eddy simulation of an enclosed annular rotor-stator cavity is presented. The geometry is characterized by a large aspect ratio G = (b-a)/h = 18.32 and a small radius ratio a/b = 0.152, where a and b are the inner and outer radii of the rotating disk and h is the interdisk spacing. The rotation rate Ω under consideration corresponds to the rotational Reynolds number Re = Ω b²/ν = 9.5×10⁴, where ν is the kinematic viscosity. The main objective of this study is to correctly simulate the rotor-stator cavity using a low-order numerical scheme on unstructured grids. The numerical simulations were run with the AVBP software developed by the Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique. The results were compared to the experimental results obtained by Sébastien Poncet of Université Aix-Marseille. Two large eddy simulation techniques were used: the Smagorinsky and wall-adapting local eddy-viscosity models. The simulations were run on three sets of grids, each with a different cell resolution (14, 35, and 50 cells) along the thickness of the system. Results from each mesh show good qualitative agreement of the mean velocity field with Poncet's experimental results. The Smagorinsky model was found to be more appropriate for this configuration.

  7. Anatomically Detailed and Large-Scale Simulations Studying Synapse Loss and Synchrony Using NeuroBox.

    PubMed

    Breit, Markus; Stepniewski, Martin; Grein, Stephan; Gottmann, Pascal; Reinhardt, Lukas; Queisser, Gillian

    2016-01-01

    The morphology of neurons and networks plays an important role in processing electrical and biochemical signals. Based on neuronal reconstructions, which are becoming abundantly available through databases such as NeuroMorpho.org, numerical simulations of Hodgkin-Huxley-type equations, coupled to biochemical models, can be performed in order to systematically investigate the influence of cellular morphology and the connectivity pattern in networks on the underlying function. Development in the area of synthetic neural network generation and morphology reconstruction from microscopy data has brought forth the software tool NeuGen. Coupling this morphology data (either from databases, synthetic, or reconstruction) to the simulation platform UG 4 (which harbors a neuroscientific portfolio) and VRL-Studio has brought forth the extendible toolbox NeuroBox. NeuroBox allows users to perform numerical simulations on hybrid-dimensional morphology representations. The code base is designed in a modular way, such that, e.g., new channel or synapse types can be added to the library. Workflows can be specified through scripts or through the VRL-Studio graphical workflow representation. Third-party tools, such as ImageJ, can be added to NeuroBox workflows. In this paper, NeuroBox is used to study the electrical and biochemical effects of synapse loss vs. synchrony in neurons, to investigate large morphology data sets within detailed biophysical simulations, and to demonstrate the capability of utilizing high-performance computing infrastructure for large-scale network simulations. Using new synapse distribution methods and Finite Volume based numerical solvers for compartment-type models, our results demonstrate how an increase in synaptic synchronization can compensate for synapse loss at the electrical and calcium level, and how detailed neuronal morphology can be integrated into large-scale network simulations. PMID:26903818

  9. Lossless compression of very large volume data with fast dynamic access

    NASA Astrophysics Data System (ADS)

    Zhao, Rongkai; Tao, Tao; Gabriel, Michael; Belford, Geneva

    2002-09-01

    The volumetric data set is important in many scientific and biomedical fields. Since such sets may be extremely large, a compression method is critical to store and transmit them. To achieve a high compression rate, most existing volume compression methods are lossy, which is usually unacceptable in biomedical applications. We developed a new context-based nonlinear prediction method to preprocess the volume data set in order to effectively lower the prediction entropy. The prediction error is further encoded using a Huffman code. Unlike conventional methods, the volume is divided into cubical blocks to take advantage of the data's spatial locality. Instead of building one Huffman tree for each block, we developed a novel binning algorithm that builds a Huffman tree for each group (bin) of blocks. Combining all of these techniques, we achieved an excellent compression rate compared to other lossless volume compression methods. In addition, an auxiliary data structure, the Scalable Hyperspace File (SHSF), is used to index the huge volume, providing many other benefits including parallel construction, on-the-fly access to compressed data without global decompression, fast previewing, efficient background compression, and scalability.
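
    The idea of sharing one Huffman tree across a bin of blocks, rather than building one tree per block, can be sketched as follows. This is an illustrative sketch: `bin_code_table` and the way blocks are grouped into bins are assumptions, not the paper's binning algorithm.

    ```python
    import heapq
    from collections import Counter
    from itertools import count

    def huffman_code(freqs):
        """Build a prefix-free Huffman code table from a dict of
        symbol frequencies."""
        tie = count()  # tiebreaker so heap never compares dicts
        heap = [(f, next(tie), {s: ''}) for s, f in freqs.items()]
        heapq.heapify(heap)
        if len(heap) == 1:  # degenerate one-symbol alphabet
            _, _, table = heap[0]
            return {s: '0' for s in table}
        while len(heap) > 1:
            f1, _, t1 = heapq.heappop(heap)
            f2, _, t2 = heapq.heappop(heap)
            merged = {s: '0' + c for s, c in t1.items()}
            merged.update({s: '1' + c for s, c in t2.items()})
            heapq.heappush(heap, (f1 + f2, next(tie), merged))
        return heap[0][2]

    def bin_code_table(blocks):
        """One shared Huffman table for a bin (group) of residual
        blocks, pooling their symbol statistics."""
        freqs = Counter()
        for block in blocks:
            freqs.update(block)
        return huffman_code(freqs)
    ```

    Pooling statistics per bin trades a little per-block optimality for far fewer stored trees, which is the storage saving the abstract describes.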

  10. Large-eddy simulation of very-large-scale motions in atmospheric boundary-layer flows

    NASA Astrophysics Data System (ADS)

    Fang, Jiannong; Porté-Agel, Fernando

    2015-04-01

    In the last few decades, laboratory experiments and direct numerical simulations of turbulent boundary layers, performed at low to moderate Reynolds numbers, have found very-large-scale motions (VLSMs) in the logarithmic and outer regions. The size of VLSMs was found to be 10-20 times as large as the boundary-layer thickness. Recently, a few studies based on field experiments examined the presence of VLSMs in neutral atmospheric boundary-layer flows, which are invariably at very high Reynolds numbers. Very large scale structures similar to those observed in laboratory-scale experiments have been found and characterized. However, field measurements are more challenging than laboratory-based measurements, and can lack resolution and statistical convergence. Such challenges have implications for the robustness of the analysis, which may be further adversely affected by the use of Taylor's hypothesis to convert time series to spatial data. We use large-eddy simulation (LES) to investigate VLSMs in atmospheric boundary-layer flows. In order to make sure that the largest flow structures are properly resolved, the horizontal domain size is chosen to be much larger than the standard domain size. It is shown that the contributions to the resolved turbulent kinetic energy and shear stress from VLSMs are significant; therefore, the large computational domain adopted here is essential for the purpose of investigating VLSMs. The spatially coherent structures associated with VLSMs are characterized through flow visualization and statistical analysis. The instantaneous velocity fields in horizontal planes give evidence of streamwise-elongated flow structures of low-speed fluid with negative fluctuation of the streamwise velocity component, flanked on either side by similarly elongated high-speed structures. The pre-multiplied power spectra and two-point correlations indicate that the scales of these streak-like structures are very large.
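
    A pre-multiplied power spectrum of the kind used to identify VLSM scales can be computed for a 1-D velocity signal as follows. This is a minimal sketch with an assumed discrete normalization, not the analysis code of the study; peaks of k·E(k) mark the energetic wavenumbers.

    ```python
    import numpy as np

    def premultiplied_spectrum(u, dx):
        """Return angular wavenumbers k and the pre-multiplied power
        spectrum k * E(k) of a periodic 1-D velocity signal sampled
        with spacing dx."""
        u = np.asarray(u, dtype=float) - np.mean(u)  # drop the mean
        n = u.size
        uhat = np.fft.rfft(u)
        E = np.abs(uhat) ** 2 / n ** 2               # discrete power spectrum
        k = 2 * np.pi * np.fft.rfftfreq(n, d=dx)     # angular wavenumber
        return k, k * E
    ```

    On a semilogarithmic plot of k·E(k) versus wavelength, VLSMs appear as a spectral peak at wavelengths many times the boundary-layer depth.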

  11. Robust large-scale parallel nonlinear solvers for simulations.

    SciTech Connect

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using models other than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian, or that have an inaccurate Jacobian, to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, the Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods; they compute a step from a local quadratic model rather than a linear one.
The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write
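
    Broyden's rank-one secant update, the core of the method compared above, can be sketched as follows. This is a dense, non-limited-memory version for illustration, not the report's large-scale implementation:

    ```python
    import numpy as np

    def broyden(F, x0, J0=None, tol=1e-10, maxit=50):
        """Broyden's 'good' method: maintain an approximate Jacobian B
        and update it with a rank-one secant correction each step, so
        the true Jacobian is never re-evaluated."""
        x = np.asarray(x0, dtype=float)
        B = np.eye(x.size) if J0 is None else np.asarray(J0, dtype=float)
        f = F(x)
        for _ in range(maxit):
            if np.linalg.norm(f) < tol:
                break
            s = np.linalg.solve(B, -f)          # quasi-Newton step
            x_new = x + s
            f_new = F(x_new)
            y = f_new - f
            B += np.outer(y - B @ s, s) / (s @ s)  # secant update
            x, f = x_new, f_new
        return x
    ```

    Limited-memory variants avoid storing the dense matrix B by keeping only the recent (s, y) pairs, which is what makes the approach viable at large scale.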

  12. HYBRID BRIDGMAN ANVIL DESIGN: AN OPTICAL WINDOW FOR IN-SITU SPECTROSCOPY IN LARGE VOLUME PRESSES

    SciTech Connect

    Lipp, M J; Evans, W J; Yoo, C S

    2005-07-29

    The absence of in-situ optical probes for large volume presses often limits their application to high-pressure materials research. In this paper, we present a unique anvil/optical-window design for use in large volume presses, which consists of an inverted diamond anvil seated in a Bridgman-type anvil. A small cylindrical aperture through the Bridgman anvil, ending at the back of the diamond anvil, allows optical access to the sample chamber and permits direct optical spectroscopy measurements, such as ruby fluorescence (in-situ pressure) or Raman spectroscopy. The performance of this anvil design has been demonstrated by loading KBr to a pressure of 14.5 GPa.

  13. Assembly, operation and disassembly manual for the Battelle Large Volume Water Sampler (BLVWS)

    SciTech Connect

    Thomas, V.W.; Campbell, R.M.

    1984-12-01

    Assembly, operation, and disassembly of the Battelle Large Volume Water Sampler (BLVWS) are described in detail. Step-by-step instructions for assembly, general operation, and disassembly are provided to allow an operator completely unfamiliar with the sampler to successfully apply the BLVWS to his research sampling needs. The sampler permits concentration of both particulate and dissolved radionuclides from large volumes of ocean and fresh water. The water sample passes through a filtration section for particle removal, then through sorption or ion-exchange beds where species of interest are removed. The sampler components which contact the water being sampled are constructed of polyvinyl chloride (PVC). The sampler has been successfully applied to many sampling needs over the past fifteen years. 9 references, 8 figures.

  14. The large volume calorimeter for measuring the "pressure cooker" shipping container

    SciTech Connect

    Kasperski, P.W.; Duff, M.F.; Wetzel, J.R. ); Baker, L.B.; MacMurdo, K.W. )

    1991-01-01

    A precise, low-wattage, large volume calorimeter system has been developed at Mound to measure two configurations of the 12081 containment vessel. This system was developed and constructed to perform verification measurements at the Savannah River Site. The calorimeter system has performance design specifications of ±0.3% error above the 2-watt level, and ±(0.03% + 0.006 watts) at power levels below 2 watts (one sigma). Data collected during performance testing show measurement errors well within this range, even down to 0.1-watt power levels. The development of this calorimeter shows that ultra-precise measurements can be achieved on extremely large volume sample configurations. 1 ref., 5 figs.

  15. Nesting Large-Eddy Simulations Within Mesoscale Simulations for Wind Energy Applications

    NASA Astrophysics Data System (ADS)

    Lundquist, J. K.; Mirocha, J. D.; Chow, F. K.; Kosovic, B.; Lundquist, K. A.

    2008-12-01

    With increasing demand for more accurate atmospheric simulations for wind turbine micrositing, for operational wind power forecasting, and for more reliable turbine design, simulations of atmospheric flow with resolution of tens of meters or higher are required. These time-dependent large-eddy simulations (LES) account for complex terrain and resolve individual atmospheric eddies on length scales smaller than turbine blades. These small-domain, high-resolution simulations are possible with a range of commercial and open-source software, including the Weather Research and Forecasting (WRF) model. In addition to "local" sources of turbulence within an LES domain, changing weather conditions outside the domain can also affect the flow, suggesting that a mesoscale model should provide boundary conditions to the large-eddy simulations. Nesting a large-eddy simulation within a mesoscale model requires nuanced representations of turbulence. Our group has improved WRF's LES capability by implementing the Nonlinear Backscatter and Anisotropy (NBA) subfilter stress model following Kosović (1997) and an explicit filtering and reconstruction technique to compute the Resolvable Subfilter-Scale (RSFS) stresses (following Chow et al., 2005). We have also implemented an immersed boundary method (IBM) in WRF to accommodate complex terrain. These new models improve WRF's LES capabilities over complex terrain and in stable atmospheric conditions. We demonstrate approaches to nesting LES within a mesoscale simulation for farms of wind turbines in hilly regions. Results are sensitive to the nesting method, indicating that care must be taken to provide appropriate boundary conditions and to allow adequate spin-up of turbulence in the LES domain. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  16. Angular dependent potential for α-boron and large-scale molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Pokatashkin, P.; Kuksin, A.; Yanilkin, A.

    2015-06-01

    Both quantum mechanical and molecular-dynamics (MD) simulations of α-boron are performed in this work. An angular-dependent interatomic potential (ADP) for boron is obtained using a force-matching technique. The fitting data are based on ab initio results within the -20 to 100 GPa pressure range and temperatures up to 2000 K. Characteristics of α-boron obtained using the ADP potential, such as equilibrium bond lengths, bulk modulus, pressure-volume relations, Grüneisen coefficient, and thermal expansion coefficient, are in good agreement with both the ab initio data obtained in this work and known experimental data. As an example of application, the propagation of shock waves through a single crystal of α-boron is also explored by large-scale MD simulations.

  17. Numerical simulation of the process of airfoil icing in the presence of large supercooled water drops

    NASA Astrophysics Data System (ADS)

    Prykhodko, O. A.; Alekseyenko, S. V.

    2014-10-01

    We have developed a software package and related methodology that can be used to simulate the process of airfoil icing during flight in the presence of large supercooled liquid water drops in the oncoming airflow. The motion of the carrier medium is described using the Navier-Stokes equations for a compressible gas. The motion of water drops is described using an inertial model. The process of water deposition and its subsequent freezing on the airfoil surface is described by the method of control volumes, based on the equations of conservation of mass, momentum, and energy for each element of the surface. The main results of simulations are presented for the icing of an NACA 0012 airfoil profile with "barrier" ice formation in the absence and presence of heating of the leading edge. The influence of the ice-growth thickness and its position on the airfoil chord on the pattern of airflow and the aerodynamic characteristics of the airfoil is analyzed.

  18. Anti-de Sitter-space/conformal-field-theory correspondence and large-N volume independence

    SciTech Connect

    Poppitz, Erich; Uensal, Mithat

    2010-09-15

    We study the Eguchi-Kawai reduction in the strong-coupling domain of gauge theories via the gravity dual of N=4 super-Yang-Mills on R³×S¹. We show that D-branes geometrize volume independence in the center-symmetric vacuum and give supergravity predictions for the range of validity of reduced large-N models at strong coupling.

  19. Rapid Adaptive Optical Recovery of Optimal Resolution over Large Volumes

    PubMed Central

    Wang, Kai; Milkie, Dan; Saxena, Ankur; Engerer, Peter; Misgeld, Thomas; Bronner, Marianne E.; Mumm, Jeff; Betzig, Eric

    2014-01-01

    Using a de-scanned, laser-induced guide star and direct wavefront sensing, we demonstrate adaptive correction of complex optical aberrations at high numerical aperture and a 14 ms update rate. This permits us to compensate for the rapid spatial variation in aberration often encountered in biological specimens, and to recover diffraction-limited imaging over large (>240 μm)³ volumes. We applied this to image fine neuronal processes and subcellular dynamics within the zebrafish brain. PMID:24727653

  20. Large Eddy Simulation of Flow and Sediment Transport over Dunes

    NASA Astrophysics Data System (ADS)

    Agegnehu, G.; Smith, H. D.

    2012-12-01

    Understanding the nature of flow over bedforms is of great importance in fluvial and coastal environments. For example, a bedform is one source of energy dissipation in water waves outside the surf zone in coastal environments. In rivers, the migration of dunes often affects the stability of the river bed and banks. In general, when a fluid flows over a sediment bed, the sediment transport generated by the interaction of the flow field with the bed results in the periodic deformation of the bed in the form of dunes. Dunes generally reach an equilibrium shape and slowly propagate in the direction of the flow, as sand is lifted in the high-shear regions and redeposited in the separated flow areas. Different numerical approaches have been used in the past to study the flow and sediment transport over bedforms. In most previous work, Reynolds Averaged Navier Stokes (RANS) equations are employed to study fluid motions over ripples and dunes. However, evidence suggests that these models cannot represent key turbulent quantities in unsteady boundary layers. Large Eddy Simulation (LES) can resolve a much larger range of smaller scales than RANS. Moreover, unsteady simulations using LES give vital turbulent quantities which can help to study fluid motion and sediment transport over dunes. For this study, we use a three-dimensional, non-hydrostatic model, OpenFOAM. It is a freely available tool which has different solvers to simulate specific problems in engineering and fluid mechanics. Our objective is to examine the flow and sediment transport from a numerical standpoint for bed geometries that are typical of fixed dunes. As a first step, we performed Large Eddy Simulation of the flow over dune geometries based on the experimental data of Nelson et al. (1993). The instantaneous flow field is investigated with special emphasis on the occurrence of coherent structures. To assess the effect of bed geometries on near-bed turbulence, we considered different

  1. Thermobaric cabbeling over Maud Rise: Theory and large eddy simulation

    NASA Astrophysics Data System (ADS)

    Harcourt, Ramsey R.

    2005-10-01

    A Large Eddy Simulation (LES) of the wintertime upper ocean below seasonal Antarctic ice cover over Maud Rise was carried out using observed time-dependent surface forcing from 1994 Antarctic Zone Flux Experiment (ANZFLUX) observations. Surface ice formation increases the density of the cold, fresher Surface Mixed Layer (SML), that overlies warmer, saltier Weddell Deep Water (WDW). This reduces the stability of the thermocline until it reaches a critical point for instabilities arising from the nonlinear equation of state (NES) for seawater density ρ. This simulation was intended to model the thermobaric detrainment of SML fluid, a NES instability predicted to result from the dependence of seawater density on the product θP of temperature and pressure. Instead, model results demonstrate a different instability arising from the combination of thermobaricity with cabbeling, the NES effect due primarily to the dependence of ρ on θ2. This combined thermobaric cabbeling instability drives turbulent convection in a deep interior mixed layer (IML) that may grow hundreds of meters thick below the thermocline, largely decoupled from SML dynamics. In the LES, thermobaric cabbeling and IML convection shoals the SML through entrainment from below until ice motion increases in the observationally-based model forcing. Increased upper ocean model heat flux due to higher ice speed melts surface ice, increasing thermocline stratification and eventually bringing the simulated instability to a halt. In an auxiliary simulation the lull preceding strong ice motion in field observations is artificially extended by temporarily holding model surface forcing constant until the SML shoals entirely, bringing the modified WDW of the IML, 2 °C above freezing, directly to the surface. Subsequently, reverting to the observed surface forcing and its attendant strong ice motion melts the ice cover entirely, demonstrating a possible mechanism for open ocean Antarctic polynya formation. The

  2. Characterization of stable brush-shaped large-volume plasma generated at ambient air

    SciTech Connect

    Tang Jie; Cao Wenqing; Zhao Wei; Wang Yishan; Duan Yixiang

    2012-01-15

    A brush-shaped, large-volume plasma was generated at ambient pressure with a dc power supply, flowing argon gas, and a narrow outlet slit. Based on the V-I curve and emission profiles obtained in our experiment, the plasma shows typical glow-discharge characteristics. The electron density in the positive column close to the anode is as high as about 1.4×10¹⁴ cm⁻³, which is desirable for generating abundant reactive species in the plasma. Emission spectroscopy diagnostics indicate that many reactive species generated within the plasma, such as excited argon atoms, excited oxygen atoms, excited nitrogen molecules, and OH and C₂ radicals, are distributed symmetrically and uniformly, which is preferable for some chemical reactions in practical applications. Spectral measurements also show that the concentration of some excited argon atoms increases with the argon flow rate when the applied voltage is held constant, while it declines with the discharge current in the normal/subnormal glow-discharge mode when the argon flow rate is fixed. The plasma size is about 15 mm × 1 mm × 19 mm (L, W, H) at 38 W of discharge power. Such a laminar brush-shaped large-volume plasma device ensures not only efficient utilization of the plasma gas but also effective processing of large, structurally complicated objects that are susceptible to high temperatures.

  3. RSRM top hat cover simulator lightning test, volume 1

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The test sequence was to measure electric and magnetic fields induced inside a redesigned solid rocket motor case when a simulated lightning discharge strikes an exposed top hat cover simulator. The test sequence was conducted between 21 June and 17 July 1990. Thirty-six high rate-of-rise Marx generator discharges and eight high-current bank discharges were injected onto three different test article configurations. Attach points included three locations on the top hat cover simulator and two locations on the mounting bolts. Damage to the top hat cover simulator, mounting bolts, and grain cover was observed. Overall electric field levels were well below 30 kilovolts/meter. Electric field levels ranged from 184.7 to 345.9 volts/meter, and calculated magnetic field levels ranged from 6.921 to 39.73 amperes/meter. It is recommended that the redesigned solid rocket motor top hat cover be used in Configuration 1 or Configuration 2 as an interim lightning protection device until a lightweight cover can be designed.

  4. Analytical simulation of SPS system performance, volume 3, phase 3

    NASA Technical Reports Server (NTRS)

    Kantak, A. V.; Lindsey, W. C.

    1980-01-01

    The simulation model for the Solar Power Satellite space antenna and the associated system imperfections are described. Overall power transfer efficiency, the key performance issue, is discussed as a function of the system imperfections. Other system performance measures discussed include average power pattern, mean beam gain reduction, and pointing error.

  5. Shuttle mission simulator baseline definition report, volume 2

    NASA Technical Reports Server (NTRS)

    Dahlberg, A. W.; Small, D. E.

    1973-01-01

    The baseline definition report for the space shuttle mission simulator is presented. The subjects discussed are: (1) the general configurations, (2) motion base crew station, (3) instructor operator station complex, (4) display devices, (5) electromagnetic compatibility, (6) external interface equipment, (7) data conversion equipment, (8) fixed base crew station equipment, and (9) computer complex. Block diagrams of the supporting subsystems are provided.

  6. Program to Optimize Simulated Trajectories (POST). Volume 2: Utilization manual

    NASA Technical Reports Server (NTRS)

    Brauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.

    1975-01-01

    Information pertinent to users of the program to optimize simulated trajectories (POST) is presented. The input required and the output available are described for each of the trajectory and targeting/optimization options. A sample input listing and the resulting output are given.

  7. Program to Optimize Simulated Trajectories (POST). Volume 3: Programmer's manual

    NASA Technical Reports Server (NTRS)

    Brauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.

    1975-01-01

    Information pertinent to the programmer and relating to the program to optimize simulated trajectories (POST) is presented. Topics discussed include: program structure and logic, subroutine listings and flow charts, and internal FORTRAN symbols. The POST core requirements are summarized along with program macrologic.

  8. Shuttle mission simulator requirements report, volume 1, revision A

    NASA Technical Reports Server (NTRS)

    Burke, J. F.

    1973-01-01

    The tasks required to design, develop, produce, and field-support a shuttle mission simulator for training crew members and ground support personnel are defined. The requirements for program management, control, systems engineering, design, and development are discussed, along with the design and construction standards, software design, control and display, communication and tracking, and systems integration.

  9. Large-eddy simulation of turbulence in steam generators

    SciTech Connect

    Bagwell, T.G.; Hassan, Y.A.; Steininger, D.A.

    1989-11-01

    A major problem associated with steam generators is excessive tube vibration caused by turbulent-flow buffeting and fluid-elastic excitation. Vibration can lead to tube rupture or wear, necessitating tube plugging and reducing the availability of the steam generator. The fluid/structure interaction phenomenon that causes fluid-elastic tube excitation is unknown at present. The current investigation defines the spectral characteristics of turbulent flow entering the Westinghouse D4 steam generator tube bundles using the large-eddy simulation (LES) technique. Due to the recent availability of supercomputers, LES is being considered as a possible engineering design analysis tool. The information from this study will provide input for defining the temporally fluctuating forces on steam generator tube banks. The GUST code was used to analyze the water box of a Westinghouse model D4 steam generator.

  10. High Speed Jet Noise Prediction Using Large Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Lele, Sanjiva K.

    2002-01-01

    Current methods for predicting the noise of high speed jets are largely empirical. These empirical methods are based on jet noise data gathered by varying primarily the jet flow speed and jet temperature for a fixed nozzle geometry. Efforts have been made to correlate the noise data of co-annular (multi-stream) jets, and the changes associated with forward flight, within these empirical correlations. But ultimately these empirical methods fail to provide suitable guidance in the selection of new, low-noise nozzle designs. This motivates the development of a new class of prediction methods based on computational simulations, in an attempt to remove the empiricism of present-day noise predictions.

  11. Large-eddy simulation of cavitating nozzle and jet flows

    NASA Astrophysics Data System (ADS)

    Örley, F.; Trummler, T.; Hickel, S.; Mihatsch, M. S.; Schmidt, S. J.; Adams, N. A.

    2015-12-01

    We present implicit large-eddy simulations (LES) to study the primary breakup of cavitating liquid jets. The considered configuration, which consists of a rectangular nozzle geometry, adopts the setup of a reference experiment for validation. The setup is a generic reproduction of a scaled-up automotive fuel injector. Modelling of all components (i.e. gas, liquid, and vapor) is based on a barotropic two-fluid two-phase model and employs a homogeneous mixture approach. The cavitating liquid model assumes thermodynamic equilibrium. Compressibility of all phases is considered in order to capture the pressure wave dynamics of collapse events. Since the development of cavitation significantly affects jet breakup characteristics, we study three different operating points. We identify three main mechanisms which induce primary jet breakup: amplification of turbulent fluctuations, gas entrainment, and collapse events near the liquid-gas interface.

  12. Hyperbolic self-gravity solver for large scale hydrodynamical simulations

    NASA Astrophysics Data System (ADS)

    Hirai, Ryosuke; Nagakura, Hiroki; Okawa, Hirotada; Fujisawa, Kotaro

    2016-04-01

    A new computationally efficient method has been introduced to treat self-gravity in Eulerian hydrodynamical simulations. It is applied simply by modifying the Poisson equation into an inhomogeneous wave equation. This roughly corresponds to the weak-field limit of the Einstein equations in general relativity, and as long as the gravity propagation speed is taken to be larger than the hydrodynamical characteristic speed, the results agree with solutions of the Poisson equation. The solutions agree almost perfectly if the domain is taken large enough or appropriate boundary conditions are given. Our new method can not only significantly reduce the computational time compared with existing methods, but is also fully compatible with massively parallel computation, nested grids, and adaptive mesh refinement techniques, all of which can accelerate progress in computational astrophysics and cosmology.
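    The core idea (replacing the elliptic Poisson problem with a wave-type equation whose steady state recovers the Poisson solution) can be illustrated with a minimal 1-D sketch. The damping term, grid, and source below are illustrative assumptions, not the authors' scheme:

```python
import numpy as np

# Sketch: relax the 1-D Poisson problem phi_xx = s(x) by time-stepping the
# damped wave equation  phi_tt + k*phi_t = c^2*(phi_xx - s),
# whose steady state is the Poisson solution (illustrative parameters).
n = 101
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
s = np.ones(n)              # uniform source: exact solution phi = x*(x-1)/2
c, k = 1.0, 3.0             # propagation speed and damping rate
dt = 0.5 * dx / c           # CFL-limited explicit time step

phi = np.zeros(n)
phi_old = phi.copy()
for _ in range(20000):
    lap = np.zeros(n)
    lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2
    rhs = (c * dt) ** 2 * (lap - s) + 2.0 * phi - phi_old + 0.5 * k * dt * phi_old
    phi, phi_old = rhs / (1.0 + 0.5 * k * dt), phi
    phi[0] = phi[-1] = 0.0  # Dirichlet boundary values held fixed

exact = 0.5 * x * (x - 1.0)
print(np.max(np.abs(phi - exact)))   # relaxes to the Poisson solution
```

Taking the propagation speed larger than the flow's characteristic speed (here there is no flow, so any finite c works) is what makes the hyperbolic solution track the elliptic one.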

  13. Large Eddy Simulation of FDA's Idealized Medical Device.

    PubMed

    Delorme, Yann T; Anupindi, Kameswararao; Frankel, Steven H

    2013-12-01

    A hybrid large eddy simulation (LES) and immersed boundary method (IBM) computational approach is used to make quantitative predictions of flow field statistics within the Food and Drug Administration's (FDA) idealized medical device. An in-house code, hereafter WenoHemo™, is used that combines high-order finite-difference schemes on structured staggered Cartesian grids with an IBM to facilitate flow over or through complex stationary or rotating geometries, and employs a subgrid-scale (SGS) turbulence model that more naturally handles transitional flows [2]. Predictions of velocity and wall shear stress statistics are compared with previously published experimental measurements from Hariharan et al. [6] for the four Reynolds numbers considered. PMID:24187599

  14. Large Eddy Simulation in a Channel with Exit Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Cziesla, T.; Braun, H.; Biswas, G.; Mitra, N. K.

    1996-01-01

    The influence of the exit boundary conditions (vanishing first derivative of the velocity components and constant pressure) on the large eddy simulation of fully developed turbulent channel flow has been investigated for equidistant and stretched grids at the channel exit. Results show that the chosen exit boundary conditions introduce a small disturbance which is mostly damped by the grid stretching. The difference between the fully developed turbulent channel flow obtained with LES using periodicity conditions at the inlet and exit, and the LES with fully developed flow at the inlet and the chosen exit boundary condition, is less than 10% for equidistant grids and less than 5% for the case of grid stretching. The chosen boundary condition is of interest because it may be used in complex flows with backflow at the exit.
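    The vanishing-first-derivative exit condition amounts to copying the last interior value to the boundary at each step. A minimal 1-D upwind advection sketch of that idea (illustrative only; the paper's LES solver and grids are far more elaborate):

```python
import numpy as np

# A Gaussian pulse advected toward the outflow boundary; the zero-gradient
# exit condition (u[-1] = u[-2]) lets it leave the domain cleanly.
n, c, dx, dt = 100, 1.0, 0.01, 0.005       # CFL = c*dt/dx = 0.5
xgrid = np.arange(n) * dx
u = np.exp(-((xgrid - 0.3) ** 2) / 0.005)  # initial Gaussian pulse
for _ in range(200):
    u[1:] -= c * dt / dx * (u[1:] - u[:-1])  # first-order upwind interior
    u[-1] = u[-2]                            # zero-gradient exit boundary
print(u.max())  # pulse has left the domain; only a small residue remains
```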

  15. Background simulations for the Large Area Detector onboard LOFT

    NASA Astrophysics Data System (ADS)

    Campana, Riccardo; Feroci, Marco; Del Monte, Ettore; Mineo, Teresa; Lund, Niels; Fraser, George W.

    2013-12-01

    The Large Observatory For X-ray Timing (LOFT), currently in an assessment phase in the framework of the ESA M3 Cosmic Vision programme, is an innovative medium-class mission specifically designed to answer fundamental questions about the behaviour of matter in the very strong gravitational and magnetic fields around compact objects and in supranuclear density conditions. Having an effective area of ~10 m² at 8 keV, LOFT will be able to measure with high sensitivity very fast variability in X-ray fluxes and spectra. A good knowledge of the in-orbit background environment is essential to assess the scientific performance of the mission and optimize the design of its main instrument, the Large Area Detector (LAD). In this paper the results of an extensive Geant-4 simulation of the instrument will be discussed, showing the main contributions to the background and the design solutions for its reduction and control. Our results show that the current LOFT/LAD design is expected to meet its scientific requirement of a background rate equivalent to 10 mCrab in 2-30 keV, achieving about 5 mCrab in the most important 2-10 keV energy band. Moreover, simulations show an anticipated modulation of the background rate as small as 10% over the orbital timescale. The intrinsic photonic origin of the largest background component also allows for efficient modelling, supported by in-flight active monitoring, allowing systematic residuals to be predicted significantly better than the requirement of 1%, and actually meeting the 0.25% science goal.

  16. Large eddy simulation of a lifted turbulent jet flame

    SciTech Connect

    Ferraris, S.A.; Wen, J.X.

    2007-09-15

    The flame index concept for large eddy simulation developed by Domingo et al. [P. Domingo, L. Vervisch, K. Bray, Combust. Theory Modell. 6 (2002) 529-551] is used to capture the partially premixed structure at the leading point and the dual combustion regimes further downstream on a turbulent lifted flame, which is composed of premixed and nonpremixed flame elements, each separately described under a flamelet assumption. Predictions are made for the lifted methane/air jet flame experimentally tested by Mansour [M.S. Mansour, Combust. Flame 133 (2003) 263-274]. The simulation covers a wide domain from the jet exit to the far flow field. Good agreement with the data for the lift-off height and the mean mixture fraction has been achieved. The model has also captured the double flames, showing a configuration similar to that of the experiment, which involves a rich premixed branch at the jet center and a diffusion branch in the outer region that meet at the so-called triple point at the flame base. This basic structure is contorted by eddies coming from the jet exit but remains stable at the lift-off height. No lean premixed branches are observed in the simulation or the experiment. Further analysis of the stabilization mechanism was conducted. A distinction between the leading point (the most upstream point of the flame) and the stabilization point was made. The latter was identified as the position with the maximum premixed heat release. This is in line with the stabilization mechanism proposed by Upatnieks et al. [A. Upatnieks, J. Driscoll, C. Rasmussen, S. Ceccio, Combust. Flame 138 (2004) 259-272]. (author)

  17. Reduced-Order Simulation of Large Accelerator Structures

    NASA Astrophysics Data System (ADS)

    Cooke, Simon

    2007-11-01

    Simulating electromagnetic waves inside finite periodic or almost periodic three-dimensional structures is important to research in linear particle acceleration, high power microwave generation, and photonic bandgap structures. While eigenmodes of periodic structures can be determined from analysis of a single unit cell, based on Floquet theory, the general case of aperiodic structures, with defects or non-uniform properties, typically requires 3D electromagnetic simulation of the entire structure. When the structure is large and high accuracy is necessary this can require high-performance computing techniques to obtain even a few eigenmodes [1]. To confront this problem, we describe an efficient, field-based algorithm that can accurately determine the complete eigenmode spectrum for extended aperiodic structures, up to some chosen frequency limit. The new method combines domain decomposition with a non-traditional, dual eigenmode representation of the fields local to each cell of the structure. Two related boundary value eigenproblems are solved numerically in each cell, with (a) electrically shielded, and (b) magnetically shielded interfaces, to determine a combined set of basis fields. By using the dual solutions in our field representation we accurately represent both the electric and magnetic surface currents that mediate coupling at the interfaces between adjacent cells. The solution is uniformly convergent, so that typically only a few modes are used in each cell. We present results from 3D simulations that demonstrate the speed and low computational needs of the algorithm. [1] Z. Li, et al, Nucl. Instrum. Methods Phys. Res., Sect. A 558 (2006), 168-174.

  18. Large eddy simulation of controlled transition to turbulence

    NASA Astrophysics Data System (ADS)

    Sayadi, Taraneh; Moin, Parviz

    2012-11-01

    Large eddy simulation of H- and K-type transitions in a spatially developing zero-pressure-gradient boundary layer at Ma∞ = 0.2 is investigated using several subgrid scale (SGS) models including constant coefficient Smagorinsky and Vreman models and their dynamic extensions, dynamic mixed scale-similarity, dynamic one-equation kinetic energy model, and global coefficient Vreman models. A key objective of this study is to assess the capability of SGS models to predict the location of transition and the skin friction throughout the transition process. The constant coefficient models fail to detect transition, but the dynamic procedure allows for a negligible turbulent viscosity in the early transition region. As a result, the "point" of transition is estimated correctly. However, after secondary instabilities set in and result in the overshoot in the skin friction profile, all models fail to produce sufficient subgrid scale shear stress required for the correct prediction of skin friction and the mean velocity profile. The same underprediction of skin friction persists into the turbulent region. Spatially filtered direct numerical simulation data in the same boundary layers are used to provide guidelines for SGS model development and validation.

  19. Large eddy simulation of boundary layer flow under cnoidal waves

    NASA Astrophysics Data System (ADS)

    Li, Yin-Jun; Chen, Jiang-Bo; Zhou, Ji-Fu; Zhang, Qiang

    2016-02-01

    Water waves in coastal areas are generally nonlinear, exhibiting asymmetric velocity profiles with different amplitudes of crest and trough. The behaviors of the boundary layer under asymmetric waves are of great significance for sediment transport in natural circumstances. While previous studies have mainly focused on linear or symmetric waves, asymmetric wave-induced flows remain unclear, particularly in the flow regime with high Reynolds numbers. Taking cnoidal wave as a typical example of asymmetric waves, we propose to use an infinite immersed plate oscillating cnoidally in its own plane in quiescent water to simulate asymmetric wave boundary layer. A large eddy simulation approach with Smagorinsky subgrid model is adopted to investigate the flow characteristics of the boundary layer. It is verified that the model well reproduces experimental and theoretical results. Then a series of numerical experiments are carried out to study the boundary layer beneath cnoidal waves from laminar to fully developed turbulent regimes at high Reynolds numbers, larger than ever studied before. Results of velocity profile, wall shear stress, friction coefficient, phase lead between velocity and wall shear stress, and the boundary layer thickness are obtained. The dependencies of these boundary layer properties on the asymmetric degree and Reynolds number are discussed in detail.

  20. Possible modifications to implicit large-eddy simulation

    NASA Astrophysics Data System (ADS)

    McDonough, J. M.

    2009-11-01

    Implicit large-eddy simulation (ILES) provides an advantage over more usual LES approaches in that its construction does not involve filtering of the governing equations and, as a consequence, removes the need to develop sub-grid scale (SGS) models to represent artificial stresses arising from this filtering. At the same time, it is clear that ILES is simply an under-resolved direct numerical simulation with advanced treatments of advection terms to better control numerical stability via dissipation that otherwise would have been provided by an SGS model. As such it cannot be expected to accurately predict interactions of fluid turbulence with other physical phenomena (e.g., heat and mass transfer, chemical kinetics) on subgrid scales, as is also true of usual forms of LES. In this talk we describe a straightforward technique, based on formal multi-scale methods, whereby SGS interactions can be introduced to enhance resolved-scale results computed as in ILES, and we discuss derivation of a class of efficient models based on the ``poor man's Navier--Stokes equation'' (McDonough, Phys. Rev. E 79, 2009; McDonough and Huang, Int. J. Numer. Meth. Fluids 44, 2004). Properties of these models will be presented for a moderate-Re 3-D lid-driven cavity problem.

  1. Numerical techniques for large cosmological N-body simulations

    NASA Technical Reports Server (NTRS)

    Efstathiou, G.; Davis, M.; White, S. D. M.; Frenk, C. S.

    1985-01-01

    Techniques for carrying out large N-body simulations of the gravitational evolution of clustering in the fundamental cube of an infinite periodic universe are described and compared. The accuracy of the forces derived from several commonly used particle mesh schemes is examined, showing how submesh resolution can be achieved by including short-range forces between particles by direct summation techniques. The time integration of the equations of motion is discussed, and the accuracy of the codes for various choices of 'time' variable and time step is tested by considering energy conservation as well as by direct analysis of particle trajectories. Methods for generating initial particle positions and velocities corresponding to a growing mode representation of a specified power spectrum of linear density fluctuations are described. The effects of force resolution are studied and different simulation schemes are compared. An algorithm is implemented for generating initial conditions by varying the number of particles, the initial amplitude of density fluctuations, and the initial peculiar velocity field.
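    Several of the ingredients discussed (pairwise forces computed by direct summation with softening, time integration of the equations of motion, and energy conservation as an accuracy diagnostic) can be sketched in a few lines. The particle count, softening length, and step size below are illustrative choices, not the paper's:

```python
import numpy as np

# Direct-summation softened gravity with kick-drift-kick leapfrog, using
# energy conservation as the accuracy check (all parameters illustrative).
rng = np.random.default_rng(0)
n, G, eps, dt = 16, 1.0, 0.05, 1.0e-3
pos = rng.uniform(-1.0, 1.0, (n, 3))
vel = np.zeros((n, 3))
mass = np.full(n, 1.0 / n)

def accel(pos):
    d = pos[None, :, :] - pos[:, None, :]        # d[i, j] = r_j - r_i
    r2 = (d ** 2).sum(-1) + eps ** 2             # softened squared distance
    np.fill_diagonal(r2, np.inf)                 # exclude self-interaction
    return G * (mass[None, :, None] * d / r2[..., None] ** 1.5).sum(axis=1)

def energy(pos, vel):
    d = pos[None, :, :] - pos[:, None, :]
    r = np.sqrt((d ** 2).sum(-1) + eps ** 2)
    iu = np.triu_indices(n, 1)                   # each pair counted once
    pot = -G * (mass[:, None] * mass[None, :] / r)[iu].sum()
    return 0.5 * (mass[:, None] * vel ** 2).sum() + pot

e0 = energy(pos, vel)
a = accel(pos)
for _ in range(500):                             # kick-drift-kick leapfrog
    vel += 0.5 * dt * a
    pos += dt * vel
    a = accel(pos)
    vel += 0.5 * dt * a
drift = abs(energy(pos, vel) / e0 - 1.0)         # relative energy drift
print(drift)
```

A particle-mesh code replaces the O(N²) `accel` with an FFT-based grid solve, adding direct short-range pairs only below the mesh resolution, as the abstract describes.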

  2. Large-eddy simulation of density currents on inclined beds

    NASA Astrophysics Data System (ADS)

    Chawdhary, Saurabh; Khosronejad, Ali; Christodoulou, George; Sotiropoulos, Fotis

    2013-11-01

    Density currents are stratified flows that arise from a density differential in a gravity field. We carry out large-eddy simulation (LES) of the density current formed over a sloped bed by an incoming jet of denser salty water for two bed slopes: (a) 5 degrees and (b) 15 degrees. The Reynolds and Richardson numbers based on inlet height and inlet velocity were (a) 1100 and 0.471, and (b) 2000 and 0.0355, respectively. The Schmidt number is set equal to 620, the value for salt water. The computed results are compared with laboratory experiments in terms of the overall shape of the heavy-density plume and its spreading rate, and are shown to be in reasonable agreement. The instantaneous LES flow fields are further analyzed to gain novel insights into the rich dynamics of coherent vortical structures in the flow. The half-width of the plume is plotted as a function of downstream length and found to exhibit three different regions on a log scale, in agreement with previous experimental findings. We acknowledge computational support from the Minnesota Supercomputing Institute.

  3. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2003-01-31

    This paper discusses using the wavelet modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach, however, is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end-user requirements of the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases users are willing to accept approximate results if the answer comes back within a reasonable time. In other cases they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes. It may take months to generate one set of the targeted data; because of its sheer size, the data cannot be stored on disk for long and thus needs to be analyzed immediately before it is sent to tape. We integrated wavelets within AQSIM, a system that we are developing to support exploration and analysis of tera-scale data sets. We will discuss the way we utilized wavelet decomposition in our domain to facilitate compression and to answer a specific class of queries that is harder to answer with any other modeling technique. We will also discuss some of the shortcomings of our implementation and how to address them.
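
    As a rough illustration of the wavelet idea behind such querying (a sketch on made-up data, not AQSIM's actual code), a Haar decomposition separates a series into averages and details; discarding the finest detail levels compresses the data while coarse aggregate queries remain answerable from the retained averages:

    ```python
    # Hypothetical sketch of wavelet-based approximate querying with the
    # Haar basis. Function names are illustrative, not from AQSIM.

    def haar_decompose(data):
        """Full Haar decomposition of a length-2^k series.

        Returns the overall average plus detail levels, finest first.
        """
        coeffs = []
        while len(data) > 1:
            avgs = [(a + b) / 2 for a, b in zip(data[::2], data[1::2])]
            dets = [(a - b) / 2 for a, b in zip(data[::2], data[1::2])]
            coeffs.append(dets)
            data = avgs
        return data[0], coeffs

    def truncate(details, drop_finest):
        """Compression: discard the finest `drop_finest` detail levels."""
        return details[drop_finest:]
    ```

    The overall average (and block averages at coarse granularity) survive truncation exactly, which is what makes fast approximate aggregate queries possible.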

  4. A subfilter-scale stress model for large eddy simulations

    NASA Astrophysics Data System (ADS)

    Rouhi, Amirreza; Piomelli, Ugo

    2013-11-01

    In most large eddy simulations, the filter width is related to the grid. This method of specification, however, causes problems in complex flows where local refinement results in grid discontinuities. Following the work of Piomelli and Geurts (Proc. 8th Workshop on DLES, 2010), we propose an eddy-viscosity approach in which the filter width is based on the flow parameters only, with no explicit relationship to the grid size. This model can achieve grid-independent LES solutions, vanishes dynamically in regions of low turbulence activity, and has a computational cost lower than that of dynamic models. Successive Inverse Polynomial Interpolation (Geurts & Meyers, Phys. Fluids 18, 2006) was used to calculate the model parameter. Calculating the eddy viscosity implicitly at each time step removes the numerical instabilities found in previous studies, while maintaining the local character of the model. Results of simulations of channel flow at Reτ up to 2,000 and of forced homogeneous isotropic turbulence will be presented.

  5. Surface detection, meshing and analysis during large molecular dynamics simulations

    SciTech Connect

    Dupuy, L M; Rudd, R E

    2005-08-01

    New techniques are presented for the detection and analysis of surfaces and interfaces in atomistic simulations of solids. Atomistic and other particle-based simulations have no inherent notion of a surface, only atomic positions and interactions. The algorithms we introduce here provide an unambiguous means to determine which atoms constitute the surface, and the list of surface atoms and a tessellation (meshing) of the surface are determined simultaneously. The algorithms have been implemented and demonstrated to run automatically (on the fly) in a large-scale parallel molecular dynamics (MD) code on a supercomputer. We demonstrate the validity of the method in three applications in which the surfaces and interfaces evolve: void surfaces in ductile fracture, the surface morphology due to significant plastic deformation of a nanoscale metal plate, and the interfaces (grain boundaries) and void surfaces in a nanoscale polycrystalline system undergoing ductile failure. The technique is found to be quite robust, even when the topology of the surfaces changes as in the case of void coalescence where two surfaces merge into one. It is found to add negligible computational overhead to an MD code, and is much less expensive than other techniques such as the solvent-accessible surface.

  6. Nesting large-eddy simulations within mesoscale simulations for wind energy applications

    SciTech Connect

    Lundquist, J K; Mirocha, J D; Chow, F K; Kosovic, B; Lundquist, K A

    2008-09-08

    With increasing demand for more accurate atmospheric simulations for wind turbine micrositing, for operational wind power forecasting, and for more reliable turbine design, simulations of atmospheric flow with resolution of tens of meters or finer are required. These time-dependent large-eddy simulations (LES), which resolve individual atmospheric eddies on length scales smaller than turbine blades and account for complex terrain, are possible with a range of commercial and open-source software, including the Weather Research and Forecasting (WRF) model. In addition to 'local' sources of turbulence within an LES domain, changing weather conditions outside the domain can also affect the flow, suggesting that a mesoscale model should provide boundary conditions to the large-eddy simulation. Nesting a large-eddy simulation within a mesoscale model requires nuanced representations of turbulence. Our group has improved WRF's LES capability by implementing the Nonlinear Backscatter and Anisotropy (NBA) subfilter stress model following Kosovic (1997) and an explicit filtering and reconstruction technique to compute the Resolvable Subfilter-Scale (RSFS) stresses (following Chow et al., 2005). We have also implemented an immersed boundary method (IBM) in WRF to accommodate complex terrain. These new models improve WRF's LES capabilities over complex terrain and in stable atmospheric conditions. We demonstrate approaches to nesting LES within a mesoscale simulation for farms of wind turbines in hilly regions. Results are sensitive to the nesting method, indicating that care must be taken to provide appropriate boundary conditions and to allow adequate spin-up of turbulence in the LES domain.

  7. A large high vacuum, high pumping speed space simulation chamber for electric propulsion

    NASA Technical Reports Server (NTRS)

    Grisnik, Stanley P.; Parkes, James E.

    1994-01-01

    Testing high power electric propulsion devices poses unique requirements on space simulation facilities. Very high pumping speeds are required to maintain high vacuum levels while handling large volumes of exhaust products. These pumping speeds are significantly higher than those available in most existing vacuum facilities. There is also a requirement for relatively large vacuum chamber dimensions to minimize facility wall/thruster plume interactions and to accommodate far field plume diagnostic measurements. A 4.57 m (15 ft) diameter by 19.2 m (63 ft) long vacuum chamber at NASA Lewis Research Center is described. The chamber utilizes oil diffusion pumps in combination with cryopanels to achieve high vacuum pumping speeds at high vacuum levels. The facility is computer controlled for all phases of operation from start-up, through testing, to shutdown. The computer control system increases the utilization of the facility and reduces the manpower requirements needed for facility operations.
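
    For orientation, the sizing argument in this record rests on the elementary steady-state vacuum relation p = Q / S: for a given thruster exhaust throughput Q, the achievable chamber pressure scales inversely with pumping speed S. A one-line sketch with illustrative numbers only (this is textbook vacuum physics, not a formula from the report):

    ```python
    # Steady-state chamber pressure p = Q / S, in SI units:
    # Q in Pa*m^3/s (gas throughput), S in m^3/s (pumping speed), p in Pa.
    # Values below are invented for illustration.

    def steady_state_pressure(throughput_pa_m3_s, pumping_speed_m3_s):
        """Equilibrium chamber pressure in Pa."""
        return throughput_pa_m3_s / pumping_speed_m3_s
    ```

    Doubling the exhaust throughput at a fixed target pressure thus requires doubling the pumping speed, which is why high-power electric propulsion testing demands unusually large pumping capacity.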

  8. Silt motion simulation using finite volume particle method

    NASA Astrophysics Data System (ADS)

    Jahanbakhsh, E.; Vessaz, C.; Avellan, F.

    2014-03-01

    In this paper, we present a 3-D FVPM which features rectangular top-hat kernels. With this method, interaction vectors are computed exactly and efficiently. We introduce a new method to enforce the no-slip boundary condition. With this boundary enforcement, the interaction forces between fluid and wall are computed accurately. We employ the boundary force to predict the motion of rigid spherical silt particles inside the fluid. To validate the model, we simulate the 2-D sedimentation of a single particle in a viscous fluid tank and compare results with benchmark data. The particle resolution is verified by a convergence study. We also simulate the sedimentation of two particles exhibiting drafting, kissing and tumbling phenomena in 2-D and 3-D. We compare the results with other numerical solutions.

  9. Shuttle vehicle and mission simulation requirements report, volume 1

    NASA Technical Reports Server (NTRS)

    Burke, J. F.

    1972-01-01

    The requirements for the space shuttle vehicle and mission simulation are developed to analyze the systems, mission, operations, and interfaces. The requirements are developed according to the following subject areas: (1) mission envelope, (2) orbit flight dynamics, (3) shuttle vehicle systems, (4) external interfaces, (5) crew procedures, (6) crew station, (7) visual cues, and (8) aural cues. Line drawings and diagrams of the space shuttle are included to explain the various systems and components.

  10. Evaluation of the pressure-volume-temperature (PVT) data of water from experiments and molecular simulations since 1990

    NASA Astrophysics Data System (ADS)

    Guo, Tao; Hu, Jiawen; Mao, Shide; Zhang, Zhigang

    2015-08-01

    Since 1990, many groups of pressure-volume-temperature (PVT) data from experiments and molecular dynamics (MD) or Monte Carlo (MC) simulations have been reported for supercritical and subcritical water. In this work, fifteen groups of PVT data (253.15-4356 K and 0-90.5 GPa) are evaluated in detail with the aid of the highly accurate IAPWS-95 formulation. The evaluation gives the following results: (1) Six datasets are found to be of good accuracy. They include the simulated results based on the SPCE potential above 100 MPa and those derived from sound velocity measurements, but the simulated results below 100 MPa have large uncertainties. (2) The data from measurements with a piston-cylinder apparatus and simulations with an exp-6 potential contain large uncertainties and systematic deviations. (3) The other seven datasets show obvious systematic deviations. They include those from experiments with synthesized fluid inclusion techniques (three groups), measured velocities of sound (one group), an automated high-pressure dilatometer (one group), and simulations with the TIP4P potential (two groups), where the simulated data based on the TIP4P potential below 200 MPa have large uncertainties. (4) The simulated data, except those below 1 GPa, agree with each other within 2-3%, and mostly within 2%. The data from fluid inclusions show similar systematic deviations, which are less than 2-5%. The data obtained with the automated high-pressure dilatometer and those derived from sound velocity measurements agree with each other within 0.3-0.6% in most cases, except for those above 10 GPa. In principle, the systematic deviations mentioned above, except for those of the simulated data below 1 GPa, can be largely eliminated or significantly reduced by appropriate corrections, and the accuracy of the relevant data can then be improved significantly. These are very important for the improvement of experiments or simulations and the refinement and correct use of the PVT data in developing
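
    The dataset evaluation described above reduces to computing deviations of reported densities from a reference equation of state. A minimal sketch (the reference values here are invented stand-ins; IAPWS-95 itself is a long multiparameter fit, not reproduced here):

    ```python
    # Illustrative comparison of reported PVT densities against reference
    # values, expressed as percent deviation as in the evaluation above.

    def percent_deviation(rho_data, rho_ref):
        """100 * (data - reference) / reference for each PVT point."""
        return [100.0 * (d - r) / r for d, r in zip(rho_data, rho_ref)]
    ```

    A dataset whose deviations cluster around a nonzero offset shows the kind of systematic bias that, as the authors note, can often be removed by an appropriate correction.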

  11. WEST-3 wind turbine simulator development. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    Sridhar, S.

    1985-01-01

    This report is a summary description of WEST-3, a new real-time wind turbine simulator developed by Paragon Pacific Inc. WEST-3 is an all-digital, fully programmable, high performance parallel processing computer. Contained in the report are descriptions of the WEST-3 hardware and software. WEST-3 consists of a network of Computational Units (CUs) working in parallel. Each CU is a custom designed high speed digital processor operating independently of the other CUs. The CU, which is the main building block of the system, is described in some detail. A major contributor to the high performance of the system is the use of a unique method for transferring data among the CUs. The software aspects of WEST-3 covered in the report include the preparation of the simulation model (reformulation, scaling and normalization), and the use of the system software (Translator, Linker, Assembler and Loader). Also given is a description of the wind turbine simulation model used in WEST-3, and some sample results from a study conducted to validate the system. Finally, efforts currently underway to enhance the user friendliness of the system are outlined; these include 32-bit floating point capability and major improvements in system software.

  12. The two axis motion simulator for the large space simulator at ESTEC (European Space Research and Technology Center)

    NASA Technical Reports Server (NTRS)

    Beckel, Kurt A.; Hutchison, Joop

    1988-01-01

    The Large Space Simulator at the European Space Research and Technology Center (ESTEC) has recently been equipped with a motion simulator capable of handling test items of 5 tons mass within an envelope 7 m in diameter and 7 m in length. The motion simulator has a modular set-up. It consists of a spinbox as a basic unit on which the test article is mounted and which allows continuous rotation (spin). This spinbox can be used in two operational configurations: the spin axis is vertical to 30 degrees when mounted on a gimbal stand, and the spin axis is horizontal when mounted on a turntable-yoke combination. The turntable provides rotation within plus or minus 90 degrees. This configuration allows one to bring a test article to all possible relative positions vis-à-vis the sun vector (which is horizontal in this case). The spinbox allows fast rotation between 1 and 6 rpm or slow rotation between 1 and 25 rotations per day, as well as positioning within plus or minus 0.4 degrees accuracy.

  13. Large Eddy Simulation and the Filtered Probability Density Function Method

    NASA Astrophysics Data System (ADS)

    Jones, W. P.; Navarro-Martinez, S.

    2009-12-01

    Recently there has been increased interest in modelling combustion processes with high levels of extinction and re-ignition. Such systems often lie beyond the scope of conventional single-scalar-based models. Large Eddy Simulation (LES) has shown large potential for describing turbulent reactive systems, though combustion occurs at the smallest, unresolved scales of the flow and must be modelled. In the sub-grid Probability Density Function (pdf) method, approximations are devised to close the evolution equation for the joint pdf, which is then solved directly. The paper describes such an approach and concerns, in particular, the Eulerian stochastic field method of solving the pdf equation. The paper examines the capabilities of the LES-pdf method in capturing auto-ignition and extinction events in different partially premixed configurations with different fuels (hydrogen, methane and n-heptane). The results show that the LES-pdf formulation can capture different regimes without any parameter adjustments, independent of Reynolds number and fuel type.

  14. Large eddy simulation modelling of combustion for propulsion applications.

    PubMed

    Fureby, C

    2009-07-28

    Predictive modelling of turbulent combustion is important for the development of air-breathing engines, internal combustion engines, furnaces and for power generation. Significant advances in modelling non-reactive turbulent flows are now possible with the development of large eddy simulation (LES), in which the large energetic scales of the flow are resolved on the grid while modelling the effects of the small scales. Here, we discuss the use of combustion LES in predictive modelling of propulsion applications such as gas turbine, ramjet and scramjet engines. The LES models used are described in some detail and are validated against laboratory data-of which results from two cases are presented. These validated LES models are then applied to an annular multi-burner gas turbine combustor and a simplified scramjet combustor, for which some additional experimental data are available. For these cases, good agreement with the available reference data is obtained, and the LES predictions are used to elucidate the flow physics in such devices to further enhance our knowledge of these propulsion systems. Particular attention is focused on the influence of the combustion chemistry, turbulence-chemistry interaction, self-ignition, flame holding burner-to-burner interactions and combustion oscillations. PMID:19531515

  15. Large Eddy Simulation Study for Fluid Disintegration and Mixing

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Taskinoglu, Ezgi

    2011-01-01

    A new modeling approach is based on the concept of large eddy simulation (LES), within which the large scales are computed and the small scales are modeled. The new approach is expected to retain the fidelity of the physics while also being computationally efficient. Typically, only models for the small-scale fluxes of momentum, species, and enthalpy are used to reintroduce into the simulation the physics lost because the computation only resolves the large scales. These models are called subgrid (SGS) models because they operate at a scale smaller than the LES grid. In a previous study of thermodynamically supercritical fluid disintegration and mixing, additional small-scale terms, one in the momentum and one in the energy conservation equations, were identified as requiring modeling. These additional terms were due to the tight coupling between dynamics and real-gas thermodynamics. It was inferred that if these terms were not modeled, the high density-gradient magnitude regions, experimentally identified as a characteristic feature of these flows, would not be accurately predicted without the additional term in the momentum equation; these high density-gradient magnitude regions were experimentally shown to redistribute turbulence in the flow. It was also inferred that without the additional term in the energy equation, the heat flux magnitude could not be accurately predicted; the heat flux to the wall of combustion devices is a crucial quantity that determines necessary wall material properties. The present work involves situations where only the term in the momentum equation is important. Without this additional term in the momentum equation, neither the SGS-flux constant-coefficient Smagorinsky model nor the SGS-flux constant-coefficient Gradient model could reproduce in LES the pressure field or the high density-gradient magnitude regions; the SGS-flux constant-coefficient Scale-Similarity model was the most successful in this endeavor although not

  16. Shuttle mission simulator. Volume 2: Requirement report, volume 2, revision C

    NASA Technical Reports Server (NTRS)

    Burke, J. F.

    1973-01-01

    The requirements for space shuttle simulation which are discussed include: general requirements, program management, system engineering, design and development, crew stations, on-board computers, and systems integration. For Vol. 1, revision A see N73-22203; for Vol. 2, revision A see N73-22204.

  17. Large-N volume independence in conformal and confining gauge theories

    SciTech Connect

    Unsal, Mithat; Yaffe, Laurence G.; /Washington U., Seattle

    2010-08-26

    Consequences of large-N volume independence are examined in conformal and confining gauge theories. In the large-N limit, gauge theories compactified on R^(d-k) x (S^1)^k are independent of the S^1 radii, provided the theory has unbroken center symmetry. In particular, this implies that a large-N gauge theory which, on R^d, flows to an IR fixed point retains the infinite correlation length and other scale-invariant properties of the decompactified theory even when compactified on R^(d-k) x (S^1)^k. In other words, finite volume effects are 1/N suppressed. In lattice formulations of vector-like theories, this implies that numerical studies to determine the boundary between confined and conformal phases may be performed on one-site lattice models. In N = 4 supersymmetric Yang-Mills theory, the center symmetry realization is a matter of choice: the theory on R^(4-k) x (S^1)^k has a moduli space which contains points with all possible realizations of center symmetry. Large-N QCD with massive adjoint fermions and one or two compactified dimensions has a rich phase structure with an infinite number of phase transitions coalescing in the zero radius limit.

  18. Detection and Volume Estimation of Large Landslides by Using Multi-temporal Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Hsieh, Yu-chung; Hou, Chin-Shyong; Chan, Yu-Chang; Hu, Jyr-Ching; Fei, Li-Yuan; Chen, Hung-Jen; Chiu, Cheng-Lung

    2014-05-01

    Large landslides are frequently triggered by strong earthquakes and heavy rainfall in the mountainous areas of Taiwan. The heavy rainfall brought by Typhoon Morakot triggered a large number of landslides. The most unfortunate case occurred in Xiaolin village, which was totally demolished by a catastrophic landslide in less than a minute. Continued and detailed study of the characteristics of large landslides is urgently needed to mitigate loss of lives and property in the future. Traditionally known techniques cannot effectively extract landslide parameters, such as depth, amount, and volume, which are essential in all phases of landslide assessment. In addition, it is very important to record the changes of landslide deposits after landslide events as accurately as possible to better understand the landslide erosion process. The acquisition of digital elevation models (DEMs) is considered necessary for achieving accurate, effective, and quantitative landslide assessments. A new technique is presented in this study for quickly assessing extensive areas of large landslides. The technique uses DEMs extracted from several remote sensing approaches, including aerial photogrammetry, airborne LiDAR, and UAV photogrammetry. We chose a large landslide event that occurred after Typhoon Sinlaku at the Meiyuan mount, central Taiwan, in 2008. We collected and processed six data sets, including aerial photos, airborne LiDAR data, and UAV photos, at different times from 2005 to 2013. Our analyses show the landslide volume to be 17.14 × 10^6 cubic meters, the deposition volume 12.75 × 10^6 cubic meters, and about 4.38 × 10^6 cubic meters washed out of the region. The residual deposition ratio of this area is about 74% in 2008; after a few years, the residual deposition ratio drops below 50%. We also analyzed riverbed changes and sediment transfer patterns from 2005 to 2013 using multi-temporal remote sensing data with desirable accuracy. The developed
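
    The volume estimates quoted above come from differencing multi-temporal DEMs. A minimal sketch of that bookkeeping, with invented elevations (negative elevation change counts as erosion, positive as deposition):

    ```python
    # Illustrative DEM differencing: compare two gridded elevation surveys
    # of the same area and accumulate eroded and deposited volumes.

    def dem_volumes(dem_before, dem_after, cell_area):
        """Return (eroded, deposited) volumes from two elevation grids."""
        eroded = deposited = 0.0
        for row_b, row_a in zip(dem_before, dem_after):
            for zb, za in zip(row_b, row_a):
                dz = za - zb                 # elevation change per cell
                if dz < 0:
                    eroded += -dz * cell_area
                else:
                    deposited += dz * cell_area
        return eroded, deposited
    ```

    In practice the two DEMs must first be co-registered, and the cell area comes from the survey resolution; the sketch assumes both are already handled.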

  19. Volume average technique for turbulent flow simulation and its application to room airflow prediction

    NASA Astrophysics Data System (ADS)

    Huang, Xianmin

    Fluid motion turbulence is one of the most important transport phenomena occurring in engineering applications. Although turbulent flow is governed by a set of conservation equations for momentum, mass, and energy, a Direct Numerical Simulation (DNS) of the flow by solving these equations to include the finest scale motions is impossible due to the extremely large computer resources required. On the other hand, the Reynolds Averaged Modelling (RAM) method has many limitations which hinder its application to turbulent flows of practical significance. Room airflow, featuring co-existence of laminar and turbulent regimes, is a typical example of a flow which is difficult to handle with the RAM method. A promising way to avoid the difficulty of the DNS method and the limitations of the RAM method is to use the Large Eddy Simulation (LES) method. In the present thesis, the drawbacks of previously developed techniques for the LES method, particularly those associated with SGS modelling, are identified. Then a new so-called Volume Average Technique (VAT) for turbulent flow simulation is proposed. The main features of the VAT are as follows: (1) The volume averaging approach, instead of the more common filtering approach, is employed to define solvable-scale fields, so that coarse-graining in the LES and space discretization of the numerical scheme are achieved in a single procedure. (2) All components of the SGS Reynolds stress and SGS turbulent heat flux are modelled dynamically using the newly proposed Functional Scale Similarity (FSS) SGS model. The model is superior to many previously developed SGS models in that it can be applied to highly inhomogeneous and/or anisotropic, weak or multi-regime turbulent flows using a relatively coarse grid. (3) The so-called SGS turbulent diffusion is identified and modelled as a mechanism separate from the SGS turbulent flux represented by the SGS Reynolds stress and SGS turbulent heat flux.
The SGS turbulent diffusion is

  20. Numerical aerodynamic simulation facility preliminary study, volume 1

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A technology forecast was established for the 1980-1985 time frame and the appropriateness of various logic and memory technologies for the design of the numerical aerodynamic simulation facility was assessed. Flow models and their characteristics were analyzed and matched against candidate processor architecture. Metrics were established for the total facility, and housing and support requirements of the facility were identified. An overview of the system is presented, with emphasis on the hardware of the Navier-Stokes solver, which is the key element of the system. Software elements of the system are also discussed.

  1. Monte Carlo simulation of an x-ray volume imaging cone beam CT unit

    SciTech Connect

    Spezi, Emiliano; Downes, Patrick; Radu, Emil; Jarvis, Richard

    2009-01-15

    In this work the authors characterized the radiation field produced by a kilovolt cone beam computed tomography (CBCT) unit integrated in the Elekta Synergy linear accelerator. The x-ray volume imaging (XVI) radiation unit was modeled in detail using the BEAMNRC Monte Carlo (MC) code system. The simulations of eight collimator cassettes and the neutral filter F0 were successfully carried out. MC calculations from the EGSNRC code DOSXYZNRC were benchmarked against measurements in water. A large set of depth dose and lateral profiles was acquired with the ionization chamber in water, with the x-ray tube in a stationary position, and with the beam energy set to 120 kV. Measurements for all the available collimator cassettes were compared with calculations, showing very good agreement (<2% in most cases). Furthermore, half value layer measurements were carried out and used to validate the MC model of the XVI unit. In this case dose calculations were performed with the EGSNRC code cavity and these showed excellent agreement. In this manuscript the authors also report on the optimization work of the relevant parameters that influenced the development of the MC model. The dosimetric part of this work was very useful in characterizing the XVI radiation output for the energy of interest. The detailed simulation part of the work is the first step toward an accurate MC based assessment of the dose delivered to patients during routine CBCT scans for image and dose guided radiotherapy.

  2. Numerical simulation of fluid-structure interaction with the volume penalization method

    NASA Astrophysics Data System (ADS)

    Engels, Thomas; Kolomenskiy, Dmitry; Schneider, Kai; Sesterhenn, Jörn

    2015-01-01

    We present a novel scheme for the numerical simulation of fluid-structure interaction problems. It extends the volume penalization method, a member of the family of immersed boundary methods, to take into account flexible obstacles. We show how the introduction of a smoothing layer, physically interpreted as surface roughness, allows for arbitrary motion of the deformable obstacle. The approach is carefully validated and good agreement with various results in the literature is found. A simple one-dimensional solid model is derived, capable of modeling arbitrarily large deformations and imposed motion at the leading edge, as it is required for the simulation of simplified models for insect flight. The model error is shown to be small, while the one-dimensional character of the model features a reasonably easy implementation. The coupled fluid-solid interaction solver is shown not to introduce artificial energy in the numerical coupling, and validated using a widely used benchmark. We conclude with the application of our method to models for insect flight and study the propulsive efficiency of one and two wing sections.
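
    The core of the volume penalization method is a forcing term -(chi/eta)(u - u_s) that is active only inside the obstacle mask chi and relaxes the fluid velocity toward the solid velocity over a small permeability time eta. A deliberately minimal 1-D sketch of that term alone, with invented values (the actual solver couples this forcing to the Navier-Stokes equations and, in this paper, to a flexible solid model):

    ```python
    # Illustrative volume penalization step: inside the obstacle
    # (mask chi = 1) the term -(chi/eta) * (u - u_solid) drives the fluid
    # velocity toward the solid velocity; outside (chi = 0) it does nothing.
    # Explicit Euler is used purely for illustration; dt/eta < 1 for stability.

    def penalize_step(u, chi, u_solid, eta, dt):
        """Advance velocity one step under the penalization term only."""
        return [ui - dt * (c / eta) * (ui - u_solid)
                for ui, c in zip(u, chi)]
    ```

    Smoothing the mask chi across a few cells, as the authors do via a surface-roughness layer, is what permits arbitrary motion of a deformable obstacle on a fixed grid.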

  3. A pyramid-based approach to visual exploration of a large volume of vehicle trajectory data

    NASA Astrophysics Data System (ADS)

    Sun, Jing; Li, Xiang

    2012-12-01

    Advances in positioning and wireless communication technologies make it possible to collect large volumes of trajectory data of moving vehicles in a fast and convenient fashion. These data can be applied to traffic studies. Behind this application, a methodological issue that still requires particular attention is the way these data should be spatially visualized. Trajectory data physically consist of a large number of positioning points. With the dramatic increase of data volume, it becomes a challenge to display and explore these data. Existing commercial software often employs vector-based indexing structures to facilitate the display of a large volume of points, but their performance downgrades quickly when the number of points is very large, for example, tens of millions. In this paper, a pyramid-based approach is proposed. The pyramid method was originally invented to facilitate the display of raster images through a tradeoff between storage space and display time. A pyramid is a set of images at different levels with different resolutions. In this paper, we convert vector-based point data into raster data and build a grid-based indexing structure in a 2D plane. Then, an image pyramid is built. Moreover, at the same level of a pyramid, the image is segmented into mosaics according to the requirements of data storage and management. Algorithms and procedures for the grid-based indexing structure, image pyramid, image segmentation, and visualization operations are given in this paper. A case study with taxi trajectory data in Shanghai is conducted. Results demonstrate that the proposed method outperforms the existing commercial software.
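
    The pyramid construction the authors describe can be sketched in a few lines: rasterize the trajectory points into a count grid, then repeatedly aggregate 2x2 blocks into coarser levels so the viewer draws whichever level matches the current zoom. A toy version (grid contents invented; the paper additionally segments each level into mosaics):

    ```python
    # Illustrative image pyramid over a point-count grid: each coarser
    # level sums 2x2 blocks of the level below. Assumes a square grid
    # whose side is a power of two.

    def build_pyramid(grid):
        """Return [grid, half-res grid, ...] down to a single cell."""
        levels = [grid]
        while len(grid) > 1:
            grid = [[grid[i][j] + grid[i][j + 1]
                     + grid[i + 1][j] + grid[i + 1][j + 1]
                     for j in range(0, len(grid[0]), 2)]
                    for i in range(0, len(grid), 2)]
            levels.append(grid)
        return levels
    ```

    Because counts are summed rather than averaged, every level preserves the total number of points, so density rendering stays consistent across zoom levels.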

  4. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, C. K.; Steinberger, C. J.; Tsai, A.

    1991-01-01

This research involves the implementation of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program was initiated to extend the present capabilities of this method for the treatment of chemically reacting flows, whereas in the DNS efforts, focus was on detailed investigations of the effects of compressibility, heat release, and nonequilibrium kinetics modeling in high speed reacting flows. The efforts to date were primarily focused on simulations of simple flows, namely, homogeneous compressible flows and temporally developing high speed mixing layers. A summary of the accomplishments is provided.

  5. Structural organization of large and very-large scales in turbulent pipe flow simulation

    NASA Astrophysics Data System (ADS)

    Baltzer, Jon; Adrian, Ronald; Wu, Xiaohua

    2012-11-01

The physical structures of velocity are examined in a recent DNS of fully developed incompressible turbulent pipe flow at Re_D = 24,580 (R+ = 684.8) with a periodic domain length of 30 pipe radii R (Wu, Baltzer, & Adrian, J. Fluid Mech., 2012). In this simulation, the long motions of negative velocity fluctuation correspond to large fractions of energy present at very long streamwise wavelengths (≥ 3R). We study how long motions are composed of smaller motions. We characterize the spatial arrangements of very large scale motions (VLSMs) and find that they possess dominant helix angles (azimuthal inclinations relative to streamwise) that are revealed by 2D and 3D two-point spatial correlations of velocity. The correlations also reveal that the shorter, large scale motions (LSMs) that concatenate to comprise the VLSMs are themselves more streamwise aligned. We show that the largest VLSMs possess a form similar to roll cells and that they appear to play an important role in organizing the flow, while smaller scales of motion are necessary to create the strong streaks of velocity fluctuation that characterize the flow. Supported by NSF Award CBET-0933848.

  6. Volume-staged radiosurgery for large arteriovenous malformations: an evolving paradigm.

    PubMed

    Seymour, Zachary A; Sneed, Penny K; Gupta, Nalin; Lawton, Michael T; Molinaro, Annette M; Young, William; Dowd, Christopher F; Halbach, Van V; Higashida, Randall T; McDermott, Michael W

    2016-01-01

OBJECT Large arteriovenous malformations (AVMs) remain difficult to treat, and ideal treatment parameters for volume-staged stereotactic radiosurgery (VS-SRS) are still unknown. The object of this study was to compare VS-SRS treatment outcomes for AVMs larger than 10 ml during 2 eras: Era 1 was 1992-March 2004, and Era 2 was May 2004-2008. In Era 2 the authors prospectively decreased the AVM treatment volume, increased the radiation dose per stage, and shortened the interval between stages. METHODS All cases of VS-SRS treatment for AVM performed at a single institution were retrospectively reviewed. RESULTS Of 69 patients intended for VS-SRS, 63 completed all stages. The median patient age at the first stage of VS-SRS was 34 years (range 9-68 years). The median modified radiosurgery-based AVM score (mRBAS), total AVM volume, and volume per stage in Era 1 versus Era 2 were 3.6 versus 2.7, 27.3 ml versus 18.9 ml, and 15.0 ml versus 6.8 ml, respectively. The median radiation dose per stage was 15.5 Gy in Era 1 and 17.0 Gy in Era 2, and the median clinical follow-up period in living patients was 8.6 years in Era 1 and 4.8 years in Era 2. All outcomes were measured from the first stage of VS-SRS. Near or complete obliteration was more common in Era 2 (log-rank test, p = 0.0003), with 3- and 5-year probabilities of 5% and 21%, respectively, in Era 1 compared with 24% and 68% in Era 2. Radiosurgical dose, AVM volume per stage, total AVM volume, era, compact nidus, Spetzler-Martin grade, and mRBAS were significantly associated with near or complete obliteration on univariate analysis. Dose was a strong predictor of response (Cox proportional hazards, p < 0.001, HR 6.99), with 3- and 5-year probabilities of near or complete obliteration of 5% and 16%, respectively, at a dose < 17 Gy versus 23% and 74% at a dose ≥ 17 Gy. Dose per stage, compact nidus, and total AVM volume remained significant predictors of near or complete obliteration on multivariate analysis. Seventeen

  7. Finite volume simulation for convective heat transfer in wavy channels

    NASA Astrophysics Data System (ADS)

    Aslan, Erman; Taymaz, Imdat; Islamoglu, Yasar

    2016-03-01

The convective heat transfer characteristics of a periodic wavy channel have been investigated experimentally and numerically. The finite volume method was used in the numerical study, and the experimental results are used to validate the numerical results. Studies were conducted for air flow conditions where the contact angle is 30°, and a uniform heat flux of 616 W/m2 is applied as the thermal boundary condition. The Reynolds number (Re) is varied from 2000 to 11,000 and the Prandtl number (Pr) is taken as 0.7. The Nusselt number (Nu), Colburn factor (j), friction factor (f), and goodness factor (j/f) are studied as functions of Reynolds number. The effects of the wave geometry and minimum channel height are discussed, thereby determining the best flow and heat transfer performance among the wavy channels. Additionally, the computed convective heat transfer coefficients are in good agreement with the experimental results for the converging-diverging channel, so the numerical results can be used for these channel geometries in place of experiments.
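The dimensionless groups compared above combine in standard ways; as a minimal sketch using the usual definitions (the illustrative numbers below are ours, not the paper's data):

```python
# Standard definitions used when reporting channel performance:
#   Colburn factor        j = Nu / (Re * Pr**(1/3))
#   area goodness factor  j / f   (heat transfer gained per unit friction)
# The paper's exact reference lengths and data are not reproduced here.
def colburn_j(Nu, Re, Pr=0.7):
    return Nu / (Re * Pr ** (1.0 / 3.0))

def goodness(Nu, Re, f, Pr=0.7):
    return colburn_j(Nu, Re, Pr) / f

# illustrative operating point only (not the paper's measurements)
j = colburn_j(Nu=30.0, Re=5000.0)
g = goodness(Nu=30.0, Re=5000.0, f=0.05)
```

Plotting j and j/f against Re for each wave geometry is then exactly the comparison the abstract describes.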

  8. Very Large Area/Volume Microwave ECR Plasma and Ion Source

    NASA Technical Reports Server (NTRS)

    Foster, John E. (Inventor); Patterson, Michael J. (Inventor)

    2009-01-01

The present invention is an apparatus and method for producing very large area and large volume plasmas. The invention utilizes electron cyclotron resonances in conjunction with permanent magnets to produce dense, uniform plasmas for long life ion thruster applications or for plasma processing applications such as etching, deposition, ion milling and ion implantation. The large area source is at least five times larger than the 12-inch wafers being processed to date. Its rectangular shape makes it easier to adapt to materials processing than circular sources. The source itself represents the largest ECR ion source built to date. It is electrodeless and does not utilize electromagnets to generate the ECR magnetic circuit, nor does it make use of windows.

  9. The complex aerodynamic footprint of desert locusts revealed by large-volume tomographic particle image velocimetry

    PubMed Central

    Henningsson, Per; Michaelis, Dirk; Nakata, Toshiyuki; Schanz, Daniel; Geisler, Reinhard; Schröder, Andreas; Bomphrey, Richard J.

    2015-01-01

    Particle image velocimetry has been the preferred experimental technique with which to study the aerodynamics of animal flight for over a decade. In that time, hardware has become more accessible and the software has progressed from the acquisition of planes through the flow field to the reconstruction of small volumetric measurements. Until now, it has not been possible to capture large volumes that incorporate the full wavelength of the aerodynamic track left behind during a complete wingbeat cycle. Here, we use a unique apparatus to acquire the first instantaneous wake volume of a flying animal's entire wingbeat. We confirm the presence of wake deformation behind desert locusts and quantify the effect of that deformation on estimates of aerodynamic force and the efficiency of lift generation. We present previously undescribed vortex wake phenomena, including entrainment around the wing-tip vortices of a set of secondary vortices borne of Kelvin–Helmholtz instability in the shear layer behind the flapping wings. PMID:26040598

  10. Generation of large volume hydrostatic pressure to 8 GPa for ultrasonic studies

    NASA Astrophysics Data System (ADS)

    Kozuki, Yasushi; Yoneda, Akira; Fujimura, Akio; Sawamoto, Hiroshi; Kumazawa, Mineo

    1986-09-01

The design and performance of a liquid-solid hybrid cell to generate high hydrostatic pressures in a relatively large volume (for use in measurements of the pressure dependence of the physical properties of materials) are reported. A 4:1 methanol-ethanol mixture is employed in 12-mm-side and 20-mm-side versions of an eight-cubic-anvil apparatus driven by a 10-kt press. Pressures up to 8 GPa are obtained safely in a 16 cm³ volume by applying a uniaxial force of 3 kt. The cell is used to obtain measurements of the velocity of ultrasonic waves in fused quartz: the experimental setup is described, and sample results are presented graphically.

  11. Digital fringe projection system for large-volume 360-deg shape measurement

    NASA Astrophysics Data System (ADS)

    Sitnik, Robert; Kujawinska, Malgorzata; Woznicki, Jerzy M.

    2002-02-01

We present a system for 3-D shape measurement in large volumes based on combined digital-fringe and Gray-code projection. With the help of a new calibration procedure, the system provides accurate results despite its crossed-axis configuration and unknown aberrations of the digital light projector and CCD camera. Also, the separate clouds of points captured from different directions are automatically merged into the main cloud. The system delivers results in the form of (x,y,z) coordinates of the object points with additional (R,G,B) color information about their texture. Applicability of the system is proven by presenting sample results of measurements performed on complex objects. The uncertainty of the system was estimated at 10⁻⁴ of the measurement volume.
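The Gray-code half of such projection schemes relies on binary-reflected Gray codes, in which consecutive stripe indices differ in exactly one projected bit, so a decoding error at a stripe boundary displaces the index by at most one. A generic sketch of the encoding (not the authors' exact pattern set):

```python
# Binary-reflected Gray code: encode each projector column index so that
# adjacent columns differ in exactly one bit of the projected pattern set.
def to_gray(n: int) -> int:
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    # fold the shifted code back down to recover the binary index
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

codes = [to_gray(i) for i in range(1024)]
# round-trip decoding and the one-bit-change property
assert all(from_gray(c) == i for i, c in enumerate(codes))
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(codes, codes[1:]))
```

In a 10-bit pattern set, bit k of `to_gray(column)` determines whether that column is bright or dark in projected image k; the phase of the digital fringe then refines the coarse Gray-code index.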

  12. The complex aerodynamic footprint of desert locusts revealed by large-volume tomographic particle image velocimetry.

    PubMed

    Henningsson, Per; Michaelis, Dirk; Nakata, Toshiyuki; Schanz, Daniel; Geisler, Reinhard; Schröder, Andreas; Bomphrey, Richard J

    2015-07-01

    Particle image velocimetry has been the preferred experimental technique with which to study the aerodynamics of animal flight for over a decade. In that time, hardware has become more accessible and the software has progressed from the acquisition of planes through the flow field to the reconstruction of small volumetric measurements. Until now, it has not been possible to capture large volumes that incorporate the full wavelength of the aerodynamic track left behind during a complete wingbeat cycle. Here, we use a unique apparatus to acquire the first instantaneous wake volume of a flying animal's entire wingbeat. We confirm the presence of wake deformation behind desert locusts and quantify the effect of that deformation on estimates of aerodynamic force and the efficiency of lift generation. We present previously undescribed vortex wake phenomena, including entrainment around the wing-tip vortices of a set of secondary vortices borne of Kelvin-Helmholtz instability in the shear layer behind the flapping wings. PMID:26040598

  13. Dynamic dialysis: an efficient technique for large-volume sample desalting.

    PubMed

    Yuan, Peng; Le, Zhen; Zhong, Lipeng; Huang, Chunhong

    2015-08-18

Dialysis is a well-known technique for laboratory separation. However, its efficiency is commonly restricted by the dialyzer volume and its passive diffusion manner. In addition, the sample is likely to precipitate and lose activity during a long dialysis process. To overcome these drawbacks, a dynamic dialysis method was described and evaluated. The dynamic dialysis was performed by two peristaltic pumps working in opposite directions, driving countercurrent parallel flows of the sample and buffer, respectively. The efficiency and capacity of this dynamic dialysis method were evaluated by recording and statistically comparing the variation of conductance of the retentate under different conditions. The dynamic method proved effective in dialyzing a large-volume sample, and its efficiency increases in proportion to the sample flow rate. To sum up, circulating the sample and the buffer creates the highest possible concentration gradient, significantly improving dialysis capacity and shortening dialysis time. PMID:25036273
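Why circulation maintains "the highest possible concentration gradient" can be seen in a deliberately simple two-compartment toy model (our simplification, not the paper's model): solute flux across the membrane is proportional to the concentration difference, so a static bath accumulates solute and the gradient collapses, while a continuously refreshed bath keeps the gradient near the full sample concentration.

```python
# Toy membrane-exchange model (illustrative rate constants, not the
# paper's data): each step transfers solute in proportion to the
# sample-bath concentration difference.
def dialyze(c0, k, dt, steps, vol_ratio, refreshed):
    c_sample, c_bath = c0, 0.0
    for _ in range(steps):
        flux = k * (c_sample - c_bath) * dt
        c_sample -= flux
        if not refreshed:
            c_bath += flux * vol_ratio  # static bath accumulates solute
        # refreshed (circulating) bath: c_bath stays ~0
    return c_sample

static = dialyze(1.0, k=0.05, dt=1.0, steps=100, vol_ratio=0.5, refreshed=False)
flowing = dialyze(1.0, k=0.05, dt=1.0, steps=100, vol_ratio=0.5, refreshed=True)
# the static bath stalls at the mass-balance equilibrium (~1/3 here),
# while the refreshed bath drives the retentate concentration far lower
```

The same argument explains the reported proportionality to flow rate: faster circulation refreshes the bath side more completely, holding the driving gradient closer to its maximum.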

14. Mechanically Cooled Large-Volume Germanium Detector Systems for Nuclear Explosion Monitoring DOENA27323-2

    SciTech Connect

    Hull, E.L.

    2006-10-30

Compact, maintenance-free mechanical cooling systems are being developed to operate large volume high-resolution gamma-ray detectors for field applications. To accomplish this, we are utilizing a newly available generation of Stirling-cycle mechanical coolers to operate the very largest volume germanium detectors with no maintenance. The user will be able to leave these systems unplugged on the shelf until needed. The maintenance-free operating lifetime of these detector systems will exceed 5 years. Three important factors affect the operation of mechanically cooled germanium detectors: temperature, vacuum, and vibration. These factors will be studied in the laboratory at the most fundamental levels to ensure a solid understanding of the physical limitations each factor places on a practical mechanically cooled germanium detector system. Using this knowledge, mechanically cooled germanium detector prototype systems will be designed and fabricated.

  15. Large-eddy simulation for the prediction of supersonic rectangular jet noise

    NASA Astrophysics Data System (ADS)

    Nichols, Joseph W.; Ham, Frank E.; Lele, Sanjiva K.; Bridges, James E.

    2011-11-01

    We investigate the noise from isothermal and heated under-expanded supersonic turbulent jets issuing from a rectangular nozzle of aspect ratio 4:1 using high-fidelity unstructured large-eddy simulation (LES) and acoustic projection based on the Ffowcs-Williams Hawkings (FWH) equations. The nozzle/flow interaction is directly included by simulating the flow in and around the nozzle in addition to the jet plume downstream. A grid resolution study is performed and results are shown for unstructured meshes containing up to 300 million control volumes, generated by a massively parallel code scaled to as many as 65,536 processors. Validated against laboratory measurements using a nozzle of precisely the same geometry, we find that mesh isotropy is a key factor in determining the quality of the far-field aeroacoustic predictions. The full flow fields produced by the simulation, in conjunction with particle image velocimetry (PIV) data measured from experiment, allow for a detailed examination of the interaction of large-scale coherent flow features and the resultant far-field noise, and its subsequent modification in the presence of heating. Supported by NASA grant NNX07AC94A and PSAAP, with computational resources from a DoD HPCMP CAP-2 project.

  16. Film cooling from inclined cylindrical holes using large eddy simulations

    NASA Astrophysics Data System (ADS)

    Peet, Yulia V.

    2006-12-01

The goal of the present study is to investigate numerically the physics of the flow that occurs during film cooling from inclined cylindrical holes. Film cooling is a technique used in the gas turbine industry to reduce heat fluxes to the turbine blade surface. Large Eddy Simulation (LES) is performed modeling a realistic film cooling configuration, which consists of a large stagnation-type reservoir feeding an array of discrete cooling holes (film holes) flowing into a flat plate turbulent boundary layer. A special computational methodology is developed for this problem, involving coupled simulations using multiple computational codes. A fully compressible LES code is used in the area above the flat plate, while a low Mach number LES code is employed in the plenum and film holes. The motivation for using different codes comes from the essential difference in the nature of the flow in these different regions. The flowfield is analyzed inside the plenum, film hole, and crossflow region. Flow inside the plenum is stagnant, except for the region close to the exit, where it accelerates rapidly to turn into the hole. The sharp radius of turning at the trailing edge of the plenum pipe connection causes the flow to separate from the downstream wall of the film hole. After coolant injection occurs, a complex flowfield is formed, consisting of coherent vortical structures responsible for bringing hot crossflow fluid in contact with the walls of either the film hole or the blade, thus reducing cooling protection. Mean velocity and turbulent statistics are compared to experimental measurements, yielding good agreement for the mean flowfield and satisfactory agreement for the turbulence quantities. LES results are used to assess the applicability of basic assumptions of conventional eddy viscosity turbulence models used with the Reynolds-averaged (RANS) approach, namely the isotropy of the eddy viscosity and thermal diffusivity. It is shown here that these assumptions do not hold

  17. Reynolds number scaling of coherent vortex simulation and stochastic coherent adaptive large eddy simulation

    NASA Astrophysics Data System (ADS)

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Vasilyev, Oleg V.

    2013-11-01

In view of the ongoing longtime pursuit of numerical approaches that can capture important flow physics of high Reynolds number flows with fewest degrees of freedom, two important wavelet-based multi-resolution schemes are thoroughly examined, namely, the Coherent Vortex Simulation (CVS) and the Stochastic Coherent Adaptive Large Eddy Simulation (SCALES) with constant and spatially/temporarily variable thresholding. Reynolds number scaling of active spatial modes for CVS and SCALES of linearly forced homogeneous turbulence at high Reynolds numbers is investigated in a dynamic study for the first time. This dynamic computational complexity study demonstrates that wavelet-based methods can capture flow-physics while using substantially fewer degrees of freedom than both direct numerical simulation and marginally resolved LES with the same level of fidelity or turbulence resolution, defined as ratio of subgrid scale and the total dissipations. The study provides four important observations: (1) the linear Reynolds number scaling of energy containing structures at a fixed level of kinetic energy, (2) small, close to unity, fractal dimension for constant-threshold CVS and SCALES simulations, (3) constant, close to two, fractal dimension for constant-dissipation SCALES that is insensitive to the level of fidelity, and (4) faster than quadratic decay of the compression ratio as a function of turbulence resolution. The very promising slope for Reynolds number scaling of CVS and SCALES demonstrates the potential of the wavelet-based methodologies for hierarchical multiscale space/time adaptive variable fidelity simulations of high Reynolds number turbulent flows.
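The thresholding step at the heart of CVS and SCALES can be illustrated with a 1D Haar toy model (our drastic simplification of the solvers' adaptive 3D wavelets): transform the field, retain only coefficients above a threshold ε as the "coherent" active modes, and measure the resulting compression ratio.

```python
import numpy as np

# 1D Haar transform sketch of wavelet thresholding: a smooth coherent
# signal plus weak noise compresses to far fewer active coefficients
# than grid points. All signal parameters here are illustrative.
def haar_forward(x):
    coeffs = []
    while len(x) > 1:
        s = (x[0::2] + x[1::2]) / np.sqrt(2)  # smooth (scaling) part
        d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail (wavelet) part
        coeffs.append(d)
        x = s
    coeffs.append(x)                          # coarsest smooth coefficient
    return coeffs

rng = np.random.default_rng(1)
n = 1024
t = np.linspace(0, 1, n)
signal = np.sin(4 * np.pi * t) + 0.02 * rng.standard_normal(n)

flat = np.concatenate(haar_forward(signal))
eps = 0.1
active = int(np.sum(np.abs(flat) > eps))  # retained "coherent" modes
compression = n / active                  # grid points per active mode
```

Tracking how `active` grows with Reynolds number (here, with signal complexity) is the scaling question the abstract investigates; SCALES additionally adapts ε in space and time.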

  18. Characterization of large volume 3.5″×8″ LaBr3:Ce detectors

    NASA Astrophysics Data System (ADS)

    Giaz, A.; Pellegri, L.; Riboldi, S.; Camera, F.; Blasi, N.; Boiano, C.; Bracco, A.; Brambilla, S.; Ceruti, S.; Coelli, S.; Crespi, F. C. L.; Csatlòs, M.; Frega, S.; Gulyàs, J.; Krasznahorkay, A.; Lodetti, S.; Million, B.; Owens, A.; Quarati, F.; Stuhl, L.; Wieland, O.

    2013-11-01

    The properties of large volume cylindrical 3.5″×8″ (89 mm×203 mm) LaBr3:Ce scintillation detectors coupled to the Hamamatsu R10233-100SEL photo-multiplier tube were investigated. These crystals are among the largest ones ever produced and still need to be fully characterized to determine how these detectors can be utilized and in which applications. We tested the detectors using monochromatic γ-ray sources and in-beam reactions producing γ rays up to 22.6 MeV; we acquired PMT signal pulses and calculated detector energy resolution and response linearity as a function of γ-ray energy. Two different voltage dividers were coupled to the Hamamatsu R10233-100SEL PMT: the Hamamatsu E1198-26, based on straightforward resistive network design, and the “LABRVD”, specifically designed for our large volume LaBr3:Ce scintillation detectors, which also includes active semiconductor devices. Because of the extremely high light yield of LaBr3:Ce crystals we observed that, depending on the choice of PMT, voltage divider and applied voltage, some significant deviation from the ideally proportional response of the detector and some pulse shape deformation appear. In addition, crystal non-homogeneities and PMT gain drifts affect the (measured) energy resolution especially in case of high-energy γ rays. We also measured the time resolution of detectors with different sizes (from 1″×1″ up to 3.5″×8″), correlating the results with both the intrinsic properties of PMTs and GEANT simulations of the scintillation light collection process. The detector absolute full energy efficiency was measured and simulated up to γ-rays of 30 MeV

  19. A Novel Technique for Endovascular Removal of Large Volume Right Atrial Tumor Thrombus

    SciTech Connect

    Nickel, Barbara; McClure, Timothy Moriarty, John

    2015-08-15

Venous thromboembolic disease is a significant cause of morbidity and mortality, particularly in the setting of large volume pulmonary embolism. Thrombolytic therapy has been shown to be a successful treatment modality; however, its use is somewhat limited by the risk of hemorrhage and the potential for distal embolization in the setting of large mobile thrombi. In patients where either thrombolysis is contraindicated or unsuccessful, and conventional therapies prove inadequate, surgical thrombectomy may be considered. We present a case of percutaneous endovascular extraction of a large mobile mass extending from the inferior vena cava into the right atrium using the AngioVac device, a venovenous bypass system designed for high-volume aspiration of undesired endovascular material. Standard endovascular methods for removal of cancer-associated thrombus, such as catheter-directed lysis, maceration, and exclusion, may prove inadequate in the setting of underlying tumor thrombus. Where conventional endovascular methods either fail or are unsuitable, endovascular thrombectomy with the AngioVac device may be a useful and safe minimally invasive alternative to open resection.

  20. Large-eddy simulation of an infinitely large wind farm in a stable atmospheric boundary layer

    NASA Astrophysics Data System (ADS)

    Lu, H.; Porté-Agel, F.

    2010-09-01

When deployed as large arrays, wind turbines interact among themselves and with the atmospheric boundary layer. To optimize their geometric arrangement, accurate knowledge of the wind-turbine array boundary layer is of great importance. In this study, we integrated large eddy simulation with an actuator line technique and used it to study the characteristics of wind-turbine wakes in an idealized wind farm inside a stably stratified atmospheric boundary layer (SBL). The wind turbines, with a rotor diameter of 112 m and a tower height of 119 m, were placed in a well-known SBL turbulent case that has a boundary layer height of approximately 180 m. The super-geostrophic nocturnal jet near the top of the boundary layer was eliminated due to the energy extraction and the enhanced mixing of momentum. Non-axisymmetric behavior of the wake structure was observed in response to the non-uniform incoming turbulence, the Coriolis effects, and the rotational effects induced by blade motion. The turbulence intensity in the simulated turbine wakes was found to reach a maximum at the top-tip level and a downwind distance of approximately 3-5 rotor diameters from the turbines. The Coriolis effects caused a skewed spatial structure and drove a certain amount of turbulent energy away from the center of the wake. The SBL height was increased, while the magnitudes of the surface momentum flux and the surface buoyancy flux were reduced by approximately 30%. The wind farm was also found to have a strong effect on area-averaged vertical turbulent fluxes of momentum and heat, which highlights the potential impact of wind farms on local meteorology.

  1. GMP cryopreservation of large volumes of cells for regenerative medicine: active control of the freezing process.

    PubMed

    Massie, Isobel; Selden, Clare; Hodgson, Humphrey; Fuller, Barry; Gibbons, Stephanie; Morris, G John

    2014-09-01

Cryopreservation protocols are increasingly required in regenerative medicine applications but must deliver functional products at clinical scale and comply with Good Manufacturing Process (GMP). While GMP cryopreservation is achievable on a small scale using a Stirling cryocooler-based controlled rate freezer (CRF) (EF600), successful large-scale GMP cryopreservation is more challenging due to heat transfer issues and control of ice nucleation, both complex events that impact success. We have developed a large-scale cryocooler-based CRF (VIA Freeze) that can process larger volumes and have evaluated it using alginate-encapsulated liver cell (HepG2) spheroids (ELS). It is anticipated that ELS will comprise the cellular component of a bioartificial liver and will be required in volumes of ∼2 L for clinical use. Sample temperatures and Stirling cryocooler power consumption were recorded throughout cooling runs for both small (500 μL) and large (200 mL) volume samples. ELS recoveries were assessed using viability (FDA/PI staining with image analysis), cell number (nuclei count), and function (protein secretion), along with cryoscanning electron microscopy and freeze substitution techniques to identify possible injury mechanisms. Slow cooling profiles were successfully applied to samples in both the EF600 and the VIA Freeze, and a number of cooling and warming profiles were evaluated. An optimized cooling protocol with a nonlinear cooling profile from ice nucleation to -60°C was implemented in both the EF600 and VIA Freeze. In the VIA Freeze the nucleation of ice is detected by the control software, allowing both noninvasive detection of the nucleation event for quality control purposes and the potential to modify the cooling profile following ice nucleation in an active manner. When processing 200 mL of ELS in the VIA Freeze-viabilities at 93.4% ± 7.4%, viable cell numbers at 14.3 ± 1.7 million nuclei/mL alginate, and protein secretion at 10.5 ± 1.7
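One way to picture an actively modified, nonlinear post-nucleation profile is a setpoint schedule that cools slowly just below the nucleation temperature, where latent-heat release is largest, and faster later. The exponential shape and every parameter below are our illustrative assumptions, not the paper's optimized profile:

```python
import numpy as np

# Illustrative setpoint generator for a nonlinear cooling profile:
# after ice nucleation at T_nuc, the setpoint relaxes exponentially
# toward T_end, so the initial cooling rate is gentle and increases
# in magnitude of total descent while the rate itself tapers off.
# All temperatures and the time constant are hypothetical.
def cooling_profile(t_minutes, T_nuc=-8.0, T_end=-60.0, tau=20.0):
    return T_end + (T_nuc - T_end) * np.exp(-t_minutes / tau)

t = np.linspace(0, 120, 121)   # two hours, one setpoint per minute
T = cooling_profile(t)
# starts at the nucleation temperature and approaches, but never
# overshoots, the -60°C endpoint
```

In an actively controlled run, detecting the nucleation event would simply reset `t = 0` for this schedule, which is the kind of profile modification the abstract describes.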

  2. Optimized algorithm module for large volume remote sensing image processing system

    NASA Astrophysics Data System (ADS)

    Jing, Changfeng; Liu, Nan; Liu, Renyi; Wang, Jiawen; Zhang, Qin

    2007-12-01

This paper introduces the algorithm module of a new remote sensing image processing system, coded in the Visual C++ 6.0 programming language and capable of processing large volumes of remote sensing imagery. The key technologies adopted in the algorithm module are also given. Two defects of the American remote sensing image processing system ERDAS are identified, concerning the image filter algorithm and the storage of pixel values that fall outside the data type's range. In the authors' system, optimized methods have been implemented in both respects. By comparison with the ERDAS IMAGINE system, the two methods are shown to be effective in image analysis.
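The out-of-range storage defect alluded to above is easy to illustrate (our example, not the paper's code): a filter kernel can push results outside the storage type's range, and a naive narrowing cast wraps modulo 256, whereas clipping before the cast saturates to the valid range as an image-processing module should.

```python
import numpy as np

# Filter outputs that overflow an 8-bit band: a direct cast wraps
# (modulo 256), turning dark values bright and vice versa; clipping
# first saturates them to the band's valid [0, 255] range.
filtered = np.array([-30, 10, 250, 300], dtype=np.int16)  # filter output

wrapped = filtered.astype(np.uint8)                 # -30 -> 226, 300 -> 44
safe = np.clip(filtered, 0, 255).astype(np.uint8)   # -30 -> 0,   300 -> 255
```

The same clamp-then-cast pattern applies per band regardless of the filter used, which is why it belongs in the shared algorithm module rather than in each filter.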

  3. Leak testing of cryogenically pumped large-volume high-vacuum systems

    NASA Astrophysics Data System (ADS)

    Sherlock, Charles N.

    1988-01-01

    The problems that may occur in the cryogenically pumped large-volume high-vacuum chambers (LVHVCs), used for the environmental testing of aerospace components and systems, are examined. Consideration is given to the designs of the LVHVCs and the cryogenic pumps. In the procedure of leak testing with tracer gas, the success of testing depends on attaining the required test sensitivity with speed, economy, and reliability. The steps required to speed up the leak location phase of the leak testing procedure and to thoroughly clean every penetration (i.e., fitting or nozzle) of the system are discussed.

  4. Large Volume, Optical and Opto-Mechanical Metrology Techniques for ISIM on JWST

    NASA Technical Reports Server (NTRS)

    Hadjimichael, Theo

    2015-01-01

    The final, flight build of the Integrated Science Instrument Module (ISIM) element of the James Webb Space Telescope is the culmination of years of work across many disciplines and partners. This paper covers the large volume, ambient, optical and opto-mechanical metrology techniques used to verify the mechanical integration of the flight instruments in ISIM, including optical pupil alignment. We present an overview of ISIM's integration and test program, which is in progress, with an emphasis on alignment and optical performance verification. This work is performed at NASA Goddard Space Flight Center, in close collaboration with the European Space Agency, the Canadian Space Agency, and the Mid-Infrared Instrument European Consortium.

  5. GMP Cryopreservation of Large Volumes of Cells for Regenerative Medicine: Active Control of the Freezing Process

    PubMed Central

    Massie, Isobel; Selden, Clare; Hodgson, Humphrey; Gibbons, Stephanie; Morris, G. John

    2014-01-01

Cryopreservation protocols are increasingly required in regenerative medicine applications but must deliver functional products at clinical scale and comply with Good Manufacturing Process (GMP). While GMP cryopreservation is achievable on a small scale using a Stirling cryocooler-based controlled rate freezer (CRF) (EF600), successful large-scale GMP cryopreservation is more challenging due to heat transfer issues and control of ice nucleation, both complex events that impact success. We have developed a large-scale cryocooler-based CRF (VIA Freeze) that can process larger volumes and have evaluated it using alginate-encapsulated liver cell (HepG2) spheroids (ELS). It is anticipated that ELS will comprise the cellular component of a bioartificial liver and will be required in volumes of ∼2 L for clinical use. Sample temperatures and Stirling cryocooler power consumption were recorded throughout cooling runs for both small (500 μL) and large (200 mL) volume samples. ELS recoveries were assessed using viability (FDA/PI staining with image analysis), cell number (nuclei count), and function (protein secretion), along with cryoscanning electron microscopy and freeze substitution techniques to identify possible injury mechanisms. Slow cooling profiles were successfully applied to samples in both the EF600 and the VIA Freeze, and a number of cooling and warming profiles were evaluated. An optimized cooling protocol with a nonlinear cooling profile from ice nucleation to −60°C was implemented in both the EF600 and VIA Freeze. In the VIA Freeze the nucleation of ice is detected by the control software, allowing both noninvasive detection of the nucleation event for quality control purposes and the potential to modify the cooling profile following ice nucleation in an active manner. When processing 200 mL of ELS in the VIA Freeze—viabilities at 93.4%±7.4%, viable cell numbers at 14.3±1.7 million nuclei/mL alginate, and protein secretion at 10.5±1.7

  6. Cryogenic loading of large volume presses for high-pressure experimentation and synthesis of novel materials

    SciTech Connect

    Lipp, M J; Evans, W J; Yoo, C S

    2005-01-21

We present an efficient, easily implemented method for loading cryogenic fluids in a large volume press. We specifically apply this method to the high-pressure synthesis of an extended solid derived from CO using a Paris-Edinburgh cell. The method employs cryogenic cooling of Bridgman-type WC anvils, well insulated from other press components, and condensation of the load gas within a brass annulus surrounding the gasket between the Bridgman anvils. We demonstrate the viability of the described approach by synthesizing macroscopic amounts (several milligrams) of polymeric CO-derived material, which were recovered to ambient conditions after compression of pure CO to 5 GPa or above.

  7. Large volume/high horsepower submersible pumping problems in water source wells

    SciTech Connect

    Hoestenbach, R.D.

    1981-01-01

    Various problems are encountered in, or compounded by, installing large-volume/high-horsepower submersible pumping equipment in water source wells producing 30,000 to 90,000 bbl of water/day at 320 to 1,020 hp. This study discusses the many problems that have appeared during the past 12 years in Shell Oil Co.'s west Texas water supply system and the solutions that were subsequently applied. The majority of these problems will be encountered in almost any project of this type. Specifically detailed are motor, pump, and protector anomalies, accessory equipment, surface production facilities, and the protective schemes used to optimize equipment life.

  8. Large-volume, high-horsepower submersible pumping problems in water source wells

    SciTech Connect

    Hoestenbach, R.D.

    1982-10-01

    Little has been written concerning problems that can be encountered in, or compounded by, installing large volume, high-horsepower submersible pumping equipment in water source wells in the range of 30,000 to 90,000 BWPD at 320 to 1,020 hp. This report addresses many problems of the past 12 years in Shell Oil Co.'s west Texas water supply system and the solutions that subsequently were applied. We feel that the majority of these problems are encountered in almost any project of this type. Motor, pump, and protector anomalies, accessory equipment, surface production facilities, and the protective schemes to optimize equipment life are discussed in detail.

  9. Thickness scalability of large volume cadmium zinc telluride high resolution radiation detectors

    NASA Astrophysics Data System (ADS)

    Awadalla, S. A.; Chen, H.; Mackenzie, J.; Lu, P.; Iniewski, K.; Marthandam, P.; Redden, R.; Bindley, G.; He, Z.; Zhang, F.

    2009-06-01

    This work focuses on the thickness scalability of traveling heater method (THM) grown CdZnTe crystals to produce large volume detectors with optimized spectroscopic performance. To meet this challenge, we have tuned both our THM growth process, to grow 75 mm diameter ingots, and our postgrowth annealing process. We have increased the thickness of our sliced wafers from 6 to 12 and 18 mm, allowing the production of 10 and 15 mm thick detectors. As detector thickness is scaled up, the energy resolution of both device types, pseudo-Frisch grid and pixelated monolithic detectors, showed no degradation, indicating improved material uniformity and transport properties.

  10. Large-eddy simulation of pulverized coal swirl jet flame

    NASA Astrophysics Data System (ADS)

    Muto, Masaya; Watanabe, Hiroaki; Kurose, Ryoichi; Komori, Satoru; Balusamy, Saravanan; Hochgreb, Simone

    2013-11-01

    Coal is an important energy resource for meeting future electricity demand, as coal reserves are much more abundant than those of other fossil fuels. In pulverized-coal-fired power plants, it is very important to improve the technology for the control of environmental pollutants such as nitrogen oxides, sulfur oxides, and ash particles including unburned carbon. Achieving these requirements demands an understanding of the pulverized coal combustion mechanism. However, the combustion process of pulverized coal has not been well characterized, since pulverized coal combustion is a complicated phenomenon in which the maximum flame temperature exceeds 1500 degrees Celsius and which involves substances that can hardly be measured, such as radical species and highly reactive solid particles. Consequently, development of new combustion furnaces and burners is costly and time consuming. In this study, a large-eddy simulation (LES) is applied to a pulverized coal combustion field and the results are compared with experiment. The results show that the present LES can capture the general features of the pulverized coal swirl jet flame.

  11. Large-eddy simulations of contrails in a turbulent atmosphere

    NASA Astrophysics Data System (ADS)

    Picot, J.; Paoli, R.; Thouron, O.; Cariolle, D.

    2014-11-01

    In this work, the evolution of contrails in the vortex and dissipation regimes is studied by means of fully three-dimensional large-eddy simulation (LES) coupled to a Lagrangian particle tracking method to treat the ice phase. This is the first paper in which fine-scale atmospheric turbulence is generated and sustained by means of a stochastic forcing that mimics the properties of stably stratified turbulent flows such as those occurring in the upper troposphere/lower stratosphere. The initial flow field is composed of the turbulent background flow and a wake flow obtained from separate LES of the jet regime. Atmospheric turbulence is the main driver of the wake instability, and the structure of the resulting wake is sensitive to the intensity of the perturbations, primarily in the vertical direction. Stronger turbulence accelerates the onset of the instability, which results in shorter contrail descent and more effective mixing in the interior of the plume. However, the self-induced turbulence that is produced in the wake after the vortex break-up dominates over background turbulence at the end of the vortex regime and dominates the mixing with ambient air. This results in global microphysical characteristics, such as ice mass and optical depth, that are only slightly affected by the intensity of atmospheric turbulence. On the other hand, the background humidity and temperature have a first-order effect on the survival of ice crystals and on the particle size distribution, which is in line with recent and ongoing studies in the literature.

  12. A family of dynamic models for large-eddy simulation

    NASA Technical Reports Server (NTRS)

    Carati, D.; Jansen, K.; Lund, T.

    1995-01-01

    Since its first application, the dynamic procedure has been recognized as an effective means to compute rather than prescribe the unknown coefficients that appear in a subgrid-scale model for Large-Eddy Simulation (LES). The dynamic procedure is usually used to determine the nondimensional coefficient in the Smagorinsky (1963) model. In reality the procedure is quite general and it is not limited to the Smagorinsky model by any theoretical or practical constraints. The purpose of this note is to consider a generalized family of dynamic eddy viscosity models that do not necessarily rely on the local equilibrium assumption built into the Smagorinsky model. By invoking an inertial range assumption, it will be shown that the coefficients in the new models need not be nondimensional. This additional degree of freedom allows the use of models that are scaled on traditionally unknown quantities such as the dissipation rate. In certain cases, the dynamic models with dimensional coefficients are simpler to implement, and allow for a 30% reduction in the number of required filtering operations.

  13. Final Report: "Large-Eddy Simulation of Anisotropic MHD Turbulence"

    SciTech Connect

    Zikanov, Oleg

    2008-06-23

    The goal of this project was to acquire a better understanding of turbulence in flows of liquid metals and other electrically conducting fluids in the presence of steady magnetic fields, and to develop an accurate and physically adequate LES (large-eddy simulation) model for such flows. The scientific objectives formulated in the project proposal have been fully completed. Several new directions were initiated and advanced in the course of the work. Particular achievements include a detailed study of the transformation of turbulence caused by the imposed magnetic field, development of an LES model that accurately reproduces this transformation, and resolution of several fundamental questions concerning the interaction between the magnetic field and fluid flows. Eight papers have been published in respected peer-reviewed journals, with two more currently under review and one in preparation for submission. A postdoctoral researcher and a graduate student have been trained in the areas of MHD, turbulence research, and computational methods. Close collaboration ties have been established with MHD research centers in Germany and Belgium.

  14. Saturn: A large area x-ray simulation accelerator

    SciTech Connect

    Bloomquist, D.D.; Stinnett, R.W.; McDaniel, D.H.; Lee, J.R.; Sharpe, A.W.; Halbleib, J.A.; Schlitt, L.G.; Spence, P.W.; Corcoran, P.

    1987-01-01

    Saturn is the result of a major metamorphosis of the Particle Beam Fusion Accelerator-I (PBFA-I) from an ICF research facility to the large-area x-ray source of the Simulation Technology Laboratory (STL) project. Renamed Saturn, for its unique multiple-ring diode design, the facility is designed to take advantage of the numerous advances in pulsed power technology made by the ICF program in recent years and much of the existing PBFA-I support system. Saturn will include significant upgrades in the energy storage and pulse-forming sections. The 36 magnetically insulated transmission lines (MITLs) that provided power flow to the ion diode of PBFA-I were replaced by a system of vertical triplate water transmission lines. These lines are connected to three horizontal triplate disks in a water convolute section. Power will flow through an insulator stack into radial MITLs that drive the three-ring diode. Saturn is designed to operate with a maximum of 750 kJ coupled to the three-ring e-beam diode with a peak power of 25 TW to provide an x-ray exposure capability of 5 x 10^12 rads/s (Si) and 5 cal/g (Au) over 500 cm^2.

  15. Large eddy simulation predictions of absolutely unstable round hot jet

    NASA Astrophysics Data System (ADS)

    Boguslawski, A.; Tyliszczak, A.; Wawrzak, K.

    2016-02-01

    The paper presents a novel view of the absolute instability phenomenon in heated, variable-density round jets. As known from the literature, the global instability mechanism in low-density jets is triggered when the density ratio falls below a certain critical value. The existence of global modes has been confirmed experimentally in both hot and air-helium jets. However, some differences between the two globally unstable flows were observed, concerning, among other things, the level of the critical density ratio. The research is performed using the large-eddy simulation (LES) method with a high-order numerical code. An analysis of the LES results revealed that the inlet conditions for the velocity and density distributions at the nozzle exit significantly influence the critical density ratio and the global mode frequency. Two inlet velocity profiles were analyzed, i.e., the hyperbolic tangent and the Blasius profiles. It was shown that using the Blasius velocity profile and a uniform density distribution led to significantly better agreement with the universal scaling law for global mode frequency.

  16. On the Computation of Sound by Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Piomelli, Ugo; Streett, Craig L.; Sarkar, Sutanu

    1997-01-01

    The effect of the small scales on the source term in Lighthill's acoustic analogy is investigated, with the objective of determining the accuracy of large-eddy simulations when applied to studies of flow-generated sound. The distribution of the turbulent quadrupole is predicted accurately, if models that take into account the trace of the SGS stresses are used. Its spatial distribution is also correct, indicating that the low-wave-number (or frequency) part of the sound spectrum can be predicted well by LES. Filtering, however, removes the small-scale fluctuations that contribute significantly to the higher derivatives in space and time of Lighthill's stress tensor T(sub ij). The rms fluctuations of the filtered derivatives are substantially lower than those of the unfiltered quantities. The small scales, however, are not strongly correlated, and are not expected to contribute significantly to the far-field sound; separate modeling of the subgrid-scale density fluctuations might, however, be required in some configurations.
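
    The filtering effect described above, with small-scale removal barely changing the field itself while strongly damping its higher derivatives, can be illustrated on a simple synthetic signal. The signal, filter width, and wavenumbers below are arbitrary choices for illustration, not quantities from the study.

```python
import numpy as np

# A synthetic 1-D signal with a large-scale component plus a small-scale
# fluctuation.  Box filtering barely changes the rms of the signal itself,
# but strongly damps the rms of its second derivative, which is dominated
# by the small scales -- the behaviour discussed for Lighthill's T_ij.
n = 2048
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x) + 0.05 * np.sin(60.0 * x)

def box_filter(f, w=33):
    """Top-hat filter of width w points, periodic wrap."""
    k = np.ones(w) / w
    return np.convolve(np.pad(f, w, mode="wrap"), k, mode="same")[w:-w]

def d2dx2(f):
    return np.gradient(np.gradient(f, dx), dx)

uf = box_filter(u)
ratio_u = np.std(uf) / np.std(u)                 # filtering barely touches u
ratio_d2 = np.std(d2dx2(uf)) / np.std(d2dx2(u))  # ...but guts d2u/dx2
print(f"rms ratio of u: {ratio_u:.3f}, rms ratio of d2u/dx2: {ratio_d2:.3f}")
```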

  17. Large-eddy simulations of unidirectional water flow over dunes

    NASA Astrophysics Data System (ADS)

    Grigoriadis, D. G. E.; Balaras, E.; Dimas, A. A.

    2009-06-01

    The unidirectional, subcritical flow over fixed dunes is studied numerically using large-eddy simulation, while the immersed boundary method is implemented to incorporate the bed geometry. Results are presented for a typical dune shape and two Reynolds numbers, Re = 17,500 and Re = 93,500, on the basis of bulk velocity and water depth. The numerical predictions of velocity statistics at the low Reynolds number are in very good agreement with available experimental data. A primary recirculation region develops downstream of the dune crest at both Reynolds numbers, while a secondary region develops at the toe of the dune crest only for the low Reynolds number. Downstream of the reattachment point, on the dune stoss, the turbulence intensity in the developing boundary layer is weaker than in comparable equilibrium boundary layers. Coherent vortical structures are identified using the fluctuating pressure field and the second invariant of the velocity gradient tensor. Vorticity is primarily generated at the dune crest in the form of spanwise "roller" structures. Roller structures dominate the flow dynamics near the crest, and are responsible for perturbing the boundary layer downstream of the reattachment point, which leads to the formation of "horseshoe" structures. Horseshoe structures dominate the near-wall dynamics after the reattachment point, do not rise to the free surface, and are distorted by the shear layer of the next crest. The occasional interaction between roller and horseshoe structures generates tube-like "kolk" structures, which rise to the free surface and persist for a long time before attenuating.

  18. Large-Scale Atomistic Simulations of Material Failure

    DOE Data Explorer

    Abraham, Farid [IBM Almaden Research]; Duchaineau, Mark [LLNL]; Wirth, Brian [LLNL]; Heidelberg,; Seager, Mark [LLNL]; De La Rubia, Diaz [LLNL]

    These simulations from 2000 examine the supersonic propagation of cracks and the formation of complex junction structures in metals. Eight simulations concerning brittle fracture, ductile failure, and shockless compression are available.

  19. Nuclear EMP simulation for large-scale urban environments. FDTD for electrically large problems.

    SciTech Connect

    Smith, William S.; Bull, Jeffrey S.; Wilcox, Trevor; Bos, Randall J.; Shao, Xuan-Min; Goorley, John T.; Costigan, Keeley R.

    2012-08-13

    In case of a terrorist nuclear attack in a metropolitan area, EMP measurement could provide: (1) prompt confirmation of the nature of the explosion (chemical or nuclear) for emergency response; and (2) characterization parameters of the device (reaction history, yield) for technical forensics. However, the urban environment could affect the fidelity of the prompt EMP measurement (as well as all other types of prompt measurement): (1) the nuclear EMP wavefront would no longer be coherent, due to incoherent production, attenuation, and propagation of gammas and electrons; and (2) EMP propagation outward from the source region would undergo complicated transmission, reflection, and diffraction processes. For EMP simulation in an electrically large urban environment we use a coupled MCNP/FDTD (finite-difference time-domain Maxwell solver) approach. Because FDTD tends to be limited to problems that are not 'too' large compared to the wavelengths of interest, owing to numerical dispersion and anisotropy, we use a higher-order low-dispersion, isotropic FDTD algorithm for EMP propagation.
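
    For context, the core of any FDTD Maxwell solver is the leapfrog Yee update; a minimal 1-D sketch in normalized units follows. This is the standard second-order scheme, which exhibits the numerical dispersion mentioned above; the higher-order low-dispersion algorithm of the report is not reproduced here, and the grid sizes and source are arbitrary.

```python
import numpy as np

# Minimal 1-D Yee (FDTD) update in normalized units: E and H are staggered
# in space and time and advanced leapfrog-fashion.  Standard second-order
# scheme only; the report's higher-order low-dispersion variant is NOT shown.
nz, nt = 400, 300
c, dz = 1.0, 1.0
dt = dz / c                       # 1-D Courant limit
ez = np.zeros(nz)                 # E-field nodes
hy = np.zeros(nz - 1)             # H-field nodes (staggered half-cell)

for n in range(nt):
    hy += (ez[1:] - ez[:-1]) * dt / dz          # update H from curl E
    ez[1:-1] += (hy[1:] - hy[:-1]) * dt / dz    # update E from curl H
    ez[50] += np.exp(-((n - 40) / 12.0) ** 2)   # soft Gaussian source

print("peak |Ez| after propagation:", float(np.abs(ez).max()))
```

    In 3-D the same leapfrog structure applies per field component, but the stability limit tightens and the dispersion error becomes direction-dependent, which is why low-dispersion, isotropic variants matter for electrically large domains.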

  20. Simulation of preburner sprays, volumes 1 and 2

    NASA Technical Reports Server (NTRS)

    Hardalupas, Y.; Whitelaw, J. H.

    1993-01-01

    The present study considered characteristics of sprays under a variety of conditions. Control of these sprays is important, as the spray details can control both rocket combustion stability and efficiency. Under the present study, Imperial College considered the following: (1) Measurement of the size and rate of spread of the sprays produced by single coaxial airblast nozzles with an axial gaseous stream. The local size, velocity, and flux characteristics for a wide range of gas and liquid flowrates were measured, and the results were correlated with the conditions of the spray at the nozzle exit. (2) Examination of the effect of the geometry of single coaxial airblast atomizers on spray characteristics. The gas and liquid tube diameters were varied over a range of values, the liquid tube recess was varied, and the shape of the exit of the gaseous jet was varied from straight to converging. (3) Quantification of the effect of swirl in the gaseous stream on the spray characteristics produced by single coaxial airblast nozzles. (4) Quantification of the effect of reatomization by impingement of the spray on a flat disc positioned around 200 mm from the nozzle exit. This models spray impingement on the turbopump dome during the startup process of the preburner of the SSME. (5) Study of the interaction between multiple sprays, with and without swirl in their gaseous stream. The spray characteristics of single nozzles were compared with those of three identical nozzles with their axes a small distance from each other. This study simulates the sprays in the preburner of the SSME, where there are around 260 elements on the faceplate of the combustion chamber. (6) Design of an experimental facility to study the characteristics of sprays at high-pressure conditions, and at supercritical pressure and temperature for the gas but supercritical pressure and subcritical temperature for the liquid.

  1. A scale down process for the development of large volume cryopreservation.

    PubMed

    Kilbride, Peter; Morris, G John; Milne, Stuart; Fuller, Barry; Skepper, Jeremy; Selden, Clare

    2014-12-01

    The process of ice formation and propagation during cryopreservation impacts on the post-thaw outcome for a sample. Two processes, either network solidification or progressive solidification, can dominate the water-ice phase transition with network solidification typically present in small sample cryo-straws or cryo-vials. Progressive solidification is more often observed in larger volumes or environmental freezing. These different ice phase progressions could have a significant impact on cryopreservation in scale-up and larger volume cryo-banking protocols necessitating their study when considering cell therapy applications. This study determines the impact of these different processes on alginate encapsulated liver spheroids (ELS) as a model system during cryopreservation, and develops a method to replicate these differences in an economical manner. It was found in the current studies that progressive solidification resulted in fewer, but proportionally more viable cells 24h post-thaw compared with network solidification. The differences between the groups diminished at later time points post-thaw as cells recovered the ability to undertake cell division, with no statistically significant differences seen by either 48 h or 72 h in recovery cultures. Thus progressive solidification itself should not prove a significant hurdle in the search for successful cryopreservation in large volumes. However, some small but significant differences were noted in total viable cell recoveries and functional assessments between samples cooled with either progressive or network solidification, and these require further investigation. PMID:25219980

  2. A scale down process for the development of large volume cryopreservation☆

    PubMed Central

    Kilbride, Peter; Morris, G. John; Milne, Stuart; Fuller, Barry; Skepper, Jeremy; Selden, Clare

    2014-01-01

    The process of ice formation and propagation during cryopreservation impacts on the post-thaw outcome for a sample. Two processes, either network solidification or progressive solidification, can dominate the water–ice phase transition with network solidification typically present in small sample cryo-straws or cryo-vials. Progressive solidification is more often observed in larger volumes or environmental freezing. These different ice phase progressions could have a significant impact on cryopreservation in scale-up and larger volume cryo-banking protocols necessitating their study when considering cell therapy applications. This study determines the impact of these different processes on alginate encapsulated liver spheroids (ELS) as a model system during cryopreservation, and develops a method to replicate these differences in an economical manner. It was found in the current studies that progressive solidification resulted in fewer, but proportionally more viable cells 24 h post-thaw compared with network solidification. The differences between the groups diminished at later time points post-thaw as cells recovered the ability to undertake cell division, with no statistically significant differences seen by either 48 h or 72 h in recovery cultures. Thus progressive solidification itself should not prove a significant hurdle in the search for successful cryopreservation in large volumes. However, some small but significant differences were noted in total viable cell recoveries and functional assessments between samples cooled with either progressive or network solidification, and these require further investigation. PMID:25219980

  3. Estimation of flood volumes and simulation of flood hydrographs for ungaged small rural streams in Ohio

    USGS Publications Warehouse

    Sherwood, J.M.

    1993-01-01

    Methods are presented for estimating flood volumes and simulating flood hydrographs of rural streams in Ohio whose drainage areas are less than 6.5 square miles. The methods were developed to assist engineers in the design of hydraulic structures for which the temporary storage of water is a critical element of the design criteria. Examples of how to use the methods also are presented. Multiple-regression equations were developed to estimate maximum flood volumes of d-hour duration and T-year recurrence interval (dVT). Flood-volume data for all combinations of six durations (1, 2, 4, 8, 16, and 32 hours) and six recurrence intervals (2, 5, 10, 25, 50, and 100 years) were analyzed. The significant independent variables in the resulting 36 equations are drainage area, average annual precipitation, main-channel slope, and forested area. Standard errors of prediction for the 36 dVT equations range from ±28 percent to ±44 percent. A method is described for simulating flood hydrographs by applying a peak discharge and an estimated basin lagtime to a dimensionless hydrograph. Peak discharge may be estimated from equations in which drainage area, main-channel slope, and storage area are the significant explanatory variables, and average standard errors of prediction range from ±33 to ±41 percent. An equation is developed for estimating basin lagtime in which main-channel slope, forested area, and storage area are the significant explanatory variables, and the average standard error of prediction is ±37 percent. A dimensionless hydrograph developed for use in Georgia was verified for use in Ohio. Step-by-step examples show how to (1) simulate flood hydrographs and compute their volumes, and (2) estimate volume-duration-frequency relations of small ungaged rural streams in Ohio. The volumes estimated by the two methods are compared. Both methods yield similar results for volume estimates of short duration, which are applicable to convective-type storm runoff. The volume
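
    Regression equations of the kind described above are typically log-linear (power-law) in the basin characteristics. The sketch below shows only the generic functional form; every coefficient and exponent is a hypothetical placeholder, since the 36 fitted equations appear in the report itself.

```python
def flood_volume_dvt(area_mi2, precip_in, slope_ft_per_mi, forest_pct,
                     a=0.5, b=0.9, c=1.2, d=0.6, e=-0.1):
    """Generic USGS-style volume regression:
    dVT = a * A^b * P^c * SL^d * (F + 1)^e.

    The explanatory variables match those named in the abstract (drainage
    area, average annual precipitation, main-channel slope, forested area);
    the coefficients a..e are HYPOTHETICAL placeholders, not the fitted
    values from the report.
    """
    return (a * area_mi2 ** b * precip_in ** c
            * slope_ft_per_mi ** d * (forest_pct + 1.0) ** e)

# Example: a 2 square-mile basin, 38 in/yr precipitation,
# 50 ft/mi channel slope, 10 percent forest cover.
print(f"illustrative dVT = {flood_volume_dvt(2.0, 38.0, 50.0, 10.0):.1f}")
```

    The (F + 1) transformation is a common device in such regressions so that basins with zero forest cover remain in the model's domain.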

  4. First large volume characterization of the QIE10/11 custom front-end integrated circuits

    NASA Astrophysics Data System (ADS)

    Hare, D.; Baumbaugh, A.; Dal Monte, L.; Freeman, J.; Hirschauer, J.; Hughes, E.; Roy, T.; Whitbeck, A.; Yumiceva, F.; Zimmerman, T.

    2016-02-01

    The CMS experiment at the CERN Large Hadron Collider (LHC) will upgrade the photon detection and readout systems of its barrel and endcap hadron calorimeters (HCAL) through the second long shutdown of the LHC in 2018. A central feature of this upgrade is the development of two new versions of the QIE (Charge Integrator and Encoder), a Fermilab-designed custom ASIC for the measurement of charge from detectors in high-rate environments. These most recent additions to the QIE family feature 17 bits of dynamic range with 1% digitization precision at high charge, and a time-to-digital converter (TDC) with half-nanosecond resolution, all within 16 bits of readout per bunch crossing. For the first time, the CMS experiment has produced and characterized in great detail a large volume of chips. The characteristics and performance of the new QIEs, and the associated chip-to-chip variations as measured in a sample of 10,000 chips, are described.

  5. Hybrid Parallelism for Volume Rendering on Large, Multi-core Systems

    SciTech Connect

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2010-07-12

    This work studies the performance and scalability characteristics of "hybrid" parallel programming and execution as applied to raycasting volume rendering -- a staple visualization algorithm -- on a large, multi-core platform. Historically, the Message Passing Interface (MPI) has become the de-facto standard for parallel programming and execution on modern parallel systems. As the computing industry trends towards multi-core processors, with four- and six-core chips common today and 128-core chips coming soon, we wish to better understand how algorithmic and parallel programming choices impact performance and scalability on large, distributed-memory multi-core systems. Our findings indicate that the hybrid-parallel implementation, at levels of concurrency ranging from 1,728 to 216,000, performs better, uses a smaller absolute memory footprint, and consumes less communication bandwidth than the traditional, MPI-only implementation.

  6. Hybrid Parallelism for Volume Rendering on Large, Multi-core Systems

    SciTech Connect

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2010-06-14

    This work studies the performance and scalability characteristics of "hybrid" parallel programming and execution as applied to raycasting volume rendering -- a staple visualization algorithm -- on a large, multi-core platform. Historically, the Message Passing Interface (MPI) has become the de-facto standard for parallel programming and execution on modern parallel systems. As the computing industry trends towards multi-core processors, with four- and six-core chips common today and 128-core chips coming soon, we wish to better understand how algorithmic and parallel programming choices impact performance and scalability on large, distributed-memory multi-core systems. Our findings indicate that the hybrid-parallel implementation, at levels of concurrency ranging from 1,728 to 216,000, performs better, uses a smaller absolute memory footprint, and consumes less communication bandwidth than the traditional, MPI-only implementation.

  7. MPI-hybrid Parallelism for Volume Rendering on Large, Multi-core Systems

    SciTech Connect

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2010-03-20

    This work studies the performance and scalability characteristics of "hybrid" parallel programming and execution as applied to raycasting volume rendering -- a staple visualization algorithm -- on a large, multi-core platform. Historically, the Message Passing Interface (MPI) has become the de-facto standard for parallel programming and execution on modern parallel systems. As the computing industry trends towards multi-core processors, with four- and six-core chips common today and 128-core chips coming soon, we wish to better understand how algorithmic and parallel programming choices impact performance and scalability on large, distributed-memory multi-core systems. Our findings indicate that the hybrid-parallel implementation, at levels of concurrency ranging from 1,728 to 216,000, performs better, uses a smaller absolute memory footprint, and consumes less communication bandwidth than the traditional, MPI-only implementation.

  8. Hybrid Parallelism for Volume Rendering on Large, Multi- and Many-core Systems

    SciTech Connect

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2011-01-01

    With the computing industry trending towards multi- and many-core processors, we study how a standard visualization algorithm, ray-casting volume rendering, can benefit from a hybrid parallelism approach. Hybrid parallelism provides the best of both worlds: using distributed-memory parallelism across a large number of nodes increases available FLOPs and memory, while exploiting shared-memory parallelism among the cores within each node ensures that each node performs its portion of the larger calculation as efficiently as possible. We demonstrate results from weak and strong scaling studies, at levels of concurrency ranging up to 216,000, and with datasets as large as 12.2 trillion cells. The greatest benefit from hybrid parallelism lies in the communication portion of the algorithm, the dominant cost at higher levels of concurrency. We show that reducing the number of participants with a hybrid approach significantly improves performance.
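
    The reduction in communicating participants can be seen with simple arithmetic: an MPI-only run places one rank per core, while a hybrid run places one rank per node and uses threads within it. The node and core counts below are chosen to reproduce the study's 216,000-way concurrency; the one-rank-per-node mapping is an illustrative assumption.

```python
# Back-of-the-envelope view of why hybrid parallelism shrinks the costly
# compositing communication: MPI-only places one rank per core, while the
# hybrid scheme places one rank per node and threads inside it.
def compositing_participants(nodes: int, cores_per_node: int, hybrid: bool) -> int:
    """Number of processes taking part in the compositing exchange."""
    return nodes if hybrid else nodes * cores_per_node

nodes, cores = 18000, 12   # 216,000-way concurrency, as in the study
print("MPI-only participants:", compositing_participants(nodes, cores, hybrid=False))
print("hybrid participants  :", compositing_participants(nodes, cores, hybrid=True))
```

    With collective compositing costs that grow with the number of participants, shrinking the group by a factor equal to the core count per node directly reduces communication volume, which matches the paper's finding that communication dominates at high concurrency.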

  9. Points based reconstruction and rendering of 3D shapes from large volume dataset

    NASA Astrophysics Data System (ADS)

    Zhao, Mingchang; Tian, Jie; He, Huiguang; Li, Guangming

    2003-05-01

    In the field of medical imaging, researchers often need to visualize many 3D datasets to extract the information they contain. However, the huge volume of data generated by modern medical imaging devices constantly challenges real-time processing and rendering algorithms. Spurred by the great success of points based rendering (PBR) in the field of computer graphics for rendering very large meshes, we propose a new algorithm that uses points as the basic primitive of surface reconstruction and rendering to interactively reconstruct and render very large volume datasets. By utilizing the special characteristics of medical image datasets, we obtain a fast and efficient points-based reconstruction and rendering algorithm on a common PC. The experimental results show that this algorithm is feasible and efficient.

  10. Multiple distal basin plains reveal a common distribution for large volume turbidity current recurrence intervals

    NASA Astrophysics Data System (ADS)

    Clare, M. A.; Talling, P. J.; Hunt, J.; Challenor, P. G.

    2013-12-01

    Remarkably large volume (>>1 km^3) deposits emplaced by turbidity currents in distal basin plains result from large submarine landslides. Such landslides may generate tsunamis, and the turbidity currents pose threats to seafloor structures, as well as being one of the most important processes for sediment transport across our planet. It is therefore important to understand the recurrence intervals and timing of landslides and the turbidity currents they generate. An understanding of their frequency provides information to assist in forward-looking geohazard analyses, including probabilistic modelling of potential damage. Analysis of their frequency distribution may also help to unravel links to triggering and conditioning mechanisms. We present long term records (up to 17 Ma) of landslide-triggered turbidity current recurrence intervals. We document the distribution of recurrence intervals for large volume turbidites in four basin plains in disparate locations worldwide, including two recent systems and two outcrop studies. The recurrence times of turbidity currents are inferred from the intervals of hemipelagic mud that form by fallout of background sediment between turbidity currents, and from the average accumulation rate of hemipelagic mud between dated horizons. There is very little erosion below turbidite beds in the study locations; hence they represent an almost continuous sedimentary record. This method has the advantage of providing information on the timing of many different events from a small number of cores, with such large numbers (N > 100) of beds needed for robust statistical analysis. A common frequency distribution of turbidite recurrence intervals is observed, despite their variable ages and disparate locations, suggesting similar underlying controls on triggering mechanism and frequency. This common distribution closely approximates a temporally-random Poisson distribution, such that the probability of an event occurring along the basin margin is
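
    The recurrence-interval method described above, dividing hemipelagic mud thickness by the long-term accumulation rate and then testing the resulting intervals against a Poisson (exponential inter-event) model, can be sketched as follows. The bed thicknesses and accumulation rate below are synthetic stand-ins, not data from the studied basins.

```python
import numpy as np

# Sketch of the recurrence-interval method: time between turbidites =
# hemipelagic mud thickness / long-term hemipelagic accumulation rate,
# with the intervals then checked against a Poisson (exponential) model.
# Thicknesses and accumulation rate are SYNTHETIC stand-ins.
rng = np.random.default_rng(42)
accum_rate_cm_per_kyr = 4.0
mud_thickness_cm = rng.exponential(scale=60.0, size=150)  # N > 100 beds

intervals_kyr = mud_thickness_cm / accum_rate_cm_per_kyr

# For a Poisson process the inter-event times are exponentially
# distributed, so the coefficient of variation (std/mean) should be ~1.
cv = intervals_kyr.std() / intervals_kyr.mean()
print(f"mean recurrence = {intervals_kyr.mean():.1f} kyr, CV = {cv:.2f}")
```

    A coefficient of variation well below 1 would instead suggest quasi-periodic triggering, and well above 1 clustered triggering, which is why this simple statistic is a useful first screen before formal distribution fitting.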

  11. Large-eddy simulation of unidirectional turbulent flow over dunes

    NASA Astrophysics Data System (ADS)

    Omidyeganeh, Mohammad

    We performed large eddy simulations of the flow over a series of two- and three-dimensional dune geometries at laboratory scale using the Lagrangian dynamic eddy-viscosity subgrid-scale model. First, we studied the flow over a standard 2D transverse dune geometry; then bedform three-dimensionality was imposed. Finally, we investigated the turbulent flow over barchan dunes. The results are validated by comparison with simulations and experiments for the 2D dune case, while the results for the 3D dunes are validated qualitatively against experiments. The flow over transverse dunes separates at the dune crest, generating a shear layer that plays a crucial role in the transport of momentum and energy, as well as in the generation of coherent structures. Spanwise vortices are generated in the separated shear layer; as they are advected, they undergo lateral instabilities, develop into horseshoe-like structures, and finally reach the surface. The ejection that occurs between the legs of the vortex creates the upwelling and downdrafting events on the free surface known as "boils". The three-dimensional separation of flow at the crestline alters the distribution of wall pressure, which may cause secondary flow across the stream. The mean flow is characterized by a pair of counter-rotating streamwise vortices, with core radii of the order of the flow depth. Staggering the crestlines alters the secondary motion: two pairs of streamwise vortices appear (a strong one, centred about the lobe, and a weaker one, coming from the previous dune, centred around the saddle). The flow over barchan dunes presents significant differences from that over transverse dunes. The flow near the bed, upstream of the dune, diverges from the centerline plane; the flow close to the centerline plane separates at the crest and reattaches on the bed. Away from the centerline plane and along the horns, flow separation occurs intermittently.
The flow in the separation bubble is routed towards the horns and leaves

  12. New Specimen Access Device for the Large Space Simulator

    NASA Astrophysics Data System (ADS)

    Lazzarini, P.; Ratti, F.

    2004-08-01

    The Large Space Simulator (LSS) is used to simulate in-orbit environmental conditions for spacecraft (S/C) testing. The LSS is intended to be a flexible facility: it can accommodate test articles that differ significantly in shape and weight and carry various instruments. To improve accessibility to the S/C inside the LSS chamber, a new Specimen Access Device (SAD) has been procured. The SAD provides immediate and easy access to the S/C, thus reducing the amount of time necessary for the installation of set-ups in the LSS. The SAD has been designed as a bridge crane carrying a basket to move the operator into the LSS. The crane moves on parallel rails on the top floor of the LSS building. The SAD is composed of three subsystems: the main bridge, the trolley that moves along the main bridge, and the telescopic mast. A trade-off analysis has been carried out concerning the telescopic mast design. The choice between friction pads and rollers, to couple the different sections of the mast, has been evaluated. The resulting design uses a four-section square mast with roller-driven deployment. This design has been chosen for the higher stiffness of the mast, due to the limited number of sections, and because it radically reduces the risk of contamination relative to a solution based on sliding bushings. Analyses have been performed to assess the mechanical behaviour in both static and dynamic conditions. In particular, the telescopic mast has been studied in detail to optimise its stiffness and to check the safety margins in the various operational conditions. To increase the safety of operations, an anticollision system has been implemented by positioning two kinds of sensors, ultrasonic and contact, on the basket. All translations are regulated by inverters with acceleration and deceleration ramps controlled by a Programmable Logic Controller (PLC).
An absolute encoder is installed on each motor to provide the actual position of the

  13. Improved engine wall models for Large Eddy Simulation (LES)

    NASA Astrophysics Data System (ADS)

    Plengsaard, Chalearmpol

    Improved wall models for Large Eddy Simulation (LES) are presented in this research. The classical Werner-Wengle (WW) wall shear stress model is used along with a near-wall sub-grid scale viscosity. A sub-grid scale turbulent kinetic energy is employed in a model for the eddy viscosity. To obtain better heat flux results, a modified classical variable-density wall heat transfer model is also used. Because no experimental wall shear stress results are available in engines, fully developed turbulent flow in a square duct is chosen to validate the new wall models. The model constants in the new wall models are set to 0.01 and 0.8, respectively, and are kept constant throughout the investigation. The resulting time- and spatially-averaged velocity and temperature wall functions from the new wall models match well with law-of-the-wall experimental data at Re = 50,000. In order to study the effect of hot air impinging on walls, jet impingement on a flat plate is also tested with the new wall models. The jet Reynolds number is 21,000, with a fixed jet-to-plate spacing of H/D = 2.0. As predicted by the new wall models, the time-averaged skin friction coefficient agrees well with experimental data, while the computed Nusselt number agrees fairly well when r/D > 2.0. Additionally, the model is validated using experimental data from a Caterpillar engine operated with conventional diesel combustion. Sixteen different engine operating conditions are simulated. The majority of the predicted heat flux results from each thermocouple location follow trends similar to the experimental data. The magnitude of peak heat fluxes as predicted by the new wall models is in the range of typical measured values in diesel combustion, while most heat flux results from previous LES wall models are over-predicted. The new wall models generate more accurate predictions and agree better with experimental data.
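    For reference, the classical Werner-Wengle wall model mentioned above blends a linear viscous sublayer with a 1/7 power-law profile. A minimal pointwise sketch is given below; it assumes the standard published constants (A = 8.3, B = 1/7, matching at y+ = 11.81), not the tuned constants of this thesis, and estimates the wall shear stress from the tangential velocity at the first off-wall cell.

```python
import math

# Classical Werner-Wengle constants (assumed standard values)
A, B = 8.3, 1.0 / 7.0
Y_PLUS_MATCH = 11.81  # intersection of the linear and power-law profiles

def ww_wall_shear_stress(u_p, y_p, nu, rho):
    """Pointwise Werner-Wengle estimate of the wall shear stress from the
    tangential velocity u_p at wall distance y_p (kinematic viscosity nu,
    density rho)."""
    # First assume the point lies in the viscous sublayer: tau_w = mu * u_p / y_p
    tau_lam = rho * nu * u_p / y_p
    u_tau = math.sqrt(tau_lam / rho)
    if u_tau * y_p / nu <= Y_PLUS_MATCH:
        return tau_lam
    # Otherwise invert the power law u+ = A (y+)^B for the friction velocity:
    # u_tau^(1+B) = u_p * nu^B / (A * y_p^B)
    u_tau = (u_p * nu**B / (A * y_p**B)) ** (1.0 / (1.0 + B))
    return rho * u_tau**2
```

The two branches are continuous at y+ = 11.81, since 8.3 * 11.81^(1/7) ≈ 11.81.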

  14. Pathways of deep cyclones associated with large volume changes (LVCs) and major Baltic inflows (MBIs)

    NASA Astrophysics Data System (ADS)

    Lehmann, Andreas; Höflich, Katharina; Post, Piia; Myrberg, Kai

    2016-04-01

    Large volume changes (LVCs) and major Baltic inflows (MBIs) are essential processes for the water exchange and the renewal of stagnant deep water in the Baltic Sea deep basins. MBIs are considered a subset of LVCs, carrying with the large water volume a large amount of highly saline and oxygenated water into the Baltic Sea. Since the early 1980s the frequency of MBIs has dropped drastically, from 5 to 7 events per decade to only one, and long-lasting periods without MBIs have become the usual state. Only in January 1993, 2003, and December 2014 did MBIs occur that were able to interrupt the stagnation periods in the deep basins of the Baltic Sea. However, in spite of the decreasing frequency of MBIs, there is no obvious decrease in LVCs. Large volume changes have been calculated for the period 1887-2014 by filtering daily time series of Landsort sea surface elevation anomalies. The Landsort sea level is known to reflect the mean sea level of the Baltic Sea very well; thus, LVCs can be calculated from the mean sea level variations. The cases in which the difference between a local minimum and maximum corresponds to at least 100 km³ of water volume change have been chosen for a closer study of characteristic pathways of deep cyclones. The average duration of an LVC is about 40 days. During this time, 5-6 deep cyclones move along characteristic storm tracks. We obtained three main routes of deep cyclones associated with LVCs, but also with the climatology. One approaches from the west at about 58-62°N, passing the northern North Sea, Oslo, Sweden, and the Island of Gotland, while a second, less frequent one approaches from the west at about 65°N, crossing Scandinavia south-eastwards, passing the Sea of Bothnia and entering Finland. A third, very frequent one enters the study area north of Scotland, turning north-eastwards along the northern coast of Scandinavia. Thus, the conditions for an LVC to happen are a temporal clustering of deep cyclones in certain
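    The volume calculation described above can be illustrated with a small sketch: treating the Landsort level as a proxy for the basin-mean sea level, a volume change is the Baltic surface area times the change in level, and an LVC is a minimum-to-maximum excursion of at least 100 km³. The area value and function names are assumptions for illustration, and the filtering step of the original analysis is omitted.

```python
BALTIC_AREA_KM2 = 3.93e5  # approximate Baltic Sea surface area (assumption)

def volume_change_km3(eta_min_cm, eta_max_cm):
    """Volume change implied by a rise of the Baltic mean sea level,
    with Landsort sea level (cm) as a proxy for the basin mean."""
    d_eta_km = (eta_max_cm - eta_min_cm) * 1e-5  # cm -> km
    return BALTIC_AREA_KM2 * d_eta_km

def find_lvcs(eta_cm, threshold_km3=100.0):
    """Scan a (pre-filtered) daily sea-level series for local-minimum to
    local-maximum excursions exceeding the LVC threshold."""
    events = []
    i_min = 0
    for i in range(1, len(eta_cm)):
        if eta_cm[i] < eta_cm[i_min]:
            i_min = i
        dv = volume_change_km3(eta_cm[i_min], eta_cm[i])
        if dv >= threshold_km3:
            events.append((i_min, i, dv))
            i_min = i  # restart the search after a detected event
    return events
```

With this area, a 100 km³ inflow corresponds to a basin-mean rise of roughly 25 cm.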

  15. Mechanically Cooled Large-Volume Germanium Detector Systems for Nuclear Explosion Monitoring

    SciTech Connect

    Hull, Ethan L.; Pehl, Richard H.; Lathrop, James R.; Martin, Gregory N.; Mashburn, R. B.; Miley, Harry S.; Aalseth, Craig E.; Hossbach, Todd W.; Bowyer, Ted W.

    2006-09-21

    Compact, maintenance-free mechanical cooling systems are being developed to operate large volume (~570 cm³, ~3 kg, 140% or larger) germanium detectors for field applications. We are using a new generation of Stirling-cycle mechanical coolers to operate the very largest volume germanium detectors with no maintenance or liquid nitrogen requirements. The user will be able to leave these systems unplugged on the shelf until needed; the flip of a switch will bring a system to life in ~1 hour for measurements. The maintenance-free operating lifetime of these detector systems will exceed five years. These features are necessary for remote, long-duration, liquid-nitrogen-free deployment of large-volume germanium gamma-ray detector systems for Nuclear Explosion Monitoring (NEM). The Radionuclide Aerosol Sampler/Analyzer (RASA) will greatly benefit from the availability of such detectors by eliminating the need for liquid nitrogen at RASA sites while still allowing the very largest available germanium detectors to be utilized. The mechanically cooled germanium detector systems being developed here will provide the largest, most sensitive detectors possible for use with the RASA. To provide such systems, the appropriate technical fundamentals are being researched. Mechanical cooling of germanium detectors has historically been a difficult endeavor. The success or failure of mechanically cooled germanium detectors stems from three main technical issues: temperature, vacuum, and vibration. These factors affect one another; there is a particularly crucial relationship between vacuum and temperature. These factors will be studied experimentally, both separately and together, to ensure a solid understanding of the physical limitations each factor places on a practical mechanically cooled germanium detector system for field use. Using this knowledge, a series of mechanically cooled germanium detector prototype systems are being designed and fabricated.
Our collaborators

  16. Large-volume sample stacking for analysis of ethylenediaminetetraacetic acid by capillary electrophoresis.

    PubMed

    Zhu, Zhiwei; Zhang, Lifeng; Marimuthu, Arun; Yang, Zhaoguang

    2002-09-01

    A simple, quick, and sensitive capillary electrophoretic technique, large-volume sample stacking using the electroosmotic flow (EOF) pump (LVSEP), has been developed for determining ethylenediaminetetraacetic acid (EDTA) in drinking water for the first time. It is based on precapillary complexation of EDTA with Fe(III) ions, followed by large-volume sample stacking and direct UV detection at 258 nm. The curve of peak response versus concentration was linear from 5.0 to 600.0 microg/L and from 0.7 to 30.0 mg/L; the regression coefficients were 0.9988 and 0.9990, respectively. The detection limit of the current technique for EDTA analysis was 0.2 microg/L with an additional 10-fold preconcentration procedure, based on a signal-to-noise ratio of 3. Compared with the classical capillary zone electrophoresis (CE) method, the detection limit was improved about 1000-fold by using this LVSEP method. To the best of our knowledge, it represents the highest sensitivity for EDTA analysis via CE. Several drinking water samples were tested by this novel method with satisfactory results. PMID:12207295

  17. Controlled ice nucleation--Is it really needed for large-volume sperm cryopreservation?

    PubMed

    Saragusty, Joseph; Osmers, Jan-Hendrik; Hildebrandt, Thomas Bernd

    2016-04-15

    Controlled ice nucleation (CIN) is an integral stage of the slow freezing process when relatively large volumes (usually 1 mL or larger) of biological samples in suspension are involved. Without it, a sample will supercool to well below its melting point before ice crystals start forming, resulting in multiple damaging processes. In this study, we tested the hypothesis that when freezing large volumes by the directional freezing technique, a CIN stage is not needed. Semen samples collected from ten bulls were frozen in 2.5-mL HollowTubes in a split-sample manner, with and without a CIN stage. Thawed samples were evaluated for viability, acrosome integrity, and rate of normal morphology and, using a computer-aided sperm analysis system, for a wide range of motility parameters that were also evaluated after 3 hours of incubation at 37 °C. Analysis of the results found no difference between freezing with and without a CIN stage in any of the 29 parameters compared (P > 0.1 for all). This similarity was maintained through 3 hours of incubation at 37 °C. Possibly, because of its structure, the directional freezing device promotes continuous ice nucleation, so a specific CIN stage is no longer needed, thus reducing costs, energy use, and carbon footprint. PMID:26806291

  18. Broadband frequency ECR ion source concepts with large resonant plasma volumes

    SciTech Connect

    Alton, G.D.

    1995-12-31

    New techniques are proposed for enhancing the performance of ECR ion sources. The techniques are based on the use of high-power, variable-frequency, multiple-discrete-frequency, or broadband microwave radiation, derived from standard TWT technology, to effect large resonant "volume" ECR sources. The creation of a large ECR plasma "volume" permits coupling of more power into the plasma, resulting in the heating of a much larger electron population to higher energies, the effect of which is to produce higher charge state distributions and much higher intensities within a particular charge state than possible in present forms of the ECR ion source. If successful, these developments could significantly impact future accelerator designs and accelerator-based heavy-ion-research programs by providing multiply-charged ion beams with the energies and intensities required for nuclear physics research from existing ECR ion sources. The methods described in this article can be used to retrofit any ECR ion source predicated on B-minimum plasma confinement techniques.

  19. Generation of Diffuse Large Volume Plasma by an Ionization Wave from a Plasma Jet

    NASA Astrophysics Data System (ADS)

    Laroussi, Mounir; Razavi, Hamid

    2015-09-01

    Low temperature plasma jets emitted in ambient air are the product of fast ionization waves that are guided within a channel of a gas flow, such as helium. This guided ionization wave can be transmitted through a dielectric material and, under some conditions, can ignite a discharge behind the dielectric material. Here we present a novel way to produce large volume, diffuse, low pressure plasma inside a Pyrex chamber that does not have any electrodes or electrical energy directly applied to it. The diffuse plasma is ignited inside the chamber by a plasma jet located external to the chamber and physically and electrically unconnected to it. Instead, the plasma jet is simply brought into close proximity to the external wall/surface of the chamber or to a dielectric tubing connected to the chamber. The plasma thus generated is diffuse and large volume, with physical and chemical characteristics different from those of the external plasma jet that ignited it. By using a plasma jet we are thus able to "remotely" ignite volumetric plasma under controlled conditions. This novel method of "remote" generation of a low pressure, low temperature diffuse plasma can be useful for various applications, including material processing and biomedicine.

  20. Colloids Versus Albumin in Large Volume Paracentesis to Prevent Circulatory Dysfunction: Evidence-based Case Report.

    PubMed

    Widjaja, Felix F; Khairan, Paramita; Kamelia, Telly; Hasan, Irsan

    2016-04-01

    Large volume paracentesis may cause paracentesis-induced circulatory dysfunction (PICD). Albumin is recommended to prevent this abnormality, but albumin is expensive, and a cheaper alternative that prevents PICD would be valuable. This report aimed to compare albumin to colloids in preventing PICD. The search was performed in PubMed, Scopus, ProQuest, and Academic Health Complete from EBSCO with the keywords "ascites", "albumin", "colloid", "dextran", "hydroxyethyl starch", "gelatin", and "paracentesis induced circulatory dysfunction". Articles were limited to randomized clinical trials and meta-analyses, with the clinical question "In hepatic cirrhotic patients undergoing large volume paracentesis, are colloids as effective as albumin in preventing PICD?". We found one meta-analysis and four randomized clinical trials (RCTs). The meta-analysis showed that albumin was still superior, with an odds ratio of 0.34 (0.23-0.51). Three RCTs showed the same result, and one RCT showed that albumin was not superior to colloids. We conclude that colloids cannot substitute for albumin to prevent PICD, but colloids still have a role in patients who undergo paracentesis of less than five liters. PMID:27550886

  1. Diethylaminoethyl-cellulose clean-up of a large volume naphthenic acid extract.

    PubMed

    Frank, Richard A; Kavanagh, Richard; Burnison, B Kent; Headley, John V; Peru, Kerry M; Der Kraak, Glen Van; Solomon, Keith R

    2006-08-01

    The Athabasca oil sands of Alberta, Canada contain an estimated 174 billion barrels of bitumen. During oil sands refining processes, an extraction tailings mixture is produced that has been reported as toxic to aquatic organisms and is therefore collected in settling ponds on site. Investigation into the toxicity of these tailings pond waters has identified naphthenic acids (NAs) and their sodium salts as the major toxic components, and a multi-year study has been initiated to identify the principal toxic components within NA mixtures. Future toxicity studies require a large volume of a NA mixture; however, a well-defined bulk extraction technique is not available. This study investigated the use of a weak anion exchanger, diethylaminoethyl-cellulose (DEAE-cellulose), to remove the humic-like material present after collecting the organic acid fraction of oil sands tailings pond water. The NA extraction and clean-up procedure proved to be a fast and efficient method to process large volumes of tailings pond water, providing an extraction efficiency of 41.2%. The resulting concentrated NA solution had a composition that differed somewhat from that of oil sands fresh tailings, with a reduction in the abundance of lower molecular weight NAs being the most significant difference. This reduction was mainly due to the initial acidification of the tailings pond water. The DEAE-cellulose treatment had only a minor effect on the NA concentration, no noticeable effect on the NA fingerprint, and no significant effect on the mixture's toxicity towards Vibrio fischeri. PMID:16469358

  2. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, P.; Madnia, C. K.; Steinberger, C. J.; Frankel, S. H.; Vidoni, T. J.

    1991-01-01

    The main objective is to extend the boundaries within which large eddy simulations (LES) and direct numerical simulations (DNS) can be applied in computational analyses of high speed reacting flows. In the efforts related to LES, we were concerned with developing reliable subgrid closures for modeling the fluctuation correlations of scalar quantities in reacting turbulent flows. In the work on DNS, we focused our attention on further investigation of the effects of exothermicity in compressible turbulent flows. In our previous work, in the first year of this research, we considered only 'simple' flows. Currently, we are in the process of extending our analyses for the purpose of modeling more practical flows of current interest at LaRC. A summary of our accomplishments during the third six months of the research is presented.

  3. GCM Simulation of the Large-Scale North American Monsoon Including Water Vapor Tracer Diagnostics

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Walker, Gregory; Schubert, Siegfried D.; Sud, Yogesh; Atlas, Robert M. (Technical Monitor)

    2002-01-01

    The geographic sources of water for the large scale North American monsoon in a GCM (General Circulation Model) are diagnosed using passive constituent tracers of regional water sources (Water Vapor Tracers, WVT). The NASA Data Assimilation Office Finite Volume (FV) GCM was used to produce a 10-year simulation (1984 through 1993) including observed sea surface temperature. Regional and global WVT sources were defined to delineate the surface origin of water for precipitation in and around the North American Monsoon. The evolution of the mean annual cycle and the interannual variations of the monsoonal circulation will be discussed. Of special concern are the relative contributions of the local source (precipitation recycling) and remote sources of water vapor to the annual cycle and the interannual variation of monsoonal precipitation. The relationships between soil water, surface evaporation, precipitation and precipitation recycling will be evaluated.
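    At the level of a single precipitation budget, the water vapor tracer diagnostic described above amounts to attributing precipitation to tagged source regions; the local fraction is the precipitation recycling ratio. A minimal sketch follows (the function name, region names, and values are hypothetical, for illustration only):

```python
def source_contributions(tagged_precip):
    """Given precipitation tagged by water-vapor-tracer source region (mm),
    return each region's fractional contribution to total precipitation.
    The 'local' fraction is the precipitation recycling ratio."""
    total = sum(tagged_precip.values())
    return {region: p / total for region, p in tagged_precip.items()}

# Hypothetical monsoon-season precipitation totals by WVT source region (mm)
contrib = source_contributions(
    {"local": 120.0, "gulf_of_california": 60.0,
     "gulf_of_mexico": 90.0, "pacific": 30.0})
```

In the GCM itself each tracer is a passive constituent advected with the model's water vapor; this sketch only shows the final attribution step.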

  4. GCM Simulation of the Large-scale North American Monsoon Including Water Vapor Tracer Diagnostics

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Walker, Gregory; Schubert, Siegfried D.; Sud, Yogesh; Atlas, Robert M. (Technical Monitor)

    2001-01-01

    The geographic sources of water for the large-scale North American monsoon in a GCM are diagnosed using passive constituent tracers of regional water sources (Water Vapor Tracers, WVT). The NASA Data Assimilation Office Finite Volume (FV) GCM was used to produce a 10-year simulation (1984 through 1993) including observed sea surface temperature. Regional and global WVT sources were defined to delineate the surface origin of water for precipitation in and around the North American Monsoon. The evolution of the mean annual cycle and the interannual variations of the monsoonal circulation will be discussed. Of special concern are the relative contributions of the local source (precipitation recycling) and remote sources of water vapor to the annual cycle and the interannual variation of warm season precipitation. The relationships between soil water, surface evaporation, precipitation and precipitation recycling will be evaluated.

  5. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2002-02-25

    Data produced by large scale scientific simulations, experiments, and observations can easily reach terabytes in size. The ability to examine data sets of this magnitude, even in moderate detail, is problematic at best. Generally this scientific data consists of multivariate field quantities with complex inter-variable correlations and spatial-temporal structure. To provide scientists and engineers with the ability to explore and analyze such data sets, we are using a twofold approach. First, we model the data with the objective of creating a compressed yet manageable representation. Second, with that compressed representation, we provide the user with the ability to query the resulting approximation to obtain approximate yet sufficient answers; a process called ad hoc querying. This paper is concerned with a wavelet modeling technique that seeks to capture the important physical characteristics of the target scientific data. Our approach is driven by the compression, which is necessary for viable throughput, along with the end user requirements of the discovery process. Our work contrasts with existing research that applies wavelets to range querying, change detection, and clustering problems by working directly with a decomposition of the data. The difference in these procedures is due primarily to the nature of the data and the requirements of the scientists and engineers. Our approach directly uses the wavelet coefficients of the data to compress as well as query. We provide some background on the problem, describe how the wavelet decomposition is used to facilitate data compression, and show how queries are posed on the resulting compressed model. Results of this process are shown for several problems of interest, and we end with some observations and conclusions about this research.
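    The two-step approach described above (compress via a wavelet decomposition, then query the compressed model) can be sketched with a hand-rolled Haar transform. This is an illustration of the general technique, not the authors' implementation; real systems would use more sophisticated wavelets and keep the coefficients in a queryable index.

```python
import numpy as np

def haar_decompose(x):
    """Full Haar wavelet decomposition of a length-2^k signal,
    returned coarse-to-fine: [overall average, detail levels...]."""
    x = np.asarray(x, dtype=float)
    coeffs = []
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / 2.0
        det = (x[0::2] - x[1::2]) / 2.0
        coeffs.append(det)
        x = avg
    coeffs.append(x)          # overall average
    return coeffs[::-1]

def compress(coeffs, keep_fraction=0.1):
    """Zero out all but the largest-magnitude coefficients --
    the lossy compression step described in the abstract."""
    flat = np.concatenate(coeffs)
    k = max(1, int(keep_fraction * len(flat)))
    thresh = np.sort(np.abs(flat))[-k]
    return [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]

def reconstruct(coeffs):
    """Inverse Haar transform; approximate queries (averages, ranges)
    are posed against this reconstruction of the compressed model."""
    x = coeffs[0]
    for det in coeffs[1:]:
        out = np.empty(2 * len(det))
        out[0::2] = x + det
        out[1::2] = x - det
        x = out
    return x
```

Because the coarse average coefficient is usually among the largest, global aggregates survive aggressive thresholding, which is what makes ad hoc querying of the compressed model viable.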

  6. A scalable messaging system for accelerating discovery from large scale scientific simulations

    SciTech Connect

    Jin, Tong; Zhang, Fan; Parashar, Manish; Klasky, Scott A; Podhorszki, Norbert; Abbasi, Hasan

    2012-01-01

    Emerging scientific and engineering simulations running at scale on leadership-class High End Computing (HEC) environments are producing large volumes of data, which must be transported and analyzed before any insights can result from these simulations. The complexity and cost (in terms of time and energy) associated with managing and analyzing this data have become significant challenges and are limiting the impact of these simulations. Recently, data-staging approaches along with in-situ and in-transit analytics have been proposed to address these challenges by offloading I/O and/or moving data processing closer to the data. However, scientists continue to be overwhelmed by the large data volumes and data rates. In this paper we address this latter challenge. Specifically, we propose a highly scalable and low-overhead associative messaging framework that runs on the data staging resources within the HEC platform, and builds on the staging-based online in-situ/in-transit analytics to provide publish/subscribe/notification-type messaging patterns to the scientist. Rather than having to ingest and inspect the data volumes, this messaging system allows scientists to (1) dynamically subscribe to data events of interest, e.g., a simple data value, a complex function, or a simple reduction (max()/min()/avg()) of the data values in a certain region of the application domain being greater or less than a threshold value, or certain spatial/temporal data features or data patterns being detected; (2) define customized in-situ/in-transit actions that are triggered based on the events, such as data visualization or transformation; and (3) get notified when these events occur. The key contribution of this paper is a design and implementation that can support such a messaging abstraction at scale on high-end computing (HEC) systems with minimal overheads. We have implemented and deployed the messaging system on the Jaguar Cray XK6 machines at Oak Ridge National Laboratory and the
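    The subscribe/notify pattern described in items (1)-(3) can be sketched as a minimal in-memory event bus; class and method names are illustrative and do not reflect the framework's actual API, which runs distributed across staging nodes.

```python
from collections import defaultdict

class SimulationEventBus:
    """Minimal sketch of the publish/subscribe pattern in the abstract:
    subscribers register predicates over staged simulation data and
    actions that fire when a predicate holds."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, variable, predicate, action):
        """Register interest in a data event, e.g. a reduction of the
        values in a region exceeding a threshold."""
        self._subs[variable].append((predicate, action))

    def publish(self, variable, region_values):
        """Called by the staging layer as new in-transit data arrives;
        evaluates predicates and triggers the matching actions."""
        for predicate, action in self._subs[variable]:
            if predicate(region_values):
                action(variable, region_values)

# Example: notify when the max temperature in a region exceeds a threshold
bus = SimulationEventBus()
alerts = []
bus.subscribe("temperature",
              predicate=lambda vals: max(vals) > 900.0,
              action=lambda var, vals: alerts.append((var, max(vals))))
bus.publish("temperature", [850.0, 910.5, 720.0])  # predicate holds
bus.publish("temperature", [400.0, 300.0])         # predicate does not hold
```

The point of evaluating predicates on the staging resources, as the paper proposes, is that the scientist receives only event notifications instead of the full data stream.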

  7. Enrichment of diluted cell populations from large sample volumes using 3D carbon-electrode dielectrophoresis.

    PubMed

    Islam, Monsur; Natu, Rucha; Larraga-Martinez, Maria Fernanda; Martinez-Duarte, Rodrigo

    2016-05-01

    Here, we report on an enrichment protocol using carbon electrode dielectrophoresis to isolate and purify a targeted cell population from sample volumes up to 4 ml. We aim at trapping, washing, and recovering an enriched cell fraction that will facilitate downstream analysis. We used an increasingly diluted sample of yeast, 10^6-10^2 cells/ml, to demonstrate the isolation and enrichment of a few cells at increasing flow rates. A maximum average enrichment of 154.2 ± 23.7 times was achieved when the sample flow rate was 10 μl/min and yeast cells were suspended in a low electrical conductivity medium that maximizes dielectrophoretic trapping. A COMSOL Multiphysics model allowed for the comparison between experimental and simulation results. We discuss the discrepancies between these results and how the model can be further improved. PMID:27375816

  8. Large volume susy breaking with a solution to the decompactification problem

    NASA Astrophysics Data System (ADS)

    Faraggi, Alon E.; Kounnas, Costas; Partouche, Hervé

    2015-10-01

    We study heterotic ground states in which supersymmetry is broken by coupling the momentum and winding charges of two large extra dimensions to the R-charges of the supersymmetry generators. The large dimensions give rise to towers of heavy string thresholds that contribute to the running of the gauge couplings. In the general case, these contributions are proportional to the volume of the two large dimensions and invalidate the perturbative string expansion. The problem is evaded if the susy breaking sectors arise as a spontaneously broken phase of N = 4 → N = 2 → N = 0 supersymmetry, provided that N = 4 supersymmetry is restored on the boundary of the moduli space. We discuss the mechanism in the case of Z2 × Z2 orbifolds, which requires that the twisted sector that contains the large extra dimensions has no fixed points. We analyze the full string partition function and show that the twisted sectors distribute themselves in non-aligned N = 2 orbits, hence preserving the solution to the string decompactification problem. Remarkably, we find that the contribution to the vacuum energy from the N = 2 → N = 0 sectors is suppressed, and the only substantial contribution arises from the breaking of the N = 4 sector to N = 0.

  9. Large-eddy simulation coupled to mesoscale meteorological model for gas dispersion in an urban district

    NASA Astrophysics Data System (ADS)

    Michioka, T.; Sato, A.; Sada, K.

    2013-08-01

    A microscale large-eddy simulation (LES) model coupled to a mesoscale LES model is implemented to estimate ground concentrations, accounting for meteorological influences in an actual urban district. The microscale LES model is based on a finite volume method with an unstructured grid system to resolve the flow structure in a complex geometry. The Advanced Regional Prediction System (ARPS) is used for the mesoscale meteorological simulation. To evaluate the performance of the LES model, 1-h averaged concentrations are compared with those obtained by field measurements, which were conducted for tracer gas dispersion from a point source on the roof of a tall building in Tokyo. The concentrations obtained by the LES model without coupling to the mesoscale LES model are in quite good agreement with the wind-tunnel experimental data, but overestimate the 1-h averaged ground concentrations in the field measurements. On the other hand, the ground concentrations from the microscale LES model coupled to the mesoscale LES are widely distributed owing to large-scale turbulent motions generated by the mesoscale LES, and the concentrations are nearly equal to the concentrations from the field measurements.

  10. Large-eddy simulation of bubble-driven plume in stably stratified flow.

    NASA Astrophysics Data System (ADS)

    Yang, Di; Chen, Bicheng; Socolofsky, Scott; Chamecki, Marcelo; Meneveau, Charles

    2015-11-01

    The interaction between a bubble-driven plume and a stratified water column plays a vital role in many environmental and engineering applications. As the bubbles are released from a localized source, they induce a positive buoyancy flux that generates an upward plume. As the plume rises, it entrains ambient water, and when it reaches a higher elevation where the stratification-induced negative buoyancy is sufficient, a considerable fraction of the entrained fluid detrains, or peels, to form a downward outer plume and a lateral intrusion layer. In the case of multiphase plumes, the intrusion layer may also trap weakly buoyant particles (e.g., oil droplets in the case of a subsea accidental blowout). In this study, the complex plume dynamics is studied using large-eddy simulation (LES), with the flow field simulated by a hybrid pseudospectral/finite-difference scheme, and the bubble and dye concentration fields simulated by a finite-volume scheme. The spatial and temporal characteristics of the buoyant plume are studied, with a focus on the effects of different bubble buoyancy levels. The LES data provide useful mean plume statistics for evaluating the accuracy of 1-D engineering models for entrainment and peeling fluxes. Based on the insights learned from the LES, a new continuous peeling model is developed and tested. Study supported by the Gulf of Mexico Research Initiative (GoMRI).

  11. Challenges to large-scale simulations of permafrost freeze-thaw dynamics

    NASA Astrophysics Data System (ADS)

    Collier, N.; Bisht, G.; Kumar, J.

    2014-12-01

    In an effort to model the dynamics of the permafrost freeze and thaw process in the Alaskan tundra, we have implemented a finite volume method that approximates the evolution of a coupled surface/subsurface mass and energy balance within PFLOTRAN, an open source, state-of-the-art massively parallel subsurface flow and reactive transport code. While this system is studied in the literature at one scale, we encounter many undocumented pitfalls as we exercise the model at high resolution and force it with realistic datasets from the field sites. These realistic simulations for field sites near Barrow, Alaska expose the model to a wide range of moisture and thermal states that are not tested in published studies. For example, the conventional upwinding of the relative permeability used in the Darcy flux computation can yield flow into a frozen cell. We also find that infiltration, sources, and sinks must be carefully regulated, as flow into frozen portions of the domain, or out of dry or frozen regions, can produce unphysical states that cause the simulation to fail. Many straightforward remedies are not smooth and therefore introduce discontinuities into the Jacobian of the nonlinear residual. These difficulties represent a current hurdle to running large-scale permafrost dynamics simulations. We describe these challenges and present approaches to overcoming them in the pursuit of a scalable scheme.
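    The upwinding pitfall described in this abstract is easy to reproduce in one dimension. The sketch below is a hypothetical illustration, not PFLOTRAN code: conventional upwinding takes the relative permeability from the upwind cell, so a frozen downstream cell can still receive flow, while a harmonic average (one possible remedy, not necessarily the one PFLOTRAN adopts) shuts the flux off.

```python
# Hypothetical 1-D sketch (not PFLOTRAN code) of the upwinding pitfall:
# conventional upwinding takes the relative permeability kr from the
# upwind cell, so a frozen downstream cell (kr = 0) still receives flow.

def darcy_flux_upwind(p_left, p_right, kr_left, kr_right, k=1.0, mu=1.0):
    """Darcy flux from left to right with upwinded relative permeability."""
    dp = p_left - p_right                   # pressure drop driving the flow
    kr = kr_left if dp > 0 else kr_right    # take kr from the upwind cell
    return k * kr / mu * dp

# Left cell is liquid (kr = 1), right cell is frozen (kr = 0).
flux = darcy_flux_upwind(p_left=2.0, p_right=1.0, kr_left=1.0, kr_right=0.0)
assert flux > 0   # unphysical: water flows into the frozen cell

# A harmonic average of kr vanishes when either cell is frozen, shutting
# off flow into ice (one possible remedy, named here for illustration).
def darcy_flux_harmonic(p_left, p_right, kr_left, kr_right, k=1.0, mu=1.0):
    denom = kr_left + kr_right
    kr = 2.0 * kr_left * kr_right / denom if denom > 0 else 0.0
    return k * kr / mu * (p_left - p_right)

assert darcy_flux_harmonic(2.0, 1.0, 1.0, 0.0) == 0.0
```

    The trade-off, as the abstract hints, is smoothness: switching between upwind values is itself a non-smooth operation that can produce discontinuities in the Jacobian.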

  12. The Oligocene Lund Tuff, Great Basin, USA: a very large volume monotonous intermediate

    NASA Astrophysics Data System (ADS)

    Maughan, Larissa L.; Christiansen, Eric H.; Best, Myron G.; Grommé, C. Sherman; Deino, Alan L.; Tingey, David G.

    2002-03-01

    Unusual monotonous intermediate ignimbrites consist of phenocryst-rich dacite that occurs as very large volume (>1000 km³) deposits that lack systematic compositional zonation, comagmatic rhyolite precursors, and underlying plinian beds. They are distinct from countless, usually smaller volume, zoned rhyolite-dacite-andesite deposits that are conventionally believed to have erupted from magma chambers in which thermal and compositional gradients were established because of sidewall crystallization and associated convective fractionation. Despite their great volume, or because of it, monotonous intermediates have received little attention. Documentation of the stratigraphy, composition, and geologic setting of the Lund Tuff - one of four monotonous intermediate tuffs in the middle-Tertiary Great Basin ignimbrite province - provides insight into its unusual origin and, by implication, the origin of other similar monotonous intermediates. The Lund Tuff is a single cooling unit with normal magnetic polarity whose volume likely exceeded 3000 km³. It was emplaced 29.02±0.04 Ma in and around the coeval White Rock caldera, which has an unextended north-south diameter of about 50 km. The tuff is monotonous in that its phenocryst assemblage is virtually uniform throughout the deposit: plagioclase>quartz≈hornblende>biotite>Fe-Ti oxides≈sanidine>titanite, zircon, and apatite. However, ratios of phenocrysts vary by as much as an order of magnitude in a manner consistent with progressive crystallization in the pre-eruption chamber. A significant range in whole-rock chemical composition (e.g., 63-71 wt% SiO2) is poorly correlated with phenocryst abundance. These compositional attributes cannot have been caused wholly by winnowing of glass from phenocrysts during eruption, as has been suggested for the monotonous intermediate Fish Canyon Tuff. Pumice fragments are also crystal-rich, and chemically and mineralogically indistinguishable from bulk tuff. We postulate that

  13. M.E.T.R.O.-Apex Gaming Simulation, Volume 28 (OS/360 Version).

    ERIC Educational Resources Information Center

    Michigan Univ., Ann Arbor. Environmental Simulation Lab.

    Operator's instructions and technical support materials needed for processing the M.E.T.R.O.-APEX (Air Pollution Exercise) game decisions on an IBM 360 computer are compiled in this volume. M.E.T.R.O.-APEX is a computerized college and professional level "real world" simulation of a community with urban and rural problems, industrial activities,…

  14. IAQPC: INDOOR AIR QUALITY SIMULATOR FOR PERSONAL COMPUTERS: VOLUME 1. TECHNICAL MANUAL

    EPA Science Inventory

    The two-volume report describes the development of an indoor air quality simulator for personal computers (IAQPC), a program that addresses the problems of indoor air contamination. The program-- systematic, user-friendly, and computer-based--can be used by administrators and eng...

  15. IAQPC: INDOOR AIR QUALITY SIMULATOR FOR PERSONAL COMPUTERS: VOLUME 2. USER'S GUIDE

    EPA Science Inventory

    The two-volume report describes the development of an indoor air quality simulator for personal computers (IAQPC), a program that addresses the problems of indoor air contamination. The program-- systematic, user-friendly, and computer-based--can be used by administrators and eng...

  16. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, P.; Frankel, S. H.; Adumitroaie, V.; Sabini, G.; Madnia, C. K.

    1993-01-01

    The primary objective of this research is to extend current capabilities of Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high speed reacting flows. Our efforts in the first two years of this research were concentrated on a priori investigations of single-point Probability Density Function (PDF) methods for providing subgrid closures in reacting turbulent flows. In the efforts initiated in the third year, our primary focus has been on performing actual LES by means of PDF methods. The approach is based on assumed PDF methods, and we have performed extensive analysis of turbulent reacting flows by means of LES. This includes simulations of both three-dimensional (3D) isotropic compressible flows and two-dimensional reacting planar mixing layers. In addition to these LES analyses, work is in progress to assess the extent of validity of our assumed PDF methods. This assessment is done by making detailed comparisons with recent laboratory data in predicting the rate of reactant conversion in parallel reacting shear flows. This report provides a summary of our achievements for the first six months of the third year of this program.
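    The assumed (presumed) PDF closure mentioned in this abstract can be illustrated with a toy scalar: the filtered value of a nonlinear rate S(c) is obtained by integrating S against a beta PDF whose two parameters match the resolved mean and subgrid variance. Everything below (the rate S and the numbers) is invented for illustration; it is not the authors' code.

```python
import math

# Assumed-PDF closure sketch: close the filtered value of a nonlinear
# function S(c) of a scalar c in [0, 1] by integrating S against a
# presumed beta PDF with the given mean and (subgrid) variance.

def beta_params(mean, var):
    """Beta-distribution parameters (a, b) matching a mean and variance."""
    f = mean * (1.0 - mean) / var - 1.0
    return mean * f, (1.0 - mean) * f

def filtered_rate(S, mean, var, n=2000):
    """Midpoint-rule integral of S(c) * beta_pdf(c) over (0, 1)."""
    a, b = beta_params(mean, var)
    norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        c = (i + 0.5) * h
        pdf = norm * c ** (a - 1.0) * (1.0 - c) ** (b - 1.0)
        total += S(c) * pdf * h
    return total

S = lambda c: c * (1.0 - c)      # toy nonlinear "reaction rate", invented
mean, var = 0.5, 0.05
closed = filtered_rate(S, mean, var)
naive = S(mean)                  # evaluating S at the mean ignores variance
assert closed < naive            # subgrid fluctuations reduce the mean rate
```

    For this toy rate the closure is exactly mean·(1 − mean) − var, which is why evaluating S at the mean alone overestimates the filtered rate.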

  17. Large volume recycling of oceanic lithosphere over short time scales: geochemical constraints from the Caribbean Large Igneous Province

    NASA Astrophysics Data System (ADS)

    Hauff, F.; Hoernle, K.; Tilton, G.; Graham, D. W.; Kerr, A. C.

    2000-01-01

    isochron diagrams suggests that the age of separation of enriched and depleted components from the depleted MORB source mantle could have been ≤500 Ma before CLIP formation, and is interpreted to reflect the recycling time of the CLIP source. Mantle plume heads may provide a mechanism for transporting large volumes of possibly young recycled oceanic lithosphere residing in the lower mantle back into the shallow MORB source mantle.

  18. Calcium Isolation from Large-Volume Human Urine Samples for 41Ca Analysis by Accelerator Mass Spectrometry

    PubMed Central

    Miller, James J; Hui, Susanta K; Jackson, George S; Clark, Sara P; Einstein, Jane; Weaver, Connie M; Bhattacharyya, Maryka H

    2013-01-01

    Calcium oxalate precipitation is the first step in preparation of biological samples for 41Ca analysis by accelerator mass spectrometry. A simplified protocol for large-volume human urine samples was characterized, with statistically significant increases in ion current and decreases in interference. This large-volume assay minimizes cost and effort and maximizes time after 41Ca administration during which human samples, collected over a lifetime, provide 41Ca:Ca ratios that are significantly above background. PMID:23672965

  19. Efficient Voronoi volume estimation for DEM simulations of granular materials under confined conditions

    PubMed Central

    Frenning, Göran

    2015-01-01

    When the discrete element method (DEM) is used to simulate confined compression of granular materials, the need arises to estimate the void space surrounding each particle with Voronoi polyhedra. This entails recurring Voronoi tessellation with small changes in the geometry, resulting in a considerable computational overhead. To overcome this limitation, we propose a method with the following features:
    • A local determination of the polyhedron volume is used, which considerably simplifies implementation of the method.
    • A linear approximation of the polyhedron volume is utilised, with intermittent exact volume calculations when needed.
    • The method allows highly accurate volume estimates to be obtained at a considerably reduced computational cost. PMID:26150975
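    The second and third points amount to a lazy-relinearization pattern. The sketch below is schematic, with a one-dimensional stand-in for the Voronoi polyhedron volume and invented names and tolerances: volumes are updated linearly between exact recomputations, which are triggered only when the configuration has drifted too far from the last reference point.

```python
# Schematic sketch (details assumed, not from the paper): between exact
# tessellations, update each "polyhedron volume" with a first-order
# (linear) estimate, and fall back to an exact recomputation only when
# the accumulated geometric change grows too large.

class LazyVolume:
    def __init__(self, exact_fn, x0, tol=0.05):
        self.exact_fn = exact_fn        # expensive exact volume V(x)
        self.tol = tol                  # drift allowed before recomputing
        self._relinearize(x0)

    def _relinearize(self, x):
        eps = 1e-6
        self.x_ref = x
        self.v_ref = self.exact_fn(x)
        # finite-difference slope dV/dx at the reference configuration
        self.slope = (self.exact_fn(x + eps) - self.v_ref) / eps
        self.n_exact = getattr(self, "n_exact", 0) + 1

    def volume(self, x):
        if abs(x - self.x_ref) > self.tol:   # drifted too far: recompute
            self._relinearize(x)
        return self.v_ref + self.slope * (x - self.x_ref)

exact = lambda x: (1.0 + x) ** 3    # toy stand-in for an exact tessellation
lv = LazyVolume(exact, x0=0.0)
for x in [0.01 * i for i in range(20)]:  # small steps: mostly linear updates
    lv.volume(x)
assert lv.n_exact < 20   # far fewer exact evaluations than volume queries
```

    In a DEM setting the "exact" call would be a full Voronoi tessellation, so trading it for a linear update on most timesteps is where the claimed speedup comes from.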

  20. Large volume serial section tomography by Xe Plasma FIB dual beam microscopy.

    PubMed

    Burnett, T L; Kelley, R; Winiarski, B; Contreras, L; Daly, M; Gholinia, A; Burke, M G; Withers, P J

    2016-02-01

    Ga⁺ Focused Ion Beam-Scanning Electron Microscopes (FIB-SEM) have revolutionised the level of microstructural information that can be recovered in 3D by block face serial section tomography (SST), as well as enabling the site-specific removal of smaller regions for subsequent transmission electron microscope (TEM) examination. However, Ga⁺ FIB material removal rates limit the volumes and depths that can be probed to dimensions in the tens of microns range. Emerging Xe⁺ Plasma Focused Ion Beam-Scanning Electron Microscope (PFIB-SEM) systems promise faster removal rates. Here we examine the potential of the method for large volume serial section tomography as applied to bainitic steel and WC-Co hard metals. Our studies demonstrate that with careful control of milling parameters, precise automated serial sectioning can be achieved with low levels of milling artefacts at removal rates some 60× faster. Volumes that are hundreds of microns in dimension have been collected using fully automated SST routines in feasible timescales (<24 h), showing good grain orientation contrast and capturing microstructural features at the tens of nanometres to tens of microns scale. Accompanying electron backscatter diffraction (EBSD) maps show high indexing rates, suggesting low levels of surface damage. Further, under high-current Ga⁺ FIB milling, WC-Co is prone to amorphisation of WC surface layers and phase transformation of the Co phase, neither of which has been observed at PFIB currents as high as 60 nA at 30 kV. Xe⁺ PFIB dual beam microscopes promise to radically extend our capability for 3D tomography, 3D EDX, and 3D EBSD, as well as correlative tomography. PMID:26683814

  1. Large liquid rocket engine transient performance simulation system

    NASA Technical Reports Server (NTRS)

    Mason, J. R.; Southwick, R. D.

    1991-01-01

    A simulation system, ROCETS, was designed and developed to allow cost-effective computer predictions of liquid rocket engine transient performance. The system allows a user to generate a simulation of any rocket engine configuration using component modules stored in a library through high-level input commands. The system library currently contains 24 component modules, 57 sub-modules and maps, and 33 system routines and utilities. FORTRAN models from other sources can be operated in the system upon inclusion of interface information on comment cards. Operation of the simulation is simplified for the user by run, execution, and output processors. The simulation system provides steady-state trim balance, transient operation, and linear partial-derivative generation. The system utilizes a modern equation solver for efficient operation of the simulations. Transient integration methods include integral and differential forms of the trapezoidal, first-order Gear, and second-order Gear corrector equations. A detailed technology test bed engine (TTBE) model was generated to be used as the acceptance test of the simulation system. The general level of model detail was that reflected in the Space Shuttle Main Engine DTM. The model successfully obtained steady-state balance in main stage operation and simulated throttle transients, including engine starts and shutdown. A NASA FORTRAN control model was obtained, the ROCETS interface was installed in comment cards, and the model was operated with the TTBE model in closed-loop transient mode.
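    The corrector equations named above can be written out for the scalar test problem y' = −y. The sketch below is a generic illustration, not ROCETS code: each implicit corrector is solved by fixed-point iteration, and the two second-order formulas (trapezoidal and second-order Gear/BDF2) track the exact solution more closely than first-order Gear (backward Euler).

```python
import math

# Generic illustration (not ROCETS code) of the three corrector formulas
# named in the abstract, applied to y' = -y with y(0) = 1.

def f(y):
    return -y

def solve(corrector, guess, iters=60):
    """Fixed-point iteration for the implicit corrector equation y = g(y)."""
    y = guess
    for _ in range(iters):
        y = corrector(y)
    return y

def trapezoidal_step(y_n, h):
    # y_{n+1} = y_n + (h/2) (f(y_n) + f(y_{n+1}))
    return solve(lambda y: y_n + h / 2 * (f(y_n) + f(y)), y_n)

def gear1_step(y_n, h):
    # first-order Gear (backward Euler): y_{n+1} = y_n + h f(y_{n+1})
    return solve(lambda y: y_n + h * f(y), y_n)

def gear2_step(y_n, y_nm1, h):
    # second-order Gear (BDF2): y_{n+1} = (4 y_n - y_{n-1})/3 + (2h/3) f(y_{n+1})
    return solve(lambda y: (4 * y_n - y_nm1) / 3 + 2 * h / 3 * f(y), y_n)

h, steps = 0.1, 10
y_trap = y_g1 = 1.0
for _ in range(steps):
    y_trap = trapezoidal_step(y_trap, h)
    y_g1 = gear1_step(y_g1, h)

# BDF2 needs two starting values; bootstrap the first step with backward Euler
y_prev, y_curr = 1.0, gear1_step(1.0, h)
for _ in range(steps - 1):
    y_prev, y_curr = y_curr, gear2_step(y_curr, y_prev, h)

exact = math.exp(-h * steps)
assert abs(y_trap - exact) < abs(y_g1 - exact)   # 2nd order beats 1st order
assert abs(y_curr - exact) < abs(y_g1 - exact)
```

    In a real engine simulation f would be the full system residual and the implicit solve would use a Newton-type equation solver rather than plain fixed-point iteration.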

  2. Cenozoic ice volume and temperature simulations with a 1-D ice-sheet model

    NASA Astrophysics Data System (ADS)

    de Boer, B.; van de Wal, R. S. W.; Bintanja, R.; Lourens, L. J.; Tuenter, E.

    2009-04-01

    Ice volume and temperature over the past 35 million years are investigated with a 1-D ice-sheet model simulating ice sheets on both hemispheres. The simulations include two continental Northern Hemisphere (NH) ice sheets representative of glaciation on the two major continents, i.e. Eurasia (EAZ) and North America (NAM). Antarctic glaciation is simulated with two separate ice sheets, for West and East Antarctica respectively. The surface air temperature is reconstructed with an inventive inverse procedure forced with benthic δ18O data. The procedure linearly relates the temperature to the difference between the modelled and observed marine δ18O 100 years later. The derived temperature, representative of the NH, is used to run the ice-sheet model over 100 years, to obtain a mutually consistent record of marine δ18O, sea level, and temperature for the last 35 Ma of the Cenozoic. For Northern Hemisphere glaciations, results compare well with similar simulations performed with a much more comprehensive 3-D ice-sheet model: on average, differences are only 1.9 °C for temperature and 6.1 m for sea level. Results with ice sheets on both hemispheres are very similar. Most notably, the reconstructed ice volume as a function of temperature shows a transition from a climate dominated by Antarctic ice volume variation towards a climate controlled by NH ice sheets. The transition period falls within the range of interglacials (about -2 to +8 °C with respect to present day) and is thus characterized by smaller ice volume changes per °C. The relationship between temperature, sea level, and δ18O input is tested with an equilibrium experiment, which results in a linear and symmetric relationship for both temperature and total sea level, providing limited evidence for hysteresis, though transient behaviour is still important. Furthermore, the results compare rather well with other simulations of Antarctic ice volume and with observed sea level and deep-sea temperature.
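    The inverse procedure can be sketched with a toy forward model (all coefficients below are invented for illustration and are not the paper's): the NH temperature anomaly is nudged in proportion to the mismatch between modelled and observed benthic δ18O, and the ice-sheet model is then stepped forward by 100 years.

```python
# Toy sketch of the inverse d18O-forcing loop; every coefficient here is
# invented for illustration and does not come from the paper.

def d18o(T, V):
    """Invented linear proxy: heavier d18O for colder climate / more ice."""
    return -0.1 * T + 0.5 * V

def step_ice(V, T, dt=0.1):
    """Toy ice sheet: volume relaxes toward a temperature-set equilibrium."""
    V_eq = max(0.0, -T)              # colder (T < 0) means more ice
    return V + dt * (V_eq - V)

observed = [0.5] * 200 + [1.0] * 200   # an invented benthic d18O "record"
T, V, alpha = 0.0, 0.0, 2.0
for target in observed:
    # inverse step: if modelled d18O is too heavy (too cold / too much
    # ice), warm the climate; if too light, cool it
    T += alpha * (d18o(T, V) - target)
    V = step_ice(V, T)                 # 100-yr forward model step

# after spin-up the modelled d18O tracks the observed value
assert abs(d18o(T, V) - 1.0) < 0.05
```

    The loop yields mutually consistent temperature, ice volume, and δ18O series by construction, which is the essential property of the published procedure.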

  3. Design, simulation, and optimization of an RGB polarization independent transmission volume hologram

    NASA Astrophysics Data System (ADS)

    Mahamat, Adoum Hassan

    Volume phase holographic (VPH) gratings have been designed for use in many areas of science and technology, such as optical communication, medical imaging, spectroscopy, and astronomy. The goal of this dissertation is to design a volume phase holographic grating that provides diffraction efficiencies of at least 70% across the entire visible spectrum, and higher than 90% for red, green, and blue light, when the incident light is unpolarized. First, the complete design, simulation, and optimization of the volume hologram are presented. The optimization is done using a Monte Carlo analysis to solve for the index modulation needed to provide higher diffraction efficiencies. The solutions are determined by solving the diffraction efficiency equations of Kogelnik's two-wave coupled-wave theory. The hologram is further optimized using rigorous coupled-wave analysis to correct for effects of absorption omitted by Kogelnik's method. Second, the fabrication or recording process of the volume hologram is described in detail. The active region of the volume hologram is created by interference of two coherent beams within the thin film. Third, the experimental setup and measurements of key properties, including the diffraction efficiencies of the volume hologram and the thickness of the active region, are presented. Fourth, the polarimetric response of the volume hologram is investigated. The polarization study is developed to provide insight into the effect of the refractive index modulation on the polarization state and diffraction efficiency of incident light.
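    A minimal sketch of the optimization step, assuming Kogelnik's Bragg-matched efficiency for an unslanted transmission grating and unpolarized light (the average of the s and p efficiencies, with the p coupling reduced by cos 2θ). The wavelengths, Bragg angle, and parameter ranges below are illustrative assumptions, not the dissertation's actual values.

```python
import math, random

# Kogelnik's Bragg-matched diffraction efficiency for an unslanted
# transmission volume grating, plus a simple Monte Carlo search over the
# index modulation dn and film thickness d. All parameter values are
# illustrative assumptions.

def efficiency_unpolarized(dn, d, wavelength, theta):
    nu_s = math.pi * dn * d / (wavelength * math.cos(theta))
    nu_p = nu_s * math.cos(2 * theta)     # p-polarization coupling factor
    return 0.5 * (math.sin(nu_s) ** 2 + math.sin(nu_p) ** 2)

rgb = [450e-9, 550e-9, 650e-9]            # blue, green, red (m), assumed
theta = math.radians(15)                  # assumed Bragg angle inside film

random.seed(0)
best, best_eta = None, 0.0
for _ in range(20000):
    dn = random.uniform(0.005, 0.08)      # sampled index modulation
    d = random.uniform(2e-6, 20e-6)       # sampled film thickness (m)
    # score a candidate by its worst efficiency over the three colors
    eta = min(efficiency_unpolarized(dn, d, w, theta) for w in rgb)
    if eta > best_eta:
        best, best_eta = (dn, d), eta

assert best_eta > 0.7   # parameter sets serving all three colors exist
```

    Maximizing the worst-case efficiency over the three design wavelengths is one simple way to phrase the RGB requirement; a real design would also weight the full visible band and include absorption.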

  4. Detection of fast flying nanoparticles by light scattering over a large volume

    NASA Astrophysics Data System (ADS)

    Pettazzi, F.; Bäumer, S.; van der Donck, J.; Deutz, A.

    2015-06-01

    Light scattering is a well-known detection method applied in many different scientific and technological domains, including atmospheric physics, environmental control, and biology. It allows contactless and remote detection of sub-micron size particles. However, methods for detecting a single fast-moving particle smaller than 100 nm are lacking. In the present work we report a preliminary design study of an inline large-area detector for nanoparticles larger than 50 nm moving with velocities up to 100 m/s. The detector design is based on light scattering using commercially available components. The presented design takes into account all challenges connected to the inline implementation of the scattering technique in the system: the need for the detector to have a large field of view to cover a volume with a footprint commensurate with an area of 100 mm × 100 mm, the necessity to sense nanoparticles transported at high velocity, and the requirement of a large capture rate with a false-detection rate as low as one false positive per week. The impact of these stringent requirements on the expected sensitivity and performance of the device is analyzed by means of a dedicated performance model.

  5. A large volume cell for in situ neutron diffraction studies of hydrothermal crystallizations

    NASA Astrophysics Data System (ADS)

    Xia, Fang; Qian, Gujie; Brugger, Joël; Studer, Andrew; Olsen, Scott; Pring, Allan

    2010-10-01

    A hydrothermal cell with 320 ml internal volume has been designed and constructed for in situ neutron diffraction studies of hydrothermal crystallizations. The cell design adopts a dumbbell configuration assembled from standard commercial stainless steel components and a zero-scattering Ti-Zr alloy sample compartment. Fluid movement and heat transfer are driven simply by natural convection due to the temperature gradient along the fluid path, so that the temperature at the sample compartment can be stably sustained by heating the fluid in the bottom fluid reservoir. The cell can operate at temperatures up to 300 °C and pressures up to 90 bar and is suitable for studying reactions requiring a large volume of hydrothermal fluid to damp out the negative effect of changes in fluid composition during the course of the reactions. The capability of the cell was demonstrated by an investigation of the hydrothermal phase transformation from leucite (KAlSi2O6) to analcime (NaAlSi2O6·H2O) at 210 °C on the high-intensity powder diffractometer Wombat at ANSTO. The kinetics of the transformation was resolved by collecting diffraction patterns every 10 min, followed by Rietveld quantitative phase analysis. Classical Avrami/Arrhenius analysis gives an activation energy of 82.3±1.1 kJ mol⁻¹. Extrapolated estimates of the reaction rate under natural environments agree well with petrological observations.
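    The Avrami/Arrhenius step can be sketched as follows. Only the 82.3 kJ mol⁻¹ activation energy is taken from the abstract; the pre-exponential factor, temperatures, and rate constants below are invented so the example is self-contained.

```python
import math

# Sketch of an Avrami/Arrhenius analysis: the transformed fraction
# follows alpha(t) = 1 - exp(-(k t)^n), and the activation energy is the
# slope of ln k versus 1/T. Only Ea_true comes from the abstract; the
# pre-exponential factor and temperatures are invented.

R = 8.314                      # gas constant, J mol^-1 K^-1
Ea_true = 82.3e3               # J mol^-1 (the paper's fitted value)
A = 1.0e5                      # invented pre-exponential factor, s^-1

def k_arrhenius(T):
    return A * math.exp(-Ea_true / (R * T))

# synthetic "measured" rate constants at three reaction temperatures (K)
temps = [443.15, 463.15, 483.15]
ks = [k_arrhenius(T) for T in temps]

# least-squares slope of ln k against 1/T recovers -Ea/R
xs = [1.0 / T for T in temps]
ys = [math.log(k) for k in ks]
xbar = sum(xs) / len(xs)
ybar = sum(ys) / len(ys)
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
Ea_fit = -slope * R
assert abs(Ea_fit - Ea_true) < 1.0   # exact for noise-free synthetic input

# Avrami form with exponent n = 1: time to 50% transformation at 210 C
k210 = k_arrhenius(483.15)
t_half = math.log(2.0) / k210
assert abs((1.0 - math.exp(-k210 * t_half)) - 0.5) < 1e-12
```

    In the actual analysis the rate constants would come from Avrami fits to the Rietveld-derived phase fractions at each temperature, with uncertainties propagated into the ±1.1 kJ mol⁻¹ error bar.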

  6. Large-Volume Resonant Microwave Discharge for Plasma Cleaning of a CEBAF 5-Cell SRF Cavity

    SciTech Connect

    J. Mammosser, S. Ahmed, K. Macha, J. Upadhyay, M. Nikolić, S. Popović, L. Vušković

    2012-07-01

    We report preliminary results on plasma generation in a 5-cell CEBAF superconducting radio-frequency (SRF) cavity for the application of cavity interior surface cleaning. CEBAF currently has ~300 of these five-cell cavities installed in the Jefferson Lab accelerator, which are mostly limited by cavity surface contamination. The development of an in-situ cavity surface cleaning method utilizing a resonant microwave discharge could lead to significant CEBAF accelerator performance improvement. This microwave discharge is currently being used for the development of a set of plasma cleaning procedures targeted at the removal of various organic, metal, and metal oxide impurities. These contaminants are responsible for the increase of surface resistance and the reduction of RF performance in installed cavities. The CEBAF five-cell cavity volume is ~0.5 m³, which places the discharge in the category of large-volume plasmas. The CEBAF cavity has cylindrical symmetry, but its elliptical shape and transversal power coupling make it an unusual plasma application, which requires special consideration of microwave breakdown. Our preliminary study includes microwave breakdown and optical spectroscopy, which were used to define the operating pressure range and the rate of removal of organic impurities.

  7. Lotung large-scale seismic test strong motion records. Volume 1, General description: Final report

    SciTech Connect

    Not Available

    1992-03-01

    The Electric Power Research Institute (EPRI), in cooperation with the Taiwan Power Company (TPC), constructed two models (1/4 scale and 1/12 scale) of a nuclear plant concrete containment structure at a seismically active site in Lotung, Taiwan. Extensive instrumentation was deployed to record both structural and ground responses during earthquakes. The experiment, generally referred to as the Lotung Large-Scale Seismic Test (LSST), was used to gather data for soil-structure interaction (SSI) analysis method evaluation and validation as well as for site ground response investigation. A number of earthquakes having local magnitudes ranging from 4.5 to 7.0 have been recorded at the LSST site since the completion of the test facility in September 1985. This report documents the earthquake data, both raw and processed, collected from the LSST experiment. Volume 1 of the report provides general information on site location, instrument types and layout, data acquisition and processing, and data file organization. The recorded data are described chronologically in subsequent volumes of the report.

  8. A large volume uniform plasma generator for the experiments of electromagnetic wave propagation in plasma

    SciTech Connect

    Yang Min; Li Xiaoping; Xie Kai; Liu Donglin; Liu Yanming

    2013-01-15

    A large volume uniform plasma generator is proposed for experiments on electromagnetic (EM) wave propagation in plasma, to reproduce a 'blackout' phenomenon of long duration in an ordinary laboratory environment. The plasma generator achieves a controllable, approximately uniform plasma in a volume of 260 mm × 260 mm × 180 mm without magnetic confinement. The plasma is produced by glow discharge, and a special discharge structure is built to provide a steady, approximately uniform plasma environment in the electromagnetic wave propagation path without any other barriers. In addition, the electron density and luminosity distributions of the plasma under different discharge conditions were diagnosed and experimentally investigated. Both the electron density and the plasma uniformity are directly proportional to the input power and roughly inversely proportional to the gas pressure in the chamber. Furthermore, experiments on electromagnetic wave propagation in plasma were conducted in this generator. Blackout phenomena at the GPS frequency are observed under this system, and the measured attenuation curve is in reasonable agreement with the theoretical one, which suggests the effectiveness of the proposed method.

  9. Alveolar albumin leakage during large tidal volume ventilation and surfactant dysfunction.

    PubMed

    Liu, J M; Evander, E; Zhao, J; Wollmer, P; Jonson, B

    2001-07-01

    We have previously observed that detergent given as an aerosol and large tidal volume ventilation (LTVV) promote lung injury through an additive effect on alveolocapillary barrier function. The surfactant system may be further damaged if protein leaks into the alveoli. The aim was to study the effect of detergent and LTVV on alveolar leakage of albumin, and also the effect of detergent on the surface activity of lung washings and lung tissue extracts. Technetium-99m-labelled human serum albumin was given intravenously. Alveolar leakage of albumin was measured after perturbing the surfactant system with the detergent dioctyl sodium sulfosuccinate, either singly or in combination with LTVV. Four groups of rabbits were studied after 3 h of experimental ventilation. Surface tension measurements of tissue extracts, lung mechanics, and gas exchange did not show any differences between groups. Wet lung weight and albumin leakage were significantly increased in the two groups subjected to LTVV compared with the groups given normal tidal volume ventilation. Low doses of detergent did not affect the surface activity of lung tissue extracts or alveolar leakage of albumin. LTVV increased alveolar leakage of albumin and produced oedema. No additive effect was seen when detergent and LTVV were combined. PMID:11442575

  10. Detection and Isolation of H5N1 Influenza virus from Large Volumes of Natural Water

    PubMed Central

    Khalenkov, Alexey; Laver, W. Graeme; Webster, Robert G.

    2009-01-01

    Various species of aquatic or wetlands birds can be the natural reservoir of avian influenza A viruses of all hemagglutinin (HA) subtypes. Shedding of the virus into water leads to transmission between waterfowl and is a major threat for epidemics in poultry and pandemics in humans. Concentrations of the influenza virus in natural water reservoirs are often too low to be detected by most methods. The procedure was designed to detect low concentrations of the influenza virus in large volumes of water without the need for costly installations and reagents. The virus was adsorbed onto formalin-fixed erythrocytes and subsequently isolated in chicken embryos. Sensitivity of the method was determined using a reverse-genetic H5N1 virus. A concentration as low as 0.03 of the 50% egg infection dose per milliliter (EID50/ml) of the initial volume of water was effectively detected. The probability of detection was ∼13%, which is comparable to that of detecting the influenza virus M-gene by PCR amplification. The method can be used by field workers, ecologists, ornithologists, and researchers who need a simple method to isolate H5N1 influenza virus from natural reservoirs. The detection and isolation of virus in embryonated chicken eggs may help epidemiologic, genetic, and vaccine studies. PMID:18325605

  11. Development of a Mathematical Dynamic Simulation Model for the New Motion Simulator Used for the Large Space Simulator at ESTEC

    NASA Astrophysics Data System (ADS)

    Messing, Rene

    2012-07-01

    To simulate environmental space conditions for spacecraft qualification testing, the European Space Agency (ESA) uses a Large Space Simulator (LSS) in its Test Centre in Noordwijk, the Netherlands. In the LSS a motion system is used to orient a spacecraft of up to five tons with respect to an artificial solar beam. The existing motion system will be replaced by a new one. The new motion system shall be able to orient a spacecraft, defined by its elevation and azimuth angles, and provide an eclipse simulation (continuous spinning) around the spacecraft rotation axis. The development of the new motion system has been contracted to APCO Technologies in Switzerland. In addition to the design development done by the contractor, the Engineering section of the ESTEC Test Centre is in parallel developing a mathematical model simulating the dynamic behaviour of the system. During the preliminary design, the model shall serve to verify the selection of the drive units and to define the specimen trajectory speed and acceleration profiles. In the further design phase it shall verify the dynamic response, at the spacecraft mounting interface of the unloaded system, against the requirements. In the future it shall predict the dynamic responses of the implemented system for different spacecraft mounted and operated on it. The paper gives a brief description of the investment history and design development of the new motion system for the LSS, and then describes the different development steps which are foreseen and those which have already been implemented in the mathematical simulation model.

  12. Optical measurements of hydrocarbons emitted from a simulated crevice volume in an engine

    SciTech Connect

    Medina, S.C.; Green, R.M.; Smith, J.R.

    1984-01-01

    The process of hydrocarbon emission from an engine crevice was simulated in an operating research engine by the introduction of a small tube into the combustion chamber. This simulated crevice volume was used to determine the fate of unburned hydrocarbons that interact with the crevice. Shadowgraph photography and spontaneous Raman spectroscopy were used to determine flow patterns, temperatures, and hydrocarbon concentrations 1 mm from the tube opening. Hydrocarbon species were first detected at the tube exit late in the expansion stroke, long after the start of outflow from the simulation volume. A flame was never observed near the tube exit. Unburned hydrocarbons exiting the tube did not undergo rapid oxidation at temperatures up to 1400 Kelvins. 21 references, 11 figures.

  14. Comparison between voxelized, volumized and analytical phantoms applied to radiotherapy simulation with Monte Carlo.

    PubMed

    Abella, V; Miro, R; Juste, B; Verdu, G

    2009-01-01

    The purpose of this paper is to provide a comparison between the different methods utilized for building anthropomorphic phantoms for radiotherapy treatment plans. A simplified model of the Snyder head phantom was used to construct analytical, voxelized, and volumized phantoms, using a segmentation program and different algorithms programmed in Matlab. Irradiation of the resulting phantoms was simulated with the MCNP5 (Monte Carlo N-Particle, version 5) transport code, and the calculations are presented as particle flux maps inside the phantoms, obtained with the FMESH superimposed mesh tally. The different variables involved in the simulation were analyzed, such as particle flux, MCNP standard deviation, and real simulation CPU time cost. In the end, the volumized model had the largest computation time and the largest discrepancies in the particle flux distribution. PMID:19964509

  15. Measurement of the Velocity of Neutrinos from the CNGS Beam with the Large Volume Detector

    NASA Astrophysics Data System (ADS)

    Agafonova, N. Yu.; Aglietta, M.; Antonioli, P.; Ashikhmin, V. V.; Bari, G.; Bertoni, R.; Bressan, E.; Bruno, G.; Dadykin, V. L.; Fulgione, W.; Galeotti, P.; Garbini, M.; Ghia, P. L.; Giusti, P.; Kemp, E.; Mal'gin, A. S.; Miguez, B.; Molinario, A.; Persiani, R.; Pless, I. A.; Ryasny, V. G.; Ryazhskaya, O. G.; Saavedra, O.; Sartorelli, G.; Shakyrianova, I. R.; Selvi, M.; Trinchero, G. C.; Vigorito, C.; Yakushev, V. F.; Zichichi, A.; Razeto, A.

    2012-08-01

We report the measurement of the time of flight of ∼17 GeV ν_μ on the CNGS baseline (732 km) with the Large Volume Detector (LVD) at the Gran Sasso Laboratory. The CERN-SPS accelerator was operated from May 10th to May 24th 2012 with a tightly bunched beam structure to allow the velocity of neutrinos to be accurately measured on an event-by-event basis. LVD detected 48 neutrino events associated with the beam, with high absolute time accuracy. These events allow us to establish the following limit on the difference between the neutrino speed and the speed of light: −3.8×10^−6 < (v_ν − c)/c < 3.1×10^−6 (at 99% C.L.). This value is an order of magnitude lower than previous direct measurements.
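For orientation, the scale of such a limit can be sketched numerically: on the 732 km CNGS baseline, an arrival-time offset of ~10 ns corresponds to a relative speed deviation of a few parts in 10⁶. This is a rough illustration of the kinematics only, not the collaboration's analysis:

```python
# Relative neutrino-speed deviation implied by an arrival-time offset dt
# on the CNGS baseline (illustrative sketch, not the LVD analysis).
C = 299_792_458.0          # speed of light, m/s
BASELINE = 732_000.0       # CERN -> Gran Sasso, m (approximate)

def speed_deviation(dt_seconds):
    """(v_nu - c)/c for a neutrino arriving dt_seconds early (dt > 0)."""
    tof_light = BASELINE / C                      # light time of flight, ~2.44 ms
    v_nu = BASELINE / (tof_light - dt_seconds)    # implied neutrino speed
    return (v_nu - C) / C

dev = speed_deviation(10e-9)   # a 10 ns early arrival: about 4.1e-6
```

For small offsets this reduces to dt divided by the ~2.44 ms light time of flight, which is why nanosecond-level absolute timing is what sets the 10⁻⁶-level sensitivity quoted above.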

  16. Isolation of organic acids from large volumes of water by adsorption chromatography

    USGS Publications Warehouse

    Aiken, George R.

    1984-01-01

The concentration of dissolved organic carbon in most natural waters ranges from 1 to 20 milligrams of carbon per liter, of which approximately 75 percent is organic acids. These acids can be chromatographically fractionated into hydrophobic organic acids, such as humic substances, and hydrophilic organic acids. To study any of these organic acids effectively, they must be isolated from other organic and inorganic species and concentrated. Usually, large volumes of water must be processed to obtain sufficient quantities of material, and adsorption chromatography on synthetic macroporous resins has proven to be a particularly effective method for this purpose. The use of the nonionic Amberlite XAD-8 and Amberlite XAD-4 resins and the anion-exchange resin Duolite A-7 for isolating and concentrating organic acids from water is presented.

  17. Measurement of the velocity of neutrinos from the CNGS beam with the large volume detector.

    PubMed

    Agafonova, N Yu; Aglietta, M; Antonioli, P; Ashikhmin, V V; Bari, G; Bertoni, R; Bressan, E; Bruno, G; Dadykin, V L; Fulgione, W; Galeotti, P; Garbini, M; Ghia, P L; Giusti, P; Kemp, E; Mal'gin, A S; Miguez, B; Molinario, A; Persiani, R; Pless, I A; Ryasny, V G; Ryazhskaya, O G; Saavedra, O; Sartorelli, G; Shakyrianova, I R; Selvi, M; Trinchero, G C; Vigorito, C; Yakushev, V F; Zichichi, A; Razeto, A

    2012-08-17

We report the measurement of the time of flight of ∼17 GeV ν_μ on the CNGS baseline (732 km) with the Large Volume Detector (LVD) at the Gran Sasso Laboratory. The CERN-SPS accelerator was operated from May 10th to May 24th 2012 with a tightly bunched beam structure to allow the velocity of neutrinos to be accurately measured on an event-by-event basis. LVD detected 48 neutrino events associated with the beam, with high absolute time accuracy. These events allow us to establish the following limit on the difference between the neutrino speed and the speed of light: −3.8×10^−6 < (v_ν − c)/c < 3.1×10^−6 (at 99% C.L.). This value is an order of magnitude lower than previous direct measurements. PMID:23006352

  18. Monte Carlo calculations of the HPGe detector efficiency for radioactivity measurement of large volume environmental samples.

    PubMed

    Azbouche, Ahmed; Belgaid, Mohamed; Mazrou, Hakim

    2015-08-01

A fully detailed Monte Carlo geometrical model of a High Purity Germanium detector with a ¹⁵²Eu source, packed in a Marinelli beaker, was developed for routine analysis of large volume environmental samples. The model parameters, in particular the dead-layer thickness, were then adjusted by means of a specific irradiation configuration together with a fine-tuning procedure. Thereafter, the calculated efficiencies were compared to those measured for standard samples containing a ¹⁵²Eu source in both grass and resin matrices packed in a Marinelli beaker. This comparison showed good agreement between experiment and Monte Carlo results, confirming the consistency of the geometrical computational model proposed in this work. Finally, the computational model was applied successfully to determine the ¹³⁷Cs distribution in a soil matrix. This application yielded instructive results, highlighting in particular the erosion and accumulation zones of the studied site. PMID:25982445
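The core of any such efficiency calculation is sampling emission directions and counting the fraction that reach the detector. A minimal stdlib-only sketch of that idea — a bare geometric (solid-angle) estimate for a point source on the crystal axis, not the paper's full MCNP model with attenuation, dead layer, and Marinelli geometry:

```python
import math
import random

def geometric_efficiency(source_height_cm, crystal_radius_cm, n=200_000, seed=1):
    """Monte Carlo estimate of the fraction of isotropically emitted photons
    whose direction intersects a disc-shaped detector face of radius
    crystal_radius_cm, centred on the axis at distance source_height_cm."""
    rng = random.Random(seed)
    # Cosine of the half-angle subtended by the crystal face at the source.
    cos_max = source_height_cm / math.hypot(source_height_cm, crystal_radius_cm)
    hits = 0
    for _ in range(n):
        cos_theta = rng.uniform(-1.0, 1.0)   # isotropic emission
        if cos_theta > cos_max:
            hits += 1
    return hits / n

# Analytic check for this geometry: solid-angle fraction = (1 - cos_max)/2.
```

A full model replaces the geometric hit test with photon transport through the sample matrix, beaker, and detector, which is where matrix effects such as the grass/resin difference discussed above enter.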

  19. A large volume 2000 MPa air source for the radiatively driven hypersonic wind tunnel

    SciTech Connect

    Constantino, M

    1999-07-14

An ultra-high pressure air source for a hypersonic wind tunnel for fluid dynamics and combustion physics and chemistry research and development must provide a 10 kg/s pure air flow for more than 1 s at a specific enthalpy of more than 3000 kJ/kg. The nominal operating pressure and temperature condition for the air source is 2000 MPa and 900 K. A radial array of variable radial support intensifiers connected to an axial manifold provides an arbitrarily large total high-pressure volume. This configuration also provides solutions to cross-bore stress concentrations and the decrease in material strength with temperature. Keywords: hypersonic, high pressure, air, wind tunnel, ground testing.

  20. Aerodynamics of the Large-Volume, Flow-Through Detector System. Final report

    SciTech Connect

    Reed, H.; Saric, W.; Laananen, D.; Martinez, C.; Carrillo, R.; Myers, J.; Clevenger, D.

    1996-03-01

The Large-Volume Flow-Through Detector System (LVFTDS) was designed to monitor alpha radiation from Pu, U, and Am in mixed-waste incinerator offgases; however, it can be adapted to other important monitoring uses that span a number of potential markets, including site remediation, indoor air quality, radon testing, and mine shaft monitoring. The goal of this effort was to provide mechanical design information for installation of the LVFTDS in an incinerator, with emphasis on its ability to withstand the high temperatures and high flow rates expected. The work was successfully carried out in three stages: calculation of the pressure drop through the system, materials testing to determine surrogate materials for wind-tunnel testing, and wind-tunnel testing of an actual configuration.
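Duct pressure-drop calculations of the kind named in the first stage typically follow a relation of the Darcy-Weisbach form. A sketch with hypothetical values (the report's actual correlations and operating numbers are not given here):

```python
def pressure_drop(friction_factor, length_m, diameter_m, density_kg_m3, velocity_m_s):
    """Darcy-Weisbach pressure drop, Pa:
    dp = f * (L/D) * rho * v^2 / 2."""
    return (friction_factor * (length_m / diameter_m)
            * density_kg_m3 * velocity_m_s ** 2 / 2.0)

# Hypothetical duct: f = 0.02, L = 10 m, D = 0.1 m, air at 1.2 kg/m^3, 20 m/s.
dp = pressure_drop(0.02, 10.0, 0.1, 1.2, 20.0)   # about 480 Pa
```

At the elevated temperatures mentioned above, the density and viscosity (hence the friction factor) change substantially, which is why the materials and wind-tunnel stages were needed to validate such estimates.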

  1. Efficient Coalescent Simulation and Genealogical Analysis for Large Sample Sizes

    PubMed Central

    Kelleher, Jerome; Etheridge, Alison M; McVean, Gilean

    2016-01-01

A central challenge in the analysis of genetic variation is to provide realistic genome simulation across millions of samples. Present-day coalescent simulations do not scale well, or use approximations that fail to capture important long-range linkage properties. Analysing the results of simulations also presents a substantial challenge, as current methods to store genealogies consume a great deal of space, are slow to parse and do not take advantage of shared structure in correlated trees. We solve these problems by introducing sparse trees and coalescence records as the key units of genealogical analysis. Using these tools, exact simulation of the coalescent with recombination for chromosome-sized regions over hundreds of thousands of samples is possible, and substantially faster than present-day approximate methods. We can also analyse the results orders of magnitude more quickly than with existing methods. PMID:27145223
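The stochastic process being simulated is the standard (Kingman) coalescent: with k lineages, the waiting time to the next coalescence is exponential with rate k(k−1)/2. A toy stdlib-only sketch of that waiting-time structure — the paper's contribution is doing this, with recombination, efficiently at scale, which this deliberately does not attempt:

```python
import random

def sample_tmrca(n_samples, rng):
    """Time (in units of N_e generations) to the most recent common
    ancestor of n_samples lineages under the Kingman coalescent."""
    t = 0.0
    k = n_samples
    while k > 1:
        rate = k * (k - 1) / 2.0        # pairwise coalescence rate
        t += rng.expovariate(rate)      # exponential waiting time
        k -= 1                          # one pair coalesces
    return t

rng = random.Random(42)
mean_tmrca = sum(sample_tmrca(20, rng) for _ in range(5000)) / 5000
# Theory: E[T_MRCA] = 2*(1 - 1/n); for n = 20 that is 1.9.
```

Scaling this naive loop to hundreds of thousands of samples with recombination is exactly where the sparse-tree and coalescence-record data structures described above come in.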

  2. Secure Large-Scale Airport Simulations Using Distributed Computational Resources

    NASA Technical Reports Server (NTRS)

    McDermott, William J.; Maluf, David A.; Gawdiak, Yuri; Tran, Peter; Clancy, Dan (Technical Monitor)

    2001-01-01

To fully conduct research that will support the far-term concepts, technologies, and methods required to improve the safety of air transportation, a simulation environment of the requisite degree of fidelity must first be in place. The Virtual National Airspace Simulation (VNAS) will provide the underlying infrastructure necessary for such a simulation system. Aerospace-specific knowledge-management services, such as intelligent data-integration middleware, will support the management of information associated with this complex and critically important operational environment. This simulation environment, in conjunction with a distributed network of supercomputers and high-speed network connections to aircraft and to Federal Aviation Administration (FAA), airline, and other data sources, will provide the capability to continuously monitor and measure operational performance against expected performance. The VNAS will also provide the tools to use this performance baseline to obtain a perspective of what is happening today and of the potential impact of proposed changes before they are introduced into the system.

  3. All-speed Roe scheme for the large eddy simulation of homogeneous decaying turbulence

    NASA Astrophysics Data System (ADS)

    Li, Xue-song; Li, Xin-liang

    2016-01-01

As a type of shock-capturing scheme, the traditional Roe scheme fails in large eddy simulation (LES) because it cannot reproduce important turbulent characteristics, such as the well-known k^(-5/3) spectral law, as a consequence of its large numerical dissipation. In this work, the Roe scheme is divided into five parts, namely ξ, δUp, δpp, δUu, and δpu, which denote basic upwind dissipation, pressure-difference-driven modification of interface fluxes, pressure-difference-driven modification of pressure, velocity-difference-driven modification of interface fluxes, and velocity-difference-driven modification of pressure, respectively. The role of each part in the LES of homogeneous decaying turbulence at low Mach number is then investigated. Results show that the parts δUu, δpp, and δUp have little effect on the LES; they are nevertheless integral to computational stability, especially δUp. The large numerical dissipation is due to ξ and δpu, each of which features a larger dissipation than the sub-grid scale model. On this basis, an improved all-speed Roe scheme for LES is proposed. This scheme provides satisfactory LES results even on coarse grids with the usually adopted second-order reconstructions for the finite volume method.
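The "basic upwind dissipation" part ξ is easiest to see in a scalar setting. A hedged sketch for the Burgers flux f(u) = u²/2 — a scalar analogue that isolates the |a|·Δu dissipation term, not the authors' five-part decomposition of the Euler fluxes:

```python
def roe_flux_burgers(uL, uR):
    """Roe interface flux for Burgers' equation, f(u) = u^2/2:
    central average of the fluxes minus an upwind dissipation term
    |a| * (uR - uL) / 2, with a the Roe-averaged wave speed."""
    f = lambda u: 0.5 * u * u
    a = 0.5 * (uL + uR)                  # Roe average: f(uR)-f(uL) = a*(uR-uL)
    central = 0.5 * (f(uL) + f(uR))
    dissipation = 0.5 * abs(a) * (uR - uL)
    return central - dissipation

# Right-moving data (a > 0) reduces to pure upwinding: flux = f(uL).
```

In smooth turbulence the jumps (uR − uL) never vanish at finite resolution, so this dissipation term keeps draining resolved kinetic energy — the scalar caricature of why ξ overwhelms the subgrid model in the LES described above.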

  4. A finite volume solver for three dimensional debris flow simulations based on a single calibration parameter

    NASA Astrophysics Data System (ADS)

    von Boetticher, Albrecht; Turowski, Jens M.; McArdell, Brian; Rickenmann, Dieter

    2016-04-01

Debris flows are frequent natural hazards that cause massive damage. A wide range of debris flow models try to cover the complex flow behavior that arises from the inhomogeneous material mixture of water with clay, silt, sand, and gravel. The energy dissipation between moving grains depends on grain collisions and tangential friction, and the viscosity of the interstitial fine-material suspension depends on the shear gradient. A rheology description therefore needs to be sensitive to the local pressure and shear rate, making the three-dimensional flow structure a key issue for flows in complex terrain. Furthermore, the momentum exchange between the granular and fluid phases should account for the presence of larger particles. We model the fine-material suspension with a Herschel-Bulkley rheology law, and represent the gravel with the Coulomb-viscoplastic rheology of Domnik & Pudasaini (Domnik et al. 2013). Both composites are described by two phases that can mix; a third phase representing the air is kept separate to capture the free surface. The fluid dynamics are solved in three dimensions using the finite volume open-source code OpenFOAM. Computational costs are kept reasonable by using the Volume of Fluid method to solve only one phase-averaged system of Navier-Stokes equations. The Herschel-Bulkley parameters are modeled as a function of water content, volumetric solid concentration of the mixture, clay content and its mineral composition (Coussot et al. 1989, Yu et al. 2013). The gravel-phase properties needed for the Coulomb-viscoplastic rheology are defined by the angle of repose of the gravel. In addition to this basic setup, larger grains and the corresponding grain collisions can be introduced by a coupled Lagrangian particle simulation. Based on the local Savage number, a diffusive term in the gravel phase can activate phase separation. The resulting model can reproduce the sensitivity of the debris flow to water content and channel bed roughness, as
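The Herschel-Bulkley law used above for the fine-material suspension has a simple closed form, τ = τ_y + K·γ̇ⁿ: a yield stress plus a power-law viscous part. A minimal sketch with illustrative parameter values (not the calibrated ones discussed in the abstract):

```python
def herschel_bulkley_stress(gamma_dot, tau_y=50.0, K=10.0, n=0.4):
    """Shear stress (Pa) at shear rate gamma_dot (1/s) under the
    Herschel-Bulkley law: tau = tau_y + K * gamma_dot**n.
    tau_y, K, n here are illustrative placeholder values."""
    if gamma_dot <= 0.0:
        return tau_y            # at or below yield, no viscous contribution
    return tau_y + K * gamma_dot ** n

# n < 1 gives shear thinning: the apparent viscosity tau/gamma_dot
# decreases as the shear rate grows.
```

The shear-rate sensitivity of this law is exactly why the abstract argues that the local, three-dimensional shear and pressure fields must be resolved rather than depth-averaged.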

  5. Anatomic Landmarks Versus Fiducials for Volume-Staged Gamma Knife Radiosurgery for Large Arteriovenous Malformations

    SciTech Connect

    Petti, Paula L. . E-mail: ppetti@radonc.ucsf.edu; Coleman, Joy; McDermott, Michael; Smith, Vernon; Larson, David A.

    2007-04-01

Purpose: The purpose of this investigation was to compare the accuracy of using internal anatomic landmarks instead of surgically implanted fiducials in the image registration process for volume-staged gamma knife (GK) radiosurgery for large arteriovenous malformations. Methods and Materials: We studied 9 patients who had undergone 10 staged GK sessions for large arteriovenous malformations. Each patient had fiducials surgically implanted in the outer table of the skull at the first GK treatment. These markers were imaged on orthogonal radiographs, which were scanned into the GK planning system. For the same patients, 8-10 pairs of internal landmarks were retrospectively identified on the three-dimensional time-of-flight magnetic resonance imaging studies that had been obtained for treatment. The coordinate transformation between the stereotactic frame space for subsequent treatment sessions was then determined by point matching, using four surgically embedded fiducials and then using four pairs of internal anatomic landmarks. In both cases, the transformation was ascertained by minimizing the chi-square difference between the actual and the transformed coordinates. Both transformations were then evaluated using the remaining four to six pairs of internal landmarks as the test points. Results: Averaged over all treatment sessions, the root mean square discrepancy between the coordinates of the transformed and actual test points was 1.2 ± 0.2 mm using internal landmarks and 1.7 ± 0.4 mm using the surgically implanted fiducials. Conclusion: The results of this study have shown that using internal landmarks to determine the coordinate transformation between subsequent magnetic resonance imaging scans for volume-staged GK arteriovenous malformation treatment sessions is as accurate as using surgically implanted fiducials and avoids an invasive procedure.
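The fit-then-validate procedure described above can be sketched in simplified form: fit a transformation on landmark pairs by least squares, then score it by the RMS discrepancy on held-out test points. The sketch below fits a translation only (whose least-squares solution is the centroid difference); the study fit a full rigid-body transformation, and the point coordinates here are invented:

```python
import math

def fit_translation(fixed, moving):
    """Least-squares translation mapping 'moving' onto 'fixed' 3-D points:
    the mean of the per-pair coordinate differences."""
    n = len(fixed)
    return tuple(sum(f[i] - m[i] for f, m in zip(fixed, moving)) / n
                 for i in range(3))

def rms_error(fixed, moving, shift):
    """RMS discrepancy between fixed points and shifted moving points."""
    sq = 0.0
    for f, m in zip(fixed, moving):
        sq += sum((f[i] - (m[i] + shift[i])) ** 2 for i in range(3))
    return math.sqrt(sq / len(fixed))

# Hypothetical landmark pairs: session B is session A shifted by (1, 2, 0).
landmarks_a = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10)]
landmarks_b = [(1, 2, 0), (11, 2, 0), (1, 12, 0), (1, 2, 10)]
shift = fit_translation(landmarks_a, landmarks_b)   # -> (-1.0, -2.0, 0.0)
```

In the paper's setting the fit uses four fiducial or landmark pairs and `rms_error` is evaluated on the remaining four to six pairs, yielding the 1.2 mm vs 1.7 mm comparison.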

  6. Building high-performance system for processing a daily large volume of Chinese satellites imagery

    NASA Astrophysics Data System (ADS)

    Deng, Huawu; Huang, Shicun; Wang, Qi; Pan, Zhiqiang; Xin, Yubin

    2014-10-01

The number of Earth observation satellites from China has increased dramatically in recent years, and those satellites acquire a large volume of imagery daily. As the main portal for image processing and distribution from these Chinese satellites, the China Centre for Resources Satellite Data and Application (CRESDA) has been working with PCI Geomatics over the last three years to solve two issues: processing the large volume of data (about 1,500 scenes or 1 TB per day) in a timely manner, and generating geometrically accurate orthorectified products. After three years of research and development, a high-performance system has been built and successfully delivered. The system has a service-oriented architecture and can be deployed to a cluster of computers that may be configured with high-end computing power. The high performance is gained, first, by parallelizing the image processing algorithms using high-performance graphics processing unit (GPU) cards and multiple cores from multiple CPUs and, second, by distributing processing tasks across a cluster of computing nodes. While achieving speedups of thirty times or more compared with traditional practice, a particular methodology was developed to improve the geometric accuracy of images acquired from Chinese satellites (including HJ-1 A/B, ZY-1-02C, ZY-3, GF-1, etc.). The methodology consists of fully automatic collection of dense ground control points (GCP) from various resources and the application of those points to improve the photogrammetric model of the images. The delivered system is up and running at CRESDA for pre-operational production and is generating a good return on investment by eliminating a great amount of manual labor and increasing daily data throughput more than tenfold with fewer operators. Future work, such as development of more performance-optimized algorithms, robust image matching methods and application

  7. Lens-free optical tomographic microscope with a large imaging volume on a chip

    PubMed Central

    Isikman, Serhan O.; Bishara, Waheb; Mavandadi, Sam; Yu, Frank W.; Feng, Steve; Lau, Randy; Ozcan, Aydogan

    2011-01-01

We present a lens-free optical tomographic microscope, which enables imaging a large volume of approximately 15 mm³ on a chip, with a spatial resolution of <1 μm × <1 μm × <3 μm in the x, y and z dimensions, respectively. In this lens-free tomography modality, the sample is placed directly on a digital sensor array with, e.g., ≤4 mm distance to its active area. A partially coherent light source placed approximately 70 mm away from the sensor is employed to record lens-free in-line holograms of the sample from different viewing angles. At each illumination angle, multiple subpixel-shifted holograms are also recorded, which are digitally processed using a pixel superresolution technique to create a single high-resolution hologram of each angular projection of the object. These superresolved holograms are digitally reconstructed for an angular range of ±50°, and then back-projected to compute tomograms of the sample. In order to minimize the artifacts due to the limited angular range of tilted illumination, a dual-axis tomography scheme is adopted, where the light source is rotated along two orthogonal axes. Tomographic imaging performance is quantified using microbeads of different dimensions, as well as by imaging wild-type Caenorhabditis elegans. Probing a large volume with a decent 3D spatial resolution, this lens-free optical tomography platform on a chip could provide a powerful tool for high-throughput imaging applications in, e.g., cell and developmental biology. PMID:21504943

  8. AUTOMATED PARAMETRIC EXECUTION AND DOCUMENTATION FOR LARGE-SCALE SIMULATIONS

    SciTech Connect

    R. L. KELSEY; ET AL

    2001-03-01

A language has been created to facilitate the automatic execution of simulations for purposes of parametric study and test and evaluation. Its function is similar to that of a job-control language, but it provides more capability in that it extends the notion of literate programming to job control. Interwoven markup tags self-document and define the job-control process. The language works in tandem with another language used to describe physical systems. Both languages are implemented in the Extensible Markup Language (XML). A user describes a physical system for simulation and then creates a set of instructions for automatic execution of the simulation. Support routines merge the instructions with the physical-system description, execute the simulation the specified number of times, gather the output data, and document the process and output for the user. The language enables the guided exploration of a parameter space and can be used for simulations that must determine optimal solutions to particular problems. It is general enough to be used with any simulation input files that are described using XML. XML is shown to be useful as a description language, an interchange language, and a self-documenting language.
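The execution model described above — XML instructions driving repeated simulation runs over a parameter grid — can be sketched briefly. The tag and attribute names below are invented for illustration; the report's actual language is not reproduced here:

```python
import xml.etree.ElementTree as ET
from itertools import product

# Hypothetical parametric-study instructions (invented markup, not the
# report's actual language).
instructions = ET.fromstring("""
<parametric-study runs-per-point="2">
  <vary name="pressure" values="1.0 2.0"/>
  <vary name="temperature" values="300 400"/>
</parametric-study>
""")

def run_simulation(params):
    """Stand-in for invoking the real simulation executable."""
    return sum(float(v) for v in params.values())

# Build the parameter grid from the <vary> tags, then execute each
# combination the requested number of times, collecting the output.
grid = [(v.get("name"), v.get("values").split())
        for v in instructions.iter("vary")]
names = [name for name, _ in grid]
repeats = int(instructions.get("runs-per-point"))

results = []
for combo in product(*(values for _, values in grid)):
    params = dict(zip(names, combo))
    for _ in range(repeats):
        results.append((params, run_simulation(params)))
# 2 x 2 parameter points, 2 runs each -> 8 result records
```

The merge-execute-gather loop is the whole pattern; the report's support routines additionally merge in the physical-system description and emit self-documenting output.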

  9. Large eddy simulation of compressible turbulent channel and annular pipe flows with system and wall rotations

    NASA Astrophysics Data System (ADS)

    Lee, Joon Sang

The compressible filtered Navier-Stokes equations were solved using a second-order accurate finite volume method with low-Mach-number preconditioning. A dynamic subgrid-scale stress model accounted for the subgrid-scale turbulence. The study focused on the effects of buoyancy and rotation on the structure of turbulence and on transport processes, including heat transfer. Several different physical arrangements were studied, as outlined below. The effects of buoyancy were first studied in a vertical channel using large eddy simulation (LES). The walls were maintained at constant temperatures, one heated and the other cooled. Results showed that aiding and opposing buoyancy forces emerge near the heated and cooled walls, respectively. In the aiding flow, the turbulent intensities and heat transfer were suppressed at large values of the Grashof number. In the opposing flow, however, turbulence was enhanced, with increased velocity fluctuations. Another buoyancy study considered turbulent flow in a vertically oriented annulus. Isoflux wall boundary conditions with low and high heating were imposed on the inner wall while the outer wall was adiabatic. The results showed that the strong heating and buoyancy force caused distortions of the flow structure, resulting in reduction of turbulent intensities, shear stress, and turbulent heat flux, particularly near the heated wall. Flow in an annular pipe, with and without rotation of the outer wall about its axis, was investigated at moderate Reynolds numbers. When the outer pipe wall was rotated, a significant reduction of turbulent kinetic energy was realized near the rotating wall. Secondly, a large eddy simulation was performed to investigate the effect of swirl on the heat and momentum transfer in an annular pipe flow with a rotating inner wall. The simulations indicated that the Nusselt number and the wall friction coefficient increased with increasing rotation speed of the wall. It was also observed that the axial velocity

  10. Large Eddy simulation of turbulence: A subgrid scale model including shear, vorticity, rotation, and buoyancy

    NASA Astrophysics Data System (ADS)

    Canuto, V. M.

    1994-06-01

The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re ≈ 10^8 for the planetary boundary layer and Re ≈ 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the number of spatial grid points N ∼ Re^(9/4) exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach and/or the volume average approach. Since the first method (Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale model (SGS). Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) a LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the soundness of the SGS model, for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification. The latter phenomenon
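The Smagorinsky closure referred to above computes an eddy viscosity from the resolved strain rate, ν_t = (C_s Δ)² |S| with |S| = √(2 S_ij S_ij) — shear is the only input, which is precisely the limitation the abstract criticizes. A minimal sketch with illustrative values:

```python
import math

def smagorinsky_nu_t(strain_tensor, delta, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (cs * delta)**2 * |S|,
    with |S| = sqrt(2 * S_ij * S_ij) from the resolved strain-rate tensor.
    cs = 0.17 is a commonly quoted constant; values here are illustrative."""
    s_mag = math.sqrt(2.0 * sum(s * s for row in strain_tensor for s in row))
    return (cs * delta) ** 2 * s_mag

# Pure shear du/dy = 10 /s (so S_xy = S_yx = 5) on a 0.01 m grid:
S = [[0.0, 5.0, 0.0],
     [5.0, 0.0, 0.0],
     [0.0, 0.0, 0.0]]
nu_t = smagorinsky_nu_t(S, delta=0.01)
```

Buoyancy, rotation, and stratification appear nowhere in this formula, which is why the subgrid model proposed in the work above extends it with those effects.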

  11. Designing an elastomeric binder for large-volume-change electrodes for lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Chen, Zonghai

It is of commercial importance to develop high-capacity negative and positive electrode materials for lithium-ion batteries to meet the energy requirements of portable electronic devices. Excellent capacity retention has been achieved for thin sputtered films of amorphous Si, Ge and Si-Sn alloys even when cycled to 2000 mAh/g and above, which suggests that amorphous alloys are capable of extended cycling. However, PVDF-based composite electrodes incorporating a-Si0.64Sn0.36/Ag powder (10 wt% silver coating, ~10 μm) still suffer from severe capacity fading because of the huge volumetric changes of a-Si0.64Sn0.36/Ag during charge/discharge cycling. The objective of this thesis is to understand the problem scientifically and to propose practical solutions. Mechanical studies of binders for lithium battery electrodes have never been reported in the literature. The mechanical properties of commonly used binders, such as poly(vinylidene fluoride) (PVDF), have not been challenged because commercially used active materials, such as LiCoO2 and graphite, have small volumetric changes (<10%) during charge/discharge cycling. However, the recently proposed metallic alloys have huge volumetric changes (up to 250%) during cycling. In this case, the mechanical properties of the binder become critical. A tether model is proposed to qualitatively understand the capacity fading of high-volume-change electrodes, and to predict the properties of a good binder system. A crosslinking/coupling route was used to modify the binder system according to the requirements of the tether model. A poly(vinylidene fluoride-tetrafluoroethylenepropylene)-based elastomeric binder system was designed that successfully improved the capacity retention of a-Si0.64Sn0.36/Ag composite electrodes. In this thesis, it has also proven nontrivial to maximize the capacity retention of large-volume-change electrodes even when a fixed elastomeric binder system was used. The parameters that

  12. Frontiers of High-Pressure Research: Next Generation Large Volume Gem Anvil Devices

    NASA Astrophysics Data System (ADS)

    Hemley, R. J.; Yan, C.; Xu, J.; Mao, W.; Mao, H.

    2003-12-01

By any measure, the diamond anvil cell has revolutionized static high-pressure research, and in particular the experimental study of Earth and planetary deep interiors. Despite the prowess of the technique, however, its range of applicability is in many ways quite limited and its potential not yet fully realized. We have embarked on a program to develop the next generation of high-pressure devices that will allow new classes of in situ high P-T measurements critical to understanding the structure, dynamics, and evolution of planetary bodies. A crucial component of this effort is the enlargement of sample volume without sacrificing the unmatched pressure range and versatility of diamond anvil cells. In conventional studies, pressures of several hundred GPa are generated on sample volumes down to the picoliter range. The small sample size is determined by the availability of perfect diamonds as anvils, which are currently of order 0.2-0.4 up to a few carats. The small sample size has precluded, or greatly limited the accuracy of, certain classes of high-pressure experiments. The production of high-quality single-crystal diamond by microwave plasma chemical vapor deposition (CVD) at very high growth rates of 50-150 μm/h has opened new opportunities for the creation of large, perfect single-crystal diamond anvils [1]. The morphology, photoluminescence, Raman spectra, and mechanical properties of the CVD diamond have been examined in detail. Of particular interest is our finding of very high strength and improved optical properties of CVD diamond annealed at high pressures and temperatures [2]. In addition, hybrid conventional synthetic/CVD single crystals have been successfully used to generate pressures in the multimegabar range (>200 GPa) [3]. A parallel initiative involves the continued development of moissanite anvils, which can already be produced as large, perfect crystals and can reach pressures above 60 GPa. Recent advances include the design and fabrication of

  13. Determination of 235U enrichment with a large volume CZT detector

    NASA Astrophysics Data System (ADS)

    Mortreau, Patricia; Berndt, Reinhard

    2006-01-01

Room-temperature CdZnTe and CdTe detectors have been routinely used in the field of Nuclear Safeguards for many years [Ivanov et al., Development of large volume hemispheric CdZnTe detectors for use in safeguards applications, ESARDA European Safeguards Research and Development Association, Le Corum, Montpellier, France, 1997, p. 447; Czock and Arlt, Nucl. Instr. and Meth. A 458 (2001) 175; Arlt et al., Nucl. Instr. and Meth. A 428 (1999) 127; Lebrun et al., Nucl. Instr. and Meth. A 448 (2000) 598; Aparo et al., Development and implementation of compact gamma spectrometers for spent fuel measurements, in: Proceedings, 21st Annual ESARDA, 1999; Arlt and Rudsquist, Nucl. Instr. and Meth. A 380 (1996) 455; Khusainov et al., High resolution pin type CdTe detectors for the verification of nuclear material, in: Proceedings, 17th Annual ESARDA European Safeguards Research and Development Association, 1995; Mortreau and Berndt, Nucl. Instr. and Meth. A 458 (2001) 183; Ruhter et al., UCRL-JC-130548, 1998; Abbas et al., Nucl. Instr. and Meth. A 405 (1998) 153; Ruhter and Gunnink, Nucl. Instr. and Meth. A 353 (1994) 716]. Due to their performance and small size, they are ideal detectors for hand-held applications such as verification of spent and fresh fuel and U/Pu attribute tests, as well as for the determination of ²³⁵U enrichment. The hemispherical CdZnTe type produced by RITEC (Riga, Latvia) [Ivanov et al., 1997] is the most widely used detector in the field of inspection. With volumes ranging from 2 to 1500 mm³, their spectral performance is such that the use of electronic processing to correct the pulse shape is not required. This paper reports on the work carried out with a large-volume (15×15×7.5 mm³), high-efficiency hemispherical CdZnTe detector for the determination of ²³⁵U enrichment. The measurements were made with certified uranium samples whose enrichments, ranging from 0.31% to 92.42%, cover the whole range of in-field measurement conditions. The interposed

  14. Large eddy simulations as a parameterization tool for canopy-structure × VOC-flux interactions

    NASA Astrophysics Data System (ADS)

    Kenny, William; Bohrer, Gil; Chatziefstratiou, Efthalia

    2015-04-01

    We have been working to develop a new post-processing model - High resolution VOC Atmospheric Chemistry in Canopies (Hi-VACC) - which resolves the dispersion and chemistry of reacting chemical species given their emission rates from the vegetation and soil, driven by high resolution meteorological forcing and wind fields from various high resolution atmospheric regional and large-eddy simulations. Hi-VACC reads in fields of pressure, temperature, humidity, air density, short-wave radiation, wind (3-D u, v and w components) and sub-grid-scale turbulence that were simulated by a high resolution atmospheric model. This meteorological forcing data is provided as snapshots of 3-D fields. We have tested it using a number of RAMS-based Forest Large Eddy Simulation (RAFLES) runs. This can then be used for parameterization of the effects of canopy structure on VOC fluxes. RAFLES represents both drag and volume restriction by the canopy over an explicit 3-D domain. We have used these features to show the effects of canopy structure on fluxes of momentum, heat, and water in heterogeneous environments at the tree-crown scale by modifying the canopy structure representing it as both homogeneous and realistically heterogeneous. We combine this with Hi-VACC's capabilities to model dispersion and chemistry of reactive VOCs to parameterize the fluxes of these reactive species with respect to canopy structure. The high resolution capabilities of Hi-VACC coupled with RAFLES allows for sensitivity analysis to determine important structural considerations in sub-grid-scale parameterization of these phenomena in larger models.

  15. High-rate Plastic Deformation of Nanocrystalline Tantalum to Large Strains: Molecular Dynamics Simulation

    SciTech Connect

    Rudd, R E

    2009-02-05

Recent advances in the ability to generate extremes of pressure and temperature in dynamic experiments and to probe the response of materials have motivated the need for special materials optimized for those conditions, as well as a need for a much deeper understanding of the behavior of materials subjected to high pressure and/or temperature. Of particular importance is the understanding of rate effects at the extremely high rates encountered in those experiments, especially with the next generation of laser drives such as at the National Ignition Facility. Here we use large-scale molecular dynamics (MD) simulations of the high-rate deformation of nanocrystalline tantalum to investigate the processes associated with plastic deformation for strains up to 100%. We use initial atomic configurations that were produced through simulations of solidification in the work of Streitz et al [Phys. Rev. Lett. 96, (2006) 225701]. These 3D polycrystalline systems have typical grain sizes of 10-20 nm. We also study a rapidly quenched liquid (amorphous solid) tantalum. We apply a constant-volume (isochoric), constant-temperature (isothermal) shear deformation over a range of strain rates, and compute the resulting stress-strain curves to large strains for both uniaxial and biaxial compression. We study the rate dependence and identify plastic deformation mechanisms. The identification of the mechanisms is facilitated through a novel technique that computes the local grain orientation, returning it as a quaternion for each atom. This analysis technique is robust and fast, and has been used to compute the orientations on the fly during our parallel MD simulations on supercomputers. We find that both dislocation and twinning processes are important, and that their interaction produces the weak strain hardening observed in these extremely fine-grained microstructures.

  16. Aircraft Reply and Interference Environment Simulator (ARIES) hardware principles of operation, volume 1

    NASA Astrophysics Data System (ADS)

    Mancus, Edward

    1989-10-01

    The Aircraft Reply and Interference Environment Simulator (ARIES) makes possible the performance assessment of the Mode Select (Mode S) sensor under its specific maximum aircraft load. To do this, ARIES operates upon disk files for traffic model and interference to generate simulated aircraft replies and fruit, feeding them to the sensor at radio frequency. Support documentation for ARIES consists of: (1) the ARIES Hardware Maintenance Manual: Volume 1 (DOT/FAA/CT-TN88/3); (2) Appendixes of the Hardware Maintenance Manual: Volume 2; (3) the ARIES Hardware Principles of Operation: Volume 1 (DOT/FAA/CT-TN88/4-1); (4) Appendixes of the Hardware Principles of Operation: Volume 2; (5) ARIES Software Principles of Operation (DOT/FAA/CT-TN87/16); and (6) ARIES Software User's Manual (DOT/FAA/CT-TN88/15). This document, the ARIES Hardware Principles of Operation, Volume 1, explains the theory of operation of the ARIES special purpose hardware designed and fabricated at the Federal Aviation Administration Technical Center. Each hardware device is discussed. Functional block diagrams, signal timing diagrams, and state timing diagrams are included where appropriate.

  17. Large Volume Coagulation Utilizing Multiple Cavitation Clouds Generated by Array Transducer Driven by 32 Channel Drive Circuits

    NASA Astrophysics Data System (ADS)

    Nakamura, Kotaro; Asai, Ayumu; Sasaki, Hiroshi; Yoshizawa, Shin; Umemura, Shin-ichiro

    2013-07-01

High-intensity focused ultrasound (HIFU) treatment is a noninvasive treatment in which focused ultrasound is generated outside the body and coagulates a diseased tissue. The advantage of this method is minimal physical and mental stress to the patient; the disadvantage is the long treatment time caused by the small therapeutic volume of a single exposure. To improve the efficiency and shorten the treatment time, we are focusing attention on utilizing cavitation bubbles. The generated microbubbles can convert the acoustic energy into heat with a high efficiency. In this study, using the class D amplifiers we have developed to drive the array transducer, we demonstrate a new method to coagulate a large volume by a single HIFU exposure through generating cavitation bubbles distributed throughout a large volume and vibrating all of them. As a result, the volume coagulated by the proposed method was 1.71 times as large as that by the conventional method.

  18. Large-Scale Liquid Simulation on Adaptive Hexahedral Grids.

    PubMed

    Ferstl, Florian; Westermann, Rudiger; Dick, Christian

    2014-10-01

    Regular grids are attractive for numerical fluid simulations because they give rise to efficient computational kernels. However, for simulating high resolution effects in complicated domains they are only of limited suitability due to memory constraints. In this paper we present a method for liquid simulation on an adaptive octree grid using a hexahedral finite element discretization, which reduces memory requirements by coarsening the elements in the interior of the liquid body. To impose free surface boundary conditions with second order accuracy, we incorporate a particular class of Nitsche methods enforcing the Dirichlet boundary conditions for the pressure in a variational sense. We then show how to construct a multigrid hierarchy from the adaptive octree grid, so that a time efficient geometric multigrid solver can be used. To improve solver convergence, we propose a special treatment of liquid boundaries via composite finite elements at coarser scales. We demonstrate the effectiveness of our method for liquid simulations that would require hundreds of millions of simulation elements in a non-adaptive regime. PMID:26357387

  19. Large-scale simulation of the human arterial tree.

    PubMed

    Grinberg, L; Anor, T; Madsen, J R; Yakhot, A; Karniadakis, G E

    2009-02-01

    1. Full-scale simulations of the virtual physiological human (VPH) will require significant advances in modelling, multiscale mathematics, scientific computing and further advances in medical imaging. Herein, we review some of the main issues that need to be resolved in order to make three-dimensional (3D) simulations of blood flow in the human arterial tree feasible in the near future. 2. A straightforward approach is computationally prohibitive even on the emerging petaflop supercomputers, so a three-level hierarchical approach based on vessel size is required, consisting of: (i) a macrovascular network (MaN); (ii) a mesovascular network (MeN); and (iii) a microvascular network (MiN). We present recent simulations of MaN obtained by solving the 3D Navier-Stokes equations on arterial networks with tens of arteries and bifurcations and accounting for the neglected dynamics through proper boundary conditions. 3. A multiscale simulation coupling MaN-MeN-MiN and running on hundreds of thousands of processors on petaflop computers will require no more than a few CPU hours per cardiac cycle within the next 5 years. The rapidly growing capacity of supercomputing centres opens up the possibility of simulation studies of cardiovascular diseases, drug delivery, perfusion in the brain and other pathologies. PMID:18671721

  20. Data-driven RANS for simulations of large wind farms

    NASA Astrophysics Data System (ADS)

    Iungo, G. V.; Viola, F.; Ciri, U.; Rotea, M. A.; Leonardi, S.

    2015-06-01

In the wind energy industry there is a growing need for real-time predictions of wind turbine wake flows in order to optimize power plant control and inhibit detrimental wake interactions. To this aim, a data-driven RANS approach is proposed in order to achieve very low computational costs and adequate accuracy through a data assimilation procedure. The RANS simulations are implemented with a classical Boussinesq hypothesis and a mixing length turbulence closure model, which is calibrated through the available data. High-fidelity LES simulations of a utility-scale wind turbine operating with different tip speed ratios are used as the database. It is shown that the mixing length model for the RANS simulations can be calibrated accurately through the Reynolds stress of the axial and radial velocity components, and the gradient of the axial velocity in the radial direction. It is found that the mixing length is roughly invariant in the very near wake, then increases linearly with downstream distance in the diffusive region. The variation rate of the mixing length in the downstream direction is proposed as a criterion to detect the transition between the near wake and the transition region of a wind turbine wake. Finally, RANS simulations were performed with the calibrated mixing length model, and good agreement with the LES simulations is observed.
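
The calibration described above can be sketched with Prandtl's mixing-length closure, in which the Reynolds shear stress and the mean velocity gradient together imply the mixing length. This is a minimal illustration of the idea, not the authors' implementation; the function name and the synthetic profile are hypothetical.

```python
import numpy as np

def calibrate_mixing_length(uv_stress, dudr, eps=1e-12):
    """Estimate a mixing length l_m from the Reynolds shear stress <u'v'>
    and the radial gradient of axial velocity dU/dr, via Prandtl's closure:
        -<u'v'> = l_m**2 * |dU/dr| * dU/dr
    so that l_m = sqrt(|<u'v'>| / (dU/dr)**2)."""
    uv = np.asarray(uv_stress, dtype=float)
    g = np.asarray(dudr, dtype=float)
    return np.sqrt(np.abs(uv) / np.maximum(g * g, eps))

# Synthetic check: a constant-gradient profile consistent with l_m = 0.1
g = np.full(5, 2.0)                  # dU/dr samples along the radius
uv = -(0.1 ** 2) * np.abs(g) * g     # <u'v'> implied by l_m = 0.1
lm = calibrate_mixing_length(uv, g)  # recovers l_m = 0.1 everywhere
```

In the setting of the abstract, `uv_stress` and `dudr` would come from the LES database at each downstream station, and the fitted mixing-length profile would then feed the RANS closure.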

  1. Large-scale multi-agent transportation simulations

    NASA Astrophysics Data System (ADS)

    Cetin, Nurhan; Nagel, Kai; Raney, Bryan; Voellmy, Andreas

    2002-08-01

    It is now possible to microsimulate the traffic of whole metropolitan areas with 10 million travelers or more, "micro" meaning that each traveler is resolved individually as a particle. In contrast to physics or chemistry, these particles have internal intelligence; for example, they know where they are going. This means that a transportation simulation project will have, besides the traffic microsimulation, modules which model this intelligent behavior. The most important modules are for route generation and for demand generation. Demand is generated by each individual in the simulation making a plan of activities such as sleeping, eating, working, shopping, etc. If activities are planned at different locations, they obviously generate demand for transportation. This however is not enough since those plans are influenced by congestion which initially is not known. This is solved via a relaxation method, which means iterating back and forth between the activities/routes generation and the traffic simulation.
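
The relaxation between demand/route generation and the traffic microsimulation can be caricatured in a few lines. This toy loop only sketches the iteration scheme described above; the two-route congestion model and the switching rule are invented for illustration.

```python
# Toy relaxation loop (all names and models hypothetical): iterate between
# traffic simulation and re-planning until route loads stabilize.

def simulate(route_choice, capacity=50):
    """Travel time on each of two routes grows with its load (toy congestion)."""
    loads = {0: 0, 1: 0}
    for r in route_choice:
        loads[r] += 1
    return {r: 10.0 + loads[r] / capacity * 10.0 for r in loads}

def replan(route_choice, times, fraction=0.1):
    """A fraction of travelers switch to the currently faster route."""
    best = min(times, key=times.get)
    n_switch = int(len(route_choice) * fraction)
    switched, new = 0, []
    for r in route_choice:
        if r != best and switched < n_switch:
            new.append(best)
            switched += 1
        else:
            new.append(r)
    return new

routes = [0] * 100           # initially every traveler plans route 0
for _ in range(30):          # relax: congestion feeds back into plans
    times = simulate(routes)
    routes = replan(routes, times)
```

After a few iterations the loads settle near an equilibrium in which neither route is clearly faster, which is the fixed point the relaxation method seeks.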

  2. Manufacturing Process Simulation of Large-Scale Cryotanks

    NASA Technical Reports Server (NTRS)

    Babai, Majid; Phillips, Steven; Griffin, Brian

    2003-01-01

NASA's Space Launch Initiative (SLI) is an effort to research and develop the technologies needed to build a second-generation reusable launch vehicle. It is required that this new launch vehicle be 100 times safer and 10 times cheaper to operate than current launch vehicles. Part of the SLI includes the development of reusable composite and metallic cryotanks. The size of these reusable tanks is far greater than anything ever developed and exceeds the design limits of current manufacturing tools. Several design and manufacturing approaches have been formulated, but many factors must be weighed during the selection process. Among these factors are tooling reachability, cycle times, feasibility, and facility impacts. The manufacturing process simulation capabilities available at NASA's Marshall Space Flight Center have played a key role in down-selecting between the various manufacturing approaches. By creating 3-D manufacturing process simulations, the varying approaches can be analyzed in a virtual world before any hardware or infrastructure is built. This analysis can detect and eliminate costly flaws in the various manufacturing approaches. The simulations check for collisions between devices, verify that design limits on joints are not exceeded, and provide cycle times, which aid in the development of an optimized process flow. In addition, new ideas and concerns are often raised after seeing the visual representation of a manufacturing process flow. The output of the manufacturing process simulations allows for cost and safety comparisons to be performed between the various manufacturing approaches. This output helps determine which manufacturing process options reach the safety and cost goals of the SLI. As part of the SLI, The Boeing Company was awarded a basic period contract to research and propose options for both a metallic and a composite cryotank. Boeing then entered into a task agreement with the Marshall Space Flight Center to provide manufacturing

  3. Large-eddy simulations of flows in a ramjet combustor

    NASA Astrophysics Data System (ADS)

    Jou, Wen-Huei; Menon, Suresh

The oscillatory cold flow in a ramjet combustor configuration is presently addressed by a numerical simulation method which gives attention to the interaction between the flowfield's vorticity and acoustic components, when the reduced frequency of the flow, based on the speed of sound, is of the order of unity. The numerical model has indicated that the combustor's interior must be isolated from the external region by a choked nozzle. The numerical simulations thus obtained are able to exclude the effects of artificially imposed outflow-boundary conditions. The unsteady flow fields near the shear layer separation point in the nozzle region are investigated.

  4. Modeling of Large Avionic Structures in Electrical Network Simulations

    NASA Astrophysics Data System (ADS)

    Piche, A.; Perraud, R.; Lochot, C.

    2012-05-01

The extensive introduction of carbon fiber reinforced plastics (CFRP), in conjunction with an increase of electrical systems in aircraft, has led to new electromagnetic issues. This situation has reinforced the need for numerical simulation early in the design phase. In this context, we have proposed [1] a numerical methodology to deal with 3D CFRP avionic structures in time-domain simulations at system level. This paper presents the latest results on this subject, particularly the modeling of the A350 fuselage in a SABER computation that includes the aircraft power distribution.

  5. Simulations of the formation of large-scale structure

    NASA Astrophysics Data System (ADS)

    White, S. D. M.

    Numerical studies related to the simulation of structure growth are examined. The linear development of fluctuations in the early universe is studied. The research of Aarseth, Gott, and Turner (1979) based on N-body integrators that obtained particle accelerations by direct summation of the forces due to other objects is discussed. Consideration is given to the 'pancake theory' of Zel'dovich (1970) for the evolution from adiabatic initial fluctuation, the neutrino-dominated universe models of White, Frenk, and Davis (1983), and the simulations of Davis et al. (1985).

  6. Large-volume methacrylate monolith for plasmid purification. Process engineering approach to synthesis and application.

    PubMed

    Danquah, Michael K; Forde, Gareth M

    2008-04-25

The extent of exothermicity associated with the construction of large-volume methacrylate monolithic columns has somewhat obstructed the realisation of large-scale rapid biomolecule purification, especially for plasmid-based products, which have proven to herald future trends in biotechnology. A novel synthesis technique via a heat expulsion mechanism was employed to prepare a 40 mL methacrylate monolith with a homogeneous radial pore structure along its thickness. The radial temperature gradient was only 1.8 degrees C. The maximum radial temperature, recorded at the centre of the monolith, was 62.3 degrees C, only 2.3 degrees C higher than the actual polymerisation temperature. Pore characterisation of the monolithic polymer showed unimodal pore size distributions at different radial positions with an identical modal pore size of 400 nm. Chromatographic characterisation of the polymer after functionalisation with amino groups displayed a persistent dynamic binding capacity of 15.5 mg of plasmid DNA/mL. The maximum pressure drop recorded was only 0.12 MPa at a flow rate of 10 mL/min. The polymer demonstrated rapid separation ability by fractionating Escherichia coli DH5alpha-pUC19 clarified lysate in only 3 min after loading. The plasmid sample collected after this fast purification process was shown by DNA electrophoresis and restriction analysis to be homogeneous supercoiled plasmid. PMID:18329651

  7. Major risk from rapid, large-volume landslides in Europe (EU Project RUNOUT)

    NASA Astrophysics Data System (ADS)

    Kilburn, Christopher R. J.; Pasuto, Alessandro

    2003-08-01

    Project RUNOUT has investigated methods for reducing the risk from large-volume landslides in Europe, especially those involving rapid rates of emplacement. Using field data from five test sites (Bad Goisern and Köfels in Austria, Tessina and Vajont in Italy, and the Barranco de Tirajana in Gran Canaria, Spain), the studies have developed (1) techniques for applying geomorphological investigations and optical remote sensing to map landslides and their evolution; (2) analytical, numerical, and cellular automata models for the emplacement of sturzstroms and debris flows; (3) a brittle-failure model for forecasting catastrophic slope failure; (4) new strategies for integrating large-area Global Positioning System (GPS) arrays with local geodetic monitoring networks; (5) methods for raising public awareness of landslide hazards; and (6) Geographic Information System (GIS)-based databases for the test areas. The results highlight the importance of multidisciplinary studies of landslide hazards, combining subjects as diverse as geology and geomorphology, remote sensing, geodesy, fluid dynamics, and social profiling. They have also identified key goals for an improved understanding of the physical processes that govern landslide collapse and runout, as well as for designing strategies for raising public awareness of landslide hazards and for implementing appropriate land management policies for reducing landslide risk.

  8. The big fat LARS - a LArge Reservoir Simulator for hydrate formation and gas production

    NASA Astrophysics Data System (ADS)

    Beeskow-Strauch, Bettina; Spangenberg, Erik; Schicks, Judith M.; Giese, Ronny; Luzi-Helbing, Manja; Priegnitz, Mike; Klump, Jens; Thaler, Jan; Abendroth, Sven

    2013-04-01

Simulating natural scenarios on lab scale is a common technique to gain insight into geological processes with moderate effort and expense. Due to the remote occurrence of gas hydrates, their behavior in sedimentary deposits is largely investigated with experimental setups in the laboratory. In the framework of the submarine gas hydrate research project (SUGAR) a large reservoir simulator (LARS) with an internal volume of 425 liters has been designed, built and tested. To our knowledge this is presently a worldwide-unique setup. Because of its large volume it is suitable for pilot-plant-scale tests on hydrate behavior in sediments. That includes not only the option of systematic tests on gas hydrate formation in various sedimentary settings but also the possibility to mimic scenarios for hydrate decomposition and subsequent natural gas extraction. Based on these experimental results, various numerical simulations can be realized. Here, we present the design and the experimental setup of LARS. The prerequisites for the simulation of a natural gas hydrate reservoir are porous sediments, methane, water, low temperature and high pressure. The reservoir is supplied with methane-saturated and pre-cooled water. For its preparation an external gas-water mixing stage is available. The methane-loaded water is continuously flushed into LARS as finely dispersed fluid via spargers located at the bottom and top. LARS is equipped with a mantle cooling system and can be kept at a chosen set temperature. The temperature distribution is monitored by Pt100 sensors at 14 representative locations throughout the reservoir. The required pressures are maintained using syringe pump stands. A tomographic system, consisting of a 375-electrode configuration, is attached to the mantle for the monitoring of hydrate distribution throughout the entire reservoir volume. Two sets of tubular polydimethylsiloxane membranes are applied to determine the gas-water ratio within the reservoir using the effect of permeability

  9. Large-eddy simulation of nitrogen injection at trans- and supercritical conditions

    NASA Astrophysics Data System (ADS)

    Müller, Hagen; Niedermeier, Christoph A.; Matheis, Jan; Pfitzner, Michael; Hickel, Stefan

    2016-01-01

    Large-eddy simulations (LESs) of cryogenic nitrogen injection into a warm environment at supercritical pressure are performed and real-gas thermodynamics models and subgrid-scale (SGS) turbulence models are evaluated. The comparison of different SGS models — the Smagorinsky model, the Vreman model, and the adaptive local deconvolution method — shows that the representation of turbulence on the resolved scales has a notable effect on the location of jet break-up, whereas the particular modeling of unresolved scales is less important for the overall mean flow field evolution. More important are the models for the fluid's thermodynamic state. The injected fluid is either in a supercritical or in a transcritical state and undergoes a pseudo-boiling process during mixing. Such flows typically exhibit strong density gradients that delay the instability growth and can lead to a redistribution of turbulence kinetic energy from the radial to the axial flow direction. We evaluate novel volume-translation methods on the basis of the cubic Peng-Robinson equation of state in the framework of LES. At small extra computational cost, their application considerably improves the simulation results compared to the standard formulation. Furthermore, we found that the choice of inflow temperature is crucial for the reproduction of the experimental results and that heat addition within the injector can affect the mean flow field in comparison to results with an adiabatic injector.
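
The volume-translation idea mentioned above can be illustrated on the cubic Peng-Robinson equation of state: the EOS is evaluated at a shifted molar volume, improving predicted densities at little extra cost. This is only a generic sketch with a constant shift c, not the temperature-dependent methods evaluated in the paper; the nitrogen critical constants below are standard literature values.

```python
import math

R = 8.314462618  # universal gas constant, J/(mol K)

def pr_pressure(T, v, Tc, pc, omega, c=0.0):
    """Peng-Robinson pressure p(T, v) with an optional constant volume
    translation c: the EOS is evaluated at the shifted volume v + c."""
    a = 0.45724 * R**2 * Tc**2 / pc
    b = 0.07780 * R * Tc / pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc)))**2
    vt = v + c
    return R * T / (vt - b) - a * alpha / (vt**2 + 2.0 * b * vt - b**2)

# Nitrogen (Tc = 126.19 K, pc = 3.3958 MPa, omega = 0.0372) near ambient
# conditions: T = 300 K, v = 0.024789 m^3/mol should give p close to 1 bar.
p = pr_pressure(300.0, 0.024789, 126.19, 3.3958e6, 0.0372)
```

A positive shift c slightly lowers the predicted pressure at fixed (T, v), which is the lever the volume-translation methods use to correct liquid-like densities near the critical point.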

  10. Low-Dissipation Advection Schemes Designed for Large Eddy Simulations of Hypersonic Propulsion Systems

    NASA Technical Reports Server (NTRS)

    White, Jeffrey A.; Baurle, Robert A.; Fisher, Travis C.; Quinlan, Jesse R.; Black, William S.

    2012-01-01

The 2nd-order upwind inviscid flux scheme implemented in the multi-block, structured grid, cell centered, finite volume, high-speed reacting flow code VULCAN has been modified to reduce numerical dissipation. This modification was motivated by the desire to improve the code's ability to perform large eddy simulations. The reduction in dissipation was accomplished through a hybridization of non-dissipative and dissipative discontinuity-capturing advection schemes that reduces numerical dissipation while maintaining the ability to capture shocks. A methodology for constructing hybrid advection schemes that blends non-dissipative fluxes, consisting of linear combinations of divergence and product rule forms discretized using 4th-order symmetric operators, with dissipative, 3rd- or 4th-order reconstruction-based upwind flux schemes was developed and implemented. A series of benchmark problems of increasing spatial and fluid-dynamical complexity were utilized to examine the ability of the candidate schemes to resolve and propagate structures typical of turbulent flow, their discontinuity-capturing capability and their robustness. A realistic geometry typical of a high-speed propulsion system flowpath was computed using the most promising of the examined schemes and was compared with available experimental data to demonstrate simulation fidelity.
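
The hybridization strategy, blending a non-dissipative central flux with a dissipative upwind flux, can be shown in one dimension for linear advection. This sketch uses first-order building blocks and a constant blending factor purely for illustration; the scheme described above uses 4th-order symmetric and 3rd/4th-order reconstruction-based fluxes with a solution-adaptive blend.

```python
import numpy as np

def hybrid_flux(u, a, sigma):
    """Interface fluxes for du/dt + a du/dx = 0 on a 1-D periodic grid:
        F = (1 - sigma) * F_central + sigma * F_upwind
    where sigma in [0, 1] plays the role of a shock sensor (a constant here;
    in practice it is driven by the local solution smoothness)."""
    ul, ur = u, np.roll(u, -1)           # left/right states at interface i+1/2
    f_central = 0.5 * a * (ul + ur)      # central/product-rule form, no dissipation
    f_upwind = a * ul if a > 0 else a * ur  # first-order upwind, dissipative
    return (1.0 - sigma) * f_central + sigma * f_upwind

u = np.array([1.0, 2.0, 3.0, 4.0])       # periodic cell averages
f_blend = hybrid_flux(u, 1.0, 0.5)       # halfway between central and upwind
```

With sigma = 0 the flux is purely central (good for resolved turbulence, but it cannot capture shocks); with sigma = 1 it reduces to plain upwind, recovering robustness at the price of dissipation.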

  11. The oligocene Lund Tuff, Great Basin, USA: A very large volume monotonous intermediate

    USGS Publications Warehouse

    Maughan, L.L.; Christiansen, E.H.; Best, M.G.; Gromme, C.S.; Deino, A.L.; Tingey, D.G.

    2002-01-01

Unusual monotonous intermediate ignimbrites consist of phenocryst-rich dacite that occurs as very large volume (> 1000 km3) deposits that lack systematic compositional zonation, comagmatic rhyolite precursors, and underlying plinian beds. They are distinct from countless, usually smaller volume, zoned rhyolite-dacite-andesite deposits that are conventionally believed to have erupted from magma chambers in which thermal and compositional gradients were established because of sidewall crystallization and associated convective fractionation. Despite their great volume, or because of it, monotonous intermediates have received little attention. Documentation of the stratigraphy, composition, and geologic setting of the Lund Tuff - one of four monotonous intermediate tuffs in the middle-Tertiary Great Basin ignimbrite province - provides insight into its unusual origin and, by implication, the origin of other similar monotonous intermediates. The Lund Tuff is a single cooling unit with normal magnetic polarity whose volume likely exceeded 3000 km3. It was emplaced 29.02 ± 0.04 Ma in and around the coeval White Rock caldera which has an unextended north-south diameter of about 50 km. The tuff is monotonous in that its phenocryst assemblage is virtually uniform throughout the deposit: plagioclase > quartz ≈ hornblende > biotite > Fe-Ti oxides ≈ sanidine > titanite, zircon, and apatite. However, ratios of phenocrysts vary by as much as an order of magnitude in a manner consistent with progressive crystallization in the pre-eruption chamber. A significant range in whole-rock chemical composition (e.g., 63-71 wt% SiO2) is poorly correlated with phenocryst abundance. These compositional attributes cannot have been caused wholly by winnowing of glass from phenocrysts during eruption, as has been suggested for the monotonous intermediate Fish Canyon Tuff. Pumice fragments are also crystal-rich, and chemically and mineralogically indistinguishable from bulk tuff. We

  12. Evaluation of methods for calculating volume fraction in Eulerian-Lagrangian multiphase flow simulations

    NASA Astrophysics Data System (ADS)

    Diggs, Angela; Balachandar, S.

    2016-05-01

    The present work addresses numerical methods required to compute particle volume fraction or number density. Local volume fraction of the lth particle, αl, is the quantity of foremost importance in calculating the gas-mediated particle-particle interaction effect in multiphase flows. A general multiphase flow with a distribution of Lagrangian particles inside a fluid flow discretized on an Eulerian grid is considered. Particle volume fraction is needed both as a Lagrangian quantity associated with each particle and also as an Eulerian quantity associated with the grid cell for Eulerian-Lagrangian simulations. In Grid-Based (GB) methods the particle volume fraction is first obtained within each grid cell as an Eulerian quantity and then the local particle volume fraction associated with any Lagrangian particle can be obtained from interpolation. The second class of methods presented are Particle-Based (PB) methods, where particle volume fraction will first be obtained at each particle as a Lagrangian quantity, which then can be projected onto the Eulerian grid. Traditionally, the GB methods are used in multiphase flow, but sub-grid resolution can be obtained through use of the PB methods. By evaluating the total error, and its discretization, bias and statistical error components, the performance of the different PB methods is compared against several common GB methods of calculating volume fraction. The standard von Neumann error analysis technique has been adapted for evaluation of rate of convergence of the different methods. The discussion and error analysis presented focus on the volume fraction calculation, but the methods can be extended to obtain field representations of other Lagrangian quantities, such as particle velocity and temperature.
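
A minimal 1-D sketch of the Grid-Based (GB) approach described above: particle volumes are deposited into Eulerian cells, and the cell-centered fraction is then mapped back to each particle (nearest-cell interpolation here; the work above also considers higher-order interpolation and the Particle-Based alternatives). Function and variable names are illustrative, not from the paper.

```python
import numpy as np

def grid_based_volume_fraction(x_p, vol_p, x_edges):
    """Grid-Based (GB) method in 1-D: deposit particle volumes into Eulerian
    cells, then interpolate the cell-centered volume fraction back to each
    particle position (nearest-cell lookup for brevity)."""
    dx = np.diff(x_edges)
    # Index of the cell containing each particle
    cell = np.clip(np.searchsorted(x_edges, x_p, side="right") - 1,
                   0, len(dx) - 1)
    alpha_cell = np.zeros(len(dx))
    np.add.at(alpha_cell, cell, vol_p)   # scatter-add particle volumes
    alpha_cell /= dx                     # per-cell "volume" fraction in 1-D
    return alpha_cell, alpha_cell[cell]  # Eulerian and Lagrangian views

# Four particles of volume 0.05 in a unit domain split into two cells
x_p = np.array([0.1, 0.2, 0.3, 0.7])
vol_p = np.full(4, 0.05)
edges = np.array([0.0, 0.5, 1.0])
alpha_cell, alpha_p = grid_based_volume_fraction(x_p, vol_p, edges)
```

A Particle-Based (PB) method would instead evaluate the fraction at each particle first, e.g. by summing kernel-weighted volumes of neighbors, and only then project onto the grid, which is what gives PB methods their sub-grid resolution.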

  13. Beautiful small: Misleading large randomized controlled trials? The example of colloids for volume resuscitation.

    PubMed

    Wiedermann, Christian J; Wiedermann, Wolfgang

    2015-01-01

In anesthesia and intensive care, treatment benefits that were claimed on the basis of small or modest-sized trials have repeatedly failed to be confirmed in large randomized controlled trials. A well-designed small trial in a homogeneous patient population with high event rates could yield conclusive results; however, patient populations in anesthesia and intensive care are typically heterogeneous because of comorbidities. The size of the anticipated effects of therapeutic interventions is generally low in relation to relevant endpoints. For regulatory purposes, trials are required to demonstrate efficacy in clinically important endpoints, and therefore must be large because clinically important study endpoints such as death, sepsis, or pneumonia are dichotomous and infrequently occur. The rarer the endpoint events in the study population, that is, the lower the signal-to-noise ratio, the larger the trials must be to prevent random events from being overemphasized. In addition to trial design, sample size determination on the basis of event rates, clinically meaningful risk ratio reductions and actual patient numbers studied are among the most important characteristics when interpreting study results. Trial size is a critical determinant of generalizability of study results to larger or general patient populations. Typical characteristics of small single-center studies responsible for their known fragility include low variability of outcome measures for surrogate parameters and selective publication and reporting. For anesthesiology and intensive care medicine, findings in volume resuscitation research on intravenous infusion of colloids exemplify this, since both the safety of albumin infusion and the adverse effects of the artificial colloid hydroxyethyl starch have been confirmed only in large-sized trials. PMID:26330723
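
The point about rare dichotomous endpoints can be made concrete with the standard normal-approximation sample-size formula for comparing two proportions. This is a textbook formula, not taken from the article; the quantiles are hard-coded for a two-sided alpha of 0.05 and 80% power.

```python
import math

def two_proportion_sample_size(p1, p2, alpha=0.05, power=0.8):
    """Approximate per-group sample size for detecting a difference between
    two event rates p1 and p2 (normal approximation, two-sided test):
        n = (z_{alpha/2} + z_beta)^2 * (p1(1-p1) + p2(1-p2)) / (p1 - p2)^2
    Quantiles are hard-coded to stay dependency-free; scipy.stats.norm.ppf
    would give them for arbitrary alpha and power."""
    z_a = {0.05: 1.959964}[alpha]   # z for two-sided alpha = 0.05
    z_b = {0.8: 0.841621}[power]    # z for 80% power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

# Halving mortality from 10% to 5% demands far more patients per group
# than halving it from 40% to 20%, because the absolute difference shrinks.
n_rare = two_proportion_sample_size(0.10, 0.05)
n_common = two_proportion_sample_size(0.40, 0.20)
```

The same relative risk reduction thus requires a much larger trial when the endpoint is rare, which is why small single-center studies of infrequent outcomes such as death or sepsis are so fragile.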

  15. Analytical and Experimental Investigation of Mixing in Large Passive Containment Volumes

    SciTech Connect

    Per F. Peterson

    2002-10-17

This final report details results from the three-year UC Berkeley NEER investigation of mixing phenomena in large-scale passive reactor containments. We have completed all of the three-year deliverables specified in our proposal, as summarized for each deliverable in the body of this report, except for the experiments on steam condensation in the presence of noncondensable gas. We have particularly exciting results from the experiments studying mixing in a large insulated containment with a vertical cooling plate. These experiments have now shown why augmentation has been observed in wall-condensation experiments due to the momentum of the steam break-flow entering large volumes. More importantly, we have also shown that the forced-jet augmentation can be predicted using relatively simple correlations, and that it is independent of the break diameter, depending only on the break flow orientation, location, and momentum. This suggests that we will now be able to take credit for this augmentation in reactor safety analysis, improving safety margins for containment structures. We have finished version 1 of the 1-D Lagrangian flow and heat transfer code BMIX++. This version can solve many complex stratification problems: multi-component problems, multi-enclosure problems (two enclosures connected by one connection in the current version), incompressible and compressible problems, problems with multiple jets, plumes, and sinks in one enclosure, problems with wall conduction, and combinations of the above. We believe the BMIX++ code is a powerful computational tool for studying mixing in stratified enclosures.

  16. Comparing selected morphological models of hydrated Nafion using large scale molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Knox, Craig K.

Experimental elucidation of the nanoscale structure of hydrated Nafion, the most popular polymer electrolyte or proton exchange membrane (PEM) to date, and its influence on macroscopic proton conductance is particularly challenging. While it is generally agreed that hydrated Nafion is organized into distinct hydrophilic domains or clusters within a hydrophobic matrix, the geometry and length scale of these domains continue to be debated. For example, at least half a dozen different domain shapes, ranging from spheres to cylinders, have been proposed based on experimental SAXS and SANS studies. Since the characteristic length scale of these domains is believed to be ~2 to 5 nm, very large molecular dynamics (MD) simulations are needed to accurately probe the structure and morphology of these domains, especially their connectivity and percolation phenomena at varying water content. Using classical, all-atom MD with explicit hydronium ions, simulations have been performed on the first hydrated Nafion systems large enough (~2 million atoms in a ~30 nm cell) to directly observe several hydrophilic domains at the molecular level. These systems comprised six of the most significant and relevant morphological models of Nafion to date: (1) the cluster-channel model of Gierke, (2) the parallel cylinder model of Schmidt-Rohr, (3) the local-order model of Dreyfus, (4) the lamellar model of Litt, (5) the rod network model of Kreuer, and (6) a 'random' model, commonly used in previous simulations, that does not directly assume any particular geometry, distribution, or morphology. These simulations revealed fast intercluster bridge formation and network percolation in all of the models. Sulfonates were found inside these bridges and played a significant role in percolation. Sulfonates also strongly aggregated around and inside clusters. Cluster surfaces were analyzed to study the hydrophilic-hydrophobic interface. Interfacial area and cluster volume
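The network-percolation question the abstract raises reduces, once hydrophilic clusters and the water bridges between them have been identified, to a connectivity test on a graph. The sketch below is a generic union-find connectivity check, not code from the study; cluster indices and bridge pairs are hypothetical inputs that an MD post-processing step would supply.

```python
def find(parent, i):
    """Find the root of cluster i with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def percolates(n_clusters, bridges, source, sink):
    """Return True if hydrophilic clusters joined by water bridges
    form a connected (percolating) path from source to sink."""
    parent = list(range(n_clusters))
    for a, b in bridges:
        parent[find(parent, a)] = find(parent, b)  # union the two components
    return find(parent, source) == find(parent, sink)

# Five clusters; bridges 0-1, 1-2, 2-4 connect cluster 0 to cluster 4
print(percolates(5, [(0, 1), (1, 2), (2, 4)], 0, 4))  # True
```

Running such a check at each water content gives the percolation threshold of a given morphological model.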

  17. Deflagration to detonation transition on large confined volume of lean hydrogen-air mixtures

    SciTech Connect

    Dorofeev, S.B.; Sidorov, V.P.; Dvoinishnikov, A.E.; Breitung, W.

    1996-01-01

The results of large-scale experiments on turbulent flame propagation and transition to detonation in a confined volume of lean hydrogen-air mixtures are presented. The experiments were performed in a strong concrete enclosure of 480 m{sup 3} volume and 69.9 m length. The experimental volume consisted of a first channel (34.6 m length, 2.3 m height, 2.5 m width) with or without obstacles, a canyon (10.55 {times} 6.3 {times} 2.5 m), and a final channel. Ignition was initiated by a weak electric spark at the beginning of the first channel. The effect of hydrogen concentration (9.8%--14% vol.) on turbulent flame propagation and transition to detonation was studied. The obstacle configuration in the first channel (blockage ratio 0.3, 0.6, and no obstacles), the exit cross section to the canyon (1.4, 2, and 5.6 m{sup 2}), and the vent area at the end (0, 2.5, and 4 m{sup 2}) were varied in the tests. Details of turbulent flame propagation, of the pressure field, and of detonation onset are presented. A minimum of 12.5% hydrogen was found to be necessary for transition to detonation. This is a much less sensitive mixture than those in which the onset of spontaneous detonation has previously been observed (minimum of 15% hydrogen in air). The effect of scale on the onset conditions for spontaneous detonation is discussed. The characteristic geometrical size of the mixture for transition to detonation is shown to be strongly related to the mixture sensitivity.

  18. Flight Technical Error Analysis of the SATS Higher Volume Operations Simulation and Flight Experiments

    NASA Technical Reports Server (NTRS)

    Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.

    2005-01-01

This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating that the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on the FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirmed the utility of the simulation platform for comparative Human in the Loop (HITL) studies of SATS HVO and baseline operations.
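The comparative FTE analysis described above boils down to summary statistics over recorded deviations from the intended flight path. The following is a minimal sketch of such a summary; the function name, the units, and the sample deviation values are illustrative assumptions, not data from the NASA experiments.

```python
import math

def fte_stats(deviations):
    """Summarize flight technical error as (mean, RMS) of recorded
    deviations from the intended track (e.g., lateral offset in nm)."""
    n = len(deviations)
    mean = sum(deviations) / n
    rms = math.sqrt(sum(d * d for d in deviations) / n)
    return mean, rms

# Hypothetical lateral deviations (nm) for two procedures
baseline = [0.12, -0.08, 0.25, -0.15, 0.05]
hvo = [0.10, -0.05, 0.20, -0.12, 0.04]
print("baseline:", fte_stats(baseline))
print("HVO:     ", fte_stats(hvo))
```

Comparing the RMS values across procedures (and across lateral, vertical, and airspeed channels) is the kind of apples-to-apples comparison the paper reports.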

  19. Manufacturing Process Simulation of Large-Scale Cryotanks

    NASA Technical Reports Server (NTRS)

    Babai, Majid; Phillips, Steven; Griffin, Brian; Munafo, Paul M. (Technical Monitor)

    2002-01-01

NASA's Space Launch Initiative (SLI) is an effort to research and develop the technologies needed to build a second-generation reusable launch vehicle. This new launch vehicle is required to be 100 times safer and 10 times cheaper to operate than current launch vehicles. Part of the SLI includes the development of reusable composite and metallic cryotanks. The size of these reusable tanks is far greater than anything ever developed and exceeds the design limits of current manufacturing tools. Several design and manufacturing approaches have been formulated, but many factors must be weighed during the selection process. Among these factors are tooling reachability, cycle times, feasibility, and facility impacts. The manufacturing process simulation capabilities available at NASA's Marshall Space Flight Center have played a key role in down-selecting among the various manufacturing approaches. By creating 3-D manufacturing process simulations, the various approaches can be analyzed in a virtual world before any hardware or infrastructure is built. This analysis can detect and eliminate costly flaws in the manufacturing approaches. The simulations check for collisions between devices, verify that design limits on joints are not exceeded, and provide cycle times that aid in the development of an optimized process flow. In addition, new ideas and concerns are often raised after seeing the visual representation of a manufacturing process flow. The output of the manufacturing process simulations allows cost and safety comparisons to be performed between the various manufacturing approaches. This output helps determine which manufacturing process options meet the safety and cost goals of the SLI.

  20. SimScience: Interactive educational modules based on large simulations

    NASA Astrophysics Data System (ADS)

    Warner, Simeon; Catterall, Simon; Gregory, Eric; Lipson, Edward

    2000-05-01

SimScience is a collaboration between Cornell University and Syracuse University. It comprises four interactive educational modules on crack propagation, crackling noise, fluid flow, and membranes. Computer simulations are at the forefront of current research in all of these topics. Our aim is to explain some elements of each subject and to show the relevance of computer simulations. The crack propagation module explores the mechanisms of dam failure. The crackling noise module uses everyday sounds to illustrate types of noise, and links this to the noise created by jumps in magnetization processes. The fluid flow module describes various properties of flows and explains phenomena such as a curve ball in baseball. The membranes module leverages everyday experience with membranes such as soap bubbles to help explain biological membranes and the relevance of membranes to theories of gravity. We have used Java not only to produce small-scale versions of research simulations but also to provide models illustrating simpler concepts underlying the main subject matter. Web technology allows us to deliver SimScience both over the Internet and on CD-ROM. To accommodate a target audience spanning K-12 and university general science students, we have created three levels for each module. Efforts are underway to assess the SimScience modules with the help of teachers and students.