Science.gov

Sample records for large volume simulations

  1. Large Eddy Simulations of Volume Restriction Effects on Canopy-Induced Increased-Uplift Regions

    NASA Astrophysics Data System (ADS)

    Chatziefstratiou, E.; Bohrer, G.; Velissariou, V.

    2012-12-01

    Previous modeling and empirical work have shown the development of important areas of increased uplift past forward-facing steps, and recirculation zones past backward-facing steps. Forest edges represent a special kind of step - a semi-porous one. Current models of the effects of forest edges on the flow represent the forest with a prescribed drag term and do not account for the effects of the solid volume in the forest that restricts the airflow. The RAMS-based Forest Large Eddy Simulation (RAFLES) resolves flows inside and above forested canopies. RAFLES is spatially explicit, and uses the finite volume method to solve a discretized set of Navier-Stokes equations. It accounts for vegetation drag effects on the flow and on the flux exchange between the canopy and the canopy air, proportional to the local leaf density. For a better representation of the vegetation structure in the numerical grid within the canopy sub-domain, the model uses a modified version of the cut-cell coordinate system. The hard volume of vegetation elements, in forests, or buildings, in urban environments, within each numerical grid cell is represented via a sub-grid-scale process that shrinks the open apertures between grid cells and reduces the open cell volume. We used RAFLES to simulate the effects of a canopy of varying foliage and stem densities on flow over virtual cube-shaped barriers under neutrally buoyant conditions. We explicitly tested the effects of the numerical representation of volume restriction, independent of the effects of the leaf drag, by comparing drag-only simulations, where we prescribed no volume or aperture restriction to the flow; restriction-only simulations, where we prescribed no drag; and control simulations, where both drag and volume plus aperture restriction were included. Our simulations show that representation of the effects of the volume and aperture restriction due to obstacles to flow is important (figure 1) and leads to differences in the
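
    The volume and aperture restriction described above can be made concrete with a small sketch. The function below is an illustrative toy, not RAFLES code: it assumes a per-cell solid fraction is available, scales the open cell volume linearly, and scales the face apertures with a 2/3 power on the geometric assumption that apertures behave like areas.

      import numpy as np

      def restrict_cell(open_volume, face_apertures, solid_fraction):
          # Toy cut-cell restriction: the sub-grid solid volume shrinks the
          # open volume of the cell and narrows the apertures between cells.
          open_frac = 1.0 - solid_fraction
          eff_volume = open_volume * open_frac
          # Apertures are areas, so scale them with the 2/3 power of the
          # open volume fraction (a geometric assumption, not RAFLES's rule).
          eff_apertures = np.asarray(face_apertures) * open_frac ** (2.0 / 3.0)
          return eff_volume, eff_apertures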

  2. A Parallel, Finite-Volume Algorithm for Large-Eddy Simulation of Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.

    1999-01-01

    A parallel, finite-volume algorithm has been developed for large-eddy simulation (LES) of compressible turbulent flows. This algorithm includes piecewise linear least-square reconstruction, trilinear finite-element interpolation, Roe flux-difference splitting, and second-order MacCormack time marching. Parallel implementation is done using the message-passing programming model. In this paper, the numerical algorithm is described. To validate the numerical method for turbulence simulation, LES of fully developed turbulent flow in a square duct is performed for a Reynolds number of 320 based on the average friction velocity and the hydraulic diameter of the duct. Direct numerical simulation (DNS) results are available for this test case, and the accuracy of this algorithm for turbulence simulations can be ascertained by comparing the LES solutions with the DNS results. The effects of grid resolution, upwind numerical dissipation, and subgrid-scale dissipation on the accuracy of the LES are examined. Comparison with DNS results shows that the standard Roe flux-difference splitting dissipation adversely affects the accuracy of the turbulence simulation. For accurate turbulence simulations, only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed.
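
    The reported finding, that only 3-5 percent of the standard Roe dissipation is needed, amounts to scaling the upwind term of the interface flux. A minimal sketch, with the physical fluxes and the Roe-averaged |A| matrix assumed as precomputed inputs:

      import numpy as np

      def roe_flux_scaled(f_left, f_right, abs_roe_matrix, u_left, u_right, eps=0.04):
          # Roe-type interface flux: central average minus scaled upwind dissipation.
          # eps = 1 recovers standard Roe flux-difference splitting; the paper
          # indicates eps ~ 0.03-0.05 suffices for LES (value here is illustrative).
          central = 0.5 * (f_left + f_right)
          dissipation = 0.5 * eps * abs_roe_matrix @ (u_right - u_left)
          return central - dissipation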

  3. Determination of the large scale volume weighted halo velocity bias in simulations

    NASA Astrophysics Data System (ADS)

    Zheng, Yi; Zhang, Pengjie; Jing, Yipeng

    2015-06-01

    A profound assumption in peculiar velocity cosmology is b_v = 1 at sufficiently large scales, where b_v is the volume-weighted halo (galaxy) velocity bias with respect to the matter velocity field. However, this fundamental assumption has not been robustly verified in numerical simulations. Furthermore, it is challenged by structure formation theory (Bardeen, Bond, Kaiser and Szalay, Astrophys. J. 304, 15 (1986); Desjacques and Sheth, Phys. Rev. D 81, 023526 (2010)), which predicts the existence of velocity bias (at least for proto-halos) due to the fact that halos reside in special regions (local density peaks). The major obstacle to measuring the volume-weighted velocity from N-body simulations is an unphysical sampling artifact. It is entangled in the measured velocity statistics and becomes significant for sparse populations. With recently improved understanding of the sampling artifact (Zhang, Zheng and Jing, 2015, PRD; Zheng, Zhang and Jing, 2015, PRD), for the first time we are able to appropriately correct this sampling artifact and then robustly measure the volume-weighted halo velocity bias. (1) We verify b_v = 1 within 2% model uncertainty at k ≲ 0.1 h/Mpc and z = 0-2 for halos of mass ~10^12-10^13 h^-1 M⊙ and, therefore, consolidate a foundation for peculiar velocity cosmology. (2) We also find statistically significant signs of b_v ≠ 1 at k ≳ 0.1 h/Mpc. Unfortunately, whether this is real or caused by a residual sampling artifact requires further investigation. Nevertheless, cosmology based on k ≳ 0.1 h/Mpc velocity data should treat this potential velocity bias with care.
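
    As a schematic of the quantity being measured: with grid-assigned (volume-weighted) velocity fields, the bias can be estimated per k-bin from cross- and auto-power spectra. The sketch below assumes precomputed spectra as inputs and omits the sampling-artifact correction that is the paper's key ingredient.

      import numpy as np

      def velocity_bias(p_halo_matter, p_matter_matter):
          # b_v(k) = P_hm(k) / P_mm(k) for volume-weighted velocity fields,
          # evaluated per k-bin; the inputs must already be corrected for the
          # sampling artifact (Zhang, Zheng & Jing 2015), not done here.
          return np.asarray(p_halo_matter) / np.asarray(p_matter_matter)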

  4. Measurements of Elastic and Inelastic Properties under Simulated Earth's Mantle Conditions in Large Volume Apparatus

    NASA Astrophysics Data System (ADS)

    Mueller, H. J.

    2012-12-01

    The interpretation of highly resolved seismic data from Earth's deep interior requires measurements of the physical properties of Earth's materials under experimentally simulated mantle conditions. More than a decade ago, seismic tomography clearly showed that subducted crustal material can reach the core-mantle boundary under specific circumstances. That means there is no longer room for the assumption that deep mantle rocks might be much less complex than the deep crustal rocks known from exhumation processes. In view of this, geophysical high-pressure research faces the challenge of increasing pressure and sample volume at the same time, so as to perform in situ experiments on representative, complex samples. High-performance multi-anvil devices using novel materials are the most promising technique for this exciting task. Recent large volume presses provide sample volumes 3 to 7 orders of magnitude larger than diamond anvil cells, far beyond transition zone conditions. The sample size of several cubic millimeters allows elastic wave frequencies in the low to medium MHz range. Together with the small, and even adjustable, temperature gradients over the whole sample, this technique in principle makes anisotropy and grain boundary effects in complex systems accessible to measurements of elastic and inelastic properties. Measurements of both elastic wave velocities are, moreover, not limited to transparent materials; opaque and encapsulated samples can be studied as well. The application of triple-mode transducers and the data-transfer-function technique for ultrasonic interferometry reduces the time for saving the data during the experiment to about a minute or less. That makes real transient measurements under non-equilibrium conditions possible. A further benefit is that both elastic wave velocities are measured exactly simultaneously. Ultrasonic interferometry necessarily requires in situ measurement of sample deformation by X-radiography. Time-resolved X-radiography makes in situ falling sphere viscosimetry and even the

  5. Inclusion of fluid-solid interaction in Volume of Fluid to simulate spreading and dewetting for large contact angles

    NASA Astrophysics Data System (ADS)

    Mahady, Kyle; Afkhami, Shahriar; Kondic, Lou

    2014-11-01

    The van der Waals (vdW) interaction between molecules is of fundamental importance in determining the behavior of three phase systems in fluid mechanics. This interaction gives rise to interfacial energies, and thus the contact angle for a droplet on a solid surface, and additionally leads to instability of very thin liquid films. We develop a hybrid method for including a Lennard-Jones type vdW interaction in a finite volume, Volume of Fluid (VoF) based solver for the full two-phase Navier-Stokes equations. This method includes the full interaction between each fluid phase and the solid substrate via a finite-volume approximation of the vdW body force. Our work is distinguished from conventional VoF based implementations in that the contact angle arises from simulation of the underlying physics, as well as successfully treating vdW induced film rupture. At the same time, it avoids the simplifications of calculations based on disjoining-pressure, where the vdW interaction is included as a pressure jump across the interface which is derived under the assumption of a flat film. This is especially relevant in the simulation of nanoscale film ruptures involving large contact angles, which have been studied recently in the context of bottom-up nanoparticle fabrication. This work is partially supported by the Grants NSF DMS-1320037 and CBET-1235710.
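
    As a rough illustration of the kind of interaction involved (not the authors' discretization), a Lennard-Jones-type wall force density with long-range attraction and short-range repulsion might be written as follows; all constants are hypothetical:

      def vdw_force_density(h, hamaker=1.0e-20, h_star=1.0e-9):
          # Lennard-Jones-type van der Waals force density versus distance h
          # from the solid: a 9-3 form with equilibrium spacing h_star.
          # A VoF solver would integrate such a term as a body force per cell.
          return (hamaker / h_star**3) * ((h_star / h) ** 9 - (h_star / h) ** 3)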

  6. Resolving the Effects of Aperture and Volume Restriction of the Flow by Semi-Porous Barriers Using Large-Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Chatziefstratiou, Efthalia K.; Velissariou, Vasilia; Bohrer, Gil

    2014-09-01

    The Regional Atmospheric Modelling System (RAMS)-based Forest Large-Eddy Simulation (RAFLES) model is used to simulate the effects of large rectangular-prism-shaped semi-porous barriers of varying densities under neutrally buoyant conditions. The RAFLES model resolves flows inside and above forested canopies and other semi-porous barriers, and it accounts for barrier-induced drag on the flow and surface flux exchange between the barrier and the air. Unlike most other models, the RAFLES model also accounts for the barrier-induced volume and aperture restriction via a modified version of the cut-cell coordinate system. We explicitly tested the effects of the numerical representation of volume restriction, independent of the effects of the drag, by comparing drag-only simulations (where we prescribed neither volume nor aperture restrictions to the flow), restriction-only simulations (where we prescribed no drag), and control simulations where both drag and volume plus aperture restrictions were included. Previous modelling and empirical work have revealed the development of important areas of increased uplift upwind of forward-facing steps, and recirculation zones downwind of backward-facing steps. Our simulations show that representation of the effects of the volume and aperture restriction due to the presence of semi-porous barriers leads to differences in the strengths and locations of increased-updraft and recirculation zones, and in the length and strength of the impact and adjustment zones, when compared to simulation solutions with a drag-only representation. These differences are mostly driven by differences in the momentum budget of the streamwise wind velocity by the resolved turbulence and pressure gradient fields around the front and back edges of the barrier. We propose that volume plus aperture restriction is an important component of the flow system in semi-porous environments such as forests and cities and should be considered in large-eddy simulations (LES).

  7. Large-scale three-dimensional phase-field simulations for phase coarsening at ultrahigh volume fraction on high-performance architectures

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Wang, K. G.; Jones, Jim E.

    2016-06-01

    A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics of phase coarsening in the ultrahigh volume fraction region is found. The parallel implementation is capable of harnessing the greater computer power available from high-performance architectures. The parallelized code enables an increase in three-dimensional simulation system size up to a 512^3 grid cube. Through the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis of speed-up and scalability is presented, showing good scalability that improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows good agreement with actual run times from numerical tests.
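
    The runtime model mentioned is not specified in the abstract; one plausible form, assuming compute cost scales with each rank's local grid volume and communication cost with its halo surface, is sketched below, with c_comp and c_comm as coefficients to be fit to measured timings:

      def predicted_runtime(n, p, c_comp, c_comm):
          # Toy runtime model for one time step of a 3D phase-field solver on
          # p ranks: local work ~ cells per rank, halo exchange ~ the surface
          # of each rank's subdomain. Coefficients are fit to actual timings.
          cells_per_rank = n**3 / p
          return c_comp * cells_per_rank + c_comm * cells_per_rank ** (2.0 / 3.0)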

  8. Gyrokinetic large eddy simulations

    SciTech Connect

    Morel, P.; Navarro, A. Banon; Albrecht-Marc, M.; Carati, D.; Merz, F.; Goerler, T.; Jenko, F.

    2011-07-15

    The large eddy simulation approach is adapted to the study of plasma microturbulence in a fully three-dimensional gyrokinetic system. Ion temperature gradient driven turbulence is studied with the GENE code for both a standard resolution and a reduced resolution with a model for the sub-grid scale turbulence. A simple dissipative model for representing the effect of the sub-grid scales on the resolved scales is proposed and tested. Once calibrated, the model appears to be able to reproduce most of the features of the free energy spectra for various values of the ion temperature gradient.

  9. Large-eddy simulations of 3D Taylor-Green vortex: comparison of Smoothed Particle Hydrodynamics, Lattice Boltzmann and Finite Volume methods

    NASA Astrophysics Data System (ADS)

    Kajzer, A.; Pozorski, J.; Szewc, K.

    2014-08-01

    In the paper we present large-eddy simulation (LES) results for the 3D Taylor-Green vortex obtained by three different computational approaches: Smoothed Particle Hydrodynamics (SPH), the Lattice Boltzmann Method (LBM) and the Finite Volume Method (FVM). The Smagorinsky model was chosen as the subgrid-scale closure in LES for all considered methods, and a selection of spatial resolutions has been investigated. The SPH and LBM computations have been carried out with in-house codes executed on GPUs and compared, for validation purposes, with FVM results obtained using the open-source CFD software OpenFOAM. A comparative study in terms of one-point statistics and turbulent energy spectra shows good agreement of the LES results for all methods. An analysis of GPU code efficiency and implementation difficulties has been made. It is shown that both SPH and LBM may offer a significant advantage over mesh-based CFD methods.
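
    The Smagorinsky closure shared by all three methods reduces to an eddy viscosity computed from the resolved strain rate. A minimal sketch; the constant 0.17 is a common literature value, not necessarily the one used in this study:

      import numpy as np

      def smagorinsky_viscosity(grad_u, delta, c_s=0.17):
          # nu_t = (C_s * Delta)^2 * |S|, with |S| = sqrt(2 S_ij S_ij).
          # grad_u: array of shape (..., 3, 3) holding du_i/dx_j on the grid.
          s = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))   # strain-rate tensor
          s_mag = np.sqrt(2.0 * np.einsum('...ij,...ij->...', s, s))
          return (c_s * delta) ** 2 * s_mag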

  10. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L.; Rickert, M.

    1997-04-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual persons' behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed of much more than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches depend both on the specific questions and on the prospective user community. The approaches range from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for statistical physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.
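
    The single-bit cellular-automaton implementations mentioned are in the spirit of the Nagel-Schreckenberg rules; a minimal single-lane sketch (the parameters are illustrative):

      import random

      def nasch_step(pos, vel, road_len, v_max=5, p_slow=0.3):
          # One Nagel-Schreckenberg update on a circular single-lane road;
          # pos and vel are parallel lists of vehicle positions and speeds.
          order = sorted(range(len(pos)), key=lambda i: pos[i])
          for idx, i in enumerate(order):
              ahead = order[(idx + 1) % len(order)]
              gap = (pos[ahead] - pos[i] - 1) % road_len    # empty cells ahead
              vel[i] = min(vel[i] + 1, v_max, gap)          # accelerate, no collision
              if vel[i] > 0 and random.random() < p_slow:   # random slowdown
                  vel[i] -= 1
          for i in range(len(pos)):
              pos[i] = (pos[i] + vel[i]) % road_len         # advance vehicles
          return pos, vel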

  11. Challenges for Large Scale Simulations

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    2010-03-01

    With computational approaches becoming ubiquitous, the growing impact of large scale computing on research influences both theoretical and experimental work. I will review a few examples in condensed matter physics and quantum optics, including the impact of computer simulations in the search for supersolidity, thermometry in ultracold quantum gases, and the challenging search for novel phases in strongly correlated electron systems. While only a decade ago such simulations needed the fastest supercomputers, many simulations can now be performed on small workstation clusters or even a laptop: what was previously restricted to a few experts can now potentially be used by many. Only part of the gain in computational capabilities is due to Moore's law and improvement in hardware. Equally impressive is the performance gain due to new algorithms - as I will illustrate using some recently developed algorithms. At the same time, modern peta-scale supercomputers offer unprecedented computational power and allow us to tackle new problems and address questions that were impossible to solve numerically only a few years ago. While there is a roadmap for future hardware developments to exascale and beyond, the main challenges are on the algorithmic and software infrastructure side. Among the problems that face the computational physicist are: the development of new algorithms that scale to thousands of cores and beyond; a software infrastructure that lifts code development to a higher level and speeds up the development of new simulation programs for large scale computing machines; tools to analyze the large volume of data obtained from such simulations; and, as an emerging field, provenance-aware software that aims for reproducibility of the complete computational workflow from model parameters to the final figures. Interdisciplinary collaborations and collective efforts will be required, in contrast to the cottage-industry culture currently present in many areas of computational

  12. Large volume manufacture of dymalloy

    SciTech Connect

    1998-06-22

    The purpose of this research was to test the commercial viability and feasibility of Dymalloy, a high-thermal-conductivity composite material. Dymalloy was developed as part of a CRADA with Sun Microsystems. Sun Microsystems was a potential end user of Dymalloy as a substrate for MCMs. Sun had no desire to be involved in the manufacture of this material. The goal of this small-business CRADA with Spectra Mat was to establish the high-volume commercial manufacturing source for Dymalloy required by an end user such as Sun Microsystems. The difference between the fabrication technique developed during the CRADA and this proposed work related to the mechanical technique of coating the diamond powder. Mechanical parts for the high-volume diamond powder coating process existed; however, they needed to be installed in an existing coating system for evaluation. Sputtering systems similar to the one required for this project were available at LLNL. Once the diamond powder was coated, both LLNL and Spectra Mat could make and test the Dymalloy composites. Spectra Mat manufactured Dymalloy composites in order to evaluate and establish a reasonable cost estimate based on their existing processing capabilities. This information was used by Spectra Mat to define the market and the cost-competitive products that could be commercialized from this new substrate material.

  13. Applied large eddy simulation.

    PubMed

    Tucker, Paul G; Lardeau, Sylvain

    2009-07-28

    Large eddy simulation (LES) is now seen more and more as a viable alternative to current industrial practice, usually based on problem-specific Reynolds-averaged Navier-Stokes (RANS) methods. Access to detailed flow physics is attractive to industry, especially in an environment in which computer modelling is bound to play an ever increasing role. However, the improvement in accuracy and flow detail comes at substantial cost, which has so far prevented wider industrial use of LES. The purpose of the applied LES discussion meeting was to address questions regarding what is achievable and what is not, given the current technology and knowledge, for an industrial practitioner who is interested in using LES. The use of LES was explored in an application-centred context across diverse fields. The general flow-governing equation form was explored along with various LES models. The errors occurring in LES were analysed. Also, the hybridization of RANS and LES was considered. The importance of modelling relative to boundary conditions, problem definition and other more mundane aspects was examined. It was to an extent concluded that for LES to make the most rapid industrial impact, pragmatic hybrid use of LES, implicit LES and RANS elements will probably be needed. Added to this, further industrial-sector-specific model parametrizations will be required, with clear thought given to the key target design parameter(s). The combination of good numerical modelling expertise and a sound understanding of turbulence, along with artistry, pragmatism and the use of recent developments in computer science, should dramatically add impetus to the industrial uptake of LES. In the light of the numerous technical challenges that remain, it appears that for some time to come LES will have echoes of the high levels of technical knowledge required for safe use of RANS, but with much greater fidelity. PMID:19531503

  14. Large Eddy Simulation of Bubbly Flow and Slag Layer Behavior in Ladle with Discrete Phase Model (DPM)-Volume of Fluid (VOF) Coupled Model

    NASA Astrophysics Data System (ADS)

    Li, Linmin; Liu, Zhongqiu; Cao, Maoxue; Li, Baokuan

    2015-07-01

    In the ladle metallurgy process, bubble movement and slag layer behavior are very important to the refining process and steel quality. For bubble-liquid flow, bubble movement plays a significant role in the phase structure and causes an unsteady, complex turbulent flow pattern. This is one of the most crucial shortcomings of current two-fluid models. In the current work, a one-third-scale water model is established to investigate bubble movement and slag open-eye formation. A new mathematical model using large eddy simulation (LES) is developed for the bubble-liquid-slag-air four-phase flow in the ladle. The Eulerian volume of fluid (VOF) model is used for tracking the liquid-slag-air free surfaces and the Lagrangian discrete phase model (DPM) is used for describing the bubble movement. The turbulent liquid flow, induced by bubble-liquid interactions, is solved by LES. The procedure of a bubble leaving the liquid and entering the air is modeled using a user-defined function. The results show that the present LES-DPM-VOF coupled model predicts well the unsteady bubble movement, slag eye formation, interface fluctuation, and slag entrainment.

  15. Volume Rendering of AMR Simulations

    NASA Astrophysics Data System (ADS)

    Labadens, M.; Pomarède, D.; Chapon, D.; Teyssier, R.; Bournaud, F.; Renaud, F.; Grandjouan, N.

    2013-04-01

    High-resolution simulations often rely on the Adaptive Mesh Refinement (AMR) technique to optimize memory consumption versus attainable precision. While this technique allows for dramatic improvements in computing performance, the analysis and visualization of its data outputs remain challenging. The lack of effective volume renderers for the octree-based AMR used by the RAMSES simulation program has led to the development of the solutions presented in this paper. Two custom algorithms are discussed, based on the splatting and ray-casting techniques. Their usage is illustrated in the context of the visualization of a high-resolution, 6000-processor simulation of a Milky Way-like galaxy. The performance obtained in terms of memory management and parallel speedup is presented.

  16. LARGE BUILDING HVAC SIMULATION

    EPA Science Inventory

    The report discusses the monitoring and collection of data relating to indoor pressures and radon concentrations under several test conditions in a large school building in Bartow, Florida. The Florida Solar Energy Center (FSEC) used an integrated computational software, FSEC 3.0...

  17. Distributed shared memory for roaming large volumes.

    PubMed

    Castanié, Laurent; Mion, Christophe; Cavin, Xavier; Lévy, Bruno

    2006-01-01

    We present a cluster-based volume rendering system for roaming very large volumes. The system allows a gigabyte-sized probe to be moved inside a total volume of several tens or hundreds of gigabytes in real time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that aggregates both graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared to directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing and rendering for optimal volume roaming. PMID:17080865
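
    The cache policy described, checking page residency on peer nodes before touching local disks, can be sketched as below; the peer and disk interfaces are stubs, not the authors' API:

      def fetch_brick(brick_id, local_cache, peers, disk):
          # Resolve a volume brick: local cache first, then the distributed
          # shared memory on peer nodes, and only then the local disk
          # (remote fetches were measured ~4x faster than local disk reads).
          if brick_id in local_cache:
              return local_cache[brick_id]
          for peer in peers:
              data = peer.lookup(brick_id)   # stub: returns None on a miss
              if data is not None:
                  local_cache[brick_id] = data
                  return data
          data = disk.read(brick_id)         # cache miss everywhere
          local_cache[brick_id] = data
          return data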

  18. Large volumes and spectroscopy of walking theories

    NASA Astrophysics Data System (ADS)

    Del Debbio, L.; Lucini, B.; Patella, A.; Pica, C.; Rago, A.

    2016-03-01

    A detailed investigation of finite-size effects is performed for SU(2) gauge theory with two fermions in the adjoint representation, which previous lattice studies have shown to be inside the conformal window. The system is investigated with different spatial and temporal boundary conditions on lattices of various spatial and temporal extensions, for two values of the bare fermion mass representing a heavy and light fermion regime. Our study shows that the infinite-volume limit of masses and decay constants in the mesonic sector is reached only when the mass of the pseudoscalar particle M_PS and the spatial lattice size L satisfy the relation L M_PS ≥ 15. This bound, which is at least a factor of three higher than what is observed in QCD, is a likely consequence of the different spectral signatures of the two theories, with the scalar isosinglet (the 0^++ glueball) being the lightest particle in our model. In addition to stressing the importance of simulating large lattice sizes, our analysis emphasizes the need to understand quantitatively the full spectrum of the theory rather than just the spectrum in the mesonic isotriplet sector. While for the lightest fermion measuring masses from gluonic operators proves to be still challenging, reliable results for glueball states are obtained at the largest fermion mass and, in the mesonic sector, for both fermion masses. As a byproduct of our investigation, we perform a finite-size scaling of the pseudoscalar mass and decay constant. The data presented in this work support the conformal behavior of this theory with an anomalous dimension γ* ≃ 0.37.

  19. Large-scale simulations of reionization

    SciTech Connect

    Kohler, Katharina; Gnedin, Nickolay Y.; Hamilton, Andrew J.S.; /JILA, Boulder

    2005-11-01

    We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range--from galaxies to rare bright quasars--we need to be able to cover a significant volume of the universe in our simulation without losing the important small scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280 h^-1 Mpc with 10 h^-1 Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-α forest.

  20. Large volume axionic Swiss cheese inflation

    NASA Astrophysics Data System (ADS)

    Misra, Aalok; Shukla, Pramod

    2008-09-01

    Continuing with the ideas of (Section 4 of) [A. Misra, P. Shukla, Moduli stabilization, large-volume dS minimum without anti-D3-branes, (non-)supersymmetric black hole attractors and two-parameter Swiss cheese Calabi-Yau's, arXiv: 0707.0105 [hep-th], Nucl. Phys. B, in press], after inclusion of perturbative and non-perturbative α′ corrections to the Kähler potential and (D1- and D3-) instanton-generated superpotential, we show the possibility of slow-roll axionic inflation in the large volume limit of Swiss cheese Calabi-Yau orientifold compactifications of type IIB string theory. We also include one- and two-loop corrections to the Kähler potential but find them to be subdominant to the (perturbative and non-perturbative) α′ corrections. The NS-NS axions provide a flat direction for slow-roll inflation to proceed from a saddle point to the nearest dS minimum.

  1. Large volume flow-through scintillating detector

    DOEpatents

    Gritzo, Russ E.; Fowler, Malcolm M.

    1995-01-01

    A large-volume, flow-through radiation detector for use in large air flow situations, such as incinerator stacks or building air systems, comprises a plurality of flat plates made of a scintillating material arranged parallel to the air flow. Each scintillating plate has an attached light guide that transfers light generated inside the plate to an associated photomultiplier tube. The outputs of the photomultiplier tubes are connected to electronics which can record any radiation and provide an alarm if appropriate for the application.

  2. Large mode-volume, large beta, photonic crystal laser resonator

    SciTech Connect

    Dezfouli, Mohsen Kamandar; Dignam, Marc M.

    2014-12-15

    We propose an optical resonator formed from the coupling of 13 L2 defects in a triangular-lattice photonic crystal slab. Using a tight-binding formalism, we optimized the coupled-defect cavity design to obtain a resonator with predicted single-mode operation, a mode volume five times that of an L2-cavity mode, and a beta factor of 0.39. The results are confirmed using finite-difference time-domain simulations. This resonator is very promising for use as a single-mode photonic crystal vertical-cavity surface-emitting laser with high saturation output power compared to a laser consisting of one of the single-defect cavities.

  3. Mesoscale Ocean Large Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Pearson, Brodie; Fox-Kemper, Baylor; Bachman, Scott; Bryan, Frank

    2015-11-01

    The highest resolution global climate models (GCMs) can now resolve the largest scales of mesoscale dynamics in the ocean. This has the potential to increase the fidelity of GCMs. However, the effects of the smallest, unresolved, scales of mesoscale dynamics must still be parametrized. One such family of parametrizations is mesoscale ocean large eddy simulations (MOLES), but the effects of including MOLES in a GCM are not well understood. In this presentation, several MOLES schemes are implemented in a mesoscale-resolving GCM (CESM), and the resulting flow is compared with that produced by more traditional sub-grid parametrizations. Large eddy simulation (LES) is used to simulate flows where the largest scales of turbulent motion are resolved, but the smallest scales are not. LES has traditionally been used to study 3D turbulence, but recently it has also been applied to idealized 2D and quasi-geostrophic (QG) turbulence. The MOLES presented here are based on 2D and QG LES schemes.

  4. LARGE volume string compactifications at finite temperature

    NASA Astrophysics Data System (ADS)

    Anguelova, Lilia; Calò, Vincenzo; Cicoli, Michele

    2009-10-01

    We present a detailed study of the finite-temperature behaviour of the LARGE Volume type IIB flux compactifications. We show that certain moduli can thermalise at high temperatures. Despite that, their contribution to the finite-temperature effective potential is always negligible and the latter has a runaway behaviour. We compute the maximal temperature T_max, above which the internal space decompactifies, as well as the temperature T_*, which is reached after the decay of the heaviest moduli. The natural constraint T_* < T_max implies a lower bound on the allowed values of the internal volume V. We find that this restriction rules out a significant range of values corresponding to smaller volumes of the order V ~ 10^4 l_s^6, which lead to standard GUT theories. Instead, the bound favours values of the order V ~ 10^15 l_s^6, which lead to TeV-scale SUSY desirable for solving the hierarchy problem. Moreover, our result favours low-energy inflationary scenarios with density perturbations generated by a field which is not the inflaton. In such a scenario, one could achieve both inflation and TeV-scale SUSY, although gravity waves would not be observable. Finally, we pose a two-fold challenge for the solution of the cosmological moduli problem. First, we show that the heavy moduli decay before they can begin to dominate the energy density of the Universe. Hence they are not able to dilute any unwanted relics. And second, we argue that, in order to obtain thermal inflation in the closed string moduli sector, one needs to go beyond the present EFT description.

  5. SUSY's Ladder: reframing sequestering at Large Volume

    NASA Astrophysics Data System (ADS)

    Reece, Matthew; Xue, Wei

    2016-04-01

    Theories with approximate no-scale structure, such as the Large Volume Scenario, have a distinctive hierarchy of multiple mass scales in between TeV gaugino masses and the Planck scale, which we call SUSY's Ladder. This is a particular realization of Split Supersymmetry in which the same small parameter suppresses gaugino masses relative to scalar soft masses, scalar soft masses relative to the gravitino mass, and the UV cutoff or string scale relative to the Planck scale. This scenario has many phenomenologically interesting properties and can avoid dangers including the gravitino problem, flavor problems, and the moduli-induced LSP problem that plague other supersymmetric theories. We study SUSY's Ladder using a superspace formalism that makes the mysterious cancellations in previous computations manifest. This opens the possibility of a consistent effective field theory understanding of the phenomenology of these scenarios, based on power-counting in the small ratio of string to Planck scales. We also show that four-dimensional theories with approximate no-scale structure enforced by a single volume modulus arise only from two special higher-dimensional theories: five-dimensional supergravity and ten-dimensional type IIB supergravity. This gives a phenomenological argument in favor of ten-dimensional ultraviolet physics which is different from standard arguments based on the consistency of superstring theory.

  6. Comments on large-N volume independence

    SciTech Connect

    Poppitz, Erich; Unsal, Mithat; /SLAC /Stanford U., Phys. Dept.

    2010-06-02

    We study aspects of large-N volume independence on R^3 × L^Γ, where L^Γ is a Γ-site lattice for Yang-Mills theory with adjoint Wilson fermions. We find the critical number of lattice sites above which the center-symmetry analysis on L^Γ agrees with the one on the continuum S^1. For the Wilson parameter set to one and Γ ≥ 2, the two analyses agree. One-loop radiative corrections to Wilson-line masses are finite, reminiscent of the UV-insensitivity of the Higgs mass in deconstruction/Little-Higgs theories. Even for theories with Γ = 1, volume independence in QCD(adj) may be guaranteed to work by tuning one low-energy effective field theory parameter. Within the parameter space of the theory, at most three operators of the 3d effective field theory exhibit one-loop UV-sensitivity. This opens the analytical prospect of studying 4d non-perturbative physics by using lower-dimensional field theories (d = 3, in our example).

  7. Temporal Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. D.; Thomas, B. C.

    2004-01-01

    In 1999, Stolz and Adams unveiled a subgrid-scale model for LES based upon approximately inverting (defiltering) the spatial grid-filter operator, termed the approximate deconvolution model (ADM). Subsequently, the utility and accuracy of the ADM were demonstrated in a posteriori analyses of flows as diverse as incompressible plane-channel flow and supersonic compression-ramp flow. In a prelude to the current paper, a parameterized temporal ADM (TADM) was developed and demonstrated in both a priori and a posteriori analyses for forced, viscous Burgers flow. The development of a time-filtered variant of the ADM was motivated primarily by the desire for a unifying theoretical and computational context to encompass direct numerical simulation (DNS), large-eddy simulation (LES), and Reynolds-averaged Navier-Stokes (RANS) simulation. The resultant methodology was termed temporal LES (TLES). To permit exploration of the parameter space, however, previous analyses of the TADM were restricted to Burgers flow, and it has remained to demonstrate the TADM and TLES methodology for three-dimensional flow. For several reasons, plane-channel flow presents an ideal test case for the TADM. Among these reasons, channel flow is anisotropic, yet it lends itself to highly efficient and accurate spectral numerical methods. Moreover, channel flow has been investigated extensively by DNS, and the highly accurate database of Moser et al. exists. In the present paper, we develop a fully anisotropic TADM model and demonstrate its utility in simulating incompressible plane-channel flow at nominal values of Re_τ = 180 and Re_τ = 590 by the TLES method. The TADM model is shown to perform nearly as well as the ADM at equivalent resolution, thereby establishing TLES as a viable alternative to LES. Moreover, as the current model is suboptimal in some respects, there is considerable room to improve TLES.
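
    The defiltering idea at the heart of the ADM is a truncated van Cittert series. A one-dimensional sketch with a simple top-hat stand-in for the grid filter; the deconvolution order and the filter choice are illustrative, not the paper's:

      import numpy as np

      def grid_filter(u):
          # Three-point top-hat filter standing in for the LES grid filter G.
          return 0.25 * (np.roll(u, 1) + 2.0 * u + np.roll(u, -1))

      def approximate_deconvolution(u_filtered, order=5):
          # Truncated van Cittert series u* = sum_{k=0}^{N} (I - G)^k u_bar,
          # an approximate inverse of the filter applied to the filtered field.
          u_star = np.zeros_like(u_filtered)
          term = u_filtered.copy()
          for _ in range(order + 1):
              u_star += term
              term = term - grid_filter(term)   # apply (I - G) once more
          return u_star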

  8. A new large-volume multianvil system

    NASA Astrophysics Data System (ADS)

    Frost, D. J.; Poe, B. T.; Trønnes, R. G.; Liebske, C.; Duba, A.; Rubie, D. C.

    2004-06-01

    A scaled-up version of the 6-8 Kawai-type multianvil apparatus has been developed at the Bayerisches Geoinstitut for operation over ranges of pressure and temperature attainable in conventional systems, but with much larger sample volumes. This split-cylinder multianvil system is used with a hydraulic press that can generate loads of up to 5000 t (50 MN). The six tool-steel outer anvils define a cubic cavity of 100 mm edge length in which eight 54 mm tungsten carbide cubic inner anvils are compressed. Experiments are performed using Cr2O3-doped MgO octahedra and pyrophyllite gaskets. Pressure calibrations at room temperature and high temperature have been performed with 14/8, 18/8, 18/11, 25/17 and 25/15 OEL/TEL (octahedral edge length/anvil truncation edge length, in millimetres) configurations. All configurations tested reach a limiting plateau where the sample pressure no longer increases with applied load. Calibrations with different configurations show that greater sample-pressure efficiency can be achieved by increasing the OEL/TEL ratio. With the 18/8 configuration the GaP transition is reached at a load of 2500 t, whereas using the 14/8 assembly this pressure cannot be reached even at substantially higher loads. With an applied load of 2000 t, the 18/8 configuration can produce MgSiO3 perovskite at 1900 °C with a sample volume of ~20 mm^3, compared with <3 mm^3 in conventional multianvil systems at the same conditions. The large octahedron size and the use of a stepped LaCrO3 heater also result in significantly lower thermal gradients over the sample.

  9. Radiation from Large Gas Volumes and Heat Exchange in Steam Boiler Furnaces

    SciTech Connect

    Makarov, A. N.

    2015-09-15

    Radiation from large cylindrical gas volumes is studied as a means of simulating the flare in steam boiler furnaces. Calculations of heat exchange in a furnace by the zonal method and by simulation of the flare with cylindrical gas volumes are described. The latter method is more accurate and yields more reliable information on heat transfer processes taking place in furnaces.

  10. Large area pulsed solar simulator

    NASA Technical Reports Server (NTRS)

    Kruer, Mark A. (Inventor)

    1999-01-01

    An advanced solar simulator illuminates the surface of a very large solar array, such as one twenty feet by twenty feet in area, from a distance of about twenty-six feet with an essentially uniform intensity field of pulsed light of an intensity of one AM0, enabling the solar array to be efficiently tested with light that emulates the sun. Light modifiers sculpt a portion of the light generated by an electrically powered, high-power xenon lamp and, together with direct light from the lamp, provide uniform-intensity illumination throughout the solar array, compensating for the square-law and cosine-law reduction in direct light intensity, particularly at the corner locations of the array. At any location within the array the sum of the direct light and reflected light is essentially constant.
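
    The square-law and cosine-law falloff being compensated is simple to quantify. For a point source on the array's central axis (a simplification of the actual lamp-plus-modifier geometry), the direct-light intensity at an off-axis point relative to the center is:

      import math

      def relative_direct_intensity(x, y, distance):
          # Inverse-square times cosine (obliquity) factor for an on-axis
          # point source, relative to the intensity at the array center.
          r_sq = x**2 + y**2 + distance**2
          cos_theta = distance / math.sqrt(r_sq)
          return (distance**2 / r_sq) * cos_theta

      # Corner of a 20 ft x 20 ft array lit from 26 ft (dimensions from the abstract):
      print(relative_direct_intensity(10.0, 10.0, 26.0))   # ~0.68 of center value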

  11. Lagrangian volume deformations around simulated galaxies

    NASA Astrophysics Data System (ADS)

    Robles, S.; Domínguez-Tenreiro, R.; Oñorbe, J.; Martínez-Serrano, F. J.

    2015-07-01

    We present a detailed analysis of the local evolution of 206 Lagrangian Volumes (LVs) selected at high redshift around galaxy seeds, identified in a large-volume Λ cold dark matter (ΛCDM) hydrodynamical simulation. The LVs have a mass range of 1-1500 × 10^10 M⊙. We follow the dynamical evolution of the density field inside these initially spherical LVs from z = 10 up to z_low = 0.05, witnessing highly non-linear, anisotropic mass rearrangements within them, leading to the emergence of the local cosmic web (CW). These mass arrangements have been analysed in terms of the reduced inertia tensor I^r_ij, focusing on the evolution of the principal axes of inertia and their corresponding eigendirections, and paying particular attention to the times when the evolution of these two structural elements declines. In addition, mass and component effects along this process have also been investigated. We have found that deformations are led by dark matter dynamics and they transform most of the initially spherical LVs into prolate shapes, i.e. filamentary structures. An analysis of the individual freezing-out time distributions for shapes and eigendirections shows that most of the LVs first fix their three axes of symmetry (like a skeleton) early on, while accretion flows towards them still continue. Very remarkably, we have found that more massive LVs fix their skeleton earlier than less massive ones. We briefly discuss the astrophysical implications our findings could have, including the galaxy mass-morphology relation and the effects on the galaxy-galaxy merger parameter space, among others.
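
    The reduced inertia tensor used here weights each particle by 1/r^2, which makes the shape measure insensitive to the overall scale of the volume. A sketch, assuming particle positions are given relative to the LV centre:

      import numpy as np

      def reduced_inertia_tensor(x):
          # I^r_ij = sum_n x_i x_j / r^2 over particles; x has shape (N, 3).
          r_sq = np.einsum('ni,ni->n', x, x)
          x, r_sq = x[r_sq > 0], r_sq[r_sq > 0]
          tensor = np.einsum('ni,nj->ij', x / r_sq[:, None], x)
          vals, vecs = np.linalg.eigh(tensor)    # principal axes and frame
          axes = np.sqrt(vals)                   # ~ principal semi-axes
          return axes / axes.max(), vecs         # axis ratios, eigendirections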

  12. Finite volume hydromechanical simulation in porous media

    PubMed Central

    Nordbotten, Jan Martin

    2014-01-01

    Cell-centered finite volume methods are prevailing in numerical simulation of flow in porous media. However, due to the lack of cell-centered finite volume methods for mechanics, coupled flow and deformation is usually treated either by coupled finite-volume-finite element discretizations, or within a finite element setting. The former approach is unfavorable as it introduces two separate grid structures, while the latter approach loses the advantages of finite volume methods for the flow equation. Recently, we proposed a cell-centered finite volume method for elasticity. Herein, we explore the applicability of this novel method to provide a compatible finite volume discretization for coupled hydromechanic flows in porous media. We detail in particular the issue of coupling terms, and show how this is naturally handled. Furthermore, we observe how the cell-centered finite volume framework naturally allows for modeling fractured and fracturing porous media through internal boundary conditions. We support the discussion with a set of numerical examples: the convergence properties of the coupled scheme are first investigated; second, we illustrate the practical applicability of the method both for fractured and heterogeneous media. PMID:25574061

  13. Finite volume hydromechanical simulation in porous media

    NASA Astrophysics Data System (ADS)

    Nordbotten, Jan Martin

    2014-05-01

    Cell-centered finite volume methods are prevailing in numerical simulation of flow in porous media. However, due to the lack of cell-centered finite volume methods for mechanics, coupled flow and deformation is usually treated either by coupled finite-volume-finite element discretizations, or within a finite element setting. The former approach is unfavorable as it introduces two separate grid structures, while the latter approach loses the advantages of finite volume methods for the flow equation. Recently, we proposed a cell-centered finite volume method for elasticity. Herein, we explore the applicability of this novel method to provide a compatible finite volume discretization for coupled hydromechanic flows in porous media. We detail in particular the issue of coupling terms, and show how this is naturally handled. Furthermore, we observe how the cell-centered finite volume framework naturally allows for modeling fractured and fracturing porous media through internal boundary conditions. We support the discussion with a set of numerical examples: the convergence properties of the coupled scheme are first investigated; second, we illustrate the practical applicability of the method both for fractured and heterogeneous media.

  14. Changes in leg volume during microgravity simulation

    NASA Technical Reports Server (NTRS)

    Thornton, William E.; Hedge, Vickie; Coleman, Eugene; Uri, John J.; Moore, Thomas P.

    1992-01-01

    Little published information exists regarding the magnitude and time course of the cephalad fluid shift resulting from microgravity simulations. Six subjects were exposed to 150 min each of horizontal bed rest, 6-deg head-down tilt, and horizontal water immersion. Fluid shift was estimated by calculating leg volumes from eight serial girth measurements from groin to ankle before, during, and after exposure. Results were compared with data from the first 3 h of spaceflight. By the end of exposure, total leg volume for the six subjects decreased by 2.6 +/- 0.8 percent, 1.7 +/- 1.2 percent, and 4.0 +/- 1.6 percent for horizontal, head-down, and immersion, respectively. Changes had plateaued for horizontal and head-down and had slowed for immersion. Relatively more fluid was lost from the lower leg than the thigh for all three conditions, particularly head-down. During the first 3 h of spaceflight, total leg volume decreased by 8.6 percent, and relatively more fluid was lost from the thigh than the lower leg. The difference in volume changes in microgravity and simulated microgravity may be caused by the small transverse pressures still present in ground-based simulations and the extremely nonlinear compliance of tissue.
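
    Leg volume is typically reconstructed from serial girths by treating each segment between adjacent measurement sites as a truncated cone. A sketch of that calculation; the segment lengths are whatever the measurement protocol used and are assumed known here:

      import math

      def leg_volume(girths, segment_lengths):
          # Volume from serial circumferences (groin to ankle), summing
          # frustum volumes V = pi*h*(r1^2 + r1*r2 + r2^2)/3 per segment.
          total = 0.0
          for g1, g2, h in zip(girths, girths[1:], segment_lengths):
              r1, r2 = g1 / (2.0 * math.pi), g2 / (2.0 * math.pi)
              total += math.pi * h * (r1**2 + r1 * r2 + r2**2) / 3.0
          return total   # same cubic units as the inputs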

  15. Analysis of volume holographic storage allowing large-angle illumination

    NASA Astrophysics Data System (ADS)

    Shamir, Joseph

    2005-05-01

    Advanced technological developments have stimulated renewed interest in volume holography for applications such as information storage and wavelength multiplexing for communications and laser beam shaping. In these and many other applications, the information-carrying wave fronts usually possess narrow spatial-frequency bands, although they may propagate at large angles with respect to each other or a preferred optical axis. Conventional analytic methods are not capable of properly analyzing the optical architectures involved. For mitigation of the analytic difficulties, a novel approximation is introduced to treat narrow spatial-frequency band wave fronts propagating at large angles. This approximation is incorporated into the analysis of volume holography based on a plane-wave decomposition and Fourier analysis. As a result of the analysis, the recently introduced generalized Bragg selectivity is rederived for this more general case and is shown to provide enhanced performance for the above indicated applications. The power of the new theoretical description is demonstrated with the help of specific examples and computer simulations. The simulations reveal some interesting effects, such as coherent motion blur, that were predicted in an earlier publication.

  16. Large-volume sampling and preconcentration for trace explosives detection.

    SciTech Connect

    Linker, Kevin Lane

    2004-05-01

    A trace explosives detection system typically contains three subsystems: sample collection, preconcentration, and detection. Sample collection of trace explosives (vapor and particulate) through large volumes of airflow helps reduce sampling time while increasing the amount of dilute sample collected. Preconcentration of the collected sample before introduction into the detector improves the sensitivity of the detector because of the increase in sample concentration. By combining large-volume sample collection and preconcentration, an improvement in the detection of explosives is possible. Large-volume sampling and preconcentration is presented using a systems level approach. In addition, the engineering of large-volume sampling and preconcentration for the trace detection of explosives is explained.

  17. Large space systems technology, 1980, volume 1

    NASA Technical Reports Server (NTRS)

    Kopriver, F., III (Compiler)

    1981-01-01

    The technological and developmental efforts in support of large space systems technology are described. Three major areas of interest are emphasized: (1) technology pertinent to large antenna systems; (2) technology related to large platform systems; and (3) activities that support both antenna and platform systems.

  18. Simulation of hydrodynamics using large eddy simulation-second-order moment model in circulating fluidized beds

    NASA Astrophysics Data System (ADS)

    Juhui, Chen; Yanjia, Tang; Dan, Li; Pengfei, Xu; Huilin, Lu

    2013-07-01

    The flow behavior of gas and particles in a circulating fluidized bed (CFB) is predicted by a large eddy simulation of the gas coupled with a second-order moment model for the solids (the LES-SOM model). This study shows that the solid volume fractions along the height, simulated using a two-dimensional model, are in agreement with experiments. The velocity, volume fraction and second-order moments of particles are computed. The second-order moments of clusters are calculated. The solid volume fraction, velocity and second-order moments are compared for three different model constants.

  19. Technologies for imaging neural activity in large volumes.

    PubMed

    Ji, Na; Freeman, Jeremy; Smith, Spencer L

    2016-08-26

    Neural circuitry has evolved to form distributed networks that act dynamically across large volumes. Conventional microscopy collects data from individual planes and cannot sample circuitry across large volumes at the temporal resolution relevant to neural circuit function and behaviors. Here we review emerging technologies for rapid volume imaging of neural circuitry. We focus on two critical challenges: the inertia of optical systems, which limits image speed, and aberrations, which restrict the image volume. Optical sampling time must be long enough to ensure high-fidelity measurements, but optimized sampling strategies and point-spread function engineering can facilitate rapid volume imaging of neural activity within this constraint. We also discuss new computational strategies for processing and analyzing volume imaging data of increasing size and complexity. Together, optical and computational advances are providing a broader view of neural circuit dynamics and helping elucidate how brain regions work in concert to support behavior. PMID:27571194

  20. Large Volume Injection Techniques in Capillary Gas Chromatography

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Large volume injection (LVI) is a prerequisite of modern gas chromatographic (GC) analysis, especially when trace sample components have to be determined at very low concentration levels. Injection of larger than usual sample volumes increases sensitivity and/or reduces (or even eliminates) the need...

  1. Large volume continuous counterflow dialyzer has high efficiency

    NASA Technical Reports Server (NTRS)

    Mandeles, S.; Woods, E. C.

    1967-01-01

    Dialyzer separates macromolecules from small molecules in large volumes of solution. It takes advantage of the high area/volume ratio in commercially available 1/4-inch dialysis tubing and maintains a high concentration gradient at the dialyzing surface by counterflow.
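
    The area/volume advantage follows directly from cylinder geometry: lateral surface area over enclosed volume is A/V = 4/d, so it is the narrow bore, not the tubing length, that matters. A quick check:

      def area_to_volume_ratio(diameter):
          # For a cylinder: (pi*d*L) / (pi*d^2*L/4) = 4/d, independent of length.
          return 4.0 / diameter

      # 1/4-inch (0.635 cm) dialysis tubing vs. a 5 cm-wide dialysis bag:
      print(area_to_volume_ratio(0.635))   # ~6.3 per cm
      print(area_to_volume_ratio(5.0))     # 0.8 per cm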

  2. Large Eddy Simulation of a Turbulent Jet

    NASA Technical Reports Server (NTRS)

    Webb, A. T.; Mansour, Nagi N.

    2001-01-01

    Here we present the results of a Large Eddy Simulation of a non-buoyant jet issuing from a circular orifice in a wall, and developing in neutral surroundings. The effects of the subgrid scales on the large eddies have been modeled with the dynamic large eddy simulation model applied to the fully 3D domain in spherical coordinates. The simulation captures the unsteady motions of the large-scales within the jet as well as the laminar motions in the entrainment region surrounding the jet. The computed time-averaged statistics (mean velocity, concentration, and turbulence parameters) compare well with laboratory data without invoking an empirical entrainment coefficient as employed by line integral models. The use of the large eddy simulation technique allows examination of unsteady and inhomogeneous features such as the evolution of eddies and the details of the entrainment process.

  3. Large-Volume Gravid Traps Enhance Collection of Culex Vectors.

    PubMed

    Popko, David A; Walton, William E

    2016-06-01

    Gravid mosquito collections were compared among several large-volume (infusion volume ≥ 35 liters) gravid trap designs and the small-volume (infusion volume = 6 liters) Centers for Disease Control and Prevention (CDC) gravid trap used routinely by vector control districts for vector and pathogen surveillance. The numbers of gravid Culex quinquefasciatus, Cx. tarsalis, and Cx. stigmatosoma collected by large gravid traps were greater than by the CDC gravid trap during nearly all overnight trials. Large-volume gravid traps collected on average 6.6-fold more adult female Culex mosquitoes than small-volume CDC gravid traps across 3 seasons during the 3 years of the studies. The differences in gravid mosquito collections between large- and small-volume gravid traps were greatest during spring, when 8- to 56-fold more Culex individuals were collected using large-volume gravid traps. The proportion of gravid females in collections did not differ appreciably among the more effective trap designs tested. Important determinants of gravid trap performance were infusion container size and type, as well as infusion volume, which determined the distance between the suction trap and the infusion surface. Of lesser importance for gravid trap performance were the number of suction traps, the method of suction trap mounting, and the infusion concentration. Fermentation of infusions between 1 and 4 wk weakly affected total mosquito collections, with Cx. stigmatosoma collections moderately enhanced by comparatively young and organically enriched infusions. A suction trap mounted above 100 liters of organic infusion housed in a 121-liter black plastic container collected the most gravid mosquitoes over the greatest range of experimental conditions, and a 35-liter infusion with side-mounted suction traps was a promising lesser-volume alternative design. PMID:27280347

  4. Large-Eddy Simulation and Multigrid Methods

    SciTech Connect

    Falgout, R. D.; Naegle, S.; Wittum, G.

    2001-06-18

    A method to simulate turbulent flows with Large-Eddy Simulation on unstructured grids is presented. Two kinds of dynamic models are used to model the unresolved scales of motion and are compared with each other on different grids. In this way the behavior of the models is shown, and in addition the feature of adaptive grid refinement is investigated. Furthermore, the parallelization aspect is addressed.

  5. Large-signal klystron simulations using KLSC

    SciTech Connect

    Carlsten, B.E.; Ferguson, P.

    1997-10-01

    The authors describe large-signal klystron simulations using the particle-in-cell code KLSC. This code uses the induced-current model to describe the steady-state cavity modulations and resulting rf fields, and advances the space-charge fields through Maxwell's equations. In this paper, an eight-cavity, high-power S-band klystron simulation is used to highlight various aspects of this simulation technique. In particular, there are specific issues associated with modeling the input cavity, the gain circuit, and the large-signal circuit (including the output cavities) that have to be treated carefully.

  6. Cosmological moduli problem in large volume scenario and thermal inflation

    SciTech Connect

    Choi, Kiwoon; Park, Wan-Il; Shin, Chang Sub

    2013-03-01

    We show that in a large volume scenario of type IIB string or F-theory compactifications, single thermal inflation provides only a partial solution to the cosmological problem of the light volume modulus. We then clarify the conditions for double thermal inflation, a simple extension of the usual single thermal inflation scenario, to solve the cosmological moduli problem in the case of relatively light moduli masses. Using a specific example, we demonstrate that double thermal inflation can be realized in the large volume scenario in a natural manner, and that the problem of the light volume modulus can be solved for the whole relevant mass range. We also find that the right amount of baryon asymmetry and dark matter can be obtained via a late-time Affleck-Dine mechanism and the decays of the visible sector NLSP to flatino LSP.

  7. New Large Volume Press Beamlines at the Canadian Light Source

    NASA Astrophysics Data System (ADS)

    Mueller, H. J.; Hormes, J.; Lauterjung, J.; Secco, R.; Hallin, E.

    2013-12-01

    The Canadian Light Source, the German Research Centre for Geosciences, and Western University recently agreed to establish two new large volume press (LVP) beamlines at the Canadian Light Source. As a first step, a 250-ton DIA-type LVP will be installed at the IDEAS beamline in 2014. Further development is associated with the construction of a superconducting wiggler beamline at the Brockhouse sector, where a 1750-ton DIA LVP will be installed about 2 years later. Until this wiggler beamline is completed, the larger press will be used for offline high-pressure, high-temperature experiments under simulated Earth's mantle conditions. In addition to X-ray diffraction, all up-to-date high-pressure techniques, such as ultrasonic interferometry, deformation analysis by X-radiography, X-ray densitometry, falling-sphere viscosimetry, multi-staging, etc., will be available at both beamlines. After the required commissioning, the beamlines will be open to the worldwide user community from the geosciences, general materials science, physics, chemistry, biology, etc., based on the evaluation and ranking of submitted user proposals by an international review panel.

  8. Large scale simulations of bidisperse emulsions and foams

    NASA Astrophysics Data System (ADS)

    Metsi, Efimia

    Emulsions and foams are of fundamental importance in a wide variety of industrial and natural processes. The macroscopic properties of these multiphase systems are determined by the viscous and interfacial interactions on the microscopic level. In previous research efforts, the realism of computer simulations has been limited by the cost of the computational algorithms, which scale as O(N²), where N is the number of droplets. In our research, we have developed a novel, fast and efficient algorithm which scales as O(N ln N). The algorithm has been implemented to simulate the low Reynolds number flow of large-scale systems of monodisperse and bidisperse droplet suspensions. A comprehensive study has been performed to examine the effective viscosity of these systems as a function of the overall volume fraction, volume fraction of small droplets, Capillary number and droplet size ratio. Monodisperse systems exhibit disorder-order transitions at high volume fractions and low Capillary numbers. Bidisperse systems show a tendency toward cluster formation with small droplets interspersed among large droplets. To determine if the cluster formation leads to phase separation, simulations have been performed with the two droplet species arranged in ordered layers. It is found that the initial layers are destroyed, and the two phases mix, yielding clusters of small and large droplets. The mixing of the two phases and the cluster formation are investigated through linear and radial pairwise distribution functions of the two droplet species.
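
    The radial pairwise distribution function mentioned at the end of this record is straightforward to compute directly. Below is a minimal numpy sketch for two droplet species in a periodic cubic box; the function name, box size, and binning are illustrative assumptions, and the brute-force pair loop costs O(N_a·N_b), exactly the kind of scaling the paper's O(N ln N) algorithm avoids in the flow solve itself.

    ```python
    import numpy as np

    def radial_pdf(pos_a, pos_b, box, r_max, n_bins=50):
        """Radial pair-distribution function g(r) between two droplet
        species in a periodic cubic box (brute-force O(N_a*N_b) pairs)."""
        edges = np.linspace(0.0, r_max, n_bins + 1)
        counts = np.zeros(n_bins)
        for p in pos_a:
            d = pos_b - p
            d -= box * np.round(d / box)           # minimum-image convention
            r = np.linalg.norm(d, axis=1)
            counts += np.histogram(r[r > 0], bins=edges)[0]
        # normalize by the ideal-gas expectation in each spherical shell
        shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
        density_b = len(pos_b) / box**3
        r_mid = 0.5 * (edges[1:] + edges[:-1])
        return r_mid, counts / (len(pos_a) * density_b * shell_vol)

    rng = np.random.default_rng(0)
    small = rng.uniform(0.0, 10.0, size=(200, 3))  # small-droplet centers
    large = rng.uniform(0.0, 10.0, size=(100, 3))  # large-droplet centers
    r, g_sl = radial_pdf(small, large, box=10.0, r_max=5.0)
    print(g_sl.mean())  # ~1 for an uncorrelated (well-mixed) system
    ```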

  9. Indian LSSC (Large Space Simulation Chamber) facility

    NASA Technical Reports Server (NTRS)

    Brar, A. S.; Prasadarao, V. S.; Gambhir, R. D.; Chandramouli, M.

    1988-01-01

    The Indian Space Agency has undertaken a major project to acquire in-house capability for thermal and vacuum testing of large satellites. This Large Space Simulation Chamber (LSSC) facility will be located in Bangalore and is to be operational in 1989. The facility is capable of providing 4 meter diameter solar simulation with provision to expand to 4.5 meter diameter at a later date. With such provisions as controlled variations of shroud temperatures and availability of infrared equipment as alternative sources of thermal radiation, this facility will be amongst the finest anywhere. The major design concept and major aspects of the LSSC facility are presented here.

  10. Molecular dynamics simulations of large macromolecular complexes

    PubMed Central

    Perilla, Juan R.; Goh, Boon Chong; Cassidy, C. Keith; Liu, Bo; Bernardi, Rafael C.; Rudack, Till; Yu, Hang; Wu, Zhe; Schulten, Klaus

    2015-01-01

    Connecting dynamics to structural data from diverse experimental sources, molecular dynamics simulations permit the exploration of biological phenomena in unparalleled detail. Advances in simulations are moving the atomic resolution descriptions of biological systems into the million-to-billion atom regime, in which numerous cell functions reside. In this opinion, we review the progress, driven by large-scale molecular dynamics simulations, in the study of viruses, ribosomes, bioenergetic systems, and other diverse applications. These examples highlight the utility of molecular dynamics simulations in the critical task of relating atomic detail to the function of supramolecular complexes, a task that cannot be achieved by smaller-scale simulations or existing experimental approaches alone. PMID:25845770

  11. REXOR 2 rotorcraft simulation model. Volume 1: Engineering documentation

    NASA Technical Reports Server (NTRS)

    Reaser, J. S.; Kretsinger, P. H.

    1978-01-01

    A rotorcraft nonlinear simulation called REXOR II, divided into three volumes, is described. The first volume is a development of rotorcraft mechanics and aerodynamics. The second is a development and explanation of the computer code required to implement the equations of motion. The third volume is a user's manual, and contains a description of code input/output as well as operating instructions.

  12. Large-Volume High-Pressure Mineral Physics in Japan

    NASA Astrophysics Data System (ADS)

    Liebermann, Robert C.; Prewitt, Charles T.; Weidner, Donald J.

    American high-pressure research with large sample volumes developed rapidly in the 1950s during the race to produce synthetic diamonds. At that time the piston cylinder, girdle (or belt), and tetrahedral anvil devices were invented. However, this development essentially stopped in the late 1950s, and while the diamond anvil cell has been used extensively in the United States with spectacular success for high-pressure experiments in small sample volumes, most of the significant technological advances in large-volume devices have taken place in Japan. Over the past 25 years, these technical advances have enabled a fourfold increase in pressure, with many important investigations of the chemical and physical properties of materials synthesized at high temperatures and pressures that cannot be duplicated with any apparatus currently available in the United States.

  13. A Warm Magnetoactive Plasma in a Large Volume of Space

    NASA Technical Reports Server (NTRS)

    Heiles, C.

    1984-01-01

    A diffuse ionized warm gas fills a large volume of space in the general direction of Radio Loop II. There are three types of observational evidence: Faraday rotation measures (RM's) of extragalactic sources; emission measures (EM's) derived from the H alpha emission line in the diffuse interstellar medium; and magnetic field strengths in HI clouds derived from Zeeman splitting observations.
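
    For reference, the two line-of-sight integrals behind these diagnostics have standard definitions (quoted from common usage, not from this record; n_e is the electron density in cm^-3, B_parallel the field component along the line of sight in microgauss, and dl the path length in parsecs):

    ```latex
    % Standard definitions of rotation measure and emission measure
    % (units: n_e in cm^{-3}, B_parallel in muG, dl in pc).
    \mathrm{RM} = 0.812 \int n_e \, B_{\parallel} \, \mathrm{d}l
      \quad [\mathrm{rad\,m^{-2}}], \qquad
    \mathrm{EM} = \int n_e^{2} \, \mathrm{d}l
      \quad [\mathrm{cm^{-6}\,pc}]
    ```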

  14. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  15. Large discharge-volume, silent discharge spark plug

    DOEpatents

    Kang, Michael

    1995-01-01

    A large discharge-volume spark plug for providing self-limiting microdischarges. The apparatus includes a generally spark plug-shaped arrangement of a pair of electrodes, where either of the two coaxial electrodes is substantially shielded by a dielectric barrier from a direct discharge from the other electrode, the unshielded electrode and the dielectric barrier forming an annular volume in which self-terminating microdischarges occur when alternating high voltage is applied to the center electrode. The large area over which the discharges occur, and the large number of possible discharges within the period of an engine cycle, make the present silent discharge plasma spark plug suitable for use as an ignition source for engines. In situations where a single discharge is effective in causing ignition of the combustible gases, a conventional single-polarity, single-pulse spark plug voltage supply may be used.

  16. Large eddy simulation in the ocean

    NASA Astrophysics Data System (ADS)

    Scotti, Alberto

    2010-12-01

    Large eddy simulation (LES) is a relative newcomer to oceanography. In this review, both applications of traditional LES to oceanic flows and new oceanic LES still in an early stage of development are discussed. The survey covers LES applied to boundary layer flows, traditionally an area where LES has provided considerable insight into the physics of the flow, as well as more innovative applications, where new SGS closure schemes need to be developed. The merging of LES with large-scale models is also briefly reviewed.

  17. Spatial considerations during cryopreservation of a large volume sample.

    PubMed

    Kilbride, Peter; Lamb, Stephen; Milne, Stuart; Gibbons, Stephanie; Erro, Eloy; Bundy, James; Selden, Clare; Fuller, Barry; Morris, John

    2016-08-01

    There have been relatively few studies on the implications of the physical conditions experienced by cells during large volume (litres) cryopreservation - most studies have focused on the problem of cryopreservation of smaller volumes, typically up to 2 ml. This study explores the effects of ice growth by progressive solidification, generally seen during larger scale cryopreservation, on encapsulated liver hepatocyte spheroids, and it develops a method to reliably sample different regions across the frozen cores of samples experiencing progressive solidification. These issues are examined in the context of a Bioartificial Liver Device, which requires cryopreservation of a 2 L volume in a strict cylindrical geometry for optimal clinical delivery. Progressive solidification cannot be avoided in this arrangement. In such a system, optimal cryoprotectant concentrations and cooling rates are known. However, applying these parameters to a large volume is challenging due to the thermal mass and subsequent thermal lag, so the specific impact of this on the cryopreservation outcome needs to be established. Under conditions of progressive solidification, the spatial location of Encapsulated Liver Spheroids had a strong impact on post-thaw recovery. Cells in areas first and last to solidify demonstrated significantly impaired post-thaw function, whereas areas solidifying through the majority of the process exhibited better post-thaw outcomes. It was also found that samples where the ice thawed more rapidly had greater viability 24 h post-thaw (75.7 ± 3.9% vs. 62.0 ± 7.2%). These findings have implications for the cryopreservation of large volumes with a rigid shape and for the cryopreservation of a Bioartificial Liver Device. PMID:27256662

  18. Ray Casting of Large Multi-Resolution Volume Datasets

    NASA Astrophysics Data System (ADS)

    Lux, C.; Fröhlich, B.

    2009-04-01

    High quality volume visualization through ray casting on graphics processing units (GPU) has become an important approach for many application domains. We present a GPU-based, multi-resolution ray casting technique for the interactive visualization of massive volume data sets commonly found in the oil and gas industry. Large volume data sets are represented as a multi-resolution hierarchy based on an octree data structure. The original volume data is decomposed into small bricks of a fixed size acting as the leaf nodes of the octree. These nodes are the highest resolution of the volume. Coarser resolutions are represented through inner nodes of the hierarchy which are generated by downsampling eight neighboring nodes on a finer level. Due to limited memory resources of current desktop workstations and graphics hardware only a limited working set of bricks can be locally maintained for a frame to be displayed. This working set is chosen to represent the whole volume at different local resolution levels depending on the current viewer position, transfer function and distinct areas of interest. During runtime the working set of bricks is maintained in CPU and GPU memory and is adaptively updated by asynchronously fetching data from external sources like hard drives or a network. The CPU memory hereby acts as a secondary level cache for these sources from which the GPU representation is updated. Our volume ray casting algorithm is based on a 3D texture-atlas in GPU memory. This texture-atlas contains the complete working set of bricks of the current multi-resolution representation of the volume. This enables the volume ray casting algorithm to access the whole working set of bricks through only a single 3D texture. For traversing rays through the volume, information about the locations and resolution levels of visited bricks is required for correct compositing computations. We encode this information into a small 3D index texture which represents the current octree
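
    The multi-resolution hierarchy described above is easy to prototype. The following Python/numpy sketch (brick size and volume shape are assumptions, and the GPU texture atlas, index texture, and paging machinery are omitted) shows only how coarser levels arise by averaging 2x2x2 voxel blocks, i.e., eight child bricks per inner node.

    ```python
    import numpy as np

    def build_pyramid(volume, brick=32):
        """Multi-resolution pyramid in the spirit of the octree scheme:
        level 0 is the full-resolution volume (split into fixed-size
        bricks by the renderer); each coarser level averages disjoint
        2x2x2 voxel blocks. Assumes power-of-two dimensions."""
        levels = [volume]
        v = volume
        while min(v.shape) >= 2 * brick:
            v = v.reshape(v.shape[0] // 2, 2, v.shape[1] // 2, 2,
                          v.shape[2] // 2, 2).mean(axis=(1, 3, 5))
            levels.append(v)
        return levels

    vol = np.random.rand(256, 256, 256).astype(np.float32)
    for i, lv in enumerate(build_pyramid(vol)):
        print(f"level {i}: {lv.shape}")   # 256^3, 128^3, 64^3, 32^3
    ```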

  19. AdS/CFT and Large-N Volume Independence

    SciTech Connect

    Poppitz, Erich; Unsal, Mithat (SLAC; Stanford U., Phys. Dept.)

    2010-08-26

    We study the Eguchi-Kawai reduction in the strong-coupling domain of gauge theories via the gravity dual of N=4 super-Yang-Mills on R³ × S¹. We show that D-branes geometrize volume independence in the center-symmetric vacuum and give supergravity predictions for the range of validity of reduced large-N models at strong coupling.

  20. Large volume multiple-path nuclear pumped laser

    SciTech Connect

    Hohl, F.; Deyoung, R.J.

    1981-11-01

    Large volumes of gas are excited by using internal high reflectance mirrors that are arranged so that the optical path crosses back and forth through the excited gaseous medium. By adjusting the external dielectric mirrors of the laser, the number of paths through the laser cavity can be varied. Output powers were obtained that are substantially higher than the output powers of previous nuclear laser systems.

  1. Population generation for large-scale simulation

    NASA Astrophysics Data System (ADS)

    Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul

    2005-05-01

    Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat [1-3]. Multi-agent simulations in particular are now commonplace in many fields [4, 5]. By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed form solutions are difficult to solve or impossible to derive [6]. To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application [7, 8]. Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large scale social simulations [9-11]. One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more
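
    As a toy illustration of the attribute-based population generation described here (all proportions, attribute names, and distributions below are hypothetical stand-ins, not values from the Hats Simulator), a short Python sketch:

    ```python
    import random

    random.seed(1)
    TYPES = ["benign", "known_terrorist", "covert_terrorist"]

    def make_population(n, p_known=0.01, p_covert=0.02):
        """Toy population generator: each hat gets independent attribute
        draws plus a hidden type; covert hats look benign by construction."""
        hats = []
        for i in range(n):
            hat_type = random.choices(
                TYPES, weights=[1 - p_known - p_covert, p_known, p_covert])[0]
            hats.append({
                "id": i,
                "type": hat_type,
                # observable attributes: independent draws, as in the text
                "mobility": random.gauss(0.5, 0.15),
                "sociability": random.gauss(0.5, 0.15),
                # only *known* terrorists are visibly flagged
                "flagged": hat_type == "known_terrorist",
            })
        return hats

    pop = make_population(10_000)
    print(sum(h["flagged"] for h in pop), "visibly flagged of", len(pop))
    ```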

  2. Statistical Ensemble of Large Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large scale velocity fields is used to propose an ensemble-averaged version of the dynamic model. This produces local model parameters that only depend on the statistical properties of the flow. An important property of the ensemble-averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LES's provides statistics of the large scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble-averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time-developing plane wake. It is found that the results are almost independent of the number of LES's in the statistical ensemble provided that the ensemble contains at least 16 realizations.
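
    In the dynamic procedure the model coefficient is the ratio of two contractions of the Germano-identity tensors; the ensemble-averaged variant replaces spatial averaging by an average over realizations, yielding a local coefficient at every grid point. A minimal numpy sketch (array shapes and the division guard are illustrative assumptions):

    ```python
    import numpy as np

    def ensemble_dynamic_coefficient(L, M):
        """Ensemble-averaged dynamic procedure (a minimal sketch):
        L and M are the Germano-identity tensors with shape
        (n_realizations, nx, ny, nz, 3, 3). Contracting over the tensor
        indices and averaging over realizations, not over space,
        yields a local coefficient field C(x)."""
        num = np.einsum('r...ij,r...ij->r...', L, M).mean(axis=0)
        den = np.einsum('r...ij,r...ij->r...', M, M).mean(axis=0)
        return num / np.maximum(den, 1e-30)  # guard against division by zero

    # toy shapes: 16 realizations on a 32^3 grid
    rng = np.random.default_rng(0)
    L = rng.standard_normal((16, 32, 32, 32, 3, 3))
    M = rng.standard_normal((16, 32, 32, 32, 3, 3))
    C = ensemble_dynamic_coefficient(L, M)
    print(C.shape)   # (32, 32, 32): one parameter per grid point
    ```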

  3. Staged-volume radiosurgery for large arteriovenous malformations: a review.

    PubMed

    AlKhalili, Kenan; Chalouhi, Nohra; Tjoumakaris, Stavropoula; Rosenwasser, Robert; Jabbour, Pascal

    2014-09-01

    Stereotactic radiosurgery is an effective management strategy for properly selected patients with arteriovenous malformations (AVMs). However, the risk of postradiosurgical radiation-related injury is higher in patients with large AVMs. Multistaged volumetric management of large AVMs was undertaken to limit the radiation exposure to the surrounding normal brain. This strategy offers a promising method for obtaining high AVM obliteration rates with minimal normal tissue damage. The use of embolization as an adjunctive method in the treatment of large AVMs remains controversial. Unfortunately, staged-volume radiosurgery (SVR) has a number of potential pitfalls that affect the outcome. The aim of this article is to highlight the role of SVR in the treatment of large AVMs, to compare its outcomes with those of other treatment modalities, and to discuss potential improvements to this method of treatment. PMID:25175440

  4. Parallel Rendering of Large Time-Varying Volume Data

    NASA Technical Reports Server (NTRS)

    Garbutt, Alexander E.

    2005-01-01

    Interactive visualization of large time-varying 3D volume datasets has been and still is a great challenge to the modern computational world. It stretches the limits of the memory capacity, the disk space, the network bandwidth and the CPU speed of a conventional computer. In this SURF project, we propose to develop a parallel volume rendering program on SGI's Prism, a cluster computer equipped with state-of-the-art graphics hardware. The proposed program combines both parallel computing and hardware rendering in order to achieve an interactive rendering rate. We use 3D texture mapping and a hardware shader to implement 3D volume rendering on each workstation. We use SGI's VisServer to enable remote rendering using Prism's graphics hardware. And last, we will integrate this new program with ParVox, a parallel distributed visualization system developed at JPL. At the end of the project, we will demonstrate remote interactive visualization using this new hardware volume renderer on JPL's Prism system using a time-varying dataset from selected JPL applications.

  5. Large volume high-pressure cell for inelastic neutron scattering.

    PubMed

    Wang, W; Sokolov, D A; Huxley, A D; Kamenev, K V

    2011-07-01

    Inelastic neutron scattering measurements typically require two orders of magnitude longer data collection times and larger sample sizes than neutron diffraction studies. Inelastic neutron scattering measurements on pressurised samples are particularly challenging since standard high-pressure apparatus restricts sample volume, attenuates the incident and scattered beams, and contributes background scattering. Here, we present the design of a large volume two-layered piston-cylinder pressure cell with optimised transmission for inelastic neutron scattering experiments. The design and the materials selected for the construction of the cell enable its safe use to a pressure of 1.8 GPa with a sample volume in excess of 400 mm³. The design of the piston seal eliminates the need for a sample container, thus providing a larger sample volume and reduced absorption. The integrated electrical plug with a manganin pressure gauge offers an accurate measurement of pressure over the whole range of operational temperatures. The performance of the cell is demonstrated by an inelastic neutron scattering study of UGe₂. PMID:21806195

  6. Simulating Pressure Effects of High-Flow Volumes

    NASA Technical Reports Server (NTRS)

    Kaufman, M.

    1985-01-01

    Dynamic test stresses realized without high-volume pumps. Assembled in sections in gas-flow passage, contoured mandrel restricts flow rate to value convenient for testing and spatially varies pressure on passage walls to simulate operating-pressure profile. Realistic test pressures thereby achieved without extremely high flow volumes.

  7. The Large Area Pulsed Solar Simulator (LAPSS)

    NASA Technical Reports Server (NTRS)

    Mueller, R. L.

    1993-01-01

    A Large Area Pulsed Solar Simulator (LAPSS) has been installed at JPL. It is primarily intended to be used to illuminate and measure the electrical performance of photovoltaic devices. The simulator, originally manufactured by Spectrolab, Sylmar, California, occupies an area measuring about 3 meters wide by 12 meters long. The data acquisition and data processing subsystems have been modernized. Tests on the LAPSS performance resulted in better than +/- 2 percent uniformity of irradiance at the test plane and better than +/- 0.3 percent measurement repeatability after warm-up. Glass absorption filters are used to reduce the level of ultraviolet light emitted from the xenon flash lamps. This provides a close match to standard airmass zero and airmass 1.5 spectral irradiance distributions. The 2 millisecond light pulse prevents heating of the device under test, resulting in more reliable temperature measurements. Overall, excellent electrical performance measurements have been made of many different types and sizes of photovoltaic devices.

  8. Numerical simulation of large fabric filter

    NASA Astrophysics Data System (ADS)

    Sedláček, Jan; Kovařík, Petr

    2012-04-01

    Fabric filters are used in a wide range of industrial technologies for cleaning incoming or exhaust gases. To achieve maximal efficiency of discrete phase separation and a long lifetime of the filter hoses, it is necessary to ensure a uniform load on the filter surface and to avoid impacts of heavy particles with high velocities on the filter hoses. The paper deals with numerical simulation of the two-phase flow field in a large fabric filter. The filter is composed of six chambers with approx. 1600 filter hoses in total. The model was simplified to one half of the filter, and the filter hose walls were substituted by porous zones. The model settings were based on experimental data, especially on the filter pressure drop. Unsteady simulations with different turbulence models were done. The flow field together with particle trajectories were analyzed. The results were compared with experimental observations.

  9. Large eddy simulations in 2030 and beyond.

    PubMed

    Piomelli, U

    2014-08-13

    Since its introduction, in the early 1970s, large eddy simulations (LES) have advanced considerably, and their application is transitioning from the academic environment to industry. Several landmark developments can be identified over the past 40 years, such as the wall-resolved simulations of wall-bounded flows, the development of advanced models for the unresolved scales that adapt to the local flow conditions and the hybridization of LES with the solution of the Reynolds-averaged Navier-Stokes equations. Thanks to these advancements, LES is now in widespread use in the academic community and is an option available in most commercial flow-solvers. This paper will try to predict what algorithmic and modelling advancements are needed to make it even more robust and inexpensive, and which areas show the most promise. PMID:25024415

  10. Large eddy simulations in 2030 and beyond

    PubMed Central

    Piomelli, U

    2014-01-01

    Since its introduction, in the early 1970s, large eddy simulations (LES) have advanced considerably, and their application is transitioning from the academic environment to industry. Several landmark developments can be identified over the past 40 years, such as the wall-resolved simulations of wall-bounded flows, the development of advanced models for the unresolved scales that adapt to the local flow conditions and the hybridization of LES with the solution of the Reynolds-averaged Navier–Stokes equations. Thanks to these advancements, LES is now in widespread use in the academic community and is an option available in most commercial flow-solvers. This paper will try to predict what algorithmic and modelling advancements are needed to make it even more robust and inexpensive, and which areas show the most promise. PMID:25024415

  11. Large Scale Quantum Simulations of Nuclear Pasta

    NASA Astrophysics Data System (ADS)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

    Complex and exotic nuclear geometries collectively referred to as "nuclear pasta" are expected to naturally exist in the crust of neutron stars and in supernovae matter. Using a set of self-consistent microscopic nuclear energy density functionals, we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 < ρ < 0.10 fm⁻³ and proton fractions from 0.05. These simulations, in particular, allow us to also study the role and impact of the nuclear symmetry energy on these pasta configurations. This work is supported in part by DOE Grants DE-FG02-87ER40365 (Indiana University) and DE-SC0008808 (NUCLEI SciDAC Collaboration).

  12. Colloquium: Large scale simulations on GPU clusters

    NASA Astrophysics Data System (ADS)

    Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano

    2015-06-01

    Graphics processing units (GPU) are currently used as a cost-effective platform for computer simulations and big-data processing. Large scale applications require that multiple GPUs work together, but the efficiency obtained with clusters of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve an excellent efficiency for applications in statistical mechanics, particle dynamics and networks analysis by using suitable memory access patterns and mechanisms like CUDA streams, profiling tools, etc. Similar concepts and techniques may be applied also to other problems like the solution of Partial Differential Equations.

  13. Strategies for Interactive Visualization of Large Scale Climate Simulations

    NASA Astrophysics Data System (ADS)

    Xie, J.; Chen, C.; Ma, K.; Parvis

    2011-12-01

    single or a pair of variables. It is desired to create a succinct volume classification that summarizes the connection among all correlation volumes with respect to various reference locations. Since a reference location must correspond to a voxel position, the number of correlation volumes equals the total number of voxels. A brute-force solution takes all correlation volumes as the input and classifies their corresponding voxels according to their correlation volumes' distance. For large-scale time-varying multivariate data, calculating all these correlation volumes on-the-fly and analyzing the relationships among them is not feasible. We have developed a sampling-based approach for volume classification in order to reduce the cost of computing the correlation volumes. Users are able to employ their domain knowledge in selecting important samples. The result is a static view that captures the essence of correlation relationships; i.e., for all voxels in the same cluster, their corresponding correlation volumes are similar. This sampling-based approach enables us to obtain an approximation of correlation relations in a cost-effective manner, thus leading to a scalable solution to investigate large-scale data sets. These techniques empower climate scientists to study large data from their simulations.
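
    A minimal sketch of the sampling idea, assuming a scalar time-varying field flattened to a (time, voxel) array: correlation volumes are computed only for a few sampled reference voxels, and each voxel is then clustered by its vector of correlations to those samples (sample and cluster counts here are arbitrary).

    ```python
    import numpy as np
    from scipy.cluster.vq import kmeans2

    def correlation_volume(data, ref):
        """Correlation volume for one reference voxel: Pearson correlation
        of every voxel's time series with the reference time series.
        data: (n_time, n_voxels); ref: voxel index."""
        z = (data - data.mean(0)) / (data.std(0) + 1e-12)  # z-score in time
        return z.T @ z[:, ref] / data.shape[0]

    # toy time-varying field: 100 time steps, 8x8x8 voxels flattened
    rng = np.random.default_rng(0)
    data = rng.standard_normal((100, 512))

    # sampling-based classification: a handful of reference voxels instead
    # of all 512, so only a few correlation volumes are ever computed
    samples = rng.choice(512, size=8, replace=False)
    features = np.stack([correlation_volume(data, s) for s in samples], axis=1)

    # voxels with similar correlation volumes land in the same cluster
    _, labels = kmeans2(features, 4, minit='points', seed=0)
    print(np.bincount(labels))
    ```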

  14. Geometric Measures of Large Biomolecules: Surface, Volume and Pockets

    PubMed Central

    Mach, Paul; Koehl, Patrice

    2011-01-01

    Geometry plays a major role in our attempt to understand the activity of large molecules. For example, surface area and volume are used to quantify the interactions between these molecules and the water surrounding them in implicit solvent models. In addition, the detection of pockets serves as a starting point for predictive studies of biomolecule-ligand interactions. The alpha shape theory provides an exact and robust method for computing these geometric measures. Several implementations of this theory are currently available. We show however that these implementations fail on very large macromolecular systems. We show that these difficulties are not theoretical; rather, they are related to the architecture of current computers that rely on the use of cache memory to speed up calculation. By rewriting the algorithms that implement the different steps of the alpha shape theory such that we enforce locality, we show that we can remediate these cache problems; the corresponding code, UnionBall, has an apparent O(n) behavior over a large range of values of n (up to tens of millions), where n is the number of atoms. As an example, it takes 136 seconds with UnionBall to compute the contribution of each atom to the surface area and volume of a viral capsid with more than five million atoms on a commodity PC. UnionBall includes functions for computing the surface area and volume of the intersection of two, three and four spheres that are fully detailed in an appendix. UnionBall is available as open-source software. PMID:21823134
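
    UnionBall computes these measures exactly through the alpha-shape decomposition; as a rough, independent cross-check of a union-of-balls volume, a Monte Carlo estimate is easy to write (sample count and geometry below are arbitrary):

    ```python
    import numpy as np

    def union_volume_mc(centers, radii, n_samples=200_000, seed=0):
        """Monte Carlo estimate of the volume of a union of spheres.
        This is only a statistical cross-check; UnionBall itself uses
        the exact, analytic alpha-shape decomposition described above."""
        rng = np.random.default_rng(seed)
        lo = (centers - radii[:, None]).min(axis=0)   # bounding box
        hi = (centers + radii[:, None]).max(axis=0)
        pts = rng.uniform(lo, hi, size=(n_samples, 3))
        inside = np.zeros(n_samples, dtype=bool)
        for c, r in zip(centers, radii):       # O(n_spheres * n_samples)
            inside |= np.sum((pts - c) ** 2, axis=1) <= r * r
        return inside.mean() * np.prod(hi - lo)

    centers = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
    radii = np.array([1.0, 1.0])
    print(union_volume_mc(centers, radii))  # two overlapping unit spheres
    ```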

  15. The Large Area Pulsed Solar Simulator (LAPSS)

    NASA Technical Reports Server (NTRS)

    Mueller, R. L.

    1994-01-01

    The Large Area Pulsed Solar Simulator (LAPSS) has been installed at JPL. It is primarily intended to be used to illuminate and measure the electrical performance of photovoltaic devices. The simulator, originally manufactured by Spectrolab, Sylmar, CA, occupies an area measuring about 3 m wide x 12 m long. The data acquisition and data processing subsystems have been modernized. Tests on the LAPSS performance resulted in better than plus or minus 2 percent uniformity of irradiance at the test plane and better than plus or minus 0.3 percent measurement repeatability after warm-up. Glass absorption filters reduce the ultraviolet light emitted from the xenon flash lamps. This results in a close match to three different standard airmass zero and airmass 1.5 spectral irradiances. The 2-ms light pulse prevents heating of the device under test, resulting in more reliable temperature measurements. Overall, excellent electrical performance measurements have been made of many different types and sizes of photovoltaic devices. Since the original printing of this publication, in 1993, the LAPSS has been operational and new capabilities have been added. This revision includes a new section relating to the installation of a method to measure the I-V curve of a solar cell or array exhibiting a large effective capacitance. Another new section has been added relating to new capabilities for plotting single and multiple I-V curves, and for archiving the I-V data and test parameters. Finally, a section has been added regarding the data acquisition electronics calibration.

  16. Large eddy simulation of longitudinal stationary vortices

    NASA Astrophysics Data System (ADS)

    Sreedhar, Madhu; Ragab, Saad

    1994-07-01

    The response of longitudinal stationary vortices when subjected to random perturbations is investigated using temporal large-eddy simulation. Simulations are obtained for high Reynolds numbers and at a low subsonic Mach number. The subgrid-scale stress tensor is modeled using the dynamic eddy-viscosity model. The generation of large-scale structures due to centrifugal instability and their subsequent breakdown to turbulence is studied. The following events are observed. Initially, ring-shaped structures appear around the vortex core. These structures are counter-rotating vortices similar to the donut-shaped structures observed in a Taylor-Couette flow between rotating cylinders. These structures subsequently interact with the vortex core resulting in a rapid decay of the vortex. The turbulent kinetic energy increases rapidly until saturation, and then a period of slow decay prevails. During the period of maximum turbulent kinetic energy, the normalized mean circulation profile exhibits a logarithmic region, in agreement with the universal inner profile of Hoffman and Joubert [J. Fluid Mech. 16, 395 (1963)].

  17. The Simulation of a Jumbo Jet Transport Aircraft. Volume 2: Modeling Data

    NASA Technical Reports Server (NTRS)

    Hanke, C. R.; Nordwall, D. R.

    1970-01-01

    The manned simulation of a large transport aircraft is described. Aircraft and systems data necessary to implement the mathematical model described in Volume I, together with a discussion of how these data are used in the model, are presented. The results of the real-time computations in the NASA Ames Research Center Flight Simulator for Advanced Aircraft are shown and compared to flight test data and to the results obtained in a training simulator known to be satisfactory.

  18. Effect of large volume paracentesis on plasma volume--a cause of hypovolemia

    SciTech Connect

    Kao, H.W.; Rakov, N.E.; Savage, E.; Reynolds, T.B.

    1985-05-01

    Large volume paracentesis, while effectively relieving symptoms in patients with tense ascites, has been generally avoided due to reports of complications attributed to an acute reduction in intravascular volume. Measurements of plasma volume in these subjects have been by indirect methods and have not uniformly confirmed hypovolemia. We have prospectively evaluated 18 patients (20 paracenteses) with tense ascites and peripheral edema due to chronic liver disease undergoing 5 liter paracentesis for relief of symptoms. Plasma volume pre- and postparacentesis was assessed by a ¹²⁵I-labeled human serum albumin dilution technique as well as by the change in hematocrit and postural blood pressure difference. No significant change in serum sodium, urea nitrogen, hematocrit or postural systolic blood pressure difference was noted at 24 or 48 hr after paracentesis. Serum creatinine at 24 hr after paracentesis was unchanged, but a small but statistically significant increase in serum creatinine was noted at 48 hr postparacentesis. Plasma volume changed -2.7% (n = 6, not statistically significant) during the first 24 hr and -2.8% (n = 12, not statistically significant) during the 0- to 48-hr period. No complications from paracentesis were noted. These results suggest that 5 liter paracentesis for relief of symptoms is safe in patients with tense ascites and peripheral edema from chronic liver disease.

  19. Large volume loss during cleavage formation, Hamburg sequence, Pennsylvania

    NASA Astrophysics Data System (ADS)

    Beutner, Edward C.; Charles, Emmanuel G.

    1985-11-01

    Green reduction spots in red slate of the Hamburg sequence exposed near Shartlesville, Pennsylvania, have axial ratios of 1.42:1.0:0.28 on the limbs of near-isoclinal folds and 1.0:0.79:0.41 in fold hinge zones. Conodont cusps and denticles within the reduction spots have been brittlely pulled apart and give independent measures of extension in various directions. Comparison of conodont extensions with reduction spot shapes on limbs and hinges indicates that sedimentary compaction of 44% preceded the tectonic strain associated with cleavage formation. This strain, having identical maximum extensions but greater shortening in fold hinges as compared to limbs, was characterized by 41% extension in X, no change in Y, 50% to 59% shortening in Z, and 29% to 42% tectonic volume loss. The general lack of directed overgrowths on grains reflects the large volume loss and contrasts with other slates, where deformation was an almost constant volume process and extension in X compensated for shortening in Z.
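
    The quoted tectonic volume-loss range follows directly from the principal stretches, assuming the relative volume change is the product of the principal stretch factors, dV/V = (1 + e_X)(1 + e_Y)(1 + e_Z) - 1:

    ```python
    # Volume change implied by the strains quoted above:
    # 41% extension in X, no change in Y, 50-59% shortening in Z.
    for shortening_z in (0.50, 0.59):
        stretch_product = 1.41 * 1.00 * (1.0 - shortening_z)
        print(f"Z shortening {shortening_z:.0%}: "
              f"volume change {stretch_product - 1.0:+.1%}")
    # -> -29.5% and -42.2%, matching the 29% to 42% loss in the text
    ```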

  20. Flight Simulation Model Exchange. Volume 2; Appendices

    NASA Technical Reports Server (NTRS)

    Murri, Daniel G.; Jackson, E. Bruce

    2011-01-01

    The NASA Engineering and Safety Center Review Board sponsored an assessment of the draft Standard, Flight Dynamics Model Exchange Standard, BSR/ANSI-S-119-201x (S-119) that was conducted by simulation and guidance, navigation, and control engineers from several NASA Centers. The assessment team reviewed the conventions and formats spelled out in the draft Standard and the actual implementation of two example aerodynamic models (a subsonic F-16 and the HL-20 lifting body) encoded in the Extensible Markup Language grammar. During the implementation, the team kept records of lessons learned and provided feedback to the American Institute of Aeronautics and Astronautics Modeling and Simulation Technical Committee representative. This document contains the appendices to the main report.

  1. Flight Simulation Model Exchange. Volume 1

    NASA Technical Reports Server (NTRS)

    Murri, Daniel G.; Jackson, E. Bruce

    2011-01-01

    The NASA Engineering and Safety Center Review Board sponsored an assessment of the draft Standard, Flight Dynamics Model Exchange Standard, BSR/ANSI-S-119-201x (S-119) that was conducted by simulation and guidance, navigation, and control engineers from several NASA Centers. The assessment team reviewed the conventions and formats spelled out in the draft Standard and the actual implementation of two example aerodynamic models (a subsonic F-16 and the HL-20 lifting body) encoded in the Extensible Markup Language grammar. During the implementation, the team kept records of lessons learned and provided feedback to the American Institute of Aeronautics and Astronautics Modeling and Simulation Technical Committee representative. This document contains the results of the assessment.

  2. Large Eddy Simulation of Cirrus Clouds

    NASA Technical Reports Server (NTRS)

    Wu, Ting; Cotton, William R.

    1999-01-01

    The Regional Atmospheric Modeling System (RAMS) with mesoscale interactive nested grids and a Large-Eddy Simulation (LES) version of RAMS, coupled to two-moment microphysics and a new two-stream radiative code, were used to investigate the dynamic, microphysical, and radiative aspects of the November 26, 1991 cirrus event. Wu (1998) describes the results of that research in full detail and is enclosed as Appendix 1. The mesoscale nested grid simulation successfully reproduced the large scale circulation as compared to the Mesoscale Analysis and Prediction System's (MAPS) analyses and other observations. Three cloud bands which match nicely to the three cloud lines identified in an observational study (Mace et al., 1995) are predicted on Grid #2 of the nested grids, even though the mesoscale simulation predicts a larger west-east cloud width than what was observed. Large-eddy simulations (LES) were performed to study the dynamical, microphysical, and radiative processes in the 26 November 1991 FIRE II cirrus event. The LES model is based on the RAMS version 3b developed at Colorado State University. It includes a new radiation scheme developed by Harrington (1997) and a new subgrid scale model developed by Kosovic (1996). The LES model simulated a single cloud layer for Case 1 and a two-layer cloud structure for Case 2. The simulations demonstrated that latent heat release can play a significant role in the formation and development of cirrus clouds. For the thin cirrus in Case 1, the latent heat release was insufficient for the cirrus clouds to become positively buoyant. However, in some special cases such as Case 2, positively buoyant cells can be embedded within the cirrus layers. These cells were so active that the rising updraft induced its own pressure perturbations that affected the cloud evolution. Vertical profiles of the total radiative and latent heating rates indicated that for well developed, deep, and active cirrus clouds, radiative cooling and latent

  3. Efficient Large Volume Lentiviral Vector Production Using Flow Electroporation

    PubMed Central

    Witting, Scott R.; Li, Lin-Hong; Jasti, Aparna; Allen, Cornell; Cornetta, Kenneth; Brady, James; Shivakumar, Rama

    2012-01-01

    Lentiviral vectors are beginning to emerge as a viable choice for human gene therapy. Here, we describe a method that combines the convenience of a suspension cell line with a scalable, nonchemically based, and GMP-compliant transfection technique known as flow electroporation (EP). Flow EP parameters for serum-free adapted HEK293FT cells were optimized to limit toxicity and maximize titers. Using a third generation, HIV-based, lentiviral vector system pseudotyped with the vesicular stomatitis glycoprotein envelope, both small- and large-volume transfections produced titers over 1×10⁸ infectious units/mL. Therefore, flow EP of suspension cell lines is an excellent option for implementing large-scale, clinical lentiviral productions. PMID:21933028

  4. Autonomic Closure for Large Eddy Simulation

    NASA Astrophysics Data System (ADS)

    King, Ryan; Hamlington, Peter; Dahm, Werner J. A.

    2015-11-01

    A new autonomic subgrid-scale closure has been developed for large eddy simulation (LES). The approach poses a supervised learning problem that captures nonlinear, nonlocal, and nonequilibrium turbulence effects without specifying a predefined turbulence model. By solving a regularized optimization problem on test filter scale quantities, the autonomic approach identifies a nonparametric function that represents the best local relation between subgrid stresses and resolved state variables. The optimized function is then applied at the grid scale to determine unknown LES subgrid stresses by invoking scale similarity in the inertial range. A priori tests of the autonomic approach on homogeneous isotropic turbulence show that the new approach is amenable to powerful optimization and machine learning methods and is successful for a wide range of filter scales in the inertial range. In these a priori tests, the autonomic closure substantially improves upon the dynamic Smagorinsky model in capturing the instantaneous, statistical, and energy transfer properties of the subgrid stress field.
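
    A linear, ridge-regularized instance of this idea is easy to sketch (the actual closure may use a richer nonparametric basis; the feature count and regularization weight below are arbitrary): fit the stress/resolved-state relation from test-filter data, then reuse it at the grid scale by scale similarity.

    ```python
    import numpy as np

    def fit_local_closure(features, stresses, lam=1e-3):
        """Regularized least squares (ridge) for one subgrid-stress
        component, in the spirit of the autonomic closure: find w
        minimizing ||F w - tau||^2 + lam ||w||^2 on test-filter data."""
        F, tau = features, stresses
        A = F.T @ F + lam * np.eye(F.shape[1])
        return np.linalg.solve(A, F.T @ tau)

    # toy data: 5000 test-filter samples, 30 resolved-state features
    rng = np.random.default_rng(0)
    F_test = rng.standard_normal((5000, 30))
    tau_test = F_test @ rng.standard_normal(30) \
        + 0.1 * rng.standard_normal(5000)

    w = fit_local_closure(F_test, tau_test)

    # scale similarity: reuse the learned relation on grid-scale features
    F_grid = rng.standard_normal((1000, 30))
    tau_grid_estimate = F_grid @ w
    print(tau_grid_estimate.shape)
    ```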

  5. Large eddy simulation applications in gas turbines.

    PubMed

    Menzies, Kevin

    2009-07-28

    The gas turbine presents significant challenges to any computational fluid dynamics techniques. The combination of a wide range of flow phenomena with complex geometry is difficult to model in the context of Reynolds-averaged Navier-Stokes (RANS) solvers. We review the potential for large eddy simulation (LES) in modelling the flow in the different components of the gas turbine during a practical engineering design cycle. We show that while LES has demonstrated considerable promise for reliable prediction of many flows in the engine that are difficult for RANS it is not a panacea and considerable application challenges remain. However, for many flows, especially those dominated by shear layer mixing such as in combustion chambers and exhausts, LES has demonstrated a clear superiority over RANS for moderately complex geometries although at significantly higher cost which will remain an issue in making the calculations relevant within the design cycle. PMID:19531505

  6. Parallel Optimization with Large Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Talnikar, Chaitanya; Blonigan, Patrick; Bodart, Julien; Wang, Qiqi; Alex Gorodetsky Collaboration; Jasper Snoek Collaboration

    2014-11-01

    For design optimization results to be useful, the model used must be trustworthy. For turbulent flows, Large Eddy Simulations (LES) can capture separation and other phenomena that traditional models such as RANS struggle with. However, optimization with LES can be challenging because of noisy objective function evaluations. This noise is a consequence of the sampling error of turbulent statistics, or long time averaged quantities of interest, such as the drag of an airfoil or heat transfer to a turbine blade. The sampling error causes the objective function to vary noisily with respect to design parameters for finite time simulations. Furthermore, the noise decays very slowly as computational time increases. Therefore, robustness to noisy objective functions is a crucial prerequisite for any optimization method that is a candidate for LES. One way of dealing with noisy objective functions is to filter the noise using a surrogate model. Bayesian optimization, which uses Gaussian processes as surrogates, has shown promise in optimizing expensive objective functions. The following talk presents a new approach for optimization with LES incorporating these ideas. Applications to flow control of a turbulent channel and the design of a turbine blade trailing edge are also discussed.
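
    A minimal sketch of Bayesian optimization for a noisy objective, using scikit-learn's Gaussian process with an explicit noise (White) kernel and an expected-improvement acquisition; the objective function, kernel settings, and design sizes are stand-ins rather than the study's actual setup.

    ```python
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def noisy_objective(x):
        """Stand-in for a finite-time LES average: smooth trend plus
        sampling noise that does not vanish for affordable run times."""
        return np.sin(3 * x) + x**2 + 0.2 * np.random.randn(*x.shape)

    X = np.random.uniform(-1, 1, size=(8, 1))      # initial designs
    y = noisy_objective(X).ravel()

    # WhiteKernel lets the GP attribute part of the signal to noise,
    # which is the key to optimizing noisy LES statistics
    gp = GaussianProcessRegressor(RBF(0.5) + WhiteKernel(0.1))
    gp.fit(X, y)

    grid = np.linspace(-1, 1, 200).reshape(-1, 1)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    print("next candidate design:", grid[np.argmax(ei)])
    ```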

  7. Large eddy simulation of trailing edge noise

    NASA Astrophysics Data System (ADS)

    Keller, Jacob; Nitzkorski, Zane; Mahesh, Krishnan

    2015-11-01

    Noise generation is an important engineering constraint for many marine vehicles. A significant portion of the noise comes from propellers and rotors, specifically due to flow interactions at the trailing edge. Large eddy simulation is used to investigate the noise produced by a turbulent 45 degree beveled trailing edge and a NACA 0012 airfoil. A porous surface Ffowcs-Williams and Hawkings acoustic analogy is combined with a dynamic endcapping method to compute the sound. This methodology allows the impact of incident flow noise versus the total noise to be assessed. LES results for the 45 degree beveled trailing edge are compared to experiment at M = 0.1 and Re_c = 1.9×10⁶. The effect of boundary layer thickness on sound production is investigated by computing with both the experimental boundary layer thickness and a thinner boundary layer. Direct numerical simulation results for the NACA 0012 are compared to available data at M = 0.4 and Re_c = 5.0×10⁴ for both the hydrodynamic field and the acoustic field. Sound intensities and directivities are investigated and compared. Finally, some of the physical mechanisms of far-field noise generation, common to the two configurations, are discussed. Supported by the Office of Naval Research.

  8. Large eddy simulation of turbulent cavitating flows

    NASA Astrophysics Data System (ADS)

    Gnanaskandan, A.; Mahesh, K.

    2015-12-01

    Large Eddy Simulation is employed to study two turbulent cavitating flows: over a cylinder and over a wedge. A homogeneous mixture model is used to treat the mixture of water and water vapor as a compressible fluid. The governing equations are solved using a novel predictor-corrector method. The subgrid terms are modeled using the dynamic Smagorinsky model. Cavitating flow over a cylinder at Reynolds number (Re) = 3900 and cavitation number (σ) = 1.0 is simulated and the wake characteristics are compared to the single phase results at the same Reynolds number. It is observed that cavitation suppresses turbulence in the near wake and delays three dimensional breakdown of the vortices. Next, cavitating flow over a wedge at Re = 200,000 and σ = 2.0 is presented. The mean void fraction profiles obtained are compared to experiment and good agreement is obtained. Cavity auto-oscillation is observed, where the sheet cavity breaks up into a cloud cavity periodically. The results suggest LES as an attractive approach for predicting turbulent cavitating flows.

  9. Simulation of large acceptance LINAC for muons

    SciTech Connect

    Miyadera, H; Kurennoy, S; Jason, A J

    2010-01-01

    There has been a recent need for muon accelerators, not only for future Neutrino Factories and Muon Colliders but also for other applications in industry and medicine. We carried out simulations of a large-acceptance muon linac based on a new 'mixed buncher/acceleration' concept. The linac can accept pions/muons from a production target with large acceptance and accelerate muons without any beam cooling, which makes the initial section of the muon-linac system very compact. The linac has a high impact on the Neutrino Factory and Muon Collider (NF/MC) scenario since the 300-m injector section can be replaced by a muon linac of only 10-m length. The current design of the linac consists of the following components: an independent 805-MHz cavity structure with a 6- or 8-cm-radius aperture window; injection of a broad range of pion/muon energies, 10-100 MeV, and acceleration to 150-200 MeV. Further acceleration of the muon beam is relatively easy since the beam is already bunched.

  10. Sensitivity technologies for large scale simulation.

    SciTech Connect

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing code and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first
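
    The flavor of a direct (forward) sensitivity computation can be shown on a scalar ODE: integrate the state and its parameter sensitivity side by side, then verify with a finite difference. This toy problem is ours, not one of the report's case studies; an adjoint method would obtain the same derivative by a backward-in-time solve, which pays off when many parameters feed a single objective.

    ```python
    def direct_sensitivity(k, u0=1.0, T=1.0, n=10_000):
        """Forward (direct) sensitivity for du/dt = -k u:
        integrate the state u and its sensitivity s = du/dk together,
        where ds/dt = -u - k s (explicit Euler). Returns the objective
        J = u(T) and its derivative dJ/dk = s(T)."""
        dt = T / n
        u, s = u0, 0.0
        for _ in range(n):
            u, s = u + dt * (-k * u), s + dt * (-u - k * s)
        return u, s

    k = 2.0
    J, dJdk = direct_sensitivity(k)
    # finite-difference check of the computed sensitivity
    eps = 1e-6
    Jp, _ = direct_sensitivity(k + eps)
    print(dJdk, (Jp - J) / eps)  # both close to -T*u0*exp(-k*T) ~ -0.135
    ```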

  11. Large eddy simulations of laminar separation bubble

    NASA Astrophysics Data System (ADS)

    Cadieux, Francois

    The flow over blades and airfoils at moderate angles of attack and Reynolds numbers ranging from ten thousand to a few hundred thousand undergoes separation due to the adverse pressure gradient generated by surface curvature. In many cases, the separated shear layer then transitions to turbulence and reattaches, closing off a recirculation region -- the laminar separation bubble. To avoid body-fitted mesh generation problems and numerical issues, an equivalent problem for flow over a flat plate is formulated by imposing boundary conditions that lead to a pressure distribution and Reynolds number that are similar to those on airfoils. Spalart & Strelets (2000) tested a number of Reynolds-averaged Navier-Stokes (RANS) turbulence models for a laminar separation bubble flow over a flat plate. Although results with the Spalart-Allmaras turbulence model were encouraging, none of the turbulence models tested reliably recovered time-averaged direct numerical simulation (DNS) results. The purpose of this work is to assess whether large eddy simulation (LES) can more accurately and reliably recover DNS results using drastically reduced resolution -- on the order of 1% of DNS resolution, which is commonly achievable for LES of turbulent channel flows. LES of a laminar separation bubble flow over a flat plate are performed using a compressible sixth-order finite-difference code and two incompressible pseudo-spectral Navier-Stokes solvers at resolutions corresponding to approximately 3% and 1% of the chosen DNS benchmark by Spalart & Strelets (2000). The finite-difference solver is found to be dissipative due to the use of a stability-enhancing filter. Its numerical dissipation is quantified and found to be comparable to the average eddy viscosity of the dynamic Smagorinsky model, making it difficult to separate the effects of filtering versus those of explicit subgrid-scale modeling. The negligible numerical dissipation of the pseudo-spectral solvers allows an unambiguous

  12. Cardiovascular simulator improvement: pressure versus volume loop assessment.

    PubMed

    Fonseca, Jeison; Andrade, Aron; Nicolosi, Denys E C; Biscegli, José F; Leme, Juliana; Legendre, Daniel; Bock, Eduardo; Lucchi, Julio Cesar

    2011-05-01

    This article presents improvement on a physical cardiovascular simulator (PCS) system. An intraventricular pressure versus intraventricular volume (PxV) loop was obtained to evaluate the performance of a pulsatile chamber mimicking the human left ventricle. The PxV loop shows heart contractility and is normally used to evaluate heart performance. In many heart diseases, the stroke volume decreases because of low heart contractility. This pathological situation must be simulated by the PCS in order to evaluate the assistance provided by a ventricular assist device (VAD). The PCS system is automatically controlled by a computer and is an auxiliary tool for the development of VAD control strategies. The PCS system is based on a Windkessel model in which lumped parameters are used for cardiovascular system analysis. Peripheral resistance, arterial compliance, and fluid inertance are simulated. The simulator has an actuator with a roller screw and a brushless direct current motor, and the stroke volume is regulated by the actuator displacement. Internal pressure and volume measurements are monitored to obtain the PxV loop. Left chamber internal pressure is obtained directly by a pressure transducer; internal volume, however, is obtained indirectly by a linear variable differential transformer, which senses the diaphragm displacement. Correlations between the internal volume and diaphragm position are made. LabVIEW integrates these signals and displays the pressure versus internal volume loop. The results obtained from the PCS system show PxV loops at different ventricle elastances, making possible the simulation of pathological situations. A preliminary test with a pulsatile VAD attached to the PCS system was made. PMID:21595711
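    The lumped-parameter idea behind such a simulator can be illustrated with a two-element Windkessel integration. All parameter values and the flow waveform below are illustrative placeholders, not the PCS's calibrated constants; the actual system also includes fluid inertance.

```python
import numpy as np

# Two-element Windkessel: C dP/dt = Q(t) - P/R, integrated with forward Euler.
R = 1.0      # peripheral resistance [mmHg*s/mL] (illustrative)
C = 1.5      # arterial compliance   [mL/mmHg]   (illustrative)
dt, T = 1e-3, 5.0
t = np.arange(0.0, T, dt)

def Q(time, hr=75):
    # Simple half-sine ejection waveform at 75 beats/min (illustrative)
    phase = (time * hr / 60.0) % 1.0
    return 300.0 * np.sin(np.pi * phase / 0.35) if phase < 0.35 else 0.0

P = np.empty_like(t)
P[0] = 80.0                                  # initial arterial pressure [mmHg]
for k in range(1, len(t)):
    dP = (Q(t[k - 1]) - P[k - 1] / R) / C    # Windkessel ODE right-hand side
    P[k] = P[k - 1] + dt * dP
print(P.min(), P.max())                      # diastolic/systolic-like envelope
```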

  13. SUSY’s Ladder: Reframing sequestering at Large Volume

    DOE PAGES Beta

    Reece, Matthew; Xue, Wei

    2016-04-07

    Theories with approximate no-scale structure, such as the Large Volume Scenario, have a distinctive hierarchy of multiple mass scales in between TeV gaugino masses and the Planck scale, which we call SUSY's Ladder. This is a particular realization of Split Supersymmetry in which the same small parameter suppresses gaugino masses relative to scalar soft masses, scalar soft masses relative to the gravitino mass, and the UV cutoff or string scale relative to the Planck scale. This scenario has many phenomenologically interesting properties, and can avoid dangers including the gravitino problem, flavor problems, and the moduli-induced LSP problem that plague other supersymmetric theories. We study SUSY's Ladder using a superspace formalism that makes the mysterious cancellations in previous computations manifest. This opens the possibility of a consistent effective field theory understanding of the phenomenology of these scenarios, based on power-counting in the small ratio of string to Planck scales. We also show that four-dimensional theories with approximate no-scale structure enforced by a single volume modulus arise only from two special higher-dimensional theories: five-dimensional supergravity and ten-dimensional type IIB supergravity. As a result, this gives a phenomenological argument in favor of ten-dimensional ultraviolet physics which is different from standard arguments based on the consistency of superstring theory.

  14. Large volume water sprays for dispersing warm fogs

    NASA Technical Reports Server (NTRS)

    Keller, V. W.; Anderson, B. J.; Burns, R. A.; Lala, G. G.; Meyer, M. B.

    1986-01-01

    A new method for dispersing warm fogs that impede visibility and alter schedules is described. The method uses large-volume recycled water sprays to create curtains of falling drops through which the fog is processed by the ambient wind and spray-induced air flow; the fog droplets are removed by coalescence/rainout. The efficiency of this fog droplet removal process depends on the size spectra of the spray drops, and the optimum spray drop size is calculated to be between 0.3 and 1.0 mm in diameter. Water spray tests were conducted in order to determine the drop size spectra and temperature response of sprays produced by commercially available fire-fighting nozzles, and nozzle array tests were utilized to study air flow patterns and the thermal properties of the overall system. The initial test data reveal that the fog-dispersal procedure is effective.

  15. Large volume water sprays for dispersing warm fogs

    NASA Astrophysics Data System (ADS)

    Keller, V. W.; Anderson, B. J.; Burns, R. A.; Lala, G. G.; Meyer, M. B.

    A new method for dispersing warm fogs that impede visibility and alter schedules is described. The method uses large-volume recycled water sprays to create curtains of falling drops through which the fog is processed by the ambient wind and spray-induced air flow; the fog droplets are removed by coalescence/rainout. The efficiency of this fog droplet removal process depends on the size spectra of the spray drops, and the optimum spray drop size is calculated to be between 0.3 and 1.0 mm in diameter. Water spray tests were conducted in order to determine the drop size spectra and temperature response of sprays produced by commercially available fire-fighting nozzles, and nozzle array tests were utilized to study air flow patterns and the thermal properties of the overall system. The initial test data reveal that the fog-dispersal procedure is effective.

  16. Large space telescope, phase A. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Phase A study of the Large Space Telescope (LST) is reported. The study defines an LST concept based on the broad mission guidelines provided by the Office of Space Science (OSS), the scientific requirements developed by OSS with the scientific community, and an understanding of long-range NASA planning current at the time the study was performed. The LST is an unmanned astronomical observatory facility consisting of an optical telescope assembly (OTA), a scientific instrument package (SIP), and a support systems module (SSM). The report consists of five volumes. It describes the constraints and trade-off analyses that were performed to arrive at a reference design for each system and for the overall LST configuration. A low-cost design approach was followed in the Phase A study. This resulted in the use of standard spacecraft hardware, the provision for maintenance at the black-box level, growth potential in systems designs, and the sharing of shuttle maintenance flights with other payloads.

  17. Large eddy simulation of powered Fontan hemodynamics.

    PubMed

    Delorme, Y; Anupindi, K; Kerlo, A E; Shetty, D; Rodefeld, M; Chen, J; Frankel, S

    2013-01-18

    Children born with univentricular heart disease typically must undergo three open heart surgeries within the first 2-3 years of life to eventually establish the Fontan circulation, in which the single working ventricle pumps oxygenated blood to the body and blood returns to the lungs by flowing passively through the Total Cavopulmonary Connection (TCPC) rather than being actively pumped by a subpulmonary ventricle. The TCPC is a direct surgical connection between the superior and inferior vena cava and the left and right pulmonary arteries. We have postulated that a mechanical pump inserted into this circulation, providing a 3-5 mmHg pressure augmentation, will reestablish bi-ventricular physiology, serving as a bridge-to-recovery, bridge-to-transplant, or destination therapy as a "biventricular Fontan" circulation. The Viscous Impeller Pump (VIP) has been proposed by our group as such an assist device. It is situated in the center of the 4-way TCPC intersection and spins, pulling blood from the vena cavae and pushing it into the pulmonary arteries. We hypothesized that Large Eddy Simulation (LES) using high-order numerical methods is needed to capture unsteady powered and unpowered Fontan hemodynamics. Inclusion of a mechanical pump into the CFD further complicates matters due to the need to account for rotating machinery. In this study, we focus on predictions from an in-house high-order LES code (WenoHemo(TM)) for unpowered and VIP-powered idealized TCPC hemodynamics, with quantitative comparisons to Stereoscopic Particle Imaging Velocimetry (SPIV) measurements. Results are presented for both instantaneous flow structures and statistical data. Simulations show good qualitative and quantitative agreement with measured data. PMID:23177085

  18. Large Eddy Simulation of Powered Fontan Hemodynamics

    PubMed Central

    Delorme, Y.; Anupindi, K.; Kerlo, A.E.; Shetty, D.; Rodefeld, M.; Chen, J.; Frankel, S.

    2012-01-01

    Children born with univentricular heart disease typically must undergo three open heart surgeries within the first 2–3 years of life to eventually establish the Fontan circulation, in which the single working ventricle pumps oxygenated blood to the body and blood returns to the lungs by flowing passively through the Total Cavopulmonary Connection (TCPC) rather than being actively pumped by a subpulmonary ventricle. The TCPC is a direct surgical connection between the superior and inferior vena cava and the left and right pulmonary arteries. We have postulated that a mechanical pump inserted into this circulation, providing a 3–5 mmHg pressure augmentation, will reestablish bi-ventricular physiology, serving as a bridge-to-recovery, bridge-to-transplant, or destination therapy as a “biventricular Fontan” circulation. The Viscous Impeller Pump (VIP) has been proposed by our group as such an assist device. It is situated in the center of the 4-way TCPC intersection and spins, pulling blood from the vena cavae and pushing it into the pulmonary arteries. We hypothesized that Large Eddy Simulation (LES) using high-order numerical methods is needed to capture unsteady powered and unpowered Fontan hemodynamics. Inclusion of a mechanical pump into the CFD further complicates matters due to the need to account for rotating machinery. In this study, we focus on predictions from an in-house high-order LES code (WenoHemo™) for unpowered and VIP-powered idealized TCPC hemodynamics, with quantitative comparisons to Stereoscopic Particle Imaging Velocimetry (SPIV) measurements. Results are presented for both instantaneous flow structures and statistical data. Simulations show good qualitative and quantitative agreement with measured data. PMID:23177085

  19. Striped Bass, Morone saxatilis, egg incubation in large volume jars

    USGS Publications Warehouse

    Harper, C.J.; Wrege, B.M.; Isely, J.J.

    2010-01-01

    The standard McDonald jar was compared with a large volume jar for striped bass, Morone saxatilis, egg incubation. The McDonald jar measured 16 cm in diameter by 45 cm in height and had a volume of 6 L. The experimental jar measured 0.4 m in diameter by 1.3 m in height and had a volume of 200 L. The hypothesis is that there is no difference in percent survival of fry hatched in experimental jars compared with McDonald jars. Striped bass brood fish were collected from the Coosa River and spawned using the dry spawn method of fertilization. Four McDonald jars were stocked with approximately 150 g of eggs each. Post-hatch survival was estimated at 48, 96, and 144 h. Stocking rates resulted in an average egg loading rate (±1 SE) in McDonald jars of 21.9 ± 0.03 eggs/mL and in experimental jars of 10.9 ± 0.57 eggs/mL. The major finding of this study was that average fry survival was 37.3 ± 4.49% for McDonald jars and 34.2 ± 3.80% for experimental jars. Although survival in experimental jars was slightly less than in McDonald jars, the effect of container volume on survival to 48 h (F = 6.57; df = 1, 5; P > 0.05), 96 h (F = 0.02; df = 1, 4; P > 0.89), and 144 h (F = 3.50; df = 1, 4; P > 0.13) was not statistically significant. Mean survival between replicates ranged from 14.7 to 60.1% in McDonald jars and from 10.1 to 54.4% in experimental jars. No effect of initial stocking rate on survival (t = 0.06; df = 10; P > 0.95) was detected. Experimental jars allowed for incubation of a greater number of eggs in less than half the floor space of McDonald jars. As hatchery production is often limited by space or water supply, experimental jars offer an alternative to extending spawning activities, thereby reducing labor and operations cost. As survival was similar to McDonald jars, the experimental jar is suitable for striped bass egg incubation. © Copyright by the World Aquaculture Society 2010.

  20. Simulation of preburner sprays, volume 1

    NASA Technical Reports Server (NTRS)

    1993-01-01

    … nozzles were compared with that of three identical nozzles with their axes at a small distance from each other. This study simulates the sprays in the preburner of the SSME, where there are around 260 elements on the faceplate of the combustion chamber. Lastly, an experimental facility is designed to study the characteristics of sprays at high-pressure conditions: supercritical pressure and temperature for the gas, but supercritical pressure and subcritical temperature for the liquid.

  1. Simulation of preburner sprays, volume 1

    NASA Astrophysics Data System (ADS)

    1993-05-01

    … nozzles were compared with that of three identical nozzles with their axes at a small distance from each other. This study simulates the sprays in the preburner of the SSME, where there are around 260 elements on the faceplate of the combustion chamber. Lastly, an experimental facility is designed to study the characteristics of sprays at high-pressure conditions: supercritical pressure and temperature for the gas, but supercritical pressure and subcritical temperature for the liquid.

  2. Large Scale Computer Simulation of Erythrocyte Membranes

    NASA Astrophysics Data System (ADS)

    Harvey, Cameron; Revalee, Joel; Laradji, Mohamed

    2007-11-01

    The cell membrane is crucial to the life of the cell. Apart from partitioning the inner and outer environment of the cell, it also acts as a support for complex and specialized molecular machinery, important both for the mechanical integrity of the cell and for its multitude of physiological functions. Due to its relative simplicity, the red blood cell has been a favorite experimental prototype for investigations of the structural and functional properties of the cell membrane. The erythrocyte membrane is a composite quasi-two-dimensional structure composed essentially of a self-assembled fluid lipid bilayer and a polymerized protein meshwork, referred to as the cytoskeleton or membrane skeleton. In the case of the erythrocyte, the polymer meshwork is mainly composed of spectrin, anchored to the bilayer through specialized proteins. Using a coarse-grained model of self-assembled lipid membranes, recently developed by us, with implicit solvent and soft-core potentials, we simulated large-scale red-blood-cell bilayers with dimensions of ∼10^-1 μm^2, with an explicit cytoskeleton. Our aim is to investigate the renormalization of the elastic properties of the bilayer due to the underlying spectrin meshwork.

  3. Large-Eddy Simulation of Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Pruett, C. David; Sochacki, James S.

    1999-01-01

    This report summarizes work accomplished under a one-year NASA grant from NASA Langley Research Center (LaRC). The effort culminates three years of NASA-supported research under three consecutive one-year grants. The period of support was April 6, 1998, through April 5, 1999; by request, the grant period was extended at no cost until October 6, 1999. This grant and its predecessors have been directed toward adapting the numerical tool of large-eddy simulation (LES) to aeroacoustic applications, with particular focus on noise suppression in subsonic round jets. In LES, the filtered Navier-Stokes equations are solved numerically on a relatively coarse computational grid. Residual stresses, generated by scales of motion too small to be resolved on the coarse grid, are modeled. Although most LES incorporate spatial filtering, time-domain filtering affords certain conceptual and computational advantages, particularly for aeroacoustic applications. Consequently, this work has focused on the development of subgrid-scale (SGS) models that incorporate time-domain filters.

  4. Large Eddy Simulation of Transitional Boundary Layer

    NASA Astrophysics Data System (ADS)

    Sayadi, Taraneh; Moin, Parviz

    2009-11-01

    A sixth-order compact finite difference code is employed to investigate compressible Large Eddy Simulation (LES) of subharmonic transition of a spatially developing zero-pressure-gradient boundary layer at Ma = 0.2. The computational domain extends from Re_x = 10^5, where laminar blowing and suction excites the most unstable fundamental and subharmonic modes, to the fully turbulent stage at Re_x = 10.1 × 10^5. Numerical sponges are used in the neighborhood of external boundaries to provide non-reflective conditions. Our interest lies in the performance of the dynamic subgrid-scale (SGS) model [1] in the transition process. It is observed that in the early stages of transition the eddy viscosity is much smaller than the physical viscosity. As a result, the amplitudes of selected harmonics are in very good agreement with the experimental data [2]. The model's contribution gradually increases during the last stages of the transition process, and the dynamic eddy viscosity becomes fully active and dominant in the turbulent region. Consistent with this trend, the skin friction coefficient versus Re_x diverges from its laminar profile and converges to the turbulent profile after an overshoot. 1. Moin, P., et al., Phys. Fluids A, 3(11), 2746-2757, 1991. 2. Kachanov, Yu. S., et al., JFM, 138, 209-247, 1983.

  5. Volume visualization of multiple alignment of large genomic DNA

    SciTech Connect

    Shah, Nameeta; Dillard, Scott E.; Weber, Gunther H.; Hamann, Bernd

    2005-07-25

    Genomes of hundreds of species have been sequenced to date, and many more are being sequenced. As more and more sequence data sets become available, and as the challenge of comparing these massive ''billion basepair DNA sequences'' becomes substantial, so does the need for more powerful tools supporting the exploration of these data sets. Similarity score data used to compare aligned DNA sequences are inherently one-dimensional. One-dimensional (1D) representations of these data sets do not effectively utilize screen real estate. As a result, tools using 1D representations are incapable of providing an informative overview of extremely large data sets. We present a technique to arrange 1D data in 3D space, allowing us to apply state-of-the-art interactive volume visualization techniques for data exploration. We demonstrate our technique using multi-million-basepair aligned DNA sequence data and compare it with traditional 1D line plots. The results show that our technique is superior in providing an overview of entire data sets. Our technique, coupled with 1D line plots, results in effective multi-resolution visualization of very large aligned sequence data sets.
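    The central trick, arranging an inherently 1D score track in 3D space so that volume-rendering machinery applies, can be sketched in a few lines. The row-major fold below is an illustrative layout choice, not necessarily the paper's space-filling order, and the random scores stand in for real alignment data.

```python
import numpy as np

# Fold a 1D similarity-score track into a 3D block so that standard
# volume-visualization tools (which expect 3D scalar fields) can render it.

def fold_1d_to_volume(scores, shape):
    vol = np.zeros(shape, dtype=scores.dtype)
    n = min(scores.size, vol.size)
    vol.ravel()[:n] = scores[:n]          # simple row-major fold of the track
    return vol

# Stand-in for multi-million-basepair alignment scores (illustrative)
scores = np.random.default_rng(0).random(9_000_000)
volume = fold_1d_to_volume(scores, (256, 256, 256))
print(volume.shape, volume.max())
```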

  6. Monte Carlo Simulations for Dosimetry in Prostate Radiotherapy with Different Intravesical Volumes and Planning Target Volume Margins

    PubMed Central

    Lv, Wei; Yu, Dong; He, Hengda; Liu, Qian

    2016-01-01

    In prostate radiotherapy, the influence of bladder volume variation on the dose absorbed by the target volume and organs at risk is significant and difficult to predict. In addition, the resolution of a typical medical image is insufficient for visualizing the bladder wall, which makes it more difficult to precisely evaluate the dose to the bladder wall. This simulation study aimed to quantitatively investigate the relationship between the dose received by organs at risk and the intravesical volume in prostate radiotherapy. The high-resolution Visible Chinese Human phantom and the finite element method were used to construct 10 pelvic models with specific intravesical volumes ranging from 100 ml to 700 ml to represent bladders of patients with different bladder filling capacities during radiotherapy. This series of models was utilized in six-field coplanar 3D conformal radiotherapy simulations with different planning target volume (PTV) margins. Each organ’s absorbed dose was calculated using the Monte Carlo method. The obtained bladder wall displacements during bladder filling were consistent with reported clinical measurements. The radiotherapy simulation revealed a linear relationship between the dose to non-targeted organs and the intravesical volume and indicated that a 10-mm PTV margin for a large bladder and a 5-mm PTV margin for a small bladder reduce the effective dose to the bladder wall to similar degrees. However, larger bladders were associated with evident protection of the intestines. Detailed dosimetry results can be used by radiation oncologists to create more accurate, individual water preload protocols according to the patient’s anatomy and bladder capacity. PMID:27441944

  7. Successful pregnancies with directional freezing of large volume buck semen.

    PubMed

    Gacitua, H; Arav, A

    2005-02-01

    Artificial insemination with frozen-thawed buck semen shows variable results that depend on many factors related to semen quality and cryopreservation processing. We conducted experiments based on a new freezing method, directional freezing, of large volumes (8 ml). In the first experiment, semen from three Saanen bucks, aged 1-2 years and genetically selected for milk improvement, was frozen individually. Saanen females aged 2-3 years (n = 164) were synchronized with controlled internal drug release (CIDR), pregnant mare serum gonadotrophin (PMSG), and prostaglandin. Double cervical inseminations were performed with frozen-thawed semen, with fresh semen as control. In the second experiment we used pooled, washed frozen semen to examine the effect of washed seminal plasma. Motility after washing was 80-90% and after thawing was 55-65% for all bucks. The sperm concentration increased with successive collections and the advance into the breeding season, from 1.9 × 10^9 to 4.4 × 10^9 cells/ml on average. Two inseminations were carried out at 8-h intervals, the first performed at 32 h after CIDR withdrawal, with fresh and frozen-thawed semen. Pregnancy rates were assessed by ultrasonography conducted 40 and 90 days post-insemination (for the three bucks). Results were 58, 67, and 50% with fresh semen and 33, 37, and 53% with frozen semen; these results were significantly different for one of the three bucks (P < 0.005). In the second experiment, with pooled, washed semen, the pregnancy rate was 41.6%, which showed no significant difference from the average result for frozen semen in the first experiment (38.9%). We conclude that freezing buck semen in large volumes (8 ml) is possible. Cryobanking of buck semen will facilitate genetic breeding programs in goats and the preservation of biodiversity. Washed semen did not improve the fertility of the semen when Andromed bull extender is used. PMID:15629809

  8. Computer simulation of preflight blood volume reduction as a countermeasure to fluid shifts in space flight

    NASA Technical Reports Server (NTRS)

    Simanonok, K. E.; Srinivasan, R.; Charles, J. B.

    1992-01-01

    Fluid shifts in weightlessness may cause a central volume expansion, activating reflexes to reduce the blood volume. Computer simulation was used to test the hypothesis that preadaptation of the blood volume prior to exposure to weightlessness could counteract the central volume expansion due to fluid shifts and thereby attenuate the circulatory and renal responses that result in large losses of fluid from body water compartments. The Guyton Model of Fluid, Electrolyte, and Circulatory Regulation was modified to simulate the six-degree head-down tilt that is frequently used as an experimental analog of weightlessness in bedrest studies. Simulation results show that preadaptation of the blood volume by a procedure resembling a blood donation immediately before head-down bedrest is beneficial in damping the physiologic responses to fluid shifts and reducing body fluid losses. After ten hours of head-down tilt, blood volume after preadaptation is higher than the control for 20 to 30 days of bedrest. Preadaptation also produces potentially beneficial increases in extracellular volume and total body water for 20 to 30 days of bedrest.

  9. Parallel runway requirement analysis study. Volume 2: Simulation manual

    NASA Technical Reports Server (NTRS)

    Ebrahimi, Yaghoob S.; Chun, Ken S.

    1993-01-01

    This document is a user manual for operating the PLAND_BLUNDER (PLB) simulation program. The simulation is based on two aircraft approaching parallel runways independently, using parallel Instrument Landing System (ILS) equipment during Instrument Meteorological Conditions (IMC). If an aircraft deviates from its assigned localizer course toward the opposite runway, this constitutes a blunder that could endanger the aircraft on the adjacent path. The worst-case scenario would be if the blundering aircraft were unable to recover and continued toward the adjacent runway. PLAND_BLUNDER is a Monte Carlo-type simulation which models the events and aircraft positioning during such a blunder situation. The model simulates two aircraft performing parallel ILS approaches using Instrument Flight Rules (IFR) or visual procedures. PLB uses a simple movement model and control law in three dimensions (X, Y, Z). The parameters of the simulation inputs and outputs are defined in this document, along with a sample of the statistical analysis. This document is the second volume of a two-volume set; Volume 1 describes the application of the PLB to the analysis of close parallel runway operations.

  10. Interactive stereoscopic visualization of large-scale astrophysical simulations

    NASA Astrophysics Data System (ADS)

    Kaehler, Ralf; Abel, Tom

    2012-03-01

    In recent decades, three-dimensional, time-dependent numerical simulations have become a standard tool in astrophysics and cosmology. This has given rise to a growing demand for analysis methods tailored to this type of simulation data, for example high-quality visualization approaches such as direct volume rendering and the display of stream lines. The modelled phenomena in numerical astrophysics usually involve complex spatial and temporal structures, and stereoscopic display techniques have proven to be particularly beneficial in clarifying the spatial relationships of the relevant features. In this paper we present a flexible software framework for interactive stereoscopic visualizations of large time-dependent, three-dimensional astrophysical and cosmological simulation datasets. It is designed to enable fast and intuitive creation of complete rendering workflows, from importing datasets and defining various parameters, including camera paths and stereoscopic settings, to storing the final images in various output formats. It leverages the power of modern graphics processing units (GPUs) and supports high-quality floating-point precision throughout the whole rendering pipeline. All functionality is scriptable through JavaScript. We give several application examples, including sequences produced for a number of planetarium shows.

  11. A large volume flat coil probe for oriented membrane proteins.

    PubMed

    Gor'kov, Peter L; Chekmenev, Eduard Y; Fu, Riqiang; Hu, Jun; Cross, Timothy A; Cotten, Myriam; Brey, William W

    2006-07-01

    15N detection of mechanically aligned membrane proteins benefits from large sample volumes that compensate for the low sensitivity of the observe nuclei, dilute sample preparation, and the poor filling factor arising from the presence of alignment plates. The use of larger multi-tuned solenoids, however, is limited by wavelength effects that lead to inhomogeneous RF fields across the sample, complicating cross-polarization experiments. We describe a 600 MHz 15N-1H solid-state NMR probe with a large (580 mm^3) RF solenoid for high-power, multi-pulse sequence experiments, such as polarization inversion spin exchange at the magic angle (PISEMA). In order to provide efficient detection for 15N, a 4-turn solenoidal sample coil is used that exceeds 0.27 λ at the 600 MHz 1H resonance. A balanced tuning-matching circuit is employed to preserve RF homogeneity across the sample for adequate magnetization transfer from 1H to 15N. We describe a procedure for optimization of the shorted quarter-wave (λ/4) coaxial trap that allows sufficiently strong RF fields in both the 1H and 15N channels to be achieved within the power limits of 300 W 1H and 1 kW 15N amplifiers. The 8 x 6 x 12 mm solenoid sustains simultaneous B1 irradiation of 100 kHz at the 1H frequency and 51 kHz at the 15N frequency for at least 5 ms with 265 and 700 W of input power in the respective channels. The probe functionality is demonstrated by 2D 15N-1H PISEMA spectroscopy in two applications at 600 MHz. PMID:16580852

  12. Novel multi-slit large-volume air sampler.

    PubMed

    Buchanan, L M; Decker, H M; Frisque, D E; Phillips, C R; Dahlgren, C M

    1968-08-01

    Scientific investigators who are interested in the various facets of airborne transmission of disease in research laboratories and hospitals need a simple, continuous, high-volume sampling device that will recover a high percentage of viable microorganisms from the atmosphere. Such a device must sample a large quantity of air. It should effect direct transfer of the air into an all-purpose liquid medium in order to collect bacteria, viruses, rickettsia, and fungi, and it should be easy to use. A simple multi-slit impinger sampler that fulfills these requirements has been developed. It operates at an air-sampling rate of 500 liters/min, has a high collection efficiency, functions at a low pressure drop, and, in contrast to some earlier instruments, does not depend upon electrostatic precipitation at high voltages. When compared to the all-glass impinger, the multi-slit impinger sampler collected microbial aerosols of Serratia marcescens at 82% efficiency, and aerosols of Bacillus subtilis var. niger at 78% efficiency. PMID:4970892

  13. Simulation of Large-Scale HPC Architectures

    SciTech Connect

    Jones, Ian S; Engelmann, Christian

    2011-01-01

    The Extreme-scale Simulator (xSim) is a recently developed performance investigation toolkit that permits running high-performance computing (HPC) applications in a controlled environment with millions of concurrent execution threads. It allows observing parallel application performance properties in a simulated extreme-scale HPC system to further assist in HPC hardware and application software co-design on the road toward multi-petascale and exascale computing. This paper presents a newly implemented network model for the xSim performance investigation toolkit that is capable of providing simulation support for a variety of HPC network architectures with an appropriate trade-off between simulation scalability and accuracy. The approach taken focuses on a scalable distributed solution with latency and bandwidth restrictions for the simulated network. Different network architectures, such as star, ring, mesh, torus, twisted torus, and tree, as well as hierarchical combinations, such as network-on-chip and network-on-node, are supported. Network traffic congestion modeling is omitted to gain simulation scalability by reducing simulation accuracy.
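    A latency/bandwidth network model of the kind described above can be sketched compactly. The hop-count formulas for ring and torus topologies are standard; the latency and bandwidth constants are illustrative assumptions, not xSim's calibrated values.

```python
# Simple latency/bandwidth model: message time = latency * hops + bytes / bandwidth,
# with hop counts derived from the topology (no congestion, matching the
# abstract's stated trade-off of accuracy for scalability).

def hops_ring(src, dst, n):
    d = abs(src - dst)
    return min(d, n - d)                  # shortest way around the ring

def hops_torus(src, dst, dims):
    # src/dst are coordinate tuples on a k-ary torus; wrap-around per dimension
    return sum(min(abs(s - d), k - abs(s - d))
               for s, d, k in zip(src, dst, dims))

def message_time(hops, nbytes, latency=1e-6, bandwidth=10e9):
    return latency * hops + nbytes / bandwidth

# 1 MiB message on a 1024-node ring and on an 8x8x8 torus (illustrative)
print(message_time(hops_ring(0, 700, 1024), 1 << 20))
print(message_time(hops_torus((0, 0, 0), (4, 7, 2), (8, 8, 8)), 1 << 20))
```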

  14. Large-volume flux closure during plasmoid-mediated reconnection in coaxial helicity injection

    DOE PAGES Beta

    Ebrahimi, F.; Raman, R.

    2016-03-23

    A large-volume flux closure during transient coaxial helicity injection (CHI) in NSTX-U is demonstrated through resistive magnetohydrodynamics (MHD) simulations. Several major improvements, including the improved positioning of the divertor poloidal field coils, are projected to improve the CHI start-up phase in NSTX-U. Simulations in the NSTX-U configuration with coil currents held constant in time show that with strong flux shaping the injected open field lines (injector flux) rapidly reconnect and form a large volume of closed flux surfaces. This is achieved by driving parallel current in the injector flux coil and oppositely directed currents in the flux shaping coils to form a narrow injector flux footprint and push the injector flux into the vessel. As the helicity and plasma are injected into the device, the oppositely directed field lines in the injector region are forced to reconnect through a local Sweet-Parker type reconnection, or to spontaneously reconnect when the elongated current sheet becomes MHD unstable and forms plasmoids. In these simulations, for the first time, it is found that the closed flux is over 70% of the initial injector flux used to initiate the discharge. Furthermore, these results bode well for the application of transient CHI in devices that employ superconducting coils to generate and sustain the plasma equilibrium.

  15. Large-volume flux closure during plasmoid-mediated reconnection in coaxial helicity injection

    NASA Astrophysics Data System (ADS)

    Ebrahimi, F.; Raman, R.

    2016-04-01

    A large-volume flux closure during transient coaxial helicity injection (CHI) in NSTX-U is demonstrated through resistive magnetohydrodynamics (MHD) simulations. Several major improvements, including the improved positioning of the divertor poloidal field coils, are projected to improve the CHI start-up phase in NSTX-U. Simulations in the NSTX-U configuration with coil currents held constant in time show that with strong flux shaping the injected open field lines (injector flux) rapidly reconnect and form a large volume of closed flux surfaces. This is achieved by driving parallel current in the injector flux coil and oppositely directed currents in the flux shaping coils to form a narrow injector flux footprint and push the injector flux into the vessel. As the helicity and plasma are injected into the device, the oppositely directed field lines in the injector region are forced to reconnect through a local Sweet-Parker type reconnection, or to spontaneously reconnect when the elongated current sheet becomes MHD unstable and forms plasmoids. In these simulations, for the first time, it is found that the closed flux is over 70% of the initial injector flux used to initiate the discharge. These results bode well for the application of transient CHI in devices that employ superconducting coils to generate and sustain the plasma equilibrium.

  16. Simulation of large systems with neural networks

    SciTech Connect

    Paez, T.L.

    1994-09-01

    Artificial neural networks (ANNs) have been shown to be capable of simulating the behavior of complex, nonlinear systems, including structural systems. Under certain circumstances, it is desirable to simulate structures that are analyzed with the finite element method. For example, when we perform a probabilistic analysis with the Monte Carlo method, we usually perform numerous (hundreds or thousands of) repetitions of a response simulation with different input and system parameters to estimate the chance of specific response behaviors. In such applications, efficiency in the computation of response is critical, and response simulation with ANNs can be valuable. However, finite element analyses of complex systems involve models with tens or hundreds of thousands of degrees of freedom, while ANNs are practically limited to simulations that involve far fewer variables. This paper develops a technique for reducing the amount of information required to characterize the response of a general structure. We show how the reduced information can be used to train a recurrent ANN. The trained ANN can then be used to simulate the reduced behavior of the original system, and the reduction transformation can be inverted to provide a simulation of the original system. A numerical example is presented.
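    A minimal sketch of the reduce-train-invert idea follows, with an SVD/PCA projection as the reduction transformation and a linear one-step map standing in for the recurrent ANN. The snapshot data and retained-mode count are illustrative assumptions, not the paper's example.

```python
import numpy as np

# Reduce full-order response snapshots to a few modes, identify a cheap
# surrogate in reduced space, simulate there, then invert the reduction.

rng = np.random.default_rng(1)
snapshots = rng.standard_normal((200, 10_000))     # time steps x DOFs (illustrative)

mean = snapshots.mean(axis=0)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
r = 5                                              # retained modes
basis = Vt[:r]                                     # reduction transformation
z = (snapshots - mean) @ basis.T                   # reduced coordinates

# Identify a linear one-step map z[k+1] ~ z[k] @ A by least squares
# (a stand-in for training the recurrent ANN on reduced data)
A, *_ = np.linalg.lstsq(z[:-1], z[1:], rcond=None)

zk = z[0]
for _ in range(199):                               # simulate in reduced space
    zk = zk @ A
full_state = zk @ basis + mean                     # invert back to original DOFs
print(full_state.shape)
```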

  17. Description and characterization of a novel method for partial volume simulation in software breast phantoms.

    PubMed

    Chen, Feiyu; Bakic, Predrag R; Maidment, Andrew D A; Jensen, Shane T; Shi, Xiquan; Pokrajac, David D

    2015-10-01

    A modification to our previous simulation of breast anatomy is proposed to improve the quality of simulated x-ray projection images. The image quality is affected by the voxel size of the simulation. Large voxels can cause notable spatial quantization artifacts; small voxels extend the generation time and increase the memory requirements. An improvement in image quality is achievable without reducing voxel size by simulating partial volume averaging, in which voxels containing more than one simulated tissue type are allowed. The linear x-ray attenuation coefficient of such a voxel is the sum of the linear attenuation coefficients of the constituent tissues, weighted by the voxel subvolume occupied by each tissue type. A local planar approximation of the boundary surface is employed. In the two-material case, the partial volume in each voxel is computed by decomposition into up to four simple geometric shapes. In the three-material case, by application of the Gauss-Ostrogradsky theorem, the 3D partial volume problem is converted into one of a few simpler 2D surface area problems. We illustrate the benefits of the proposed methodology on simulated x-ray projections. An efficient encoding scheme is proposed for the type and proportion of simulated tissues in each voxel. Monte Carlo simulation was used to evaluate the quantitative error of our approximation algorithms. PMID:25910056
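    The partial-volume rule stated above reduces to a weighted sum per voxel. A small illustration, with made-up attenuation coefficients rather than values from the paper:

```python
# A voxel's linear attenuation coefficient is the sum of the tissue
# coefficients weighted by the subvolume fraction each tissue occupies.

mu = {"adipose": 0.045, "glandular": 0.060}   # 1/mm, illustrative values

def voxel_mu(fractions):
    # fractions: {tissue: subvolume fraction}, fractions must sum to 1
    assert abs(sum(fractions.values()) - 1.0) < 1e-12
    return sum(mu[t] * f for t, f in fractions.items())

# A boundary voxel that is 30% adipose and 70% glandular
print(voxel_mu({"adipose": 0.3, "glandular": 0.7}))   # 0.0555
```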

  18. Fluorescence volume imaging with an axicon: simulation study based on scalar diffraction method.

    PubMed

    Zheng, Juanjuan; Yang, Yanlong; Lei, Ming; Yao, Baoli; Gao, Peng; Ye, Tong

    2012-10-20

    In a two-photon excitation fluorescence volume imaging (TPFVI) system, an axicon is used to generate a Bessel beam and, at the same time, to collect the generated fluorescence to achieve a large depth of field. A slice-by-slice diffraction propagation model in the frame of the angular spectrum method is proposed to simulate the whole imaging process of TPFVI. The simulation reveals that the Bessel beam can penetrate deep into scattering media due to its self-reconstruction ability. The simulation also demonstrates that TPFVI can image a volume of interest in a single raster scan. Two-photon excitation is crucial to eliminate the signals that are generated by the side lobes of Bessel beams; the unwanted signals may be further suppressed by placing a spatial filter in front of the detector. The simulation method will guide system design in improving the performance of a TPFVI system. PMID:23089777
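    The slice-by-slice angular spectrum propagation underlying such a simulation can be sketched compactly. The grid size, wavelength, and beam parameters below are illustrative assumptions, and a Gaussian beam stands in for the axicon-generated Bessel beam.

```python
import numpy as np

# One angular-spectrum step: multiply the field's 2D spectrum by the
# free-space transfer function exp(i*2*pi*dz*sqrt(1/lambda^2 - fx^2 - fy^2)).

def angular_spectrum_step(field, dz, wavelength, dx):
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    kz2 = (1.0 / wavelength)**2 - FX**2 - FY**2
    H = np.exp(2j * np.pi * dz * np.sqrt(np.maximum(kz2, 0.0)))
    H[kz2 < 0] = 0.0                      # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Propagate a Gaussian beam through 100 slices of 1 um each (illustrative)
n, dx, lam = 256, 0.2e-6, 0.8e-6
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
field = np.exp(-(X**2 + Y**2) / (2 * (2e-6)**2)).astype(complex)
for _ in range(100):
    field = angular_spectrum_step(field, 1e-6, lam, dx)
print(np.abs(field).max())                # peak amplitude after propagation
```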

  19. Large-scale mass distribution in the Illustris simulation

    NASA Astrophysics Data System (ADS)

    Haider, M.; Steinhauser, D.; Vogelsberger, M.; Genel, S.; Springel, V.; Torrey, P.; Hernquist, L.

    2016-04-01

    Observations at low redshifts thus far fail to account for all of the baryons expected in the Universe according to cosmological constraints. A large fraction of the baryons presumably resides in a tenuous, warm-hot medium between the galaxies, where they are difficult to observe due to their low densities and high temperatures. Cosmological simulations of structure formation can be used to verify this picture and provide quantitative predictions for the distribution of mass in different large-scale structure components. Here we study the distribution of baryons and dark matter at different epochs using data from the Illustris simulation. We identify regions of different dark matter density with the primary constituents of large-scale structure, allowing us to measure the mass and volume of haloes, filaments and voids. At redshift zero, we find that 49 per cent of the dark matter and 23 per cent of the baryons are within haloes more massive than the resolution limit of 2 × 10^8 M⊙. The filaments of the cosmic web host a further 45 per cent of the dark matter and 46 per cent of the baryons. The remaining 31 per cent of the baryons reside in voids. The majority of these baryons have been transported there through active galactic nuclei feedback. We note that the feedback model of Illustris is too strong for heavy haloes; it is therefore likely that we are overestimating this amount. Categorizing the baryons according to their density and temperature, we find that 17.8 per cent of them are in a condensed state, 21.6 per cent are present as cold, diffuse gas, and 53.9 per cent are found in the state of a warm-hot intergalactic medium.

  20. Aeronautical facilities catalogue. Volume 2: Airbreathing propulsion and flight simulators

    NASA Technical Reports Server (NTRS)

    Penaranda, F. E.; Freda, M. S.

    1985-01-01

    Volume two of the facilities catalogue deals with Airbreathing Propulsion and Flight Simulation Facilities. Data pertinent to managers and engineers are presented. Each facility is described on a data sheet that shows the facility's technical parameters on a chart and more detailed information in narratives. Facilities judged comparable in testing capability are noted and grouped together. Several comprehensive cross-indexes and charts are included.

  1. Surface identification, meshing and analysis during large molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Dupuy, Laurent M.; Rudd, Robert E.

    2006-03-01

    Techniques are presented for the identification and analysis of surfaces and interfaces in atomistic simulations of solids. Atomistic and other particle-based simulations have no inherent notion of a surface, only atomic positions and interactions. The algorithms we develop here provide an unambiguous means to determine which atoms constitute the surface, and the list of surface atoms and a tessellation (meshing) of the surface are determined simultaneously. The tessellation is then used to calculate various surface integrals such as volume, area and shape (multipole moments). The principle of surface identification and tessellation is closely related to that used in the generation of the r-reduced surface, a step in the visualization of molecular surfaces used in biology. The algorithms have been implemented and demonstrated to run automatically (on the fly) in a large-scale parallel molecular dynamics (MD) code on a supercomputer. We demonstrate the validity of the method in three applications in which the surfaces and interfaces evolve: void surfaces in ductile fracture, the surface morphology due to significant plastic deformation of a nanoscale metal plate, and the interfaces (grain boundaries) and void surfaces in a nanoscale polycrystalline system undergoing ductile failure. The technique is found to be quite robust, even when the topology of the surfaces changes, as in the case of void coalescence where two surfaces merge into one. It is found to add negligible computational overhead to an MD code.
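    Once a closed, consistently oriented triangulation of the surface exists, the surface integrals mentioned above follow from elementary formulas. The sketch below is a hedged illustration, not the authors' implementation: area comes directly from triangle cross products, and enclosed volume from the divergence theorem.

```python
import numpy as np

# Area and enclosed volume of a closed, outward-oriented triangle mesh:
# area = sum of 0.5*|(v1-v0)x(v2-v0)|; volume = (1/6) sum v0 . (v1 x v2).

def surface_area_and_volume(vertices, triangles):
    v0, v1, v2 = (vertices[triangles[:, k]] for k in range(3))
    cross = np.cross(v1 - v0, v2 - v0)
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    volume = np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum() / 6.0
    return area, abs(volume)

# Unit tetrahedron as a closed test surface
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
tris = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
print(surface_area_and_volume(verts, tris))   # volume should be 1/6
```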

  2. The UPSCALE project: a large simulation campaign

    NASA Astrophysics Data System (ADS)

    Mizielinski, Matthew; Roberts, Malcolm; Vidale, Pier Luigi; Schiemann, Reinhard; Demory, Marie-Estelle; Strachan, Jane

    2014-05-01

    The development of a traceable hierarchy of HadGEM3 global climate models, based upon the Met Office Unified Model, at resolutions from 135 km to 25 km now allows the impact of resolution on the mean state, variability and extremes of climate to be studied in a robust fashion. In 2011 we successfully obtained a single-year grant of 144 million core hours of supercomputing time from the PRACE organization to run ensembles of 27-year atmosphere-only (HadGEM3-A GA3.0) climate simulations at 25 km resolution, as used in present global weather forecasting, on HERMIT at HLRS. Through 2012 the UPSCALE project (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) ran over 650 years of simulation at resolutions of 25 km (N512), 60 km (N216) and 135 km (N96) to look at the value of high-resolution climate models in the study of both present climate and a potential future climate scenario based on RCP8.5. Over 400 TB of data was produced using HERMIT, with additional simulations run on HECToR (UK supercomputer) and MONSooN (Met Office NERC Supercomputing Node). The data generated were transferred to the JASMIN super-data cluster, hosted by STFC CEDA in the UK, where analysis facilities are allowing rapid scientific exploitation of the data set. Many groups across the UK and Europe are already taking advantage of these facilities, and we welcome approaches from other interested scientists. This presentation will briefly cover the purpose and requirements of the UPSCALE project and the facilities used; technical implementation and hurdles (model porting and optimisation, automation, numerical failures, data transfer); ensemble specification; and current analysis projects and access to the data set. A full description of UPSCALE and the data set generated has been submitted to Geoscientific Model Development, with overview information available from http://proj.badc.rl.ac.uk/upscale .

  3. Entropic effects in large-scale Monte Carlo simulations.

    PubMed

    Predescu, Cristian

    2007-07-01

    The efficiency of Monte Carlo samplers is dictated not only by energetic effects, such as large barriers, but also by entropic effects that are due to the sheer volume that is sampled. The latter effects appear in the form of an entropic mismatch or divergence between the direct and reverse trial moves. We provide lower and upper bounds for the average acceptance probability in terms of the Rényi divergence of order 1/2. We show that the asymptotic finitude of the entropic divergence is the necessary and sufficient condition for nonvanishing acceptance probabilities in the limit of large dimension. Furthermore, we demonstrate that the upper bound is reasonably tight by showing that the exponent is asymptotically exact for systems made up of a large number of independent and identically distributed subsystems. For the last statement, we provide an alternative proof that relies on the reformulation of the acceptance probability as a large deviation problem. The reformulation also leads to a class of low-variance estimators for strongly asymmetric distributions. We show that the entropy divergence causes a decay in the average displacements with the number of dimensions n that are simultaneously updated. For systems that have a well-defined thermodynamic limit, the decay is demonstrated to be n^(-1/2) for random-walk Monte Carlo and n^(-1/6) for smart Monte Carlo (SMC). Numerical simulations of the Lennard-Jones 38 (LJ38) cluster show that SMC is virtually as efficient as the Markov chain implementation of the Gibbs sampler, which is normally utilized for Lennard-Jones clusters. An application of the entropic inequalities to the parallel tempering method demonstrates that the number of replicas increases as the square root of the heat capacity of the system. PMID:17677591
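    The entropic decay of acceptance probabilities is easy to reproduce numerically. The sketch below estimates the average random-walk Metropolis acceptance for a product of iid standard Gaussians as the number of simultaneously updated dimensions grows; the step size and sample counts are illustrative choices, not the paper's.

```python
import numpy as np

# For states x drawn from the target, the expected Metropolis acceptance is
# E[min(1, pi(x')/pi(x))] with x' = x + step*noise. For iid standard normals,
# log pi(x')/pi(x) = 0.5*(|x|^2 - |x'|^2). Acceptance drops as n grows.

rng = np.random.default_rng(2)

def avg_acceptance(n, step=0.5, samples=2_000):
    x = rng.standard_normal((samples, n))            # states from the target
    prop = x + step * rng.standard_normal((samples, n))
    log_ratio = 0.5 * (x**2 - prop**2).sum(axis=1)   # log pi(prop) - log pi(x)
    return np.exp(np.minimum(log_ratio, 0.0)).mean() # min(1, ratio), stably

for n in (1, 10, 100, 1000):
    print(n, avg_acceptance(n))
```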

  4. Simulating stochastic dynamics using large time steps.

    PubMed

    Corradini, O; Faccioli, P; Orland, H

    2009-12-01

    We present an approach to investigate the long-time stochastic dynamics of multidimensional classical systems in contact with a heat bath. When the potential energy landscape is rugged, the kinetics displays a decoupling of short- and long-time scales, and both molecular dynamics and Monte Carlo (MC) simulations are generally inefficient. Using a field-theoretic approach, we perform analytically the average over the short-time stochastic fluctuations. This way, we obtain an effective theory which generates the same long-time dynamics as the original theory, but has a lower time-resolution power. Such an approach is used to develop an improved version of the MC algorithm, which is particularly suitable for investigating the dynamics of rare conformational transitions. In the specific case of molecular systems at room temperature, we show that the elementary integration time steps used to simulate the effective theory can be chosen approximately a factor of 100 larger than those used in the original theory. Our results are illustrated and tested on a simple system characterized by a rugged energy landscape. PMID:20365123

  5. Resonant RF network antennas for large-area and large-volume inductively coupled plasma sources

    NASA Astrophysics Data System (ADS)

    Hollenstein, Ch; Guittienne, Ph; Howling, A. A.

    2013-10-01

    Large-area and large-volume radio frequency (RF) plasmas are produced by different arrangements of an elementary electrical mesh consisting of two conductors interconnected by a capacitor at each end. The resulting cylindrical and planar RF networks are resonant and generate very high RF currents. The input impedance of such RF networks shows the behaviour of an RLC parallel resonance equivalent circuit. The real impedance at the resonance frequency is of great advantage for power matching compared with conventional inductive devices. Changes in the RLC equivalent circuit during the observed E-H transition will allow future interpretation of the plasma-antenna coupling. Furthermore, high power transfer efficiencies are found during inductively coupled plasma (ICP) operation. For the planar RF antenna network it is shown that the E-H transition occurs simultaneously over the entire antenna. The underlying physics of these discharges induced by the resonant RF network antenna is found to be identical to that of the conventional ICP devices described in the literature. The resonant RF network antenna is a new, versatile plasma source which can be adapted to applications in industry and research.
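    The RLC parallel-resonance behaviour of the input impedance can be illustrated directly; the component values below are placeholders, not the antenna's measured parameters.

```python
import numpy as np

# Parallel RLC input impedance: admittances of R, L, and C add in parallel.
# Near f0 = 1/(2*pi*sqrt(L*C)) the reactive branches cancel and Z is large
# and purely real, which is what makes power matching straightforward.

def z_parallel_rlc(f, R, L, C):
    w = 2 * np.pi * f
    y = 1 / R + 1 / (1j * w * L) + 1j * w * C
    return 1 / y

R, L, C = 5e3, 1e-6, 1e-9                   # illustrative circuit values
f0 = 1 / (2 * np.pi * np.sqrt(L * C))
print(f0, z_parallel_rlc(f0, R, L, C))      # ~ (R + 0j) at resonance
```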

  6. Large-Eddy Simulation of Wind-Plant Aerodynamics: Preprint

    SciTech Connect

    Churchfield, M. J.; Lee, S.; Moriarty, P. J.; Martinez, L. A.; Leonardi, S.; Vijayakumar, G.; Brasseur, J. G.

    2012-01-01

    In this work, we present results of a large-eddy simulation of the 48 multi-megawatt turbines composing the Lillgrund wind plant. Turbulent inflow wind is created by performing an atmospheric boundary layer precursor simulation and turbines are modeled using a rotating, variable-speed actuator line representation. The motivation for this work is that few others have done wind plant large-eddy simulations with a substantial number of turbines, and the methods for carrying out the simulations are varied. We wish to draw upon the strengths of the existing simulations and our growing atmospheric large-eddy simulation capability to create a sound methodology for performing this type of simulation. We have used the OpenFOAM CFD toolbox to create our solver.

  7. The 1980 Large space systems technology. Volume 2: Base technology

    NASA Technical Reports Server (NTRS)

    Kopriver, F., III (Compiler)

    1981-01-01

    Technology pertinent to large antenna systems, technology related to large space platform systems, and base technology applicable to both antenna and platform systems are discussed. Design studies, structural testing results, and theoretical applications are presented with accompanying validation data. A total systems approach including controls, platforms, and antennas is presented as a cohesive, programmatic plan for large space systems.

  8. An Ultrascalable Solution to Large-scale Neural Tissue Simulation.

    PubMed

    Kozloski, James; Wagner, John

    2011-01-01

    Neural tissue simulation extends requirements and constraints of previous neuronal and neural circuit simulation methods, creating a tissue coordinate system. We have developed a novel tissue volume decomposition, and a hybrid branched cable equation solver. The decomposition divides the simulation into regular tissue blocks and distributes them on a parallel multithreaded machine. The solver computes neurons that have been divided arbitrarily across blocks. We demonstrate thread, strong, and weak scaling of our approach on a machine with more than 4000 nodes and up to four threads per node. Scaling synapses to physiological numbers had little effect on performance, since our decomposition approach generates synapses that are almost always computed locally. The largest simulation included in our scaling results comprised 1 million neurons, 1 billion compartments, and 10 billion conductance-based synapses and gap junctions. We discuss the implications of our ultrascalable Neural Tissue Simulator, and with our results estimate requirements for a simulation at the scale of a human brain. PMID:21954383

  9. An Ultrascalable Solution to Large-scale Neural Tissue Simulation

    PubMed Central

    Kozloski, James; Wagner, John

    2011-01-01

    Neural tissue simulation extends requirements and constraints of previous neuronal and neural circuit simulation methods, creating a tissue coordinate system. We have developed a novel tissue volume decomposition, and a hybrid branched cable equation solver. The decomposition divides the simulation into regular tissue blocks and distributes them on a parallel multithreaded machine. The solver computes neurons that have been divided arbitrarily across blocks. We demonstrate thread, strong, and weak scaling of our approach on a machine with more than 4000 nodes and up to four threads per node. Scaling synapses to physiological numbers had little effect on performance, since our decomposition approach generates synapses that are almost always computed locally. The largest simulation included in our scaling results comprised 1 million neurons, 1 billion compartments, and 10 billion conductance-based synapses and gap junctions. We discuss the implications of our ultrascalable Neural Tissue Simulator, and with our results estimate requirements for a simulation at the scale of a human brain. PMID:21954383
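    The volume decomposition described above can be sketched as a simple spatial binning of compartments to blocks (and hence to ranks of a parallel machine). The box size, block size, and compartment count below are illustrative assumptions, not the Neural Tissue Simulator's parameters.

```python
import numpy as np

# Assign each compartment (a 3D point) to the regular tissue block that
# contains it; with local connectivity, most synapses then land in blocks
# computed on the same rank, which is why synapse scaling is cheap.

def assign_to_blocks(points, origin, block_size, grid_dims):
    idx = np.floor((points - origin) / block_size).astype(int)
    idx = np.clip(idx, 0, np.array(grid_dims) - 1)
    return np.ravel_multi_index(idx.T, grid_dims)   # flat block/rank index

rng = np.random.default_rng(3)
compartments = rng.uniform(0.0, 1000.0, size=(1_000_000, 3))  # um, illustrative
ranks = assign_to_blocks(compartments, np.zeros(3), 125.0, (8, 8, 8))
print(np.bincount(ranks).max())   # load of the busiest block
```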

  10. Large Eddy Simulation of a Sooting Jet Diffusion Flame

    NASA Astrophysics Data System (ADS)

    Blanquart, Guillaume; Pitsch, Heinz

    2007-11-01

    The understanding of soot particle dynamics in combustion systems is a key issue in the development of low-emission engines. Of particular importance are the processes shaping the soot particle size distribution function (PSDF). However, it is not always necessary to represent the full distribution exactly, and often information about its moments alone is sufficient. The Direct Quadrature Method of Moments (DQMOM) allows for an efficient and accurate prediction of the moments of the soot PSDF. This method has been validated for laminar premixed and diffusion flames with detailed chemistry and is now implemented in a semi-implicit, low Mach-number Navier-Stokes solver. A Large Eddy Simulation (LES) of a piloted sooting jet diffusion flame (Delft flame) is performed to study the dynamics of soot particles in a turbulent environment. The profiles of temperature and major species are compared with the experimental measurements. Soot volume fraction profiles are compared with the recent data of Qamar et al. (2007). Aggregate properties such as the diameter and the fractal shape are studied within the scope of DQMOM.
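    The moment bookkeeping behind a quadrature-of-moments representation can be illustrated with a two-node quadrature of the PSDF. The weights and abscissas below are illustrative numbers, not soot data, and the full DQMOM additionally transports these quantities through the flow.

```python
import numpy as np

# Quadrature representation of a particle size distribution: a few weights
# w_i and abscissas x_i reproduce any moment as m_k = sum_i w_i * x_i**k.

w = np.array([1.0e10, 3.0e9])     # number densities of the two nodes [1/m^3]
x = np.array([5e-9, 30e-9])       # representative particle diameters [m]

def moment(k):
    return np.sum(w * x**k)

m0 = moment(0)                    # total number density
m1 = moment(1)
print("mean diameter:", m1 / m0)  # ~ 1.08e-8 m for these illustrative nodes
```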

  11. Large Eddy Simulation of Crashback in Marine Propulsors

    NASA Astrophysics Data System (ADS)

    Jang, Hyunchul

    Crashback is an operating condition to quickly stop a propelled vehicle, where the propeller is rotated in the reverse direction to yield negative thrust. The crashback condition is dominated by the interaction of the free stream flow with the strong reverse flow. This interaction forms a highly unsteady vortex ring, which is a very prominent feature of crashback. Crashback causes highly unsteady loads and flow separation on the blade surface. The unsteady loads can cause propulsor blade damage, and also affect vehicle maneuverability. Crashback is therefore well known as one of the most challenging propeller states to analyze. This dissertation uses Large-Eddy Simulation (LES) to predict the highly unsteady flow field in crashback. A non-dissipative and robust finite volume method developed by Mahesh et al. (2004) for unstructured grids is applied to flow around marine propulsors. The LES equations are written in a rotating frame of reference. The objectives of this dissertation are: (1) to understand the flow physics of crashback in marine propulsors with and without a duct, (2) to develop a finite volume method for highly skewed meshes which usually occur in complex propulsor geometries, and (3) to develop a sliding interface method for simulations of rotor-stator propulsor on parallel platforms. LES is performed for an open propulsor in crashback and validated against experiments performed by Jessup et al. (2004). The LES results show good agreement with experiments. Effective pressures for thrust and side-force are introduced to more clearly understand the physical sources of thrust and side-force. Both thrust and side-force are seen to be mainly generated from the leading edge of the suction side of the propeller. This implies that thrust and side-force have the same source---the highly unsteady leading edge separation. Conditional averaging is performed to obtain quantitative information about the complex flow physics of high- or low-amplitude events. The

  12. Performance of large electron energy filter in large volume plasma device

    SciTech Connect

    Singh, S. K.; Srivastava, P. K.; Awasthi, L. M.; Mattoo, S. K.; Sanyasi, A. K.; Kaw, P. K.; Singh, R.

    2014-03-15

    This paper describes an in-house designed large Electron Energy Filter (EEF) utilized in the Large Volume Plasma Device (LVPD) [S. K. Mattoo, V. P. Anita, L. M. Awasthi, and G. Ravi, Rev. Sci. Instrum. 72, 3864 (2001)] to secure the objectives of (a) removing remnant primary ionizing energetic electrons and non-thermal electrons, (b) introducing a radial gradient in plasma electron temperature without greatly affecting the radial profile of plasma density, and (c) providing control over the scale length of the gradient in electron temperature. A set of 19 independent coils makes the EEF a variable-aspect-ratio rectangular solenoid producing a magnetic field (B_x) of 100 G along its axis, transverse to the ambient axial field (B_z ∼ 6.2 G) of the LVPD, when all its coils are used. Outside the EEF, the magnetic field falls rapidly to 1 G at a distance of 20 cm from the center of the solenoid on either side, toward the target and source plasmas. The EEF divides the LVPD plasma into three distinct regions: source, EEF, and target plasma. We report that the target plasma (n_e ∼ 2 × 10¹¹ cm⁻³ and T_e ∼ 2 eV) has no detectable energetic electrons and that radial gradients in its electron temperature can be established with scale lengths between 50 and 600 cm by controlling the EEF magnetic field. Our observations reveal that the role of the EEF magnetic field is manifested through the energy dependence of transverse electron transport and through enhanced transport caused by plasma turbulence in the EEF plasma.

  13. WEST-3 wind turbine simulator development: Volume 3, Software

    SciTech Connect

    Hoffman, J.A.; Sridhar, S.

    1985-07-01

    This report deals with the software developed for WEST-3, a new, all-digital, fully programmable wind turbine simulator developed by Paragon Pacific Inc. The process of wind turbine simulation on WEST-3 is described in detail. The major steps are: processing the mathematical models, preparing the constant data, and using the system software to generate executable code for running on WEST-3. The mechanics of reformulation, normalization, and scaling of the mathematical models are discussed in detail, in particular the significance of reformulation, which leads to accurate simulations. Descriptions are given of the preprocessor computer programs used to prepare the constant data needed in the simulation. These programs, in addition to scaling and normalizing all the constants, relieve the user from having to generate a large number of constants used in the simulation. Also given in the report are brief descriptions of the components of the WEST-3 system software: Translator, Assembler, Linker, and Loader. Also included are: details of the aeroelastic rotor analysis, which is the centerpiece of a wind turbine simulation model; an analysis of the gimbal subsystem; and listings of the variables, constants, and equations used in the simulation.

  14. Large-Eddy Simulation of Wind-Plant Aerodynamics

    SciTech Connect

    Churchfield, M. J.; Lee, S.; Moriarty, P. J.; Martinez, L. A.; Leonardi, S.; Vijayakumar, G.; Brasseur, J. G.

    2012-01-01

    In this work, we present results of a large-eddy simulation of the 48 multi-megawatt turbines composing the Lillgrund wind plant. Turbulent inflow wind is created by performing an atmospheric boundary layer precursor simulation, and turbines are modeled using a rotating, variable-speed actuator line representation. The motivation for this work is that few others have done large-eddy simulations of wind plants with a substantial number of turbines, and the methods for carrying out the simulations are varied. We wish to draw upon the strengths of the existing simulations and our growing atmospheric large-eddy simulation capability to create a sound methodology for performing this type of simulation. We used the OpenFOAM CFD toolbox to create our solver. The simulated time-averaged power production of the turbines in the plant agrees well with field observations, except with the sixth turbine and beyond in each wind-aligned row. The power produced by each of those turbines is overpredicted by 25-40%. A direct comparison between simulated and field data is difficult because we simulate one wind direction with a speed and turbulence intensity characteristic of Lillgrund, but the field observations were taken over a year of varying conditions. The simulation shows the significant 60-70% decrease in the performance of the turbines behind the front row in this plant, which has a spacing of 4.3 rotor diameters in this direction. The overall plant efficiency is well predicted. This work shows the importance of using local grid refinement to simultaneously capture the meter-scale details of the turbine wake and the kilometer-scale turbulent atmospheric structures. Although this work illustrates the power of large-eddy simulation in producing a time-accurate solution, it required about one million processor-hours, showing the significant cost of large-eddy simulation.
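
    The actuator line representation is commonly implemented by projecting each blade-element force onto the LES grid with a regularized (Gaussian) kernel; a schematic sketch of that projection step (a standard formulation, not necessarily the exact one used in this work; names and values are ours):

        # Spread a blade-element point force onto grid nodes with a 3D Gaussian
        # of regularization width eps (the standard actuator-line kernel).
        import numpy as np

        def project_force(grid_xyz, point, force, eps):
            r2 = np.sum((grid_xyz - point) ** 2, axis=1)
            eta = np.exp(-r2 / eps ** 2) / (eps ** 3 * np.pi ** 1.5)  # normalized kernel
            return np.outer(eta, force)                               # (N, 3) body force

        grid = np.random.rand(1000, 3) * 100.0  # toy grid-node coordinates (m)
        f = project_force(grid, np.array([50.0, 50.0, 90.0]),
                          np.array([0.0, 0.0, -5e3]), eps=4.0)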

  15. Climate Simulations with an Isentropic Finite Volume Dynamical Core

    SciTech Connect

    Chen, Chih-Chieh; Rasch, Philip J.

    2012-04-15

    This paper discusses the impact of changing the vertical coordinate from a hybrid pressure to a hybrid-isentropic coordinate within the finite volume dynamical core of the Community Atmosphere Model (CAM). Results from a 20-year climate simulation using the new model coordinate configuration are compared to control simulations produced by the Eulerian spectral and FV dynamical cores of CAM, which both use a pressure-based (σ-p) coordinate. The same physical parameterization package is employed in all three dynamical cores. The isentropic modeling framework significantly alters the simulated climatology and has several desirable features. The revised model produces a better representation of heat transport processes in the atmosphere, leading to much improved atmospheric temperatures. We show that the isentropic model is very effective in reducing the long-standing cold temperature bias in the upper troposphere and lower stratosphere, a deficiency shared among most climate models. The warmer upper troposphere and stratosphere seen in the isentropic model reduces the global coverage of high clouds, which is in better agreement with observations. The isentropic model also shows improvements in the simulated wintertime mean sea-level pressure field in the northern hemisphere.
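
    For context, the pressure-based hybrid coordinate used by the two control dynamical cores is conventionally written as follows (standard form, our restatement; p_0 is a reference pressure, p_s the surface pressure, and A_k, B_k fixed level coefficients):

        % Hybrid sigma-p coordinate: model-level pressure blends a fixed
        % pressure component with a terrain-following component.
        p_k = A_k\, p_0 + B_k\, p_s .

    The hybrid-isentropic configuration instead follows the terrain near the surface and transitions toward potential-temperature surfaces aloft.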

  16. Numerical simulation of the decay of swirling flow in a constant volume engine simulator

    SciTech Connect

    Cloutman, L.D.

    1986-05-01

    The KIVA and COYOTE computer programs were used to simulate the decay of turbulent swirling flow in a constant-volume combustion bomb. The results are in satisfactory agreement with measurements of both swirl velocity and temperature. Predictions of secondary flows and suggestions for future research are also presented. 14 refs., 15 figs.

  17. Numerical simulation of the decay of swirling flow in a constant volume engine simulator

    NASA Astrophysics Data System (ADS)

    Cloutman, Lawrence D.

    1986-05-01

    The KIVA and COYOTE computer programs were used to simulate the decay of turbulent swirling flow in a constant-volume combustion bomb. The results are in satisfactory agreement with measurements of both swirl velocity and temperature. Predictions of secondary flows and suggestions for future research are also presented.

  18. Development of large volume double ring penning plasma discharge source for efficient light emissions

    SciTech Connect

    Prakash, Ram; Vyas, Gheesa Lal; Jain, Jalaj; Prajapati, Jitendra; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana

    2012-12-15

    In this paper, the development of a large volume double ring Penning plasma discharge source for efficient light emission is reported. The developed Penning discharge source consists of two cylindrical stainless steel end cathodes of radius 6 cm with a gap of 5.5 cm between them, fitted in the top and bottom flanges of the vacuum chamber. Two stainless steel anode rings of thickness 0.4 cm and inner diameter 6.45 cm, separated by 2 cm, are kept at the discharge centre. Neodymium (Nd₂Fe₁₄B) permanent magnets are inserted behind the cathodes to produce a nearly uniform magnetic field of ∼0.1 T at the center. Experiments and simulations have been performed for single and double anode ring configurations using a helium gas discharge, which indicate that the double ring configuration gives better light emission in the large volume Penning plasma discharge arrangement. Optical emission spectroscopy measurements are used to complement the observations. The spectral line-ratio technique is utilized to determine the electron plasma density. The estimated electron plasma density in the double ring configuration is ∼2 × 10¹¹ cm⁻³, around one order of magnitude larger than that of the single ring arrangement.

  19. Testbed for large volume surveillance through distributed fusion and resource management

    NASA Astrophysics Data System (ADS)

    Valin, Pierre; Guitouni, Adel; Bossé, Éloi; Wehn, Hans; Yates, Richard; Zwick, Harold

    2007-04-01

    DRDC Valcartier has initiated, through a PRECARN partnership project, the development of an advanced simulation testbed for evaluating the effectiveness of Network Enabled Operations in a coastal large volume surveillance situation. The main focus of this testbed is to study concepts such as distributed information fusion, dynamic resource and network configuration management, and self-synchronising units and agents. This article presents the requirements, design, and first implementation builds, and reports on some preliminary results. The testbed allows the modeling of distributed nodes performing information fusion, dynamic resource management planning and scheduling, and configuration management, given multiple constraints on the resources and their communications networks. Two situations are simulated: cooperative and non-cooperative target search. A cooperative surface target behaves in ways to be detected (and rescued), while an elusive target attempts to avoid detection. The current simulation consists of a networked set of surveillance assets including aircraft (UAVs, helicopters, maritime patrol aircraft) and ships. These assets have electro-optical and infrared sensors, and scanning and imaging radar capabilities. Since full data sharing over datalinks is not feasible, own-platform data fusion must be simulated to evaluate the implementation and performance of distributed information fusion. A special emphasis is put on higher-level fusion concepts using knowledge-based rules, with level 1 fusion already providing tracks. Surveillance platform behavior is also simulated in order to evaluate different dynamic resource management algorithms. Additionally, communication networks are modeled to simulate different information exchange concepts. The testbed allows the evaluation of a range of control strategies, from independent platform search, through various levels of platform collaboration, up to centralized control of the search platforms.

  20. Large volume liquid helium relief device verification apparatus for the alpha magnetic spectrometer

    NASA Astrophysics Data System (ADS)

    Klimas, Richard John; McIntyre, P.; Colvin, John; Zeigler, John; Van Sciver, Steven; Ting, Samuel

    2012-06-01

    Here we present details of an experiment for verifying the liquid helium vessel relief device for the Alpha Magnetic Spectrometer-02 (AMS-02). The relief device utilizes a series of rupture discs designed to open in the event of a vacuum failure of the AMS-02 cryogenic system. A failure of this type is classified as a catastrophic loss of insulating vacuum accident. This apparatus differs from other approaches due to the size of the test volumes used. The verification apparatus consists of a 250 liter vessel for the test quantity of liquid helium, located inside a vacuum insulated vessel. A large diameter valve is suddenly opened to simulate the loss of insulating vacuum in a repeatable manner. Pressure and temperature vs. time data are presented and discussed in the context of the AMS-02 hardware configuration.

  1. Large space telescope, phase A. Volume 3: Optical telescope assembly

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The development and characteristics of the optical telescope assembly for the Large Space Telescope are discussed. The systems considerations are based on mission-related parameters and optical equipment requirements. Information is included on: (1) structural design and analysis, (2) thermal design, (3) stabilization and control, (4) alignment, focus, and figure control, (5) electronic subsystem, and (6) scientific instrument design.

  2. Large space telescope, phase A. Volume 5: Support systems module

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The development and characteristics of the support systems module for the Large Space Telescope are discussed. The following systems are described: (1) thermal control, (2) electrical, (3) communication and data handling, (4) attitude control system, and (5) structural features. Analyses of maintainability and reliability considerations are included.

  3. Large space telescope, phase A. Volume 4: Scientific instrument package

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The design and characteristics of the scientific instrument package for the Large Space Telescope are discussed. The subjects include: (1) general scientific objectives, (2) package system analysis, (3) scientific instrumentation, (4) imaging photoelectric sensors, (5) environmental considerations, and (6) reliability and maintainability.

  4. RADON DIAGNOSTIC MEASUREMENT GUIDANCE FOR LARGE BUILDINGS - VOLUME 2. APPENDICES

    EPA Science Inventory

    The report discusses the development of radon diagnostic procedures and mitigation strategies applicable to a variety of large non-residential buildings commonly found in Florida. The investigations document and evaluate the nature of radon occurrence and entry mechanisms for rad...

  5. Exact-Differential Large-Scale Traffic Simulation

    SciTech Connect

    Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios; Perumalla, Kalyan S

    2015-01-01

    Analyzing large-scale traffic by simulation requires executing runs many times with varying scenarios or parameters. Such repeated execution introduces substantial redundancy, because the change from one scenario to the next is usually minor, for example blocking a single road or changing the speed limit on several roads. In this paper, we propose a new redundancy reduction technique, called exact-differential simulation, which simulates only the changed portions of a scenario in later executions while producing exactly the same results as a whole simulation. The paper consists of two main efforts: (i) the key idea and algorithm of exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of exact-differential simulation. In experiments on a Tokyo traffic simulation, the exact-differential approach improves elapsed time by a factor of 7.26 on average, and by a factor of 2.26 even in the worst case, compared with whole simulation.
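
    The core idea can be caricatured in a few lines (our toy construction, not the authors' implementation): event outcomes from a base run are cached by their inputs, so a modified scenario recomputes only events whose inputs actually changed, while every reused result is bit-identical to the whole simulation.

        # Toy exact-differential re-execution via caching on (event, input-state).
        cache = {}

        def execute(event, state, compute):
            key = (event, state)
            if key not in cache:          # input changed (or first run): recompute
                cache[key] = compute(event, state)
            return cache[key]             # unchanged input: reuse the exact result

        execute("road_42", ("open", 30), lambda e, s: len(e) + s[1])  # base run
        execute("road_42", ("open", 30), lambda e, s: len(e) + s[1])  # served from cache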

  6. Real-time visualization of large volume datasets on standard PC hardware.

    PubMed

    Xie, Kai; Yang, Jie; Zhu, Y M

    2008-05-01

    In the medical area, interactive three-dimensional volume visualization of large volume datasets is a challenging task. One of the major challenges in graphics processing unit (GPU)-based volume rendering algorithms is the limited size of texture memory imposed by current GPU architecture. We attempt to overcome this limitation by rendering only the visible parts of large CT datasets. In this paper, we present an efficient, high-quality volume rendering algorithm using GPUs for rendering large CT datasets at interactive frame rates on standard PC hardware. We subdivide the volume dataset into uniformly sized blocks and combine early ray termination, empty-space skipping, and visibility culling to accelerate the whole rendering process and render only the visible parts of the volume data. We have implemented our volume rendering algorithm for a large volume dataset of 512 x 304 x 1878 voxels (visible female) and achieved interactive performance (i.e., 3-4 frames per second) on a Pentium 4 2.4 GHz PC equipped with an NVIDIA GeForce 6600 graphics card (256 MB video memory). This method can be used as a 3D visualization tool of large CT datasets for doctors or radiologists. PMID:18243401
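
    Two of the accelerations mentioned, early ray termination and empty-space skipping, are easy to show in a CPU ray-marching sketch (ours, for illustration; the paper implements these on the GPU):

        # Front-to-back compositing along one ray whose samples are grouped
        # per block; empty blocks are skipped, and marching stops once the
        # accumulated opacity saturates.
        import numpy as np

        def march(ray_blocks, block_empty, opacity):
            color, alpha = 0.0, 0.0
            for b, samples in enumerate(ray_blocks):
                if block_empty[b]:                  # empty-space skipping
                    continue
                for s in samples:
                    a = opacity(s)
                    color += (1.0 - alpha) * a * s  # sample value used as its color
                    alpha += (1.0 - alpha) * a
                    if alpha > 0.99:                # early ray termination
                        return color, alpha
            return color, alpha

        blocks = [np.linspace(0.0, 1.0, 8) for _ in range(4)]
        print(march(blocks, [False, True, False, False], lambda s: 0.1 * s))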

  7. Special Properties of Coherence Scanning Interferometers for large Measurement Volumes

    NASA Astrophysics Data System (ADS)

    Bauer, W.

    2011-08-01

    In contrast to many other optical methods, the measurement uncertainty of a coherence scanning interferometer (CSI) in the vertical direction is independent of the field of view. CSIs are therefore ideal instruments for measuring 3D profiles of larger areas (e.g., 36×28 mm²) with high precision. This is advantageous for determining form parameters such as flatness, parallelism, and step heights within a short time. In addition, a telecentric beam path allows measurements of deep-lying surfaces (<70 mm) and the determination of form parameters with large step heights; the lateral and spatial resolution, however, are reduced. In this presentation, different metrological characteristics, together with their potential errors, are analyzed for large-scale measuring CSIs, which makes these instruments ideal tools in quality control, e.g., for good/bad selections. The consequences for practical use in industry and for standardization are discussed using examples of workpieces from automotive suppliers and the steel industry.

  8. Modelling and simulation of large solid state laser systems

    SciTech Connect

    Simmons, W.W.; Warren, W.E.

    1986-01-01

    The role of numerical methods to simulate the several physical processes (e.g., diffraction, self-focusing, gain saturation) that are involved in coherent beam propagation through large laser systems is discussed. A comprehensive simulation code for modeling the pertinent physical phenomena observed in laser operations (growth of small-scale modulation, spatial filter, imaging, gain saturation and beam-induced damage) is described in some detail. Comparisons between code results and solid state laser output performance data are presented. Design and performance estimation of the large Nova laser system at LLNL are given. Finally, a global design rule for large, solid state laser systems is discussed.

  9. Probing the Earth’s interior with a large-volume liquid scintillator detector

    NASA Astrophysics Data System (ADS)

    Hochmuth, Kathrin A.; Feilitzsch, Franz V.; Fields, Brian D.; Undagoitia, Teresa Marrodán; Oberauer, Lothar; Potzel, Walter; Raffelt, Georg G.; Wurm, Michael

    2007-02-01

    A future large-volume liquid scintillator detector would provide a high-statistics measurement of terrestrial antineutrinos originating from β-decays of the uranium and thorium chains. In addition, the forward displacement of the neutron in the detection reaction ν̄_e + p → n + e⁺ provides directional information. We investigate the requirements on such detectors to distinguish between certain geophysical models on the basis of the angular dependence of the geoneutrino flux. Our analysis is based on a Monte-Carlo simulation with different levels of light yield, considering both unloaded and gadolinium-loaded scintillators. We find that a 50 kt detector such as the proposed LENA (Low Energy Neutrino Astronomy) will detect deviations from isotropy of the geoneutrino flux significantly. However, with an unloaded scintillator the time needed for a useful discrimination between different geophysical models is too large if one uses the directional information alone. A Gd-loaded scintillator improves the situation considerably, although a 50 kt detector would still need several decades to distinguish between a geophysical reference model and one with a large neutrino source in the Earth’s core. However, a high-statistics measurement of the total geoneutrino flux and its spectrum still provides an extremely useful glance at the Earth’s interior.

  10. Evaluation of Large Volume SrI2(Eu) Scintillator Detectors

    SciTech Connect

    Sturm, B W; Cherepy, N J; Drury, O B; Thelin, P A; Fisher, S E; Magyar, A F; Payne, S A; Burger, A; Boatner, L A; Ramey, J O; Shah, K S; Hawrami, R

    2010-11-18

    There is an ever increasing demand for gamma-ray detectors which can achieve good energy resolution, high detection efficiency, and room-temperature operation. We are working to address each of these requirements through the development of large volume SrI₂(Eu) scintillator detectors. In this work, we have evaluated a variety of SrI₂ crystals with volumes >10 cm³. The goal of this research was to examine the causes of energy resolution degradation for larger detectors and to determine what can be done to mitigate these effects. Testing both packaged and unpackaged detectors, we have consistently achieved better resolution with the packaged detectors. Using a collimated gamma-ray source, it was determined that better energy resolution for the packaged detectors is correlated with better light collection uniformity. A number of packaged detectors were fabricated and tested, and the best spectroscopic performance was achieved for a 3% Eu doped crystal with an energy resolution of 2.93% FWHM at 662 keV. Simulations of SrI₂(Eu) crystals were also performed to better understand the light transport physics in scintillators and are reported. This study has important implications for the development of SrI₂(Eu) detectors for national security purposes.

  11. New material model for simulating large impacts on rocky bodies

    NASA Astrophysics Data System (ADS)

    Tonge, A.; Barnouin, O.; Ramesh, K.

    2014-07-01

    Large impact craters on an asteroid can provide insights into its internal structure. These craters can expose material from the interior of the body at the impact site [e.g., 1]; additionally, the impact sends stress waves throughout the body, which interrogate the asteroid's interior. Through a complex interplay of processes, such impacts can result in a variety of motions, the consequence of which may appear as lineaments that are exposed over all or portions of the asteroid's surface [e.g., 2,3]. While analytic, scaling, and heuristic arguments can provide some insight into general phenomena on asteroids, interpreting the results of a specific impact event, or series of events, on a specific asteroid geometry generally necessitates the use of computational approaches that can solve for the stress and displacement history resulting from an impact event. These computational approaches require a constitutive model for the material, which relates the deformation history of a small material volume to the average force on the boundary of that material volume. In this work, we present a new material model that is suitable for simulating the failure of rocky materials during impact events. This material model is similar to the model discussed in [4]. The new material model incorporates dynamic sub-scale crack interactions through a micro-mechanics-based damage model, thermodynamic effects through the use of a Mie-Gruneisen equation of state, and granular flow of the fully damaged material. The granular flow model includes dilatation resulting from the mutual interaction of small fragments of material (grains) as they are forced to slide and roll over each other and includes a P-α type porosity model to account for compaction of the granular material in a subsequent impact event. The micro-mechanics-based damage model provides a direct connection between the flaw (crack) distribution in the material and the rate-dependent strength. By connecting the rate
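
    The Mie-Gruneisen equation of state mentioned above takes the standard form (our restatement, with a reference curve such as the Hugoniot):

        % Pressure is the reference-curve pressure plus a thermal correction
        % proportional to the energy offset from that curve.
        p(\rho, e) = p_{\mathrm{ref}}(\rho) + \Gamma(\rho)\,\rho\,\big[e - e_{\mathrm{ref}}(\rho)\big].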

  12. Earthquake Source Simulations: A Coupled Numerical Method and Large Scale Simulations

    NASA Astrophysics Data System (ADS)

    Ely, G. P.; Xin, Q.; Faerman, M.; Day, S.; Minster, B.; Kremenek, G.; Moore, R.

    2003-12-01

    We investigate a scheme for interfacing Finite-Difference (FD) and Finite-Element (FE) models in order to simulate dynamic earthquake rupture. The more powerful but slower FE method allows for (1) unusual geometries (e.g. dipping and curved faults), (2) nonlinear physics, and (3) finite displacements. These capabilities are computationally expensive and limit the useful size of the problem that can be solved. Large efficiencies are gained by employing FE only where necessary in the near-source region and coupling this with an efficient FD solution for the surrounding medium. Coupling is achieved by setting up an overlapping buffer zone between the domains modeled by the two methods. The buffer zone is handled numerically as a set of mutual offset boundary conditions. This scheme eliminates the effect of the artificial boundaries at the interface and allows energy to propagate in both directions across the boundary. In general it is necessary to interpolate variables between the meshes and time discretizations used for each model, and this can create artifacts that must be controlled. A modular approach has been used in which either of the two component codes can be substituted with another code. We have successfully demonstrated coupling for a simulation between a second-order FD rupture dynamics code and a fourth-order staggered-grid FD code. To be useful, earthquake source models must capture a large range of length and time scales, which is very computationally demanding. This requires that (for current computer technology) codes utilize parallel processing. Additionally, if large quantities of output data are to be saved, a high performance data management system is desirable. We show results from a large scale rupture dynamics simulation designed to test these capabilities. We use second-order FD with dimensions of 400 x 800 x 800 nodes, run for 3000 time steps. Data were saved for the entire volume for three components of velocity at every time

  13. Large-volume protein crystal growth for neutron macromolecular crystallography.

    PubMed

    Ng, Joseph D; Baird, James K; Coates, Leighton; Garcia-Ruiz, Juan M; Hodge, Teresa A; Huang, Sijay

    2015-04-01

    Neutron macromolecular crystallography (NMC) is the prevailing method for the accurate determination of the positions of H atoms in macromolecules. As neutron sources are becoming more available to general users, finding means to optimize the growth of protein crystals to sizes suitable for NMC is extremely important. Historically, much has been learned about growing crystals for X-ray diffraction. However, owing to new-generation synchrotron X-ray facilities and sensitive detectors, protein crystal sizes as small as in the nano-range have become adequate for structure determination, lessening the necessity to grow large crystals. Here, some of the approaches, techniques and considerations for the growth of crystals to significant dimensions that are now relevant to NMC are revisited. These include experimental strategies utilizing solubility diagrams, ripening effects, classical crystallization techniques, microgravity and theoretical considerations. PMID:25849493

  14. Large-volume protein crystal growth for neutron macromolecular crystallography

    SciTech Connect

    Ng, Joseph D.; Baird, James K.; Coates, Leighton; Garcia-Ruiz, Juan M.; Hodge, Teresa A.; Huang, Sijay

    2015-03-30

    Neutron macromolecular crystallography (NMC) is the prevailing method for the accurate determination of the positions of H atoms in macromolecules. As neutron sources are becoming more available to general users, finding means to optimize the growth of protein crystals to sizes suitable for NMC is extremely important. Historically, much has been learned about growing crystals for X-ray diffraction. However, owing to new-generation synchrotron X-ray facilities and sensitive detectors, protein crystal sizes as small as in the nano-range have become adequate for structure determination, lessening the necessity to grow large crystals. Here, some of the approaches, techniques and considerations for the growth of crystals to significant dimensions that are now relevant to NMC are revisited. We report that these include experimental strategies utilizing solubility diagrams, ripening effects, classical crystallization techniques, microgravity and theoretical considerations.

  15. Large-volume protein crystal growth for neutron macromolecular crystallography

    DOE PAGESBeta

    Ng, Joseph D.; Baird, James K.; Coates, Leighton; Garcia-Ruiz, Juan M.; Hodge, Teresa A.; Huang, Sijay

    2015-03-30

    Neutron macromolecular crystallography (NMC) is the prevailing method for the accurate determination of the positions of H atoms in macromolecules. As neutron sources are becoming more available to general users, finding means to optimize the growth of protein crystals to sizes suitable for NMC is extremely important. Historically, much has been learned about growing crystals for X-ray diffraction. However, owing to new-generation synchrotron X-ray facilities and sensitive detectors, protein crystal sizes as small as in the nano-range have become adequate for structure determination, lessening the necessity to grow large crystals. Here, some of the approaches, techniques and considerations for the growth of crystals to significant dimensions that are now relevant to NMC are revisited. We report that these include experimental strategies utilizing solubility diagrams, ripening effects, classical crystallization techniques, microgravity and theoretical considerations.

  16. Large eddy simulations of a forced semiconfined circular impinging jet

    NASA Astrophysics Data System (ADS)

    Olsson, M.; Fuchs, L.

    1998-02-01

    Large eddy simulations (LES) of a forced semiconfined circular impinging jet were carried out. The Reynolds number was 10⁴ and the inflow was forced at a Strouhal number of 0.27. The separation between the jet inlet and the opposing wall was four jet inlet diameters. Four different simulations were made. Two simulations were performed without any explicit sub-grid-scale (SGS) model using 128³ and 96³ grid points, respectively. Two simulations were performed with two different SGS-models using 96³ grid points; one with a dynamic Smagorinsky based model and one with a stress-similarity model. The simulations were performed to study the mean velocity, the turbulence statistics, the SGS-model effects, and the dynamic behavior of the jet, with a focus on the near wall region. The existence of separation vortices in the wall jet region was confirmed. These secondary vortices were found to be related to the radially deflected primary vortices generated by the circular shear layer of the jet. It was also shown that the primary vortex structures that reach the wall were helical and not axisymmetric. A quantitative gain was found in the simulations with SGS-models. The stress-similarity model simulation correlated slightly better with the higher resolution simulation than the other coarse grid simulations. The variations in the results predicted by the different simulations were larger for the turbulence statistics than for the mean velocity. However, the variation among the different simulations in terms of the turbulence intensity was less than 10%.

  17. Random forest classification of large volume structures for visuo-haptic rendering in CT images

    NASA Astrophysics Data System (ADS)

    Mastmeyer, Andre; Fortmeier, Dirk; Handels, Heinz

    2016-03-01

    For patient-specific voxel-based visuo-haptic rendering of CT scans of the liver area, the fully automatic segmentation of large volume structures such as skin, soft tissue, lungs and intestine (risk structures) is important. Using a machine learning based approach, several existing segmentations from 10 segmented gold-standard patients are learned by random decision forests, individually and collectively. The core of this paper is feature selection and the application of the learned classifiers to a new patient data set. In a leave-some-out cross-validation, the obtained full volume segmentations are compared to the gold-standard segmentations of the untrained patients. The proposed classifiers use a multi-dimensional feature space to estimate the hidden truth, instead of relying on clinical standard threshold- and connectivity-based methods. The results of our efficient whole-body section classification are multi-label maps of the considered tissues. For visuo-haptic simulation, other small volume structures would have to be segmented additionally; we also take a look at these structures (liver vessels). In an experimental leave-some-out study of 10 patients, the proposed method performs much more efficiently than state-of-the-art methods. In two variants of leave-some-out experiments we obtain best mean DICE ratios of 0.79, 0.97, 0.63 and 0.83 for skin, soft tissue, hard bone and risk structures. Liver structures are segmented with DICE 0.93 for the liver, 0.43 for blood vessels and 0.39 for bile vessels.
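
    The reported DICE ratios follow the standard overlap definition, DICE = 2|A ∩ B| / (|A| + |B|); a minimal computation (our sketch):

        # Dice coefficient between a predicted and a gold-standard binary mask.
        import numpy as np

        def dice(pred, gold):
            pred, gold = pred.astype(bool), gold.astype(bool)
            inter = np.logical_and(pred, gold).sum()
            return 2.0 * inter / (pred.sum() + gold.sum())

        a = np.zeros((10, 10), dtype=bool); a[2:7, 2:7] = True
        b = np.zeros((10, 10), dtype=bool); b[3:8, 3:8] = True
        print(dice(a, b))   # 0.64 for these two overlapping squares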

  18. Large Eddy Simulation of Pollen Transport in the Atmospheric Boundary Layer

    NASA Astrophysics Data System (ADS)

    Chamecki, Marcelo; Meneveau, Charles; Parlange, Marc B.

    2007-11-01

    The development of genetically modified crops and questions about cross-pollination and contamination of natural plant populations enhanced the importance of understanding wind dispersion of airborne pollen. The main objective of this work is to simulate the dispersal of pollen grains in the atmospheric surface layer using large eddy simulation. Pollen concentrations are simulated by an advection-diffusion equation including gravitational settling. Of great importance is the specification of the bottom boundary conditions characterizing the pollen source over the canopy and the deposition process everywhere else. The velocity field is discretized using a pseudospectral approach. However the application of the same discretization scheme to the pollen equation generates unphysical solutions (i.e. negative concentrations). The finite-volume bounded scheme SMART is used for the pollen equation. A conservative interpolation scheme to determine the velocity field on the finite volume surfaces was developed. The implementation is validated against field experiments of point source and area field releases of pollen.
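
    In a common form (our notation, with z pointing up), the transported concentration C obeys an advection-diffusion equation augmented by a gravitational settling term with settling velocity w_s:

        % Resolved LES velocity u~, SGS diffusivity K; settling acts downward.
        \frac{\partial C}{\partial t}
          + \nabla \cdot (\tilde{u}\, C)
          - w_s \frac{\partial C}{\partial z}
          = \nabla \cdot (K\, \nabla C).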

  19. Constrained Large Eddy Simulation of Separated Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Xia, Zhenhua; Shi, Yipeng; Wang, Jianchun; Xiao, Zuoli; Yang, Yantao; Chen, Shiyi

    2011-11-01

    Constrained Large-eddy Simulation (CLES) has been recently proposed to simulate turbulent flows with massive separation. Different from traditional large eddy simulation (LES) and hybrid RANS/LES approaches, the CLES simulates the whole flow domain by large eddy simulation while enforcing a RANS Reynolds stress constraint on the subgrid-scale (SGS) stress models in the near-wall region. Algebraic eddy-viscosity models and one-equation Spalart-Allmaras (S-A) model have been used to constrain the Reynolds stress. The CLES approach is validated a posteriori through simulation of flow past a circular cylinder and periodic hill flow at high Reynolds numbers. The simulation results are compared with those from RANS, DES, DDES and other available hybrid RANS/LES methods. It is shown that the capability of the CLES method in predicting separated flows is comparable to that of DES. Detailed discussions are also presented about the effects of the RANS models as constraint in the near-wall layers. Our results demonstrate that the CLES method is a promising alternative towards engineering applications.

  20. Sand tank experiment of a large volume biodiesel spill

    NASA Astrophysics Data System (ADS)

    Scully, K.; Mayer, K. U.

    2015-12-01

    Although petroleum hydrocarbon releases in the subsurface have been well studied, the impacts of subsurface releases of highly degradable alternative fuels, including biodiesel, are not as well understood. One concern is the generation of CH4, which may lead to explosive conditions in underground structures. In addition, the biodegradation of biodiesel consumes O2 that would otherwise be available for the degradation of petroleum hydrocarbons that may be present at a site. Until now, biodiesel biodegradation in the vadose zone has not been examined in detail, despite being critical to understanding the full impact of a release. This research involves a detailed study of a laboratory release of 80 L of biodiesel applied at the surface into a large sand tank to examine the progress of biodegradation reactions. The experiment will monitor the onset and temporal evolution of CH4 generation to provide guidance for site monitoring needs following a biodiesel release to the subsurface. Three CO2 and CH4 flux chambers have been deployed for long term monitoring of gas emissions. CO2 fluxes have increased in all chambers over the 126 days since the start of the experiment. The highest CO2 effluxes are found directly above the spill and have increased from <0.5 μmol m⁻² s⁻¹ to ~3.8 μmol m⁻² s⁻¹, indicating an increase in microbial activity. There were no measurable CH4 fluxes 126 days into the experiment. Sensors were emplaced to continuously measure O2, CO2, moisture content, matric potential, EC, and temperature. In response to the release, CO2 levels have increased across all sensors, from an average value of 0.1% to 0.6% 126 days after the start of the experiment, indicating the rapid onset of biodegradation. The highest CO2 values observed from samples taken in the gas ports were 2.5%. Average O2 concentrations have decreased from 21% to 17% 126 days after the start of the experiment. O2 levels in the bottom central region of the sand tank declined to approximately 12%.

  1. Kinetic MHD simulation of large Δ′ tearing mode

    NASA Astrophysics Data System (ADS)

    Cheng, Jianhua; Chen, Yang; Parker, Scott; Uzdensky, Dmitri

    2012-03-01

    We have developed a second-order accurate semi-implicit δf method for kinetic MHD simulation with Lorentz force ions and fluid electrons. The model has been used to study the resistive tearing mode instability, which involves multiple spatial scales. In small Δ′ cases, the linear growth rate and eigenmode structure are consistent with resistive MHD analysis. The Rutherford stage and saturation are demonstrated, but the simulation exhibits different saturated island widths compared with previous MHD simulations. In large Δ′ cases, nonlinear simulations show multiple islands forming, followed by the islands coalescing at later times. The competition between these two processes strongly influences the reconnection rates and eventually leads to steady-state reconnection. We will present various parameter studies and show that our hybrid results agree with fluid analysis in certain limits (e.g., relatively large resistivities).

  2. Mathematical simulation of power conditioning systems. Volume 1: Simulation of elementary units. Report on simulation methodology

    NASA Technical Reports Server (NTRS)

    Prajous, R.; Mazankine, J.; Ippolito, J. C.

    1978-01-01

    Methods and algorithms used for the simulation of elementary power conditioning units (buck, boost, and buck-boost, as well as shunt PWM) are described. Definitions are given of similar converters and reduced parameters. The various parts of the simulation to be carried out are dealt with: local stability, corrective networks, measurement of input-output impedance, and global stability. A simulation example is given.
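
    As an illustration of simulating one such elementary unit, a minimal averaged-model buck converter follows (our sketch; component values are hypothetical, not from the report):

        # State x = [inductor current iL, capacitor voltage vC]; the switch is
        # replaced by its duty-cycle average d*Vin; forward-Euler integration.
        L, C, R = 100e-6, 470e-6, 5.0    # inductance (H), capacitance (F), load (ohm)
        Vin, d, dt = 12.0, 0.5, 1e-6     # input voltage, duty cycle, time step (s)

        iL, vC = 0.0, 0.0
        for _ in range(20000):           # 20 ms of simulated time
            diL = (d * Vin - vC) / L     # averaged inductor equation
            dvC = (iL - vC / R) / C      # capacitor/load node equation
            iL += dt * diL
            vC += dt * dvC
        print(round(vC, 3))              # settles near d * Vin = 6 V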

  3. Controlled multibody dynamics simulation for large space structures

    NASA Technical Reports Server (NTRS)

    Housner, J. M.; Wu, S. C.; Chang, C. W.

    1989-01-01

    The multibody dynamics discipline and dynamic simulation in control-structure interaction (CSI) design are discussed. The use, capabilities, and architecture of the Large Angle Transient Dynamics (LATDYN) code as a simulation tool are explained. A generic joint body with various types of hinge connections; finite element and element coordinate systems; results of a flexible beam spin-up on a plane; mini-mast deployment; space crane and robotic slewing manipulations; a potential CSI test article; and multibody benchmark experiments are also described.

  4. High Fidelity Simulations of Large-Scale Wireless Networks

    SciTech Connect

    Onunkwo, Uzoma; Benz, Zachary

    2015-11-01

    The worldwide proliferation of wirelessly connected devices continues to accelerate. There are tens of billions of wireless links across the planet, with an explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, the trend towards wireless connectivity will only continue, and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround times. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES to poor scaling (e.g., the OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution uses light-weight processes to dynamically distribute computation workload while mitigating the communication overhead associated with synchronization. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience with Sandia’s simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia’s current highly regarded capabilities in large-scale emulation have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static, and (b) the nodes have fixed locations.

  5. Large Eddy Simulations and Turbulence Modeling for Film Cooling

    NASA Technical Reports Server (NTRS)

    Acharya, Sumanta

    1999-01-01

    The objective of the research is to perform Direct Numerical Simulations (DNS) and Large Eddy Simulations (LES) for film cooling process, and to evaluate and improve advanced forms of the two equation turbulence models for turbine blade surface flow analysis. The DNS/LES were used to resolve the large eddies within the flow field near the coolant jet location. The work involved code development and applications of the codes developed to the film cooling problems. Five different codes were developed and utilized to perform this research. This report presented a summary of the development of the codes and their applications to analyze the turbulence properties at locations near coolant injection holes.

  6. Applications of large eddy simulation methods to gyrokinetic turbulence

    SciTech Connect

    Bañón Navarro, A.; Happel, T.; Teaca, B.; Jenko, F.; Hammett, G. W.; Collaboration: ASDEX Upgrade Team

    2014-03-15

    The large eddy simulation (LES) approach—solving numerically the large scales of a turbulent system and accounting for the small-scale influence through a model—is applied to nonlinear gyrokinetic systems that are driven by a number of different microinstabilities. Comparisons between modeled, lower resolution, and higher resolution simulations are performed for an experimental measurable quantity, the electron density fluctuation spectrum. Moreover, the validation and applicability of LES is demonstrated through a series of diagnostics based on the free energetics of the system.

  7. Modeling and Dynamic Simulation of a Large Scale Helium Refrigerator

    NASA Astrophysics Data System (ADS)

    Lv, C.; Qiu, T. N.; Wu, J. H.; Xie, X. J.; Li, Q.

    In order to simulate the transient behaviors of a newly developed 2 kW helium refrigerator, a numerical model of the critical equipment, including a screw compressor with variable-frequency drive, plate-fin heat exchangers, a turbine expander, and pneumatic valves, was developed. In the simulation, the calculation of the helium thermodynamic properties is based on the 32-parameter modified Benedict-Webb-Rubin (MBWR) equation of state. The start-up process of the warm compressor station with the gas management subsystem, and the cool-down process of the cold box in actual operation, were dynamically simulated. The developed model was verified by comparing the simulated results with experimental data. In addition, system responses to increasing heat load were simulated. This model can also be used to design and optimize other large scale helium refrigerators.

  8. Wall Modeled Large Eddy Simulation of Airfoil Trailing Edge Noise

    NASA Astrophysics Data System (ADS)

    Kocheemoolayil, Joseph; Lele, Sanjiva

    2014-11-01

    Large eddy simulation (LES) of airfoil trailing edge noise has largely been restricted to low Reynolds numbers due to prohibitive computational cost. Wall modeled LES (WMLES) is a computationally cheaper alternative that makes full-scale Reynolds numbers relevant to large wind turbines accessible. A systematic investigation of trailing edge noise prediction using WMLES is conducted. Detailed comparisons are made with experimental data. The stress boundary condition from a wall model does not constrain the fluctuating velocity to vanish at the wall. This limitation has profound implications for trailing edge noise prediction. The simulation over-predicts the intensity of fluctuating wall pressure and far-field noise. An improved wall model formulation that minimizes the over-prediction of fluctuating wall pressure is proposed and carefully validated. The flow configurations chosen for the study are from the workshop on benchmark problems for airframe noise computations. The large eddy simulation database is used to examine the adequacy of scaling laws that quantify the dependence of trailing edge noise on Mach number, Reynolds number and angle of attack. Simplifying assumptions invoked in engineering approaches towards predicting trailing edge noise are critically evaluated. We gratefully acknowledge financial support from GE Global Research and thank Cascade Technologies Inc. for providing access to their massively-parallel large eddy simulation framework.
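
    The stress boundary condition at the heart of such wall models can be sketched as a log-law inversion for the friction velocity (generic equilibrium form with standard constants; this is our sketch, not the improved formulation proposed in the work):

        # Given the LES velocity U at wall distance y, solve the log law
        # U/u_tau = (1/kappa) ln(y u_tau / nu) + B for u_tau, then return the
        # wall shear stress tau_w = rho u_tau^2 applied as boundary condition.
        import math

        def wall_stress(U, y, nu, rho, kappa=0.41, B=5.2):
            u_tau = 0.05 * U                 # initial guess
            for _ in range(50):              # fixed-point iteration on the log law
                u_tau = U / (math.log(y * u_tau / nu) / kappa + B)
            return rho * u_tau ** 2

        print(wall_stress(U=10.0, y=1e-3, nu=1.5e-5, rho=1.2))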

  9. Simulating the large-scale structure of HI intensity maps

    NASA Astrophysics Data System (ADS)

    Seehars, Sebastian; Paranjape, Aseem; Witzemann, Amadeus; Refregier, Alexandre; Amara, Adam; Akeret, Joel

    2016-03-01

    Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide field simulations of HI intensity maps based on N-body simulations of a 2.6 Gpc/h box with 2048³ particles (particle mass 1.6 × 10¹¹ Msolar/h). Using a conditional mass function to populate the simulated dark matter density field with halos below the mass resolution of the simulation (10⁸ Msolar/h < M_halo < 10¹³ Msolar/h), we assign HI to those halos according to a phenomenological halo to HI mass relation. The simulations span a redshift range of 0.35 ≲ z ≲ 0.9 in redshift bins of width Δz ≈ 0.05 and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects and redshift space distortions on the angular clustering of HI. Focusing on the autocorrelations of the maps, we apply and compare several estimators for the angular power spectrum and its covariance. We verify that these estimators agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.
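
    The halo-to-HI step can be illustrated with a toy power-law relation (normalization, slope, and cutoffs here are hypothetical placeholders, not the paper's fitted relation):

        # Assign an HI mass to each halo; only halos inside the populated
        # mass range receive HI.
        import numpy as np

        def m_hi(m_halo, A=1e8, alpha=0.6, m_min=1e8, m_max=1e13):
            m = np.asarray(m_halo, dtype=float)
            out = A * (m / 1e10) ** alpha
            out[(m < m_min) | (m > m_max)] = 0.0
            return out

        print(m_hi([1e9, 1e11, 1e14]))   # the last halo lies outside the range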

  10. Large Eddy Simulation of Multiple Turbulent Round Jets

    NASA Astrophysics Data System (ADS)

    Balajee, G. K.; Panchapakesan, Nagangudy

    2015-11-01

    Turbulent round jet flow was simulated by large eddy simulation with the OpenFOAM software package for a jet Reynolds number of 11000. The intensity of the fluctuating motion in the incoming nozzle flow was adjusted so that the initial shear layer development compares well with available experimental data. The far field development of averaged higher order moments, up to fourth order, was compared with experiments. The agreement is good, indicating that the large eddy motions were being computed satisfactorily by the simulation. The turbulent kinetic energy budget, as well as the quality of the LES, was also evaluated. These conditions were then used to perform a simulation of multiple turbulent round jets with the same initial momentum flux. The far field of the flow was compared with the single jet simulation and experiments to test the approach to self similarity. The evolution of the higher order moments in the development region, where the multiple jets interact, was studied. We will also present FTLE fields computed from the simulation to educe structures and compare them with those educed by other scalar measures. Support of AR&DB CIFAAR, and the VIRGO cluster at IIT Madras, is gratefully acknowledged.

  11. Toward Improved Support for Loosely Coupled Large Scale Simulation Workflows

    SciTech Connect

    Boehm, Swen; Elwasif, Wael R; Naughton, III, Thomas J; Vallee, Geoffroy R

    2014-01-01

    High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations, and where time-to-solution is an untenable pacing issue. This paper describes the challenges related to the support of fine-grained launch capabilities that are necessary for the execution of loosely coupled large scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report on initial results from early testing on systems at Oak Ridge National Laboratory.

  12. Science and engineering of large scale socio-technical simulations.

    SciTech Connect

    Barrett, C. L.; Eubank, S. G.; Marathe, M. V.; Mortveit, H. S.; Reidys, C. M.

    2001-01-01

    Computer simulation is a computational approach whereby global system properties are produced as dynamics by direct computation of interactions among representations of local system elements. A mathematical theory of simulation consists of an account of the formal properties of sequential evaluation and composition of interdependent local mappings. When certain local mappings and their interdependencies can be related to particular real-world objects and interdependencies, it is common to compute the interactions to derive a symbolic model of the global system made up of the corresponding interdependent objects. The formal mathematical and computational account of the simulation provides a particular kind of theoretical explanation of the global system properties and, therefore, insight into how to engineer a complex system to exhibit those properties. This paper considers the mathematical foundations and engineering principles necessary for building large scale simulations of socio-technical systems. Examples of such systems are urban regional transportation systems, the national electrical power markets and grids, the worldwide Internet, vaccine design and deployment, theater war, etc. These systems are composed of large numbers of interacting human, physical, and technological components. Some components adapt and learn, exhibit perception, interpretation, reasoning, deception, cooperation, and noncooperation, and have economic motives as well as the usual physical properties of interaction. The systems themselves are large, and the behavior of socio-technical systems is tremendously complex. The state of affairs for these kinds of systems is characterized by very little satisfactory formal theory, a good deal of very specialized knowledge of subsystems, and a dependence on experience-based practitioners' art. However, these systems are vital and require policy, control, design, implementation, and investment. Thus there is motivation to improve the ability to

  13. Computational fluid dynamics simulations of particle deposition in large-scale, multigenerational lung models.

    PubMed

    Walters, D Keith; Luke, William H

    2011-01-01

    Computational fluid dynamics (CFD) has emerged as a useful tool for the prediction of airflow and particle transport within the human lung airway. Several published studies have demonstrated the use of Eulerian finite-volume CFD simulations coupled with Lagrangian particle tracking methods to determine local and regional particle deposition rates in small subsections of the bronchopulmonary tree. However, the simulation of particle transport and deposition in large-scale models encompassing more than a few generations is less common, due in part to the sheer size and complexity of the human lung airway. Highly resolved, fully coupled flowfield solution and particle tracking in the entire lung, for example, is currently an intractable problem and will remain so for the foreseeable future. This paper adopts a previously reported methodology for simulating large-scale regions of the lung airway (Walters, D. K., and Luke, W. H., 2010, "A Method for Three-Dimensional Navier-Stokes Simulations of Large-Scale Regions of the Human Lung Airway," ASME J. Fluids Eng., 132(5), p. 051101), which was shown to produce results similar to fully resolved geometries using approximate, reduced geometry models. The methodology is extended here to particle transport and deposition simulations. Lagrangian particle tracking simulations are performed in combination with Eulerian simulations of the airflow in an idealized representation of the human lung airway tree. Results using the reduced models are compared with those using the fully resolved models for an eight-generation region of the conducting zone. The agreement between fully resolved and reduced geometry simulations indicates that the new method can provide an accurate alternative for large-scale CFD simulations while potentially reducing the computational cost of these simulations by several orders of magnitude. PMID:21186893

  15. Simulation of SMC compression molding: Filling, curing, and volume changes

    SciTech Connect

    Hill, R.R. Jr.

    1992-01-01

    Sheet molding compound (SMC) is a composite material made from polyester resin, styrene, fiberglass reinforcement, and other additives. It is widely recognized that SMC is a good candidate for replacing sheet metals of automotive body exteriors because SMC is relatively inexpensive, has a high strength-to-density ratio, and has good corrosion resistance. The focus of this research was to develop computer models to simulate the important features of SMC compression molding (i.e., material flow, heat transfer, curing, material expansion, and shrinkage), and to characterize these features experimentally. A control volume/finite element approach was used to obtain the pressure and velocity fields and to compute the flow progression during compression mold filling. The energy equation and a kinetic model were solved simultaneously for the temperature and conversion profiles. A series of molding experiments was conducted to record the flow-front location and material temperature. Predictions obtained from the model were compared to experimental results which incorporated a non-isothermal temperature profile, and reasonable agreement was obtained.
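
    The curing part of such a model amounts to integrating a cure-kinetics rate law along the local temperature history. The sketch below assumes a generic Kamal-type autocatalytic model with Arrhenius rate constants; the functional form and every parameter value are illustrative assumptions, not the model identified in this work.

```python
# Illustrative cure-kinetics integration (assumed Kamal-type model):
# d(alpha)/dt = (k1 + k2*alpha^m) * (1 - alpha)^n, k_i = A_i exp(-E_i/(R T)).
import numpy as np

R = 8.314  # J/(mol K)

def cure_profile(T_of_t, dt, t_end, A1=1e5, E1=6e4, A2=1e6, E2=5e4,
                 m=0.5, n=1.5):
    t, alpha, hist = 0.0, 0.0, []
    while t < t_end:
        T = T_of_t(t)
        k1 = A1 * np.exp(-E1 / (R * T))
        k2 = A2 * np.exp(-E2 / (R * T))
        dadt = (k1 + k2 * alpha**m) * (1.0 - alpha)**n
        alpha = min(alpha + dadt * dt, 1.0)   # conversion stays in [0, 1]
        hist.append((t, alpha))
        t += dt
    return hist

# Isothermal mold wall at 150 C (423.15 K)
history = cure_profile(lambda t: 423.15, dt=0.1, t_end=300.0)
```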

  16. Plasma volume losses during simulated weightlessness in women

    SciTech Connect

    Drew, H.; Fortney, S.; La France, N.; Wagner, H.N. Jr.

    1985-05-01

    Six healthy women not using oral contraceptives underwent two 11-day intervals of complete bedrest (BR), with the BR periods separated by 4 weeks of ambulatory control. Change in plasma volume (PV) was monitored during BR to test the hypothesis that these women would show a smaller decrease in PV than that reported in similarly stressed men, due to the water-retaining effects of the female hormones. Bedrest periods were timed to coincide with opposing stages of the menstrual cycle in each woman. The menstrual cycle was divided into 4 separate stages: early follicular, ovulatory, early luteal, and late luteal phases. The percent decrease in PV was consistent for each woman who began BR while in stage 1, 3 or 4 of the menstrual cycle. However, the women who began in stage 2 showed a transient attenuation in PV loss. Overall, PV changes seen in women during BR were similar to those reported for men. The water-retaining effects of menstrual hormones were evident only during the high-estrogen ovulatory stage. The authors conclude that the protective effects of menstrual hormones on PV losses during simulated weightless conditions appear to be only small and transient.

  17. Time simulation of flutter with large stiffness changes

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay; Wieseman, Carol D.

    1992-01-01

    Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.
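
    The essence of the method, keeping the model size fixed and swapping only the coupling terms at the moment of structural change, can be sketched in a few lines. The two-state system below is a hypothetical toy, not the authors' aeroelastic model.

```python
# Toy sketch: march x' = A x and swap the system matrix when the structural
# change (e.g. release of the tip ballast) occurs. Matrices are made up.
import numpy as np

def simulate(A_before, A_after, x0, dt, n_steps, switch_step):
    x, out = np.asarray(x0, dtype=float).copy(), []
    for k in range(n_steps):
        A = A_before if k < switch_step else A_after
        x = x + dt * (A @ x)          # explicit Euler, for illustration only
        out.append(x.copy())
    return np.array(out)

A1 = np.array([[0.0, 1.0], [-100.0, 0.05]])   # negative damping: divergence
A2 = np.array([[0.0, 1.0], [-100.0, -1.0]])   # after decoupling: stable
resp = simulate(A1, A2, [1e-3, 0.0], dt=1e-3, n_steps=2000, switch_step=500)
```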

  18. Time simulation of flutter with large stiffness changes

    NASA Technical Reports Server (NTRS)

    Karpel, M.; Wieseman, C. D.

    1992-01-01

    Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.

  19. Toward the large-eddy simulation of compressible turbulent flows

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Hussaini, M. Y.; Speziale, C. G.; Zang, T. A.

    1990-01-01

    New subgrid-scale models for the large-eddy simulation of compressible turbulent flows are developed and tested based on the Favre-filtered equations of motion for an ideal gas. A compressible generalization of the linear combination of the Smagorinsky model and scale-similarity model, in terms of Favre-filtered fields, is obtained for the subgrid-scale stress tensor. An analogous thermal linear combination model is also developed for the subgrid-scale heat flux vector. The two dimensionless constants associated with these subgrid-scale models are obtained by correlating with the results of direct numerical simulations of compressible isotropic turbulence performed on a 96^3 grid using Fourier collocation methods. Extensive comparisons between the direct and modeled subgrid-scale fields are provided in order to validate the models. A large-eddy simulation of the decay of compressible isotropic turbulence (conducted on a coarse 32^3 grid) is shown to yield results that are in excellent agreement with the fine grid direct simulation. Future applications of these compressible subgrid-scale models to the large-eddy simulation of more complex supersonic flows are discussed briefly.
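
    For orientation, the incompressible Smagorinsky eddy viscosity underlying one half of such a linear-combination model can be sketched as below (uniform periodic grid assumed); the compressible, Favre-filtered closures developed in this work add further terms.

```python
# Sketch of the Smagorinsky eddy viscosity nu_t = (Cs*Delta)^2 |S| on a
# uniform periodic grid; Cs and the grid are illustrative assumptions.
import numpy as np

def smagorinsky_nu_t(u, v, w, dx, Cs=0.17):
    # central differences with periodic wrap-around via np.roll
    def ddx(f, axis):
        return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2.0 * dx)
    Sxx, Syy, Szz = ddx(u, 0), ddx(v, 1), ddx(w, 2)
    Sxy = 0.5 * (ddx(u, 1) + ddx(v, 0))
    Sxz = 0.5 * (ddx(u, 2) + ddx(w, 0))
    Syz = 0.5 * (ddx(v, 2) + ddx(w, 1))
    S2 = Sxx**2 + Syy**2 + Szz**2 + 2.0 * (Sxy**2 + Sxz**2 + Syz**2)
    return (Cs * dx) ** 2 * np.sqrt(2.0 * S2)   # |S| = sqrt(2 S_ij S_ij)

nu_t = smagorinsky_nu_t(*np.random.rand(3, 32, 32, 32), dx=1.0 / 32)
```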

  20. NASA's Large-Eddy Simulation Research for Jet Noise Applications

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2009-01-01

    Research into large-eddy simulation (LES) for application to jet noise is described. The LES efforts include in-house code development and application at NASA Glenn along with NASA Research Announcement sponsored work at Stanford University and Florida State University. Details of the computational methods used and sample results for jet flows are provided.

  1. Large scale simulations of the great 1906 San Francisco earthquake

    NASA Astrophysics Data System (ADS)

    Nilsson, S.; Petersson, A.; Rodgers, A.; Sjogreen, B.; McCandless, K.

    2006-12-01

    As part of a multi-institutional simulation effort, we present large scale computations of the ground motion during the great 1906 San Francisco earthquake using a new finite difference code called WPP. The material data base for northern California provided by USGS together with the rupture model by Song et al. is demonstrated to lead to a reasonable match with historical data. In our simulations, the computational domain covered 550 km by 250 km of northern California down to 40 km depth, so a 125 m grid size corresponds to about 2.2 billion grid points. To accommodate these large grids, the simulations were run on 512-1024 processors on one of the supercomputers at Lawrence Livermore National Lab. A wavelet compression algorithm enabled storage of time-dependent volumetric data. Nevertheless, the first 45 seconds of the earthquake still generated 1.2 TByte of disk space and the 3-D post processing was done in parallel.

  2. Center-stabilized Yang-Mills Theory:Confinement and Large N Volume Independence

    SciTech Connect

    Unsal, Mithat; Yaffe, Laurence G.; /Washington U., Seattle

    2008-03-21

    We examine a double trace deformation of SU(N) Yang-Mills theory which, for large N and large volume, is equivalent to unmodified Yang-Mills theory up to O(1/N^2) corrections. In contrast to the unmodified theory, large N volume independence is valid in the deformed theory down to arbitrarily small volumes. The double trace deformation prevents the spontaneous breaking of center symmetry which would otherwise disrupt large N volume independence in small volumes. For small values of N, if the theory is formulated on R^3 x S^1 with a sufficiently small compactification size L, then an analytic treatment of the non-perturbative dynamics of the deformed theory is possible. In this regime, we show that the deformed Yang-Mills theory has a mass gap and exhibits linear confinement. Increasing the circumference L or number of colors N decreases the separation of scales on which the analytic treatment relies. However, there are no order parameters which distinguish the small and large radius regimes. Consequently, for small N the deformed theory provides a novel example of a locally four-dimensional pure gauge theory in which one has analytic control over confinement, while for large N it provides a simple fully reduced model for Yang-Mills theory. The construction is easily generalized to QCD and other QCD-like theories.

  3. Effect of Bra Use during Radiotherapy for Large-Breasted Women: Acute Toxicity and Treated Heart and Lung Volumes

    PubMed Central

    Keller, Lanea; Cohen, Randi; Sopka, Dennis M; Li, Tianyu; Li, Linna; Anderson, Penny R; Fowble, Barbara L.; Freedman, Gary M

    2012-01-01

    Purpose Large breast size presents special problems during radiation simulation, planning and patient treatment, including increased skin toxicity, in women undergoing breast-conserving surgery and radiotherapy (BCT). We report our experience using a bra during radiation in large-breasted women and its effect on acute toxicity and heart and lung dosimetry. Materials and methods From 2001 to 2006, 246 consecutive large-breasted women (bra size ≥ 38 and/or ≥ D cup) were treated with BCT using either 3D conformal (3D-CRT) or Intensity Modulated Radiation (IMRT). In 58 cases, at the physicians’ discretion, a custom-fit bra was used during simulation and treatment. Endpoints were acute radiation dermatitis, and dosimetric comparison of heart and lung volumes in a subgroup of 12 left-sided breast cancer patients planned with and without a bra. Results The majority of acute skin toxicities were grade 2 and were experienced by 90% of patients in a bra compared to 70% of patients not in a bra (p=0.003). On multivariate analysis significant predictors of grade 2/3 skin toxicity included 3D-CRT instead of IMRT (OR=3.9, 95% CI:1.8-8.5) and the use of a bra (OR=5.5, 95% CI:1.6-18.8). For left-sided patients, use of a bra was associated with a volume of heart in the treatment fields decreased by 63.4% (p=0.002), a volume of left lung decreased by 18.5% (p=0.25), and chest wall separation decreased by a mean of 1 cm (p=0.03). Conclusions The use of a bra to augment breast shape and position in large-breasted women is an alternative to prone positioning and associated with reduced chest wall separation and reduced heart volume within the treatment field. PMID:23459714

  4. Statistical Modeling of Large-Scale Scientific Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Baldwin, C; Abdulla, G; Critchlow, T

    2003-11-15

    With the advent of massively parallel computer systems, scientists are now able to simulate complex phenomena (e.g., explosions of stars). Such scientific simulations typically generate large-scale data sets over the spatio-temporal space. Unfortunately, the sheer sizes of the generated data sets make efficient exploration of them impossible. Constructing queriable statistical models is an essential step in helping scientists glean new insight from their computer simulations. We define queriable statistical models to be descriptive statistics that (1) summarize and describe the data within a user-defined modeling error, and (2) are able to answer complex range-based queries over the spatio-temporal dimensions. In this chapter, we describe systems that build queriable statistical models for large-scale scientific simulation data sets. In particular, we present our Ad-hoc Queries for Simulation (AQSim) infrastructure, which reduces the data storage requirements and query access times by (1) creating and storing queriable statistical models of the data at multiple resolutions, and (2) evaluating queries on these models of the data instead of the entire data set. Within AQSim, we focus on three simple but effective statistical modeling techniques. AQSim's first modeling technique (called the univariate mean modeler) computes the "true" (unbiased) mean of systematic partitions of the data. AQSim's second statistical modeling technique (called the univariate goodness-of-fit modeler) uses the Anderson-Darling goodness-of-fit method on systematic partitions of the data. Finally, AQSim's third statistical modeling technique (called the multivariate clusterer) utilizes the cosine similarity measure to cluster the data into similar groups. Our experimental evaluations on several scientific simulation data sets illustrate the value of using these statistical models on large-scale simulation data sets.
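
    The first of these techniques is easy to sketch: partition the data systematically, store each partition's mean, and answer range queries from the summaries alone. The code below is a hypothetical one-dimensional illustration of that idea, not the AQSim implementation.

```python
# Toy "mean modeler": summarize fixed-length partitions of a field by their
# means, then answer range queries from the summaries instead of raw data.
import numpy as np

def build_model(field, part_len):
    n = len(field) // part_len
    return np.array([field[i * part_len:(i + 1) * part_len].mean()
                     for i in range(n)])

def range_query(model, part_len, i0, i1):
    # approximate mean of field[i0:i1] using only the partitions it touches
    p0, p1 = i0 // part_len, (i1 + part_len - 1) // part_len
    return float(model[p0:p1].mean())

field = np.random.rand(1_000_000)          # stand-in for simulation output
model = build_model(field, part_len=1000)  # 1000x smaller than the raw data
approx = range_query(model, 1000, 250_000, 750_000)
```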

  5. Necessary conditions on Calabi-Yau manifolds for large volume vacua

    NASA Astrophysics Data System (ADS)

    Gray, James; He, Yang-Hui; Jejjala, Vishnu; Jurke, Benjamin; Nelson, Brent; Simón, Joan

    2012-11-01

    We describe an efficient, construction independent, algorithmic test to determine whether Calabi-Yau threefolds admit a structure compatible with the large volume moduli stabilization scenario of type IIB superstring theory. Using the algorithm, we scan complete intersection and toric hypersurface Calabi-Yau threefolds with 2 ≤ h^{1,1} ≤ 4 and deduce that 418 among 4434 manifolds have a large volume limit with a single large four-cycle. We describe major extensions to this survey, which are currently underway.

  6. Large-eddy simulation of free-surface decaying turbulence with dynamic subgrid-scale models

    NASA Astrophysics Data System (ADS)

    Salvetti, M. V.; Zang, Y.; Street, R. L.; Banerjee, S.

    1997-08-01

    This paper describes large-eddy simulations of decaying turbulence in an open channel, using different dynamic subgrid-scale models, viz. the dynamic model of Germano et al. [Phys. Fluids A 3, 1790 (1991)] (DSM), the dynamic mixed model in Zang et al. [Phys. Fluids A 5, 3186 (1993)] (DMM), and the dynamic two-parameter model of Salvetti and Banerjee [Phys. Fluids 7, 2831 (1995)] (DTM). These models are incorporated in a finite-volume solver of the Navier-Stokes equations. A direct numerical simulation of this flow conducted by Pan and Banerjee [Phys. Fluids 7, 1649 (1995)] showed that near the free surface turbulence has a quasi-two-dimensional behavior. Moreover, the quasi-two-dimensional region increases in thickness with the decay time, although the structure remains three-dimensional in the central regions of the flow. The results of the large-eddy simulations show that both the DMM and the DTM are able to reproduce the features of the decay process observed in the direct simulation and to handle the anisotropic nature of the flow. Nevertheless, the addition of the second model coefficient in the DTM improves the agreement with the direct simulation. When the DSM is used, significant discrepancies are observed between the large-eddy and the direct simulations during the decay process at the free surface.

  7. Nuclear Engine System Simulation (NESS). Volume 1: Program user's guide

    NASA Astrophysics Data System (ADS)

    Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.

    1993-03-01

    A Nuclear Thermal Propulsion (NTP) engine system design analysis tool is required to support current and future Space Exploration Initiative (SEI) propulsion and vehicle design studies. Currently available NTP engine design models are those developed during the NERVA program in the 1960's and early 1970's and are highly unique to that design or are modifications of current liquid propulsion system design models. To date, NTP engine-based liquid design models lack integrated design of key NTP engine design features in the areas of reactor, shielding, multi-propellant capability, and multi-redundant pump feed fuel systems. Additionally, since the SEI effort is in the initial development stage, a robust, verified NTP analysis design tool could be of great use to the community. This effort developed an NTP engine system design analysis program (tool), known as the Nuclear Engine System Simulation (NESS) program, to support ongoing and future engine system and stage design study efforts. In this effort, Science Applications International Corporation's (SAIC) NTP version of the Expanded Liquid Engine Simulation (ELES) program was modified extensively to include Westinghouse Electric Corporation's near-term solid-core reactor design model. The ELES program has extensive capability to conduct preliminary system design analysis of liquid rocket systems and vehicles. The program is modular in nature and is versatile in terms of modeling state-of-the-art component and system options as discussed. The Westinghouse reactor design model, which was integrated in the NESS program, is based on the near-term solid-core ENABLER NTP reactor design concept. This program is now capable of accurately modeling (characterizing) a complete near-term solid-core NTP engine system in great detail, for a number of design options, in an efficient manner. The following discussion summarizes the overall analysis methodology, key assumptions, and capabilities associated with the NESS program.

  8. Nuclear Engine System Simulation (NESS). Volume 1: Program user's guide

    NASA Technical Reports Server (NTRS)

    Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.

    1993-01-01

    A Nuclear Thermal Propulsion (NTP) engine system design analysis tool is required to support current and future Space Exploration Initiative (SEI) propulsion and vehicle design studies. Currently available NTP engine design models are those developed during the NERVA program in the 1960's and early 1970's and are highly unique to that design or are modifications of current liquid propulsion system design models. To date, NTP engine-based liquid design models lack integrated design of key NTP engine design features in the areas of reactor, shielding, multi-propellant capability, and multi-redundant pump feed fuel systems. Additionally, since the SEI effort is in the initial development stage, a robust, verified NTP analysis design tool could be of great use to the community. This effort developed an NTP engine system design analysis program (tool), known as the Nuclear Engine System Simulation (NESS) program, to support ongoing and future engine system and stage design study efforts. In this effort, Science Applications International Corporation's (SAIC) NTP version of the Expanded Liquid Engine Simulation (ELES) program was modified extensively to include Westinghouse Electric Corporation's near-term solid-core reactor design model. The ELES program has extensive capability to conduct preliminary system design analysis of liquid rocket systems and vehicles. The program is modular in nature and is versatile in terms of modeling state-of-the-art component and system options as discussed. The Westinghouse reactor design model, which was integrated in the NESS program, is based on the near-term solid-core ENABLER NTP reactor design concept. This program is now capable of accurately modeling (characterizing) a complete near-term solid-core NTP engine system in great detail, for a number of design options, in an efficient manner. The following discussion summarizes the overall analysis methodology, key assumptions, and capabilities associated with the NESS program.

  9. Maestro: an orchestration framework for large-scale WSN simulations.

    PubMed

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  10. Maestro: An Orchestration Framework for Large-Scale WSN Simulations

    PubMed Central

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  11. A high resolution finite volume method for efficient parallel simulation of casting processes on unstructured meshes

    SciTech Connect

    Kothe, D.B.; Turner, J.A.; Mosso, S.J.; Ferrell, R.C.

    1997-03-01

    We discuss selected aspects of a new parallel three-dimensional (3-D) computational tool for the unstructured mesh simulation of Los Alamos National Laboratory (LANL) casting processes. This tool, known as Telluride, draws on robust, high resolution finite volume solutions of metal alloy mass, momentum, and enthalpy conservation equations to model the filling, cooling, and solidification of LANL castings. We briefly describe the current Telluride physical models and solution methods, then detail our parallelization strategy as implemented with Fortran 90 (F90). This strategy has yielded straightforward and efficient parallelization on distributed and shared memory architectures, aided in large part by the new parallel libraries JTpack90 for Krylov-subspace iterative solution methods and PGSLib for efficient gather/scatter operations. We illustrate our methodology and current capabilities with source code examples and parallel efficiency results for a LANL casting simulation.

  12. Evaluation of Cloud, Grid and HPC resources for big volume and variety of RCM simulations

    NASA Astrophysics Data System (ADS)

    Blanco, Carlos; Cofino, Antonio S.; Fernández, Valvanuz; Fernández, Jesús

    2016-04-01

    Cloud, Grid and High Performance Computing have changed the accessibility and availability of computing resources for Earth Science research communities, especially for the Regional Climate Model (RCM) community. These paradigms are modifying the way RCM applications are executed. By using these technologies, the number, variety and complexity of experiments and resources used by RCM simulations are increasing substantially. But although computational capacity is increasing, the traditional applications and tools used by the community are not adequate to manage this large volume and variety of experiments and computing resources. In this contribution, we evaluate the challenges of executing RCMs on Grid, Cloud and HPC resources and how to tackle them. For this purpose, the WRF model will be used as a well-known representative application for RCM simulations. Grid and Cloud infrastructures provided by EGI's VOs (esr, earth.vo.ibergrid and fedcloud.egi.eu) will be evaluated, as well as HPC resources from the PRACE infrastructure and institutional clusters. As a solution to those challenges we will use the WRF4G framework, which is well suited to managing a large volume and variety of computing resources for climate simulation experiments. This work is partially funded by "Programa de Personal Investigador en Formación Predoctoral" from Universidad de Cantabria, co-funded by the Regional Government of Cantabria.

  13. Domain nesting for multi-scale large eddy simulation

    NASA Astrophysics Data System (ADS)

    Fuka, Vladimir; Xie, Zheng-Tong

    2016-04-01

    The need to simulate city-scale areas (O(10 km)) with high resolution within street canyons in certain areas of interest necessitates different grid resolutions in different parts of the simulated area. General purpose computational fluid dynamics codes typically employ unstructured refined grids, while mesoscale meteorological models more often employ nesting of computational domains. ELMM is a large eddy simulation model for the atmospheric boundary layer. It employs orthogonal uniform grids, and for this reason domain nesting was chosen as the approach for simulations at multiple scales. Domains are implemented as sets of MPI processes which communicate with each other as in a normal non-nested run, but also with processes from another (outer/inner) domain. It should be stressed that the solution of time-steps in the outer and in the inner domain must be synchronized, so that the processes do not have to wait for the completion of their boundary conditions. This can be achieved by assigning an appropriate number of CPUs to each domain, thereby attaining high efficiency. When nesting is applied for large eddy simulation, the inner domain receives inflow boundary conditions which lack the turbulent motions not represented by the outer grid. ELMM remedies this by optionally adding turbulent fluctuations to the inflow using the efficient method of Xie and Castro (2008). The spatial scale of these fluctuations is in the subgrid-scale of the outer grid and their intensity is estimated from the subgrid turbulent kinetic energy in the outer grid.

  14. Finecasting for renewable energy with large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Jonker, Harmen; Verzijlbergh, Remco

    2016-04-01

    We present results of a single, continuous Large-Eddy Simulation of actual weather conditions during the timespan of a full year, made possible through recent computational developments (Schalkwijk et al, MWR, 2015). The simulation is coupled to a regional weather model in order to provide an LES dataset that is representative of the daily weather of the year 2012 around Cabauw, the Netherlands. This location is chosen such that LES results can be compared with both the regional weather model and observations from the Cabauw observational supersite. The run was made possible by porting our Large-Eddy Simulation program to run completely on the GPU (Schalkwijk et al, BAMS, 2012). GPU adaptation allows us to reach much improved time-to-solution ratios (i.e. simulation speedup versus real time). As a result, one can perform runs with a much longer timespan than previously feasible. The dataset resulting from the LES run provides many avenues for further study. First, it can provide a more statistical approach to boundary-layer turbulence than the more common case-studies by simulating a diverse but representative set of situations, as well as the transition between situations. This has advantages in designing and evaluating parameterizations. In addition, we discuss the opportunities of high-resolution forecasts for the renewable energy sector, e.g. wind and solar energy production.

  15. Publicly Releasing a Large Simulation Dataset with NDS Labs

    NASA Astrophysics Data System (ADS)

    Goldbaum, Nathan

    2016-03-01

    Optimally, all publicly funded research should be accompanied by the tools, code, and data necessary to fully reproduce the analysis performed in journal articles describing the research. This ideal can be difficult to attain, particularly when dealing with large (>10 TB) simulation datasets. In this lightning talk, we describe the process of publicly releasing a large simulation dataset to accompany the submission of a journal article. The simulation was performed using Enzo, an open source, community-developed N-body/hydrodynamics code and was analyzed using a wide range of community-developed tools in the scientific Python ecosystem. Although the simulation was performed and analyzed using an ecosystem of sustainably developed tools, we enable sustainable science using our data by making it publicly available. Combining the data release with the NDS Labs infrastructure allows a substantial amount of added value, including web-based access to analysis and visualization using the yt analysis package through an IPython notebook interface. In addition, we are able to accompany the paper submission to the arXiv preprint server with links to the raw simulation data as well as interactive real-time data visualizations that readers can explore on their own or share with colleagues during journal club discussions. It is our hope that the value added by these services will substantially increase the impact and readership of the paper.

  16. Toward large eddy simulation of turbulent flow over an airfoil

    NASA Technical Reports Server (NTRS)

    Choi, Haecheon

    1993-01-01

    The flow field over an airfoil contains several distinct flow characteristics, e.g. laminar, transitional, turbulent boundary layer flow, flow separation, unstable free shear layers, and a wake. This diversity of flow regimes taxes the presently available Reynolds averaged turbulence models. Such models are generally tuned to predict a particular flow regime, and adjustments are necessary for the prediction of a different flow regime. Similar difficulties are likely to emerge when the large eddy simulation technique is applied with the widely used Smagorinsky model. This model has not been successful in correctly representing different turbulent flow fields with a single universal constant and has an incorrect near-wall behavior. Germano et al. (1991) and Ghosal, Lund & Moin have developed a new subgrid-scale model, the dynamic model, which is very promising in alleviating many of the persistent inadequacies of the Smagorinsky model: the model coefficient is computed dynamically as the calculation progresses rather than input a priori. The model has been remarkably successful in prediction of several turbulent and transitional flows. We plan to simulate turbulent flow over a '2D' airfoil using the large eddy simulation technique. Our primary objective is to assess the performance of the newly developed dynamic subgrid-scale model for computation of complex flows about aircraft components and to compare the results with those obtained using the Reynolds average approach and experiments. The present computation represents the first application of large eddy simulation to a flow of aeronautical interest and a key demonstration of the capabilities of the large eddy simulation technique.

  17. Endoclips vs large or small-volume epinephrine in peptic ulcer recurrent bleeding

    PubMed Central

    Ljubicic, Neven; Budimir, Ivan; Biscanin, Alen; Nikolic, Marko; Supanc, Vladimir; Hrabar, Davor; Pavic, Tajana

    2012-01-01

    AIM: To compare recurrent bleeding after endoscopic injection of different epinephrine volumes with that after hemoclip placement in patients with bleeding peptic ulcer. METHODS: Between January 2005 and December 2009, 150 patients with a gastric or duodenal bleeding ulcer with major stigmata of hemorrhage and a nonbleeding visible vessel in the ulcer bed (Forrest IIa) were included in the study. Patients were randomized to a small-volume epinephrine group (15 to 25 mL injection; Group 1, n = 50), a large-volume epinephrine group (30 to 40 mL injection; Group 2, n = 50) or a hemoclip group (Group 3, n = 50). The rate of recurrent bleeding, as the primary outcome, was compared between the groups of patients included in the study. Secondary outcomes compared between the groups were primary hemostasis rate, permanent hemostasis, need for emergency surgery, 30 d mortality, bleeding-related deaths, length of hospital stay and transfusion requirements. RESULTS: Initial hemostasis was obtained in all patients. The rate of early recurrent bleeding was 30% (15/50) in the small-volume epinephrine group (Group 1) and 16% (8/50) in the large-volume epinephrine group (Group 2) (P = 0.09). The rate of recurrent bleeding was 4% (2/50) in the hemoclip group (Group 3); the difference was statistically significant with regard to patients treated with either small-volume or large-volume epinephrine solution (P = 0.0005 and P = 0.045, respectively). Duration of hospital stay was significantly shorter among patients treated with hemoclips than among patients treated with epinephrine, whereas there were no differences in transfusion requirements or 30 d mortality between the groups. CONCLUSION: Endoclips are superior to both small- and large-volume injection of epinephrine in the prevention of recurrent bleeding in patients with peptic ulcer. PMID:22611315

  18. Commonalities and Contrasts in Location, Morphology and Emplacement of Large-volume Evolved Lava Flows

    NASA Astrophysics Data System (ADS)

    Domagall, A. S.; Gregg, T. K.

    2008-12-01

    Observations of active dacite domes and evolved (SiO2 wt.% >65) plinian-style eruptions are considered to reveal typical behaviors of Si-rich volcanic systems. However, despite a lack of mention in modern volcanology textbooks, large-volume (>4 km3) evolved lava flows exist globally. These large-volume evolved lava flows have many characteristics in common regardless of location and precise tectonic setting: they are associated with other large-volume deposits (both lava flow units and ignimbrites); they are commonly found with large silicic systems; regionally, they are associated with bimodal volcanism; and eruption of these large-volume evolved flows does not generate a caldera. Large-volume evolved lava flows have low aspect ratios, tend to be uniform in thickness from the vent to the distal margins, and abruptly decrease in thickness at the flow front, where they may form enormous pahoehoe-like lobes. A lack of pyroclastic textures such as bubble-wall shards, pumice fragments, broken phenocrysts and lithics is taken as evidence for their lava flow origin rather than an ignimbrite origin despite their high SiO2 contents. The presence of a pervasive basal breccia and lobate distal margins also suggests emplacement as a lava flow, which only the most intensely rheomorphic ignimbrite could potentially mimic. Our own studies and those from the literature suggest that high eruption temperatures and peralkaline chemistries may be responsible for producing the unusually low viscosities needed to account for large lateral extents; emplacement via fissure vents and insulation of the flow may also be key in attaining great volumes.

  19. Refurbishment of the Jet Propulsion Laboratory's Large Space Simulator

    NASA Technical Reports Server (NTRS)

    Harrell, J.; Johnson, K.

    1993-01-01

    The JPL large space simulator has recently undergone a major refurbishment to restore and enhance its capabilities to provide high fidelity space simulation. The nearly completed refurbishment has included upgrading the vacuum pumping system by replacing old oil diffusion pumps with new cryogenic and turbomolecular pumps; modernizing the entire control system to utilize computerized, distributed control technology; replacing the Xenon arc lamp power supplies with new upgraded units; refinishing the primary collimating mirror; and replacing the existing integrating lens unit and the fused quartz penetration window.

  20. Large-eddy simulation of trans- and supercritical injection

    NASA Astrophysics Data System (ADS)

    Müller, H.; Niedermeier, C. A.; Jarczyk, M.; Pfitzner, M.; Hickel, S.; Adams, N. A.

    2016-07-01

    In a joint effort to develop a robust numerical tool for the simulation of injection, mixing, and combustion in liquid rocket engines at high pressure, a real-gas thermodynamics model has been implemented into two computational fluid dynamics (CFD) codes, the density-based INCA and a pressure-based version of OpenFOAM. As a part of the validation process, both codes have been used to perform large-eddy simulations (LES) of trans- and supercritical nitrogen injection. Despite the different code architecture and the different subgrid scale turbulence modeling strategy, both codes yield similar results. The agreement with the available experimental data is good.

  1. Simulation of large-scale rule-based models

    SciTech Connect

    Hlavacek, William S; Monine, Michael I; Colvin, Joshua; Faeder, James

    2008-01-01

    Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogeneous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine if a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language (BNGL), which is useful for modeling protein-protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of STOCHSIM. DYNSTOC differs from STOCHSIM by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at .
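
    A minimal sketch of a null-event step in the spirit of STOCHSIM and DYNSTOC is given below; the rule representation, rates and molecule attributes are hypothetical stand-ins, not the BNGL or DYNSTOC formats.

```python
# Null-event algorithm sketch: draw a random pair of molecules; if a rule
# matches, fire it with the rule's probability, otherwise nothing happens
# (a "null event") and time simply advances by the fixed step.
import random

def null_event_step(molecules, rules):
    i, j = random.sample(range(len(molecules)), 2)
    for applies, p_fire, fire in rules:
        if applies(molecules[i], molecules[j]):
            if random.random() < p_fire:
                fire(molecules, i, j)
            return

# Example rule: a kinase phosphorylates site 's' of its randomly drawn partner
rules = [(lambda a, b: a["kinase"] and not b["s"],   # does the rule match?
          0.1,                                       # firing probability
          lambda mols, i, j: mols[j].update(s=True))]
molecules = [{"kinase": True, "s": False}] + \
            [{"kinase": False, "s": False} for _ in range(99)]
for _ in range(10_000):
    null_event_step(molecules, rules)
```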

  2. A large-volume microwave plasma source based on parallel rectangular waveguides at low pressures

    NASA Astrophysics Data System (ADS)

    Zhang, Qing; Zhang, Guixin; Wang, Shumin; Wang, Liming

    2011-02-01

    A large-volume microwave plasma with good stability, uniformity and high density is directly generated and sustained. A microwave cavity is assembled from upper and lower metal plates and two adjacently parallel rectangular waveguides with axial slots regularly positioned on their inner wide side. Microwave energy is coupled into the plasma chamber, shaped by quartz glass to enclose the working gas at low pressures. The geometrical properties of the source and the existing modes of the electric field are determined and optimized by a numerical simulation without a plasma. The calculated field patterns are in agreement with the observed experimental results. Argon, helium, nitrogen and air are used to produce a plasma for pressures ranging from 1000 to 2000 Pa and microwave powers above 800 W. The electron density is measured with a Mach-Zehnder interferometer to be on the order of 10^14 cm^-3, and the electron temperature is obtained using atomic emission spectrometry to be in the range 2222-2264 K at a pressure of 2000 Pa at different microwave powers. It can be seen from the interferograms at different microwave powers that the distribution of the plasma electron density is stable and uniform.

  3. Lifetime of metastable states in a Ginzburg-Landau system: Numerical simulations at large driving forces.

    PubMed

    Umantsev, A

    2016-04-01

    We developed a "brute-force" simulation method and conducted numerical "experiments" on homogeneous nucleation in an isotropic system at large driving forces (not small supersaturations) using the stochastic Ginzburg-Landau approach. Interactions in the system are described by the asymmetric (no external field), athermal (temperature-independent driving force), tangential (simple phase diagram) Hamiltonian, which has two independent "drivers" of the phase transition: supersaturation and thermal noise. We obtained the probability distribution function of the lifetime of the metastable state and analyzed its mean value as a function of the supersaturation, noise strength, and volume. We also proved the nucleation theorem in the mean-field approximation. The results allowed us to find the thermodynamic properties of the barrier state and conclude that at large driving forces the fluctuating volumes are not independent. PMID:27176373
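
    As a caricature of the approach (assumed double-well free energy; not necessarily the paper's exact Hamiltonian), overdamped stochastic Ginzburg-Landau dynamics on a 1-D grid looks as follows: the driving force h tilts the double well, and the noise term eventually kicks the field out of the metastable well.

```python
# Euler-Maruyama sketch of stochastic Ginzburg-Landau dynamics,
# phi_t = lap(phi) - dV/dphi + noise, with V = (phi^2 - 1)^2/4 - h*phi.
import numpy as np

def gl_step(phi, dx, dt, h, noise_amp, rng):
    lap = (np.roll(phi, 1) + np.roll(phi, -1) - 2.0 * phi) / dx**2
    force = lap - (phi**3 - phi) + h          # -dV/dphi plus gradient term
    noise = noise_amp * np.sqrt(2.0 * dt / dx) * rng.standard_normal(phi.shape)
    return phi + dt * force + noise

rng = np.random.default_rng(0)
phi = -np.ones(256)                           # start in the metastable well
for step in range(100_000):
    phi = gl_step(phi, dx=1.0, dt=0.01, h=0.1, noise_amp=0.2, rng=rng)
    if phi.mean() > 0.0:                      # crude lifetime criterion
        print("escaped at step", step)
        break
```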

  4. Micro Blowing Simulations Using a Coupled Finite-Volume Lattice-Boltzmann LES Approach

    NASA Technical Reports Server (NTRS)

    Menon, S.; Feiz, H.

    1990-01-01

    Three-dimensional large-eddy simulations (LES) of single and multiple jet-in-cross-flow (JICF) configurations are conducted using the 19-bit Lattice Boltzmann Equation (LBE) method coupled with a conventional finite-volume (FV) scheme. In this coupled LBE-FV approach, the LBE-LES is employed to simulate the flow inside the jet nozzles while the FV-LES is used to simulate the crossflow. The key application of this technique is the study of the micro-blowing technique (MBT) for drag control, similar to the recent experiments at NASA/GRC. It is necessary to resolve the flow inside the micro-blowing and suction holes with high resolution without being restricted by the FV time-step limit. The coupled LBE-FV-LES approach achieves these objectives in a computationally efficient manner. A single jet in crossflow case is used for validation purposes, and the results are compared with experimental data and a full LBE-LES simulation. Good agreement with data is obtained. Subsequently, MBT over a flat plate with a porosity of 25% is simulated using 9 jets in a compressible cross flow at a Mach number of 0.4. It is shown that MBT suppresses the near-wall vortices and reduces the skin friction by up to 50 percent. This is in good agreement with experimental data.
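
    For readers unfamiliar with the lattice Boltzmann ingredient, a single BGK stream-and-collide step on the textbook D2Q9 lattice is sketched below; the paper itself uses a 19-velocity three-dimensional lattice, so this 2-D form is illustrative only.

```python
# Textbook D2Q9 BGK lattice Boltzmann step (illustrative; not the 19-bit
# 3-D scheme of the paper). Periodic boundaries via np.roll.
import numpy as np

c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])      # lattice velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)                # lattice weights

def lbm_step(f, tau):
    rho = f.sum(axis=0)
    u = np.einsum('qi,qxy->ixy', c, f) / rho            # macroscopic velocity
    cu = np.einsum('qi,ixy->qxy', c, u)
    usq = (u ** 2).sum(axis=0)
    feq = w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
    f = f + (feq - f) / tau                             # BGK collision
    for q in range(9):                                  # streaming
        f[q] = np.roll(np.roll(f[q], c[q, 0], axis=0), c[q, 1], axis=1)
    return f

nx = ny = 64
f = np.ones((9, nx, ny)) * w[:, None, None]             # rho = 1, u = 0
for _ in range(100):
    f = lbm_step(f, tau=0.8)
```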

  5. Large-eddy simulation of sand dune morphodynamics

    NASA Astrophysics Data System (ADS)

    Khosronejad, Ali; Sotiropoulos, Fotis; St. Anthony Falls Laboratory, University of Minnesota Team

    2015-11-01

    Sand dunes are natural features that form under complex interaction between turbulent flow and bed morphodynamics. We employ a fully-coupled 3D numerical model (Khosronejad and Sotiropoulos, 2014, Journal of Fluid Mechanics, 753:150-216) to perform high-resolution large-eddy simulations of turbulence and bed morphodynamics in a laboratory-scale mobile-bed channel to investigate initiation, evolution and quasi-equilibrium of sand dunes (Venditti and Church, 2005, J. Geophysical Research, 110:F01009). We employ a curvilinear immersed boundary method along with convection-diffusion and bed-morphodynamics modules to simulate the suspended sediment and bed-load transport, respectively. The coupled simulations were carried out on a grid with more than 100 million grid nodes and covered about 3 hours of physical time of dune evolution. The simulations provide the first complete description of sand dune formation and long-term evolution. The geometric characteristics of the simulated dunes are shown to be in excellent agreement with observed data obtained across a broad range of scales. This work was supported by NSF Grants EAR-0120914 (as part of the National Center for Earth-Surface Dynamics). Computational resources were provided by the University of Minnesota Supercomputing Institute.

  6. Process control of large-scale finite element simulation software

    SciTech Connect

    Spence, P.A.; Weingarten, L.I.; Schroder, K.; Tung, D.M.; Sheaffer, D.A.

    1996-02-01

    We have developed a methodology for coupling large-scale numerical codes with process control algorithms. Closed-loop simulations were demonstrated using the Sandia-developed finite element thermal code TACO and the commercially available finite element thermal-mechanical code ABAQUS. This new capability enables us to use computational simulations for designing and prototyping advanced process-control systems. By testing control algorithms on simulators before building and testing hardware, enormous time and cost savings can be realized. The need for a closed-loop simulation capability was demonstrated in a detailed design study of a rapid-thermal-processing reactor under development by CVC Products Inc. Using a thermal model of the RTP system as a surrogate for the actual hardware, we were able to generate response data needed for controller design. We then evaluated the performance of both the controller design and the hardware design by using the controller to drive the finite element model. The controlled simulations provided data on wafer temperature uniformity as a function of ramp rate, temperature sensor locations, and controller gain. This information, which is critical to reactor design, cannot be obtained from typical open-loop simulations.
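
    The closed-loop idea can be illustrated with a toy lumped thermal model standing in for the finite element code: a PI controller computes heater power from the temperature error and drives the model toward a ramp setpoint. All gains and model constants below are hypothetical.

```python
# Toy closed-loop thermal simulation (a stand-in for driving a finite
# element model with a controller); parameters are made up.
def closed_loop(setpoint, t_end=120.0, dt=0.1, kp=50.0, ki=5.0,
                C=10.0, G=1.0, T_amb=20.0):
    T, integ, out, t = T_amb, 0.0, [], 0.0
    while t < t_end:
        err = setpoint(t) - T
        integ += err * dt
        power = max(0.0, kp * err + ki * integ)   # heater cannot cool
        T += dt * (power - G * (T - T_amb)) / C   # lumped heat balance
        out.append((t, T))
        t += dt
    return out

# Ramp at 5 deg/s up to 320 C, then hold
history = closed_loop(lambda t: 20.0 + min(5.0 * t, 300.0))
```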

  7. 3-D dynamic rupture simulations by a finite volume method

    NASA Astrophysics Data System (ADS)

    Benjemaa, M.; Glinsky-Olivier, N.; Cruz-Atienza, V. M.; Virieux, J.

    2009-07-01

    Dynamic rupture of a 3-D spontaneous crack of arbitrary shape is investigated using a finite volume (FV) approach. The full domain is decomposed into tetrahedra, whereas the surface on which the rupture takes place is discretized with triangles that are faces of tetrahedra. First, the elastodynamic equations are recast in a pseudo-conservative form for an easy application of the FV discretization. Explicit boundary conditions are given using criteria based on the conservation of discrete energy through the crack surface. Using a stress-threshold criterion, these conditions specify fluxes through those triangles that have suffered rupture. On these broken surfaces, stress follows a linear slip-weakening law, although other friction laws can be implemented. For Problem Version 3 of the dynamic-rupture code verification exercise conducted by the SCEC/USGS, numerical solutions on a planar fault exhibit a very high convergence rate and are in good agreement with the reference solution provided by a finite difference (FD) technique. For a non-planar fault of parabolic shape, numerical solutions agree satisfactorily with those obtained with a semi-analytical boundary integral method in terms of shear stress amplitudes, stopping phase arrival times and stress overshoots. Differences between solutions are attributed to the low-order interpolation of the FV approach, whose results are particularly sensitive to the mesh regularity (structured/unstructured). We expect this method, which is well adapted for multiprocessor parallel computing, to be competitive with others for solving large-scale dynamic rupture scenarios of seismic sources in the near future.
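
    The linear slip-weakening law applied on the broken surfaces has a simple closed form, sketched below with parameter values of the kind used in SCEC benchmark problems (illustrative; not quoted from this paper).

```python
# Linear slip-weakening friction: strength drops linearly from static to
# dynamic friction over a critical slip distance d_c. Values illustrative.
def slip_weakening_strength(slip, sigma_n, mu_s=0.677, mu_d=0.525, d_c=0.4):
    if slip >= d_c:
        mu = mu_d
    else:
        mu = mu_s - (mu_s - mu_d) * slip / d_c
    return mu * sigma_n       # shear strength for effective normal stress

strength = slip_weakening_strength(slip=0.1, sigma_n=120e6)  # Pa
```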

  8. Large Eddy Simulation of Cryogenic Injection Processes at Supercritical Pressure

    NASA Technical Reports Server (NTRS)

    Oefelein, Joseph C.; Garcia, Roberto (Technical Monitor)

    2002-01-01

    This paper highlights results from the first of a series of hierarchical simulations aimed at assessing the modeling requirements for application of the large eddy simulation technique to cryogenic injection and combustion processes in liquid rocket engines. The focus is on liquid-oxygen-hydrogen coaxial injectors at a condition where the liquid-oxygen is injected at a subcritical temperature into a supercritical environment. For this situation a diffusion dominated mode of combustion occurs in the presence of exceedingly large thermophysical property gradients. Though continuous, these gradients approach the behavior of a contact discontinuity. Significant real gas effects and transport anomalies coexist locally in colder regions of the flow, with ideal gas and transport characteristics occurring within the flame zone. The current focal point is on the interfacial region between the liquid-oxygen core and the coaxial hydrogen jet where the flame anchors itself.

  9. Large-eddy simulation using the finite element method

    SciTech Connect

    McCallen, R.C.; Gresho, P.M.; Leone, J.M. Jr.; Kollmann, W.

    1993-10-01

    In a large-eddy simulation (LES) of turbulent flows, the large-scale motion is calculated explicitly while the effects of the small-scale motion are modeled (i.e., approximated with semi-empirical relations). Typically, finite difference or spectral numerical schemes are used to generate an LES; the use of finite element methods (FEM) has been far less prominent. In this study, we demonstrate that FEM in combination with LES provides a viable tool for the study of turbulent, separating channel flows, specifically the flow over a two-dimensional backward-facing step. The combination of these methodologies brings together the advantages of each: LES provides a high degree of accuracy with a minimum of empiricism for turbulence modeling, and FEM provides a robust way to simulate flow in very complex domains of practical interest. Such a combination should prove very valuable to the engineering community.

  10. Large Eddy Simulations of Severe Convection Induced Turbulence

    NASA Technical Reports Server (NTRS)

    Ahmad, Nash'at; Proctor, Fred

    2011-01-01

    Convective storms can pose a serious risk to aviation operations since they are often accompanied by turbulence, heavy rain, hail, icing, lightning, strong winds, and poor visibility. They can cause major delays in air traffic by forcing the re-routing of flights and by disrupting operations at airports in the vicinity of the storm system. In this study, the Terminal Area Simulation System is used to simulate five different convective events ranging from a mesoscale convective complex to isolated storms. The occurrence of convection-induced turbulence is analyzed from these simulations. The validation of model results against radar data and other observations is reported, and an aircraft-centric turbulence hazard metric calculated for each case is discussed. The turbulence analysis showed that large pockets of significant turbulence hazard can be found in regions of low radar reflectivity. Moderate and severe turbulence was often found in building cumulus turrets and overshooting tops.

  11. Reconstructing a Large-Scale Population for Social Simulation

    NASA Astrophysics Data System (ADS)

    Fan, Zongchen; Meng, Rongqing; Ge, Yuanzheng; Qiu, Xiaogang

    The advent of social simulation has provided an opportunity for research on social systems. More and more researchers tend to describe the components of social systems at a more detailed level. Any simulation needs the support of population data to initialize and implement the simulation systems. However, it is impossible to obtain data that provide full information about individuals and households. We propose a two-step method to reconstruct a large-scale population for a Chinese city in accordance with Chinese culture. First, a baseline population is generated by gathering individuals into households one by one; second, social relationships such as friendship are assigned to the baseline population. Through a case study, a population of 3,112,559 individuals gathered in 1,133,835 households is reconstructed for Urumqi city, and the results show that the generated data match the real data quite well. The generated data can be applied to support modeling of social phenomena.
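
    A toy version of the two-step idea is sketched below; the household-size distribution and the friendship rule are hypothetical stand-ins for the culture-specific statistics used in the paper.

```python
# Two-step population synthesis sketch: (1) gather individuals into
# households drawn from an assumed size distribution, (2) assign ties.
import random

def synthesize(n_households):
    pop, households = [], []
    for h in range(n_households):
        size = random.choices([1, 2, 3, 4, 5], weights=[1, 2, 4, 3, 1])[0]
        members = [{"id": len(pop) + k, "household": h} for k in range(size)]
        pop.extend(members)
        households.append(members)
    for person in pop:                       # step 2: social relationships
        person["friends"] = random.sample(range(len(pop)), k=3)
    return pop, households

pop, households = synthesize(100_000)        # roughly 300k individuals
```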

  12. Microwave holography of large reflector antennas - Simulation algorithms

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.

    1985-01-01

    The performance of large reflector antennas can be improved by identifying the location and amount of their surface distortions and correcting them. To determine the accuracy of the constructed surface profiles, simulation studies are used to incorporate both the effects of systematic and random distortions, particularly the effects of the displaced surface panels. In this paper, different simulation models are investigated, emphasizing a model based on the vector diffraction analysis of a curved reflector with displaced panels. The simulated far-field patterns are then used to reconstruct the location and amount of displacement of the surface panels by employing a fast Fourier transform/iterative procedure. The sensitivity of the microwave holography technique based on the number of far-field sampled points, level of distortions, polarizations, illumination tapers, etc., is also examined.
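
    The core of the technique rests on the Fourier-pair relation between the far-field pattern and the aperture field: an inverse FFT of sampled far-field data recovers the aperture phase, which maps to surface error. A bare-bones sketch of that relation follows; it ignores the iterative refinement, sampling and polarization issues discussed above.

```python
# Far-field pattern and aperture field form a (2-D) Fourier pair; the phase
# of the recovered aperture field maps to surface error (two-way path).
import numpy as np

def surface_error_map(farfield, wavelength):
    aperture = np.fft.ifft2(np.fft.ifftshift(farfield))
    phase = np.angle(aperture)                 # wrapped to (-pi, pi]
    return phase * wavelength / (4.0 * np.pi)  # dz = phase * lambda / (4 pi)
```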

  13. Lightweight computational steering of very large scale molecular dynamics simulations

    SciTech Connect

    Beazley, D.M.; Lomdahl, P.S.

    1996-09-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.

  14. Time-Domain Filtering for Spatial Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1997-01-01

    An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.
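
    As a rough illustration of the contrast with spatial filtering, the sketch below applies one simple causal time-domain filter, the exponential filter d(ubar)/dt = (u - ubar)/Delta, to a sampled signal. This is only a schematic of the general idea, not the paper's subgrid-scale model:

        import numpy as np

        def time_filter(u, dt, Delta):
            """Causal exponential time filter of width Delta."""
            ubar = np.empty_like(u)
            ubar[0] = u[0]
            a = dt / Delta
            for n in range(1, len(u)):
                ubar[n] = ubar[n - 1] + a * (u[n] - ubar[n - 1])
            return ubar

        t = np.arange(0.0, 10.0, 0.01)
        u = np.sin(t) + 0.3 * np.sin(40 * t)          # resolved + fast part
        ubar = time_filter(u, dt=0.01, Delta=0.5)     # fast part damped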

  15. Large-volume en-bloc staining for electron microscopy-based connectomics

    PubMed Central

    Hua, Yunfeng; Laserstein, Philip; Helmstaedter, Moritz

    2015-01-01

    Large-scale connectomics requires dense staining of neuronal tissue blocks for electron microscopy (EM). Here we report a large-volume dense en-bloc EM staining protocol that overcomes the staining gradients, which so far substantially limited the reconstructable volumes in three-dimensional (3D) EM. Our protocol provides densely reconstructable tissue blocks from mouse neocortex sized at least 1 mm in diameter. By relaxing the constraints on precise topographic sample targeting, it makes the correlated functional and structural analysis of neuronal circuits realistic. PMID:26235643

  16. Production of large resonant plasma volumes in microwave electron cyclotron resonance ion sources

    DOEpatents

    Alton, G.D.

    1998-11-24

    Microwave injection methods are disclosed for enhancing the performance of existing electron cyclotron resonance (ECR) ion sources. The methods are based on the use of high-power, diverse-frequency microwaves, including variable-frequency, multiple-discrete-frequency, and broadband microwaves. The methods effect large resonant "volume" ECR regions in the ion sources. The creation of these large ECR plasma volumes permits coupling of more microwave power into the plasma, resulting in the heating of a much larger electron population to higher energies, the effect of which is to produce higher charge state distributions and much higher intensities within a particular charge state than possible in present ECR ion sources. 5 figs.

  17. Production of large resonant plasma volumes in microwave electron cyclotron resonance ion sources

    DOEpatents

    Alton, Gerald D.

    1998-01-01

    Microwave injection methods for enhancing the performance of existing electron cyclotron resonance (ECR) ion sources. The methods are based on the use of high-power diverse frequency microwaves, including variable-frequency, multiple-discrete-frequency, and broadband microwaves. The methods effect large resonant "volume" ECR regions in the ion sources. The creation of these large ECR plasma volumes permits coupling of more microwave power into the plasma, resulting in the heating of a much larger electron population to higher energies, the effect of which is to produce higher charge state distributions and much higher intensities within a particular charge state than possible in present ECR ion sources.

  18. A Framework for End to End Simulations of the Large Synoptic Survey Telescope

    NASA Astrophysics Data System (ADS)

    Gibson, R. R.; Ahmad, Z.; Bankert, J.; Bard, D.; Connolly, A. J.; Chang, C.; Gilmore, K.; Grace, E.; Hannel, M.; Jernigan, J. G.; Jones, L.; Kahn, S. M.; Krughoff, K. S.; Lorenz, S.; Marshall, S.; Nagarajan, S.; Peterson, J. R.; Pizagno, J.; Rasmussen, A. P.; Shmakova, M.; Silvestri, N.; Todd, N.; Young, M.

    2011-07-01

    As observatories get bigger and more complicated to operate, risk mitigation techniques become increasingly important. Additionally, the size and complexity of data coming from the next generation of surveys will present enormous challenges in how we process, store, and analyze these data. End-to-end simulations of telescopes with the scope of LSST are essential to correct problems and verify science capabilities as early as possible. A simulator can also determine how defects and trade-offs in individual subsystems impact the overall design requirements. Here, we present the architecture, implementation, and results of the source simulation framework for the Large Synoptic Survey Telescope (LSST). The framework creates time-based realizations of astronomical objects and formats the output for use in many different survey contexts (e.g., image simulation, reference catalogs, calibration catalogs, and simulated science outputs). The simulations include Milky Way, cosmological, and solar system models as well as transient and variable objects. All model objects can be sampled with the LSST cadence from any operations simulator run. The result is a representative, full-sky simulation of LSST data that can be used to determine telescope performance, the feasibility of science goals, and strategies for processing LSST-scale data volumes.

  19. Accelerating large cardiac bidomain simulations by arnoldi preconditioning.

    PubMed

    Deo, Makarand; Bauer, Steffen; Plank, Gernot; Vigmond, Edward

    2006-01-01

    Bidomain simulations of cardiac systems often involve solving large, sparse, linear systems of the form Ax=b. These simulations are computationally very expensive in terms of run time and memory requirements, so efficient solvers are essential to keep simulations tractable. In this paper, an efficient preconditioner for the conjugate gradient (CG) method based on system order reduction using the Arnoldi method (A-PCG) is explained. Large order systems generated during cardiac bidomain simulations using a finite element method formulation are solved using the A-PCG method, and its performance is compared with incomplete LU (ILU) preconditioning. Results indicate that the A-PCG estimates an approximate solution considerably faster than the ILU, often within a single iteration. To reduce the computational demands in terms of memory and run time, the use of a cascaded preconditioner is suggested: the A-PCG is applied to quickly obtain an approximate solution, and subsequently a cheap iterative method such as successive overrelaxation (SOR) is applied to refine the solution to the desired accuracy. The memory requirements are less than those of direct LU but more than those of ILU. The proposed scheme is shown to yield significant speedups when solving time-evolving systems. PMID:17946209
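
    A schematic, deflation-style rendition of the idea (not the authors' exact A-PCG formulation) is sketched below in Python/SciPy: build a small Arnoldi basis once, solve exactly on that subspace, and act as the identity on its complement. The toy matrix and subspace size are invented:

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        def arnoldi_basis(A, b, k):
            """k-step Arnoldi: orthonormal V (n x k) and H = V^T A V."""
            n = len(b)
            V = np.zeros((n, k + 1))
            H = np.zeros((k + 1, k))
            V[:, 0] = b / np.linalg.norm(b)
            for j in range(k):
                w = A @ V[:, j]
                for i in range(j + 1):            # modified Gram-Schmidt
                    H[i, j] = V[:, i] @ w
                    w -= H[i, j] * V[:, i]
                H[j + 1, j] = np.linalg.norm(w)
                V[:, j + 1] = w / H[j + 1, j]
            return V[:, :k], H[:k, :k]

        n = 2000                                   # SPD toy system
        A = sp.diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
        b = np.ones(n)

        V, H = arnoldi_basis(A, b, k=30)
        Hinv = np.linalg.inv(0.5 * (H + H.T))      # symmetrize for safety

        def apply_M(v):   # M^{-1} v = V H^{-1} V^T v + (v - V V^T v)
            c = V.T @ v
            return V @ (Hinv @ c) + (v - V @ c)

        M = spla.LinearOperator((n, n), matvec=apply_M, dtype=float)
        x, info = spla.cg(A, b, M=M)
        print("converged" if info == 0 else f"info={info}")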

  20. Simulating subsurface heterogeneity improves large-scale water resources predictions

    NASA Astrophysics Data System (ADS)

    Hartmann, A. J.; Gleeson, T.; Wagener, T.; Wada, Y.

    2014-12-01

    Heterogeneity is abundant everywhere across the hydrosphere; it exists in the soil, the vadose zone, and the groundwater. In large-scale hydrological models, subsurface heterogeneity is usually not considered: average or representative values are chosen for each of the simulated grid cells, without incorporating any sub-grid variability. This may lead to unreliable predictions when the models are used for assessing future water resources availability, floods, or droughts, or when they are used for recommendations for more sustainable water management. In this study we use a novel, large-scale model that takes sub-grid heterogeneity into account in the simulation of groundwater recharge by using statistical distribution functions. We choose all regions over Europe that are underlain by carbonate rock (~35% of the total area), because the well-understood dissolvability of carbonate rocks (karstification) allows the strength of subsurface heterogeneity to be assessed. Applying the model with historic data and future climate projections, we show that subsurface heterogeneity lowers the vulnerability of groundwater recharge to hydro-climatic extremes and to future changes of climate. Comparing our simulations with the PCR-GLOBWB model, we quantify the deviations of the simulations for different sub-regions in Europe.
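
    Why sub-grid distributions matter can be seen in a toy example: when recharge responds nonlinearly to a subsurface property, recharge evaluated at the cell-mean property differs from recharge averaged over the property's distribution within the cell. The recharge rule and lognormal spread below are invented, not the paper's model:

        import numpy as np

        rng = np.random.default_rng(0)

        def recharge(storage_capacity, rain=50.0):
            # saturation excess: rain above capacity is lost to runoff
            return np.minimum(rain, storage_capacity)

        mean_cap, sigma = 40.0, 0.8        # cell mean and heterogeneity
        caps = rng.lognormal(np.log(mean_cap) - sigma**2 / 2, sigma, 100000)

        print("recharge at mean capacity:", recharge(mean_cap))     # 40.0
        print("mean recharge over dist. :", recharge(caps).mean())  # < 40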

  1. Simulation requirements for the Large Deployable Reflector (LDR)

    NASA Technical Reports Server (NTRS)

    Soosaar, K.

    1984-01-01

    Simulation tools for the large deployable reflector (LDR) are discussed. These tools are often of the transfer-function variety, but transfer functions are inadequate to represent time-varying systems with multiple control systems of overlapping bandwidths and multi-input, multi-output features. Frequency-domain approaches are useful design tools, but a full-up simulation is needed. Because the high-frequency, multi-degree-of-freedom components encountered would require a dedicated computer for real-time execution, non-real-time simulation is preferred. Large numerical analysis software programs are useful only to receive inputs and provide outputs to the next block, and should be kept out of the direct simulation loop. The simulation comprises the following blocks. The thermal-model block is a classical, non-steady-state heat-transfer program. The quasistatic block deals with problems associated with rigid-body control of reflector segments. The steady-state block assembles data into equations of motion and dynamics. A differential ray trace is obtained to establish the change in wave aberrations. The observation scene is described. The focal-plane module converts the photon intensity impinging on it into electron streams or into permanent film records.

  2. High Speed Networking and Large-scale Simulation in Geodynamics

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Gary, Patrick; Seablom, Michael; Truszkowski, Walt; Odubiyi, Jide; Jiang, Weiyuan; Liu, Dong

    2004-01-01

    Large-scale numerical simulation has been one of the most important approaches for understanding global geodynamical processes. In this approach, peta-scale floating point operations (pflops) are often required to carry out a single physically meaningful numerical experiment. For example, to model convective flow in the Earth's core and the generation of the geomagnetic field (the geodynamo), a simulation spanning one magnetic free-decay time (approximately 15,000 years) with a modest resolution of 150 grid points in each of the three spatial dimensions would require approximately 0.2 pflops. If such a numerical model is used to predict geomagnetic secular variation over decades and longer, with e.g. an ensemble Kalman filter assimilation approach, approximately 30 (and perhaps more) independent simulations of similar scale would be needed for one data assimilation analysis, requiring an enormous computing resource that exceeds the capacity of any single facility currently at our disposal. One solution is to utilize a very fast network (e.g., 10 Gb optical networks) and available middleware (e.g., the Globus Toolkit) to allocate available but often heterogeneous resources for such large-scale computing efforts. At NASA GSFC, we are experimenting with such an approach by networking several clusters for geomagnetic data assimilation research. We shall present our initial testing results at the meeting.

  3. Large-eddy simulation of flow past a circular cylinder

    NASA Technical Reports Server (NTRS)

    Mittal, R.

    1995-01-01

    Some of the most challenging applications of large-eddy simulation are those in complex geometries where spectral methods are of limited use; for such applications, more conventional methods such as finite difference or finite element must be used. However, it has become clear in recent years that the dissipative numerical schemes routinely used in viscous flow simulations are not good candidates for LES of turbulent flows. Except in cases where the flow is extremely well resolved, upwind schemes tend to damp out a significant portion of the small scales that can be resolved on the grid. Furthermore, even specially designed higher-order upwind schemes that have been used successfully in the direct numerical simulation of turbulent flows produce too much dissipation when used for large-eddy simulation. The objective of the current study is to perform an LES of incompressible flow past a circular cylinder at a Reynolds number of 3900 using a solver that employs an energy-conservative second-order central difference scheme for spatial discretization, and to compare the results with those of Beaudan & Moin (1994) and with experiments in order to assess the performance of the central scheme in this relatively complex geometry.
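
    The dissipation issue is easy to reproduce in one dimension. The schematic below (not the study's solver) advects a two-mode signal once around a periodic domain with RK4 time stepping; first-order upwind differencing visibly drains energy from the resolved scales, while second-order central differencing conserves it almost exactly:

        import numpy as np

        n, c = 128, 1.0
        dx = 2 * np.pi / n
        x = np.arange(n) * dx
        dt = 0.2 * dx / c

        def upwind(u):   return -c * (u - np.roll(u, 1)) / dx
        def central(u):  return -c * (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

        def rk4(u, rhs, steps):
            for _ in range(steps):
                k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
                k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
                u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            return u

        u0 = np.sin(x) + 0.2 * np.sin(8 * x)       # large + small scale
        steps = int(round(2 * np.pi / (c * dt)))   # one full revolution
        for name, rhs in [("upwind ", upwind), ("central", central)]:
            u = rk4(u0.copy(), rhs, steps)
            print(name, "energy ratio:", np.sum(u**2) / np.sum(u0**2))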

  4. Upscaling of elastic properties for large scale geomechanical simulations

    NASA Astrophysics Data System (ADS)

    Chalon, F.; Mainguy, M.; Longuemare, P.; Lemonnier, P.

    2004-09-01

    Large scale geomechanical simulations are being increasingly used to model the compaction of stress-dependent reservoirs, predict the long-term integrity of underground radioactive waste disposals, and analyse the viability of hot dry rock geothermal sites. These large scale simulations require the definition of homogeneous mechanical properties for each geomechanical cell, whereas the rock properties are expected to vary at a smaller scale. This paper therefore proposes a new methodology that makes it possible to define the equivalent mechanical properties of the geomechanical cells using the fine-scale information given in the geological model. The methodology is implemented on a synthetic reservoir case, and two upscaling procedures providing the effective elastic properties in Hooke's law are tested. The first upscaling procedure is an analytical method for a perfectly stratified rock mass, whereas the second computes lower and upper bounds of the equivalent properties with no assumption on the small-scale heterogeneity distribution. Both procedures are applied to one geomechanical cell extracted from the reservoir structure. The results show that the analytical and numerical upscaling procedures provide accurate estimates of the effective parameters. Furthermore, a large scale simulation using the homogenized properties of each geomechanical cell calculated with the analytical method demonstrates that the overall behaviour of the reservoir structure is well reproduced for two different loading cases.
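
    For intuition, the two procedures can be caricatured in a few lines with a scalar modulus: the harmonic (Reuss) mean is the stratified-stack result for loading normal to the layers, and the Voigt/Reuss pair bounds the equivalent property with no assumption on how the heterogeneity is arranged. Layer values below are invented:

        import numpy as np

        E = np.array([12.0, 35.0, 8.0, 20.0])   # fine-scale moduli (GPa)
        f = np.array([0.30, 0.20, 0.25, 0.25])  # volume fractions (sum 1)

        voigt = np.sum(f * E)                    # upper bound: uniform strain
        reuss = 1.0 / np.sum(f / E)              # lower bound: uniform stress
        print(f"Reuss {reuss:.2f} <= E_eff <= Voigt {voigt:.2f} (GPa)")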

  5. Toxicity Profile With a Large Prostate Volume After External Beam Radiotherapy for Localized Prostate Cancer

    SciTech Connect

    Pinkawa, Michael Fischedick, Karin; Asadpour, Branka; Gagel, Bernd; Piroth, Marc D.; Nussen, Sandra; Eble, Michael J.

    2008-01-01

    Purpose: To assess the impact of prostate volume on health-related quality of life (HRQOL) before and at different intervals after radiotherapy for prostate cancer. Methods and Materials: A group of 204 patients was surveyed prospectively before (Time A), at the last day of (Time B), 2 months after (Time C), and 16 months (median) after (Time D) radiotherapy, with a validated questionnaire (Expanded Prostate Cancer Index Composite). The group was divided into subgroups with a small (11-43 cm³) and a large (44-151 cm³) prostate volume. Results: Patients with large prostates presented with lower urinary bother scores (median 79 vs. 89; p = 0.01) before treatment. Urinary function/bother scores for patients with large prostates decreased significantly compared to patients with small prostates, due to irritative/obstructive symptoms, only at Time B (pain with urination more than once daily in 48% vs. 18%; p < 0.01). Health-related quality of life did not differ significantly between the two patient groups at Times C and D. In contrast to a large prostate, a small initial bladder volume (with an associated higher dose-volume load) was predictive of lower urinary bother scores in both the acute and the late phase; at Time B it predisposed to pollakiuria but not to pain. Patients with neoadjuvant hormonal therapy reached significantly lower HRQOL scores in several domains (affecting only incontinence in the urinary domain), despite a smaller prostate volume (34 cm³ vs. 47 cm³; p < 0.01). Conclusions: Patients with a large prostate volume are at high risk of irritative/obstructive symptoms (particularly dysuria) in the acute radiotherapy phase. These symptoms recover rapidly and do not influence long-term HRQOL.

  6. Large Eddy Simulations using Lattice Boltzmann algorithms. Final report

    SciTech Connect

    Serling, J.D.

    1993-09-28

    This report contains the results of a study performed to implement eddy-viscosity models for large-eddy simulation (LES) in lattice Boltzmann (LB) algorithms for simulating fluid flows. This implementation requires modifying the LB method of simulating the incompressible Navier-Stokes equations so that it simulates the filtered Navier-Stokes equations with some subgrid model for the Reynolds stress term. We demonstrate that the LB method can indeed be used for LES by simply adjusting the value of the BGK relaxation time locally to obtain the desired eddy viscosity. Thus, many forms of eddy-viscosity model, including the standard Smagorinsky model and the dynamic model, may be implemented using LB algorithms. Since underresolved LB simulations often lead to instability, the LES model actually serves to stabilize the method. An alternative method of ensuring stability is presented which requires that entropy increase during the collision step of the LB method; an alternative collision operator is applied locally if the entropy becomes too low. This stable LB method then acts as an LES scheme that effectively introduces its own eddy viscosity to damp short-wavelength oscillations.
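
    The implementation hinge is the viscosity-relaxation-time relation. In lattice units (dx = dt = 1, c_s^2 = 1/3) the BGK viscosity is nu = (tau - 1/2)/3, so a local Smagorinsky eddy viscosity is imposed by adjusting tau cell by cell, as in the hedged sketch below (the strain-rate magnitude is assumed precomputed; in practice it is obtained from the non-equilibrium moments):

        import numpy as np

        def local_relaxation_time(nu0, strain_rate_mag, C_s=0.17, dx=1.0):
            """tau(x) = 3 * (nu0 + nu_t) + 1/2 with Smagorinsky nu_t."""
            nu_t = (C_s * dx)**2 * strain_rate_mag
            return 3.0 * (nu0 + nu_t) + 0.5

        S = np.abs(np.random.default_rng(1).normal(0.0, 0.01, (64, 64)))
        tau = local_relaxation_time(nu0=1e-4, strain_rate_mag=S)
        print(tau.min(), tau.max())   # stays above the 0.5 stability limit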

  7. Large eddy simulation and its implementation in the COMMIX code.

    SciTech Connect

    Sun, J.; Yu, D.-H.

    1999-02-15

    Large eddy simulation (LES) is a numerical simulation method for turbulent flows derived by spatial averaging of the Navier-Stokes equations. In contrast with the Reynolds-averaged Navier-Stokes (RANS) method, LES is capable of calculating transient turbulent flows with greater accuracy. Application of LES to a variety of flows has given very encouraging results, as reported in the literature. In recent years, a dynamic LES model that produced even better results was proposed and applied to several flows. This report reviews the LES method and its implementation in the COMMIX code, which was developed at Argonne National Laboratory. As an example of the application of LES, the flow around a square prism is simulated, and some numerical results are presented. These results include a three-dimensional simulation using a code developed by one of the authors at the University of Notre Dame, and a two-dimensional simulation using the COMMIX code. The numerical results are compared with experimental data from the literature and are found to be in very good agreement.

  8. Towards Large Eddy Simulation of gas turbine compressors

    NASA Astrophysics Data System (ADS)

    McMullan, W. A.; Page, G. J.

    2012-07-01

    With increasing computing power, Large Eddy Simulation could be a useful simulation tool for gas turbine axial compressor design. This paper outlines a series of simulations performed on compressor geometries, ranging from a Controlled Diffusion Cascade stator blade to the periodic sector of a stage in a 3.5 stage axial compressor. The simulation results show that LES may offer advantages over traditional RANS methods when off-design conditions are considered - flow regimes where RANS models often fail to converge. The time-dependent nature of LES permits the resolution of transient flow structures, and can elucidate new mechanisms of vorticity generation on blade surfaces. It is shown that accurate LES is heavily reliant on both the near-wall mesh fidelity and the ability of the imposed inflow condition to recreate the conditions found in the reference experiment. For components embedded in a compressor this requires the generation of turbulence fluctuations at the inlet plane. A recycling method is developed that improves the quality of the flow in a single stage calculation of an axial compressor, and indicates that future developments in both the recycling technique and computing power will bring simulations of axial compressors within reach of industry in the coming years.

  9. Large Dynamic Range Simulations of Galaxies Hosting Supermassive Black Holes

    NASA Astrophysics Data System (ADS)

    Levine, Robyn

    2011-08-01

    The co-evolution of supermassive black holes (SMBHs) and their host galaxies is a rich problem, spanning a large dynamic range and depending on many physical processes. Simulating the transport of gas and angular momentum from super-galactic scales all the way down to the outer edge of the black hole's accretion disk requires sophisticated numerical techniques with extensive treatment of baryonic physics. We use a hydrodynamic adaptive mesh refinement simulation to follow the growth and evolution of a typical disk galaxy hosting an SMBH, in a cosmological context (covering a dynamic range of 10 million!). We have adopted a piecemeal approach, focusing our attention on the gas dynamics in the central few hundred parsecs of the simulated galaxy (with boundary conditions provided by the larger cosmological simulation), and beginning with a simplified picture (no mergers or feedback). In this scenario, we find that the circumnuclear disk remains marginally stable against catastrophic fragmentation, allowing stochastic fueling of gas into the vicinity of the SMBH. I will discuss the successes and the limitations of these simulations, and their future direction.

  10. Large Surveys & Simulations -- The Real and the Virtual Universe

    NASA Astrophysics Data System (ADS)

    von Berlepsch, Regina

    2012-06-01

    The current issue of AN is Volume 24 of the Reviews in Modern Astronomy and presents selected papers given at the International Scientific Conference of the Society on "Surveys & Simulations -- The Real and the Virtual Universe" held in Heidelberg, Germany, September 19-23, 2011. The "Astronomische Gesellschaft" was actually founded in Heidelberg in 1863, and there has been a close connection between the AG and the colleagues in Heidelberg ever since. It was the sixth time that Heidelberg hosted a meeting of the AG. In 2011 the meeting took place at the Ruprecht-Karls-University, Germany's oldest university, celebrating its 625th anniversary. The meeting was attended by more than 400 participants from around the world.

  11. Rapid estimate of solid volume in large tuff cores using a gas pycnometer

    SciTech Connect

    Thies, C.; Geddis, A.M.; Guzman, A.G.

    1996-09-01

    A thermally insulated, rigid-volume gas pycnometer system has been developed. The pycnometer chambers have been machined from solid PVC cylinders. Two chambers confine dry high-purity helium at different pressures. A thick-walled design ensures minimal heat exchange with the surrounding environment and a constant-volume system while expansion takes place between the chambers. The internal energy of the gas is assumed constant over the expansion. The ideal gas law is used to estimate the volume of solid material sealed in one of the chambers. Temperature is monitored continuously and incorporated into the calculation of solid volume; temperature variation between measurements is less than 0.1°C. The data are used to compute grain density for oven-dried Apache Leap tuff core samples. The measured volume of solid and the sample bulk volume are used to estimate porosity and bulk density. Intrinsic permeability was estimated from the porosity and the measured pore surface area and is compared to in-situ measurements by the air permeability method. The gas pycnometer accommodates large core samples (0.25 m length × 0.11 m diameter) and can measure solid volumes greater than 2.20 cm³ with less than 1% error.
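
    The ideal-gas-law inversion can be written out explicitly. With moles conserved across the expansion, P1*Va/T1 + P2*(Vb - Vs)/T1 = Pf*(Va + Vb - Vs)/T2, which solves for the solid volume Vs; all readings in the sketch below are hypothetical, not values from the record:

        def solid_volume(Va, Vb, P1, P2, Pf, T1, T2):
            """Solid volume from a two-chamber expansion (ideal gas law)."""
            num = Pf * (Va + Vb) / T2 - P1 * Va / T1 - P2 * Vb / T1
            den = Pf / T2 - P2 / T1
            return num / den

        # Hypothetical reading: 1 L chambers, helium at 200 and 100 kPa,
        # final pressure 151 kPa, temperature drift of 0.05 degrees C.
        Vs = solid_volume(Va=1.0, Vb=1.0, P1=200e3, P2=100e3, Pf=151e3,
                          T1=295.00, T2=295.05)
        print(f"solid volume ~ {Vs:.4f} L")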

  12. Optimization of the electric field distribution in a large volume tissue-equivalent proportional counter.

    PubMed

    Verma, P K; Waker, A J

    1992-10-01

    Large volume tissue-equivalent proportional counters are of interest in radiation protection metrology, as the sensitivity in terms of counts per unit absorbed dose in these devices increases as the square of the counter diameter. Conventional solutions to the problem of maintaining a uniform electric field within a counter result in sensitive volume to total volume ratios which are unacceptably low when counter dimensions of the order of 15 cm diameter are considered and when overall compactness is an important design criterion. This work describes the design and optimization of an arrangement of field discs set at different potentials which enable sensitive volume to total volume ratios to approach unity. The method has been used to construct a 12.7 cm diameter right-cylindrical tissue-equivalent proportional counter in which the sensitive volume accounts for over 95% of the total device volume and the gas gain uniformity is maintained to within 3% along the entire length of the anode wire. PMID:1438550
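
    One way to grade the disc potentials, shown purely as an illustration of the design principle (not the authors' optimized arrangement), is to hold each disc at the potential the ideal coaxial-counter field would take at its radius, V(r) = V_anode * ln(r_wall/r) / ln(r_wall/r_anode); all dimensions and the voltage below are invented:

        import numpy as np

        r_anode, r_wall, V_anode = 5e-6, 0.0635, 700.0   # metres, volts
        disc_radii = np.linspace(0.01, 0.06, 6)
        V_disc = V_anode * np.log(r_wall / disc_radii) / np.log(r_wall / r_anode)
        for r, V in zip(disc_radii, V_disc):
            print(f"disc at r = {r:.3f} m -> {V:.1f} V")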

  13. Large Eddy Simulation in the Computation of Jet Noise

    NASA Technical Reports Server (NTRS)

    Mankbadi, R. R.; Goldstein, M. E.; Povinelli, L. A.; Hayder, M. E.; Turkel, E.

    1999-01-01

    Noise can, in principle, be predicted by solving the full (time-dependent) compressible Navier-Stokes equations (FCNSE) with the computational domain extended to the far field: the fluctuating near field of the jet produces propagating pressure waves that generate far-field sound, so the fluctuating flow field as a function of time is needed in order to calculate sound from first principles. Such a computation is not feasible, however. At the high Reynolds numbers of technological interest, turbulence has a large range of scales, and direct numerical simulation (DNS) cannot capture the small scales of turbulence. Since the large scales are more efficient than the small scales in radiating sound, the emphasis is thus on calculating the sound radiated by the large scales.

  14. Exposing earth surface process model simulations to a large audience

    NASA Astrophysics Data System (ADS)

    Overeem, I.; Kettner, A. J.; Borkowski, L.; Russell, E. L.; Peddicord, H.

    2015-12-01

    The Community Surface Dynamics Modeling System (CSDMS) represents a diverse group of >1300 scientists who develop and apply numerical models to better understand the Earth's surface. CSDMS has a mandate to make the public more aware of model capabilities and has therefore started sharing state-of-the-art surface process modeling results with large audiences. One platform for reaching audiences outside the science community is museum displays on 'Science on a Sphere' (SOS). Developed by NOAA, SOS is a giant globe, linked with computers and multiple projectors, that can display data and animations on a sphere. CSDMS has developed and contributed model simulation datasets for the SOS system since 2014, including hydrological processes, coastal processes, and human interactions with the environment. Model simulations with a hydrological and sediment transport model (WBM-SED) illustrate global river discharge patterns. WAVEWATCH III simulations have been specifically processed to show the impacts of hurricanes on ocean waves, with a focus on hurricane Katrina and superstorm Sandy. A large world dataset of dams built over the last two centuries gives an impression of the profound influence of humans on water management. Given the exposure of SOS, CSDMS aims to contribute at least two model datasets a year, and will soon provide displays of global river sediment fluxes and changes of the sea-ice-free season along the Arctic coast. Over 100 facilities worldwide show these numerical model displays to an estimated 33 million people every year. Dataset storyboards and teacher follow-up materials associated with the simulations are developed to address common-core K-12 science standards. CSDMS dataset documentation aims to make people aware that they are looking at numerical model results, that the underlying models have inherent assumptions and simplifications, and that limitations are known. CSDMS contributions aim to familiarize large audiences with the use of numerical models.

  15. Large-scale simulations of layered double hydroxide nanocomposite materials

    NASA Astrophysics Data System (ADS)

    Thyveetil, Mary-Ann

    Layered double hydroxides (LDHs) have the ability to intercalate a multitude of anionic species. Atomistic simulation techniques such as molecular dynamics have provided considerable insight into the behaviour of these materials. We review these techniques and recent algorithmic advances which considerably improve the performance of MD applications. In particular, we discuss how the advent of high performance computing and computational grids has allowed us to explore large scale models with considerable ease. Our simulations have been heavily reliant on computational resources on the UK's NGS (National Grid Service), the US TeraGrid and the Distributed European Infrastructure for Supercomputing Applications (DEISA). In order to utilise computational grids we rely on grid middleware to launch, computationally steer and visualise our simulations. We have integrated the RealityGrid steering library into the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), which has enabled us to perform remote computational steering and visualisation of molecular dynamics simulations on grid infrastructures. We also use the Application Hosting Environment (AHE) in order to launch simulations on remote supercomputing resources, and we show that data transfer rates between local clusters and supercomputing resources can be considerably enhanced by using optically switched networks. We perform large scale molecular dynamics simulations of Mg/Al LDHs intercalated with either chloride ions or a mixture of DNA and chloride ions. The systems exhibit undulatory modes, which are suppressed in smaller scale simulations, caused by the collective thermal motion of atoms in the LDH layers. Thermal undulations provide elastic properties of the system including the bending modulus, Young's moduli and Poisson's ratios. To explore the interaction between LDHs and DNA, we use molecular dynamics techniques to perform simulations of double-stranded, linear and plasmid DNA...

  16. Mechanically Cooled Large-Volume Germanium Detector Systems for Nuclear Explosion Monitoring DOENA27323-1

    SciTech Connect

    Hull, E.L.

    2006-07-28

    Compact, maintenance-free mechanical cooling systems are being developed to operate large volume germanium detectors for field applications. To accomplish this, we are utilizing a newly available generation of Stirling-cycle mechanical coolers to operate the very largest volume germanium detectors with no maintenance. The user will be able to leave these systems unplugged on the shelf until needed; the flip of a switch will bring a system to life in ~1 hour for measurements. The maintenance-free operating lifetime of these detector systems will exceed 5 years. These features are necessary for remote, long-duration, liquid-nitrogen-free deployment of large-volume germanium gamma-ray detector systems for Nuclear Explosion Monitoring. The Radionuclide Aerosol Sampler/Analyzer (RASA) will greatly benefit from the availability of such detectors, which eliminate the need for liquid nitrogen at RASA sites while still allowing the very largest available germanium detectors to be reliably utilized.

  17. A system for the disposal of large volumes of air containing oxygen-15

    NASA Astrophysics Data System (ADS)

    Peters, J. M.; Quaglia, L.; del Fiore, G.; Hannay, J.; Fissore, A.

    1991-01-01

    A method is described which permits large volumes of air containing the radionuclide ¹⁵O to be vented into the atmosphere. The short half-life of this isotope (124 s) makes it possible to use a large number of small vessels connected in series, an arrangement that increases the mean transit time of the air. The system as installed reduces the radioactive concentration in the vented air to levels below the maximum permitted values.
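
    A back-of-envelope check of the principle, using the 124 s half-life from the record with invented flow and vessel parameters:

        import math

        HALF_LIFE = 124.0                            # seconds, oxygen-15

        def remaining_fraction(transit_time):
            return math.exp(-math.log(2) * transit_time / HALF_LIFE)

        flow = 2.0                                   # L/s of vented air
        n_vessels, vessel_volume = 40, 50.0          # small vessels in series (L)
        transit = n_vessels * vessel_volume / flow   # mean transit time: 1000 s
        print(f"activity reduced to {remaining_fraction(transit):.2%}")  # ~0.4%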

  18. Sampling artifact in volume weighted velocity measurement. II. Detection in simulations and comparison with theoretical modeling

    NASA Astrophysics Data System (ADS)

    Zheng, Yi; Zhang, Pengjie; Jing, Yipeng

    2015-02-01

    Measuring the volume-weighted velocity power spectrum suffers from a severe systematic error, due to imperfect sampling of the velocity field from the inhomogeneous distribution of dark matter particles/halos in simulations, or of galaxies with velocity measurements. This "sampling artifact" depends on both the mean particle number density n̄_P and the intrinsic large-scale structure (LSS) fluctuation in the particle distribution. (1) We report robust detection of this sampling artifact in N-body simulations. It causes ~12% underestimation of the velocity power spectrum at k = 0.1 h/Mpc for samples with n̄_P = 6×10⁻³ (Mpc/h)⁻³. This systematic underestimation increases with decreasing n̄_P and increasing k. Its dependence on the intrinsic LSS fluctuations is also robustly detected. (2) All of these findings are expected based upon our theoretical modeling in paper I [P. Zhang, Y. Zheng, and Y. Jing, Sampling artifact in volume weighted velocity measurement. I. Theoretical modeling, arXiv:1405.7125]. In particular, the leading-order theoretical approximation agrees quantitatively well with the simulation result for n̄_P ≳ 6×10⁻⁴ (Mpc/h)⁻³. Furthermore, we provide an ansatz to take higher-order terms into account; it improves the model accuracy to ≲1% at k ≲ 0.1 h/Mpc over 3 orders of magnitude in n̄_P and over typical LSS clustering from z = 0 to z = 2. (3) The sampling artifact is determined by the deflection D field, which is straightforwardly available in both simulations and data of galaxy velocity. Hence the sampling artifact in the velocity power spectrum measurement can be self-calibrated within our framework. By applying such self-calibration in simulations, it is promising to determine the real large-scale velocity bias of 10¹³ M_⊙ halos with ~1% accuracy, and that of lower-mass halos with better accuracy. (4) In contrast to suppressing the velocity power spectrum at large scales, the sampling artifact causes an overestimation of the velocity correlation function.

  19. Parallel continuous simulated tempering and its applications in large-scale molecular simulations

    SciTech Connect

    Zang, Tianwu; Yu, Linglin; Zhang, Chong; Ma, Jianpeng

    2014-07-28

    In this paper, we introduce a parallel continuous simulated tempering (PCST) method for enhanced sampling in studying large complex systems. It inherits the continuous simulated tempering (CST) method of our previous studies [C. Zhang and J. Ma, J. Chem. Phys. 130, 194112 (2009); C. Zhang and J. Ma, J. Chem. Phys. 132, 244101 (2010)], while adopting the spirit of parallel tempering (PT), or the replica exchange method, by employing multiple copies with different temperature distributions. Unlike conventional PT methods, the PCST method requires very few copies of simulations, typically 2–3 copies, even for a large stride of total temperature range, yet it is still capable of maintaining a high rate of exchange between neighboring copies. Furthermore, in the PCST method the size of the system does not dramatically affect the number of copies needed, because the exchange rate is independent of the total potential energy, providing an enormous advantage over conventional PT methods in studying very large systems. The sampling efficiency of PCST was tested on the two-dimensional Ising model, a Lennard-Jones liquid, and an all-atom folding simulation of the small globular protein trp-cage in explicit solvent. The results demonstrate that the PCST method significantly improves sampling efficiency compared with other methods and is particularly effective in simulating systems with long relaxation or correlation times. We expect the PCST method to be a good alternative to parallel tempering methods in simulating large systems such as phase transitions and the dynamics of macromolecules in explicit solvent.
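
    For orientation, a plain discrete-ladder simulated tempering sketch on the 2-D Ising model is given below. It is deliberately simpler than the paper's method, which makes the temperature continuous and couples multiple copies; the tempering weights g[k] are left untuned here just so the sketch runs:

        import numpy as np

        rng = np.random.default_rng(2)
        L = 16
        spins = rng.choice([-1, 1], size=(L, L))
        betas = np.linspace(0.30, 0.60, 6)   # inverse-temperature ladder
        g = np.zeros_like(betas)             # weights; need tuning in practice
        k = 0                                # current ladder level

        def energy(s):
            return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

        for sweep in range(2000):
            for _ in range(L * L):           # Metropolis spin flips at beta_k
                i, j = rng.integers(L, size=2)
                nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                      + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                dE = 2 * spins[i, j] * nb
                if dE <= 0 or rng.random() < np.exp(-betas[k] * dE):
                    spins[i, j] *= -1
            # tempering move: hop to a neighboring temperature level
            knew = min(max(k + rng.choice([-1, 1]), 0), len(betas) - 1)
            dlogw = -(betas[knew] - betas[k]) * energy(spins) + g[knew] - g[k]
            if np.log(rng.random()) < dlogw:
                k = knew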

  20. Parallel continuous simulated tempering and its applications in large-scale molecular simulations

    PubMed Central

    Zang, Tianwu; Yu, Linglin; Zhang, Chong; Ma, Jianpeng

    2014-01-01

    In this paper, we introduce a parallel continuous simulated tempering (PCST) method for enhanced sampling in studying large complex systems. It inherits the continuous simulated tempering (CST) method of our previous studies [C. Zhang and J. Ma, J. Chem. Phys. 130, 194112 (2009); C. Zhang and J. Ma, J. Chem. Phys. 132, 244101 (2010)], while adopting the spirit of parallel tempering (PT), or the replica exchange method, by employing multiple copies with different temperature distributions. Unlike conventional PT methods, the PCST method requires very few copies of simulations, typically 2–3 copies, even for a large stride of total temperature range, yet it is still capable of maintaining a high rate of exchange between neighboring copies. Furthermore, in the PCST method the size of the system does not dramatically affect the number of copies needed, because the exchange rate is independent of the total potential energy, providing an enormous advantage over conventional PT methods in studying very large systems. The sampling efficiency of PCST was tested on the two-dimensional Ising model, a Lennard-Jones liquid, and an all-atom folding simulation of the small globular protein trp-cage in explicit solvent. The results demonstrate that the PCST method significantly improves sampling efficiency compared with other methods and is particularly effective in simulating systems with long relaxation or correlation times. We expect the PCST method to be a good alternative to parallel tempering methods in simulating large systems such as phase transitions and the dynamics of macromolecules in explicit solvent. PMID:25084887

  1. Parallel continuous simulated tempering and its applications in large-scale molecular simulations

    NASA Astrophysics Data System (ADS)

    Zang, Tianwu; Yu, Linglin; Zhang, Chong; Ma, Jianpeng

    2014-07-01

    In this paper, we introduce a parallel continuous simulated tempering (PCST) method for enhanced sampling in studying large complex systems. It inherits the continuous simulated tempering (CST) method of our previous studies [C. Zhang and J. Ma, J. Chem. Phys. 130, 194112 (2009); C. Zhang and J. Ma, J. Chem. Phys. 132, 244101 (2010)], while adopting the spirit of parallel tempering (PT), or the replica exchange method, by employing multiple copies with different temperature distributions. Unlike conventional PT methods, the PCST method requires very few copies of simulations, typically 2-3 copies, even for a large stride of total temperature range, yet it is still capable of maintaining a high rate of exchange between neighboring copies. Furthermore, in the PCST method the size of the system does not dramatically affect the number of copies needed, because the exchange rate is independent of the total potential energy, providing an enormous advantage over conventional PT methods in studying very large systems. The sampling efficiency of PCST was tested on the two-dimensional Ising model, a Lennard-Jones liquid, and an all-atom folding simulation of the small globular protein trp-cage in explicit solvent. The results demonstrate that the PCST method significantly improves sampling efficiency compared with other methods and is particularly effective in simulating systems with long relaxation or correlation times. We expect the PCST method to be a good alternative to parallel tempering methods in simulating large systems such as phase transitions and the dynamics of macromolecules in explicit solvent.

  2. Perspective volume rendering of cross-sectional images for simulated endoscopy and intraparenchymal viewing

    NASA Astrophysics Data System (ADS)

    Napel, Sandy; Rubin, Geoffrey D.; Beaulieu, Christopher F.; Jeffrey, R. Brooke, Jr.; Argiro, Vincent

    1996-04-01

    The capability of today's clinical scanners to create large quantities of high-resolution, near-isotropically sampled volume data, coupled with the rapidly improving performance/price ratio of computers, has made it both feasible and compelling to create new ways to explore cross-sectional medical imagery. Perspective volume rendering (PVR) allows an observer to 'fly through' image data and view its contents from within, for diagnostic and treatment planning purposes. We simulated flights through 14 data sets and, where possible, compared these to conventional endoscopy. We demonstrated colonic masses and polyps as small as 5 mm, tracheal obstructions, and precise positioning of endoluminal stent-grafts. Simulated endoscopy was capable of generating views not possible with conventional endoscopy, owing to the latter's restrictions on camera location and orientation. Interactive adjustment of tissue opacities permitted views beyond the interior of lumina to reveal other structures such as masses, thrombus, and calcifications. We conclude that PVR is an exciting new technique with the potential to supplement and/or replace some conventional diagnostic imaging procedures. It has further utility for treatment planning and communication with colleagues, and the potential to reduce the number of normal people who would otherwise undergo more invasive procedures without benefit.

  3. Development of a Solid Phase Extraction Method for Agricultural Pesticides in Large-Volume Water Samples

    EPA Science Inventory

    An analytical method using solid phase extraction (SPE) and analysis by gas chromatography/mass spectrometry (GC/MS) was developed for the trace determination of a variety of agricultural pesticides and selected transformation products in large-volume high-elevation lake water samples...

  4. A New Electropositive Filter for Concentrating Enterovirus and Norovirus from Large Volumes of Water - MCEARD

    EPA Science Inventory

    The detection of enteric viruses in environmental water usually requires the concentration of viruses from large volumes of water. The 1MDS electropositive filter is commonly used for concentrating enteric viruses from water but unfortunately these filters are not cost-effective...

  5. An efficient out-of-core volume ray casting method for the visualization of large medical data sets

    NASA Astrophysics Data System (ADS)

    Xue, Jian; Tian, Jie; Chen, Jian; Dai, Yakang

    2007-03-01

    The volume ray casting algorithm is widely recognized for high-quality volume visualization. However, when rendering very large volume data sets, the original ray casting algorithm leads to very inefficient random disk accesses and makes rendering the whole volume data set very slow. To solve this problem, this paper proposes an efficient out-of-core volume ray casting method built on a new out-of-core framework for processing large volume data sets on consumer PC hardware. The new framework gives transparent and efficient access to the volume data set cached on disk, while the new volume ray casting method minimizes the data exchange between hard disk and physical memory and performs comparatively fast, high-quality volume rendering. The experimental results indicate that the new method and framework are effective and efficient for the visualization of very large medical data sets.
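
    A minimal version of the out-of-core access pattern (with an invented file layout; the paper's framework is more elaborate): the volume lives on disk in fixed-size bricks, and samples touch only the bricks they need, held in a small in-memory cache:

        import os
        import numpy as np
        from functools import lru_cache

        BRICK = 32
        DIM = (256, 256, 256)                   # full volume, kept on disk

        if not os.path.exists("volume.raw"):    # dummy 16 MB volume for the demo
            np.memmap("volume.raw", dtype=np.uint8, mode="w+", shape=DIM).flush()
        vol = np.memmap("volume.raw", dtype=np.uint8, mode="r", shape=DIM)

        @lru_cache(maxsize=64)                  # ~2 MB resident, not 16 MB
        def load_brick(bx, by, bz):
            s = (slice(bx * BRICK, (bx + 1) * BRICK),
                 slice(by * BRICK, (by + 1) * BRICK),
                 slice(bz * BRICK, (bz + 1) * BRICK))
            return np.array(vol[s])             # copy one brick into memory

        def sample(x, y, z):                    # nearest-neighbour sampling
            b = load_brick(int(x) // BRICK, int(y) // BRICK, int(z) // BRICK)
            return b[int(x) % BRICK, int(y) % BRICK, int(z) % BRICK]

        # march one ray through the volume, accumulating a simple sum
        pos, step = np.array([0.0, 128.0, 128.0]), np.array([1.0, 0.0, 0.0])
        acc = sum(sample(*(pos + i * step)) for i in range(256))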

  6. Hanford tank waste operation simulator operational waste volume projection verification and validation procedure

    SciTech Connect

    HARMSEN, R.W.

    1999-10-28

    The Hanford Tank Waste Operation Simulator is tested to determine whether it can replace the FORTRAN-based Operational Waste Volume Projection computer simulation that has traditionally served to project double-shell tank utilization. Three test cases are used to compare the results of the two simulators; one incorporates the cleanup schedule of the Tri-Party Agreement.

  7. Large-scale Molecular Dynamics Simulations of Glancing Angle Deposition

    NASA Astrophysics Data System (ADS)

    Hubartt, Bradley; Liu, Xuejing; Amar, Jacques

    2013-03-01

    While a variety of methods have been developed to carry out atomistic simulations of thin-film growth at small deposition angles with respect to the substrate normal, due to the complex morphology as well as the existence of multiple scattering of depositing atoms by the growing thin-film, realistically modeling the deposition process for large deposition angles can be quite challenging. Accordingly, we have developed a computationally efficient method based on the use of a single graphical processing unit (GPU) to carry out molecular dynamics (MD) simulations of the deposition and growth of thin-films via glancing angle deposition. Using this method we have carried out large-scale MD simulations, based on an embedded-atom-method potential, of Cu/Cu(100) growth up to 20 monolayers for deposition angles ranging from 50° to 85° and for both random and fixed azimuthal angles. Our results for the thin-film porosity, roughness, lateral correlation length, and density vs height will be presented and compared with experiments. Results for the dependence of the microstructure, grain-size distribution, surface texture, and defect concentration on deposition angle will also be presented. Supported by NSF DMR-0907399
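
    A small geometry helper illustrates the deposition setup described above: unit velocity vectors incident at angle theta from the substrate normal (z), with either a fixed or a random azimuth. Everything here is an invented sketch, not the study's GPU code:

        import numpy as np

        def deposition_directions(n, theta_deg, random_azimuth=True, phi_deg=0.0):
            """Incident unit vectors for glancing angle deposition."""
            theta = np.radians(theta_deg)
            rng = np.random.default_rng()
            phi = (rng.uniform(0.0, 2 * np.pi, n) if random_azimuth
                   else np.full(n, np.radians(phi_deg)))
            return np.column_stack([np.sin(theta) * np.cos(phi),
                                    np.sin(theta) * np.sin(phi),
                                    -np.cos(theta) * np.ones(n)])  # toward substrate

        v85 = deposition_directions(1000, theta_deg=85.0)   # grazing incidence
        print(v85[:2])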

  8. Large Eddy Simulation of Mixing within a Hypervelocity Scramjet Combustor

    NASA Astrophysics Data System (ADS)

    Petty, David; Wheatley, Vincent; Pantano, Carlos; Smart, Michael

    2013-11-01

    The turbulent mixing of parallel hypervelocity (U = 3230 m/s, M = 3.86) air streams with a sonic stream of gaseous hydrogen is simulated using large eddy simulation. The resultant mixing layers are characterized by a convective Mach number of 1.20. This configuration represents parallel slot injection of hydrogen via an intrusive centerbody within a constant-area rectangular combustor. A hybrid shock-capturing/zero-numerical-dissipation (WENO/TCD) switch method designed for simulations of compressible turbulent flows was utilized, with sub-grid scale turbulence modeled using the stretched vortex model. Visualizations of the three-dimensional turbulent structures generated behind the centerbody will be presented. A span-wise instability of the wake behind the centerbody is initially dominant; further downstream, the shear layers coalesce into a mixing wake and develop the expected large-scale coherent span-wise vortices.

  9. Simulations of Large-Area Electron Beam Diodes

    NASA Astrophysics Data System (ADS)

    Swanekamp, S. B.; Friedman, M.; Ludeking, L.; Smithe, D.; Obenschain, S. P.

    1999-11-01

    Large area electron beam diodes are typically used to pump the amplifiers of KrF lasers. Simulations of large-area electron beam diodes using the particle-in-cell code MAGIC3D have shown the electron flow in the diode to be unstable. Since this instability can potentially produce a non-uniform current and energy distribution in the hibachi structure and lasing medium, it can be detrimental to laser efficiency. These results are similar to simulations performed using the ISIS code [M.E. Jones and V.A. Thomas, Proceedings of the 8th International Conference on High-Power Particle Beams, 665 (1990)]. We have identified the instability as the so-called "transit-time" instability [C.K. Birdsall and W.B. Bridges, Electrodynamics of Diode Regions (Academic Press, New York, 1966); T.M. Antonsen, W.H. Miner, E. Ott, and A.T. Drobot, Phys. Fluids 27, 1257 (1984)] and have investigated the role of the applied magnetic field and diode geometry. Experiments are underway to characterize the instability on the Nike KrF laser system and will be compared to simulation. Some possible ways to mitigate the instability will also be presented.

  10. Large-scale lattice-Boltzmann simulations over lambda networks

    NASA Astrophysics Data System (ADS)

    Saksena, R.; Coveney, P. V.; Pinning, R.; Booth, S.

    Amphiphilic molecules are of immense industrial importance, mainly due to their tendency to align at interfaces in a solution of immiscible species, e.g., oil and water, thereby reducing surface tension. Depending on the concentration of amphiphiles in the solution, they may assemble into a variety of morphologies, such as lamellae, micelles, sponge and cubic bicontinuous structures exhibiting non-trivial rheological properties. The main objective of this work is to study the rheological properties of very large, defect-containing gyroidal systems (of up to 1024³ lattice sites) using the lattice-Boltzmann method. Memory requirements for the simulation of such large lattices exceed what is available to us on most supercomputers, and so we use MPICH-G2/MPIg to investigate geographically distributed domain decomposition simulations across HPCx in the UK and TeraGrid in the US. Use of MPICH-G2/MPIg requires the port-forwarder to work with the grid middleware on HPCx. Data from the simulations are streamed to a high performance visualisation resource at UCL (London) for rendering and visualisation.