Science.gov

Sample records for large volume simulations

  1. Simulating cosmic reionization: how large a volume is large enough?

    NASA Astrophysics Data System (ADS)

    Iliev, Ilian T.; Mellema, Garrelt; Ahn, Kyungjin; Shapiro, Paul R.; Mao, Yi; Pen, Ue-Li

    2014-03-01

    We present the largest-volume (425 Mpc h^-1 = 607 Mpc on a side) full radiative transfer simulation of cosmic reionization to date. We show that there is significant additional power in density fluctuations at very large scales. We systematically investigate the effects this additional power has on the progress, duration and features of reionization and on selected reionization observables. We find that a comoving volume of ~100 Mpc h^-1 per side is sufficient for deriving a convergent mean reionization history, but that the reionization patchiness is significantly underestimated. We use jackknife splitting to quantify the convergence of reionization properties with simulation volume. We find that sub-volumes of ~100 Mpc h^-1 per side or larger yield convergent reionization histories, except for the earliest times, but smaller volumes of ~50 Mpc h^-1 or less are not well converged at any redshift. Reionization history milestones show significant scatter between the sub-volumes, as high as Δz ~ 1 for ~50 Mpc h^-1 volumes. If we only consider mean-density sub-regions the scatter decreases, but remains at Δz ~ 0.1-0.2 for the different size sub-volumes. Consequently, many potential reionization observables like the 21-cm rms, 21-cm PDF skewness and kurtosis all show good convergence for volumes of ~200 Mpc h^-1, but retain considerable scatter for smaller volumes. In contrast, the three-dimensional 21-cm power spectra at large scales (k < 0.25 h Mpc^-1) do not fully converge for any sub-volume size. These additional large-scale fluctuations significantly enhance the 21-cm fluctuations, which should improve the prospects of detection considerably, given the lower foregrounds and greater interferometer sensitivity at higher frequencies.
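
    As a minimal sketch of the sub-volume convergence test described above (assuming a gridded reionization-redshift field; the array, grid size, and splitting factor are illustrative, not from the paper):

    ```python
    import numpy as np

    def subvolume_scatter(z_reion, n_split):
        """Split a cubic field into n_split^3 sub-volumes and return the
        scatter (standard deviation) of the mean reionization redshift
        among them, mimicking the paper's jackknife-style splitting."""
        step = z_reion.shape[0] // n_split
        means = [z_reion[i*step:(i+1)*step, j*step:(j+1)*step, k*step:(k+1)*step].mean()
                 for i in range(n_split) for j in range(n_split) for k in range(n_split)]
        return np.std(means)

    # Illustrative stand-in field; splitting 4x4x4 mimics ~100 Mpc/h sub-volumes
    # of a ~400 Mpc/h parent box.
    rng = np.random.default_rng(0)
    z_reion = rng.normal(8.0, 0.5, (64, 64, 64))
    print(subvolume_scatter(z_reion, n_split=4))
    ```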

  2. Large Eddy Simulations of Volume Restriction Effects on Canopy-Induced Increased-Uplift Regions

    NASA Astrophysics Data System (ADS)

    Chatziefstratiou, E.; Bohrer, G.; Velissariou, V.

    2012-12-01

    Previous modeling and empirical work have shown the development of important areas of increased uplift past forward-facing steps, and recirculation zones past backward-facing steps. Forest edges represent a special kind of step: a semi-porous one. Current models of the effects of forest edges on the flow represent the forest with a prescribed drag term and do not account for the effects of the solid volume in the forest that restricts the airflow. The RAMS-based Forest Large Eddy Simulation (RAFLES) resolves flows inside and above forested canopies. RAFLES is spatially explicit, and uses the finite volume method to solve a discretized set of Navier-Stokes equations. It accounts for vegetation drag effects on the flow and on the flux exchange between the canopy and the canopy air, proportional to the local leaf density. For a better representation of the vegetation structure in the numerical grid within the canopy sub-domain, the model uses a modified version of the cut-cell coordinate system. The hard volume of vegetation elements (in forests) or buildings (in urban environments) within each numerical grid cell is represented via a sub-grid-scale process that shrinks the open apertures between grid cells and reduces the open cell volume. We used RAFLES to simulate the effects of a canopy of varying foliage and stem densities on flow over virtual cube-shaped barriers under neutrally buoyant conditions. We explicitly tested the effects of the numerical representation of volume restriction, independent of the effects of leaf drag, by comparing drag-only simulations, where we prescribed no volume or aperture restriction to the flow; restriction-only simulations, where we prescribed no drag; and control simulations, where both drag and volume plus aperture restriction were included. Our simulations show that representation of the effects of the volume and aperture restriction due to obstacles to flow is important (figure 1) and leads to differences in the

  3. A Parallel, Finite-Volume Algorithm for Large-Eddy Simulation of Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.

    1999-01-01

    A parallel, finite-volume algorithm has been developed for large-eddy simulation (LES) of compressible turbulent flows. This algorithm includes piecewise linear least-square reconstruction, trilinear finite-element interpolation, Roe flux-difference splitting, and second-order MacCormack time marching. Parallel implementation is done using the message-passing programming model. In this paper, the numerical algorithm is described. To validate the numerical method for turbulence simulation, LES of fully developed turbulent flow in a square duct is performed for a Reynolds number of 320 based on the average friction velocity and the hydraulic diameter of the duct. Direct numerical simulation (DNS) results are available for this test case, and the accuracy of this algorithm for turbulence simulations can be ascertained by comparing the LES solutions with the DNS results. The effects of grid resolution, upwind numerical dissipation, and subgrid-scale dissipation on the accuracy of the LES are examined. Comparison with DNS results shows that the standard Roe flux-difference splitting dissipation adversely affects the accuracy of the turbulence simulation. For accurate turbulence simulations, only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed.
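
    As a hedged illustration of the dissipation scaling quoted above (not the paper's actual Euler-equation scheme), here is a Roe-type flux for a scalar conservation law in which a coefficient scales the upwind dissipation term down to the 3-5 percent level:

    ```python
    def roe_flux(uL, uR, f, dfdu, eps=0.04):
        """Roe-type numerical flux for a scalar conservation law u_t + f(u)_x = 0.
        eps = 1 recovers the standard Roe flux; eps ~ 0.03-0.05 mirrors the
        3-5 percent dissipation level found sufficient in the paper."""
        a = dfdu(0.5 * (uL + uR))  # Roe-averaged wave speed (scalar case)
        return 0.5 * (f(uL) + f(uR)) - 0.5 * eps * abs(a) * (uR - uL)

    # Example: linear advection, f(u) = u
    print(roe_flux(1.0, 0.8, f=lambda u: u, dfdu=lambda u: 1.0))
    ```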

  4. Determination of the large scale volume weighted halo velocity bias in simulations

    NASA Astrophysics Data System (ADS)

    Zheng, Yi; Zhang, Pengjie; Jing, Yipeng

    2015-06-01

    A profound assumption in peculiar velocity cosmology is b_v = 1 at sufficiently large scales, where b_v is the volume-weighted halo (galaxy) velocity bias with respect to the matter velocity field. However, this fundamental assumption has not been robustly verified in numerical simulations. Furthermore, it is challenged by structure formation theory (Bardeen, Bond, Kaiser and Szalay, Astrophys. J. 304, 15 (1986); Desjacques and Sheth, Phys. Rev. D 81, 023526 (2010)), which predicts the existence of velocity bias (at least for proto-halos) due to the fact that halos reside in special regions (local density peaks). The major obstacle to measuring the volume-weighted velocity from N-body simulations is an unphysical sampling artifact. It is entangled in the measured velocity statistics and becomes significant for sparse populations. With recently improved understanding of the sampling artifact (Zhang, Zheng and Jing, 2015, PRD; Zheng, Zhang and Jing, 2015, PRD), for the first time we are able to appropriately correct this sampling artifact and then robustly measure the volume-weighted halo velocity bias. (1) We verify b_v = 1 within 2% model uncertainty at k ≲ 0.1 h/Mpc and z = 0-2 for halos of mass ~10^12-10^13 h^-1 M_⊙ and, therefore, consolidate a foundation for peculiar velocity cosmology. (2) We also find statistically significant signs of b_v ≠ 1 at k ≳ 0.1 h/Mpc. Unfortunately, whether this is real or caused by a residual sampling artifact requires further investigation. Nevertheless, cosmology based on k ≳ 0.1 h/Mpc velocity data should be careful with this potential velocity bias.
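
    A rough sketch of how a scale-dependent velocity bias could be estimated from gridded halo and matter velocity fields; the gridding, k-binning, and, crucially, the sampling-artifact correction central to the paper are all omitted, so this is illustrative only:

    ```python
    import numpy as np

    def velocity_bias_estimate(v_halo, v_matter):
        """Crude global estimate of b_v as the ratio of the halo-matter cross
        velocity power to the matter velocity auto power in Fourier space."""
        vh_k = np.fft.rfftn(v_halo)
        vm_k = np.fft.rfftn(v_matter)
        cross = (vh_k * np.conj(vm_k)).real.sum()
        auto = (np.abs(vm_k) ** 2).sum()
        return cross / auto

    rng = np.random.default_rng(1)
    v_m = rng.normal(size=(32, 32, 32))            # stand-in matter velocity field
    v_h = v_m + 0.01 * rng.normal(size=v_m.shape)  # nearly unbiased halo field
    print(velocity_bias_estimate(v_h, v_m))        # ~1.0
    ```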

  5. A parallel finite volume algorithm for large-eddy simulation of turbulent flows

    NASA Astrophysics Data System (ADS)

    Bui, Trong Tri

    1998-11-01

    A parallel unstructured finite volume algorithm is developed for large-eddy simulation of compressible turbulent flows. Major components of the algorithm include piecewise linear least-square reconstruction of the unknown variables, trilinear finite element interpolation for the spatial coordinates, Roe flux difference splitting, and second-order MacCormack explicit time marching. The computer code is designed from the start to take full advantage of the additional computational capability provided by current parallel computer systems. Parallel implementation is done using the message passing programming model and message passing libraries such as the Parallel Virtual Machine (PVM) and Message Passing Interface (MPI). The development of the numerical algorithm is presented in detail. The parallel strategy and issues regarding the implementation of a flow simulation code on the current generation of parallel machines are discussed. The results from parallel performance studies show that the algorithm is well suited for parallel computer systems that use the message passing programming model. Nearly perfect parallel speedup is obtained on MPP systems such as the Cray T3D and IBM SP2. Performance comparison with older supercomputer systems such as the Cray Y-MP shows that the simulations done on the parallel systems are approximately 10 to 30 times faster. The results of the accuracy and performance studies for the current algorithm are reported. To validate the flow simulation code, a number of Euler and Navier-Stokes simulations are done for internal duct flows. Inviscid Euler simulation of a very small amplitude acoustic wave interacting with a shock wave in a quasi-1D convergent-divergent nozzle shows that the algorithm is capable of simultaneously tracking the very small disturbances of the acoustic wave and capturing the shock wave. Navier-Stokes simulations are made for fully developed laminar flow in a square duct, developing laminar flow in a

  6. Measurements of Elastic and Inelastic Properties under Simulated Earth's Mantle Conditions in Large Volume Apparatus

    NASA Astrophysics Data System (ADS)

    Mueller, H. J.

    2012-12-01

    The interpretation of highly resolved seismic data from Earth's deep interior requires measurements of the physical properties of Earth's materials under experimentally simulated mantle conditions. More than a decade ago, seismic tomography clearly showed that subducted crustal material can reach the core-mantle boundary under specific circumstances. That means there is no longer room for the assumption that deep mantle rocks might be much less complex than the deep crustal rocks known from exhumation processes. In light of this, geophysical high-pressure research faces the challenge of increasing pressure and sample volume at the same time, so that in situ experiments can be performed on representative complex samples. High-performance multi-anvil devices using novel materials are the most promising technique for this exciting task. Recent large-volume presses provide sample volumes 3 to 7 orders of magnitude bigger than diamond anvil cells, far beyond transition-zone conditions. The sample size of several cubic millimeters allows elastic wave frequencies in the low to medium MHz range. Together with the small, and even adjustable, temperature gradients over the whole sample, this technique makes anisotropy and grain-boundary effects in complex systems accessible, in principle, to measurements of elastic and inelastic properties. Measurements of both elastic wave velocities are also unrestricted for opaque and encapsulated samples. The application of triple-mode transducers and the data-transfer-function technique for ultrasonic interferometry reduces the time needed to save the data during the experiment to about a minute or less. That makes real transient measurements under non-equilibrium conditions possible. A further benefit is that both elastic wave velocities are measured exactly simultaneously. Ultrasonic interferometry necessarily requires in situ measurement of sample deformation by X-radiography. Time-resolved X-radiography makes in situ falling-sphere viscosimetry and even the

  7. A computational error-assessment of central finite-volume discretizations in large-eddy simulation using a Smagorinsky model

    NASA Astrophysics Data System (ADS)

    Meyers, J.; Geurts, B. J.; Sagaut, P.

    2007-11-01

    We present a framework for the computational assessment and comparison of large-eddy simulation methods. We apply this to large-eddy simulation of homogeneous isotropic decaying turbulence using a Smagorinsky subgrid model and investigate the combined effect of discretization and model errors at coarse subgrid resolutions. We compare four different central finite-volume methods. These discretization methods arise from the four possible combinations that can be made with a second-order and a fourth-order central scheme for either the convective or the viscous fluxes. By systematically varying the simulation resolution and the Smagorinsky coefficient, we determine parameter regions for which a desired number of flow properties is simultaneously predicted with approximately minimal error. We include both physics-based and mathematics-based error definitions, leading to different error measures designed to emphasize errors in either large- or small-scale flow properties. It is shown that the evaluation of simulations based on a single physics-based error may lead to inaccurate perceptions of quality. We demonstrate, however, that evaluations based on a range of errors yield robust conclusions on accuracy, both for physics-based and mathematics-based errors. Parameter regions where all considered errors are simultaneously near-optimal are referred to as 'multi-objective optimal' parameter regions. The effects of discretization errors are particularly important at marginal spatial resolution. Such resolutions reflect local simulation conditions that may also be found in parts of more complex flow simulations. Under these circumstances, the asymptotic error behavior as expressed by the order of the spatial discretization is no longer characteristic of the total dynamic consequences of discretization errors. We find that the level of overall simulation errors for a second-order central discretization of both the convective and viscous fluxes (the '2-2' method), and the
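
    A toy illustration of the 'multi-objective optimal' idea, assuming precomputed error measures on a (resolution, Smagorinsky coefficient) grid; the specific thresholding rule here is ours, not the paper's:

    ```python
    import numpy as np

    # errors[i, j, m]: error measure m on a grid of resolutions i and
    # Smagorinsky coefficients j (random stand-in data for illustration)
    rng = np.random.default_rng(2)
    errors = rng.uniform(0.0, 1.0, (8, 10, 4))

    # Call a parameter pair "multi-objective near-optimal" when every error
    # measure is within a small band above its own global minimum.
    mins = errors.min(axis=(0, 1))
    band = mins + 0.1 * (errors.max(axis=(0, 1)) - mins)
    near_optimal = np.all(errors <= band, axis=2)
    print(np.argwhere(near_optimal))  # (resolution, coefficient) index pairs
    ```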

  8. Resolving the Effects of Aperture and Volume Restriction of the Flow by Semi-Porous Barriers Using Large-Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Chatziefstratiou, Efthalia K.; Velissariou, Vasilia; Bohrer, Gil

    2014-09-01

    The Regional Atmospheric Modelling System (RAMS)-based Forest Large-Eddy Simulation (RAFLES) model is used to simulate the effects of large rectangular-prism-shaped semi-porous barriers of varying densities under neutrally buoyant conditions. RAFLES resolves flows inside and above forested canopies and other semi-porous barriers, and it accounts for barrier-induced drag on the flow and surface flux exchange between the barrier and the air. Unlike most other models, RAFLES also accounts for the barrier-induced volume and aperture restriction via a modified version of the cut-cell coordinate system. We explicitly tested the effects of the numerical representation of volume restriction, independent of the effects of the drag, by comparing drag-only simulations (where we prescribed neither volume nor aperture restrictions to the flow), restriction-only simulations (where we prescribed no drag), and control simulations where both drag and volume plus aperture restrictions were included. Previous modelling and empirical work have revealed the development of important areas of increased uplift upwind of forward-facing steps, and recirculation zones downwind of backward-facing steps. Our simulations show that representation of the effects of the volume and aperture restriction due to the presence of semi-porous barriers leads to differences in the strengths and locations of increased-updraft and recirculation zones, and in the length and strength of impact and adjustment zones, when compared to simulation solutions with a drag-only representation. These differences are mostly driven by changes to the momentum budget of the streamwise wind velocity by resolved turbulence and pressure-gradient fields around the front and back edges of the barrier. We propose that volume plus aperture restriction is an important component of the flow system in semi-porous environments such as forests and cities and should be considered by large-eddy simulation (LES).

  9. Guidelines for Volume Force Distributions Within Actuator Line Modeling of Wind Turbines on Large-Eddy Simulation-Type Grids

    SciTech Connect

    Jha, Pankaj K.; Churchfield, Matthew J.; Moriarty, Patrick J.; Schmitz, Sven

    2014-01-10

    The objective of this work is to develop and test a set of general guidelines for choosing parameters to be used in the state-of-the-art actuator line method (ALM) for modeling wind turbine blades in computational fluid dynamics (CFD). The actuator line method is being increasingly used for the computation of wake interactions in large wind farms in which fully blade-resolving simulations are expensive and require complicated rotating meshes. The focus is on actuator line behavior using fairly isotropic grids of low aspect ratio typically used for large-eddy simulation (LES). Forces predicted along the actuator lines need to be projected onto the flow field as body forces, and this is commonly accomplished using a volumetric projection. In this study, particular attention is given to the spanwise distribution of the radius of this projection. A new method is proposed where the projection radius varies along the blade span following an elliptic distribution. The proposed guidelines for actuator line parameters are applied to the National Renewable Energy Laboratory's (NREL's) Phase VI rotor and the NREL 5-MW turbine. Results obtained are compared with available data and the blade-element code XTurb-PSU. It is found that the new criterion for the projection radius leads to improved prediction of blade tip loads for both blade designs.
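
    A minimal sketch of the volumetric body-force projection used in actuator line methods, with an elliptic spanwise variation of the projection radius in the spirit of the guideline above (the Gaussian kernel is standard ALM practice; the specific constants and the blade length are placeholders):

    ```python
    import numpy as np

    def projection_radius(r, R, eps_min=1.0, eps_max=4.0):
        """Elliptic spanwise distribution of the Gaussian projection radius,
        tapering toward the blade tip at r = R; eps_min keeps it positive."""
        return eps_min + (eps_max - eps_min) * np.sqrt(np.clip(1.0 - (r / R) ** 2, 0.0, 1.0))

    def body_force(dist, F, eps):
        """Project an actuator-point force F onto a grid point at distance
        dist using the standard 3D Gaussian regularization kernel."""
        return F * np.exp(-(dist / eps) ** 2) / (eps ** 3 * np.pi ** 1.5)

    # NREL 5-MW-like span of ~63 m, sampled at root, mid-span, and near-tip
    print(projection_radius(np.array([0.0, 30.0, 60.0]), R=63.0))
    ```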

  10. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L. |; Rickert, M. |

    1997-04-01

    Large scale microscopic (i.e., vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual persons' behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches depend both on the specific questions and on the prospective user community. The approaches range from highly parallel and vectorizable single-bit implementations on parallel supercomputers for statistical physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented, again, on parallel supercomputers.
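
    For flavor, a minimal single-lane cellular-automaton update in the Nagel-Schreckenberg style, close in spirit to the single-bit designs mentioned above (parameters are textbook values; the authors' production implementations are far more elaborate):

    ```python
    import numpy as np

    def nasch_step(pos, vel, road_len, v_max=5, p_slow=0.3, rng=np.random.default_rng(3)):
        """One Nagel-Schreckenberg update on a circular road.
        pos: vehicle cell indices; vel: current integer speeds."""
        order = np.argsort(pos)
        pos, vel = pos[order], vel[order]
        gaps = (np.roll(pos, -1) - pos - 1) % road_len   # empty cells ahead
        vel = np.minimum(vel + 1, v_max)                 # accelerate
        vel = np.minimum(vel, gaps)                      # avoid collisions
        vel = np.where(rng.random(len(vel)) < p_slow,    # random slowdown
                       np.maximum(vel - 1, 0), vel)
        return (pos + vel) % road_len, vel

    pos, vel = np.array([0, 3, 7, 12]), np.zeros(4, dtype=int)
    for _ in range(5):
        pos, vel = nasch_step(pos, vel, road_len=50)
    print(pos, vel)
    ```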

  11. Challenges for Large Scale Simulations

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    2010-03-01

    With computational approaches becoming ubiquitous, the growing impact of large-scale computing on research influences both theoretical and experimental work. I will review a few examples in condensed matter physics and quantum optics, including the impact of computer simulations in the search for supersolidity, thermometry in ultracold quantum gases, and the challenging search for novel phases in strongly correlated electron systems. While only a decade ago such simulations needed the fastest supercomputers, many simulations can now be performed on small workstation clusters or even a laptop: what was previously restricted to a few experts can now potentially be used by many. Only part of the gain in computational capabilities is due to Moore's law and improvements in hardware. Equally impressive is the performance gain due to new algorithms, as I will illustrate using some recently developed algorithms. At the same time, modern peta-scale supercomputers offer unprecedented computational power and allow us to tackle new problems and address questions that were impossible to solve numerically only a few years ago. While there is a roadmap for future hardware developments to exascale and beyond, the main challenges are on the algorithmic and software infrastructure side. Among the problems that face the computational physicist are: the development of new algorithms that scale to thousands of cores and beyond; a software infrastructure that lifts code development to a higher level and speeds up the development of new simulation programs for large-scale computing machines; tools to analyze the large volume of data obtained from such simulations; and, as an emerging field, provenance-aware software that aims for reproducibility of the complete computational workflow from model parameters to the final figures. Interdisciplinary collaborations and collective efforts will be required, in contrast to the cottage-industry culture currently present in many areas of computational

  12. A new development of the dynamic procedure in large-eddy simulation based on a Finite Volume integral approach. Application to stratified turbulence

    NASA Astrophysics Data System (ADS)

    Denaro, Filippo Maria; de Stefano, Giuliano

    2011-10-01

    A Finite Volume-based large-eddy simulation method is proposed, along with a suitable extension of the dynamic modelling procedure that takes into account the integral formulation of the governing filtered equations. The misleading interpretation of FV in parts of the literature is also discussed. The classical Germano identity is then congruently rewritten in such a way that the determination of the modelling parameters does not require any arbitrary averaging procedure and thus retains a fully local character. The numerical modelling of stratified turbulence is the specific problem considered in this study, as an archetype of simple geophysical flows. The original scaling formulation of the dynamic sub-grid scale model proposed by Wong and Lilly (Phys. Fluids 6(6), 1994) is suitably extended to the present integral formulation. This approach is preferred to traditional ones since the eddy coefficients can be computed independently, avoiding the addition of unjustified buoyancy production terms in the constitutive equations. Simple scaling arguments allow us to avoid the equilibrium hypothesis, according to which the dissipation rate should equal the sub-grid scale energy production. A careful a priori analysis of the relevance of the test filter shape as well as the filter-to-grid ratio is reported. Large-eddy simulation results are a posteriori compared with a reference pseudo-spectral direct numerical solution that is suitably post-filtered in order to have a meaningful comparison. In particular, the spectral distribution of kinetic and thermal energy as well as the viscosity and diffusivity sub-grid scale profiles are illustrated. The good performance of the proposed method, in terms of both the evolution of global quantities and statistics, is very promising for the future development and application of the method.
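
    For reference, the classical Germano identity that the paper rewrites in integral form, in standard dynamic-procedure notation (this is the textbook differential-form version, not the paper's FV extension):

    ```latex
    % Germano identity: the resolved stress L_ij links the grid-level (bar)
    % and test-level (hat) subgrid stresses tau_ij and T_ij:
    \mathcal{L}_{ij} = T_{ij} - \widehat{\tau}_{ij}
                     = \widehat{\bar{u}_i \bar{u}_j} - \hat{\bar{u}}_i \hat{\bar{u}}_j
    % Least-squares dynamic coefficient (Lilly), with M_ij the difference of
    % the modeled stresses at the two filter levels; the angle brackets denote
    % the averaging that the paper's local integral formulation avoids:
    C = \frac{\langle \mathcal{L}_{ij} M_{ij} \rangle}{\langle M_{ij} M_{ij} \rangle}
    ```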

  13. Applied large eddy simulation.

    PubMed

    Tucker, Paul G; Lardeau, Sylvain

    2009-07-28

    Large eddy simulation (LES) is now seen more and more as a viable alternative to current industrial practice, usually based on problem-specific Reynolds-averaged Navier-Stokes (RANS) methods. Access to detailed flow physics is attractive to industry, especially in an environment in which computer modelling is bound to play an ever-increasing role. However, the improvement in accuracy and flow detail comes at substantial cost, which has so far prevented wider industrial use of LES. The purpose of the applied LES discussion meeting was to address questions regarding what is achievable and what is not, given the current technology and knowledge, for an industrial practitioner who is interested in using LES. The use of LES was explored in an application-centred context between diverse fields. The general flow-governing equation form was explored along with various LES models. The errors occurring in LES were analysed. Also, the hybridization of RANS and LES was considered. The importance of modelling relative to boundary conditions, problem definition and other more mundane aspects was examined. It was to an extent concluded that for LES to make the most rapid industrial impact, pragmatic hybrid use of LES, implicit LES and RANS elements will probably be needed. Beyond this, further, highly industry-sector-specific model parametrizations will be required, with clear thought given to the key target design parameter(s). The combination of good numerical modelling expertise and a sound understanding of turbulence, along with artistry, pragmatism and the use of recent developments in computer science, should dramatically add impetus to the industrial uptake of LES. In the light of the numerous technical challenges that remain, it appears that for some time to come LES will have echoes of the high levels of technical knowledge required for safe use of RANS, but with much greater fidelity.

  14. Large Eddy Simulation of Bubbly Flow and Slag Layer Behavior in Ladle with Discrete Phase Model (DPM)-Volume of Fluid (VOF) Coupled Model

    NASA Astrophysics Data System (ADS)

    Li, Linmin; Liu, Zhongqiu; Cao, Maoxue; Li, Baokuan

    2015-07-01

    In the ladle metallurgy process, bubble movement and slag layer behavior are very important to the refining process and steel quality. In bubble-liquid flow, bubble movement plays a significant role in the phase structure and causes an unsteady, complex turbulent flow pattern. This is one of the most crucial shortcomings of current two-fluid models. In the current work, a one-third scale water model is established to investigate bubble movement and slag open-eye formation. A new mathematical model using large eddy simulation (LES) is developed for the bubble-liquid-slag-air four-phase flow in the ladle. The Eulerian volume of fluid (VOF) model is used for tracking the liquid-slag-air free surfaces and the Lagrangian discrete phase model (DPM) is used for describing the bubble movement. The turbulent liquid flow induced by bubble-liquid interactions is solved by LES. The procedure of a bubble leaving the liquid and entering the air is modeled using a user-defined function. The results show that the present LES-DPM-VOF coupled model is good at predicting the unsteady bubble movement, slag eye formation, interface fluctuation, and slag entrainment.
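
    A heavily simplified sketch of a Lagrangian (DPM-style) bubble update under drag and buoyancy, of the kind coupled to the resolved liquid velocity in such models; the drag law and constants here are generic placeholders, not the paper's closures:

    ```python
    import numpy as np

    def bubble_step(x, v, u_liquid, dt, d_b=2e-3, rho_l=1000.0, rho_b=1.2,
                    g=9.81, Cd=0.44):
        """Advance one bubble: sphere drag toward the local liquid velocity
        plus buoyancy. x, v: bubble position/velocity (3-vectors)."""
        rel = u_liquid - v
        drag = 0.75 * Cd * rho_l / (rho_b * d_b) * np.linalg.norm(rel) * rel
        buoy = np.array([0.0, 0.0, (rho_l - rho_b) / rho_b * g])
        v_new = v + dt * (drag + buoy)
        return x + dt * v_new, v_new

    x, v = np.zeros(3), np.zeros(3)
    x, v = bubble_step(x, v, u_liquid=np.array([0.05, 0.0, 0.0]), dt=1e-4)
    print(x, v)
    ```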

  15. Large volume manufacture of dymalloy

    SciTech Connect

    1998-06-22

    The purpose of this research was to test the commercial viability and feasibility of Dymalloy, a high-thermal-conductivity composite material. Dymalloy was developed as part of a CRADA with Sun Microsystems, a potential end user of Dymalloy as a substrate for MCMs (multi-chip modules). Sun had no desire to be involved in the manufacture of this material. The goal of this small business CRADA with Spectra Mat was to establish the high-volume commercial manufacturing source for Dymalloy required by an end user such as Sun Microsystems. The difference between the fabrication technique developed during the CRADA and this proposed work related to the mechanical technique of coating the diamond powder. Mechanical parts for the high-volume diamond powder coating process existed; however, they needed to be installed in an existing coating system for evaluation. Sputtering systems similar to the one required for this project were available at LLNL. Once the diamond powder was coated, both LLNL and Spectra Mat could make and test the Dymalloy composites. Spectra Mat manufactured Dymalloy composites in order to evaluate and establish a reasonable cost estimate based on their existing processing capabilities. This information was used by Spectra Mat to define the market and the cost-competitive products that could be commercialized from this new substrate material.

  16. Large-scale circuit simulation

    NASA Astrophysics Data System (ADS)

    Wei, Y. P.

    1982-12-01

    The simulation of VLSI (Very Large Scale Integration) circuits falls beyond the capabilities of conventional circuit simulators like SPICE. On the other hand, conventional logic simulators can only give the results of logic levels 1 and 0, with the attendant loss of detail in the waveforms. The aim of developing large-scale circuit simulation is to bridge the gap between conventional circuit simulation and logic simulation. This research investigates new approaches for fast and relatively accurate time-domain simulation of MOS (Metal Oxide Semiconductor), LSI (Large Scale Integration) and VLSI circuits. New techniques and new algorithms are studied in the following areas: (1) analysis sequencing, (2) nonlinear iteration, (3) a modified Gauss-Seidel method, and (4) latency criteria and a timestep control scheme. The developed methods have been implemented in a simulation program, PREMOS, which can be used as a design verification tool for MOS circuits.

  17. LARGE BUILDING HVAC SIMULATION

    EPA Science Inventory

    The report discusses the monitoring and collection of data relating to indoor pressures and radon concentrations under several test conditions in a large school building in Bartow, Florida. The Florida Solar Energy Center (FSEC) used an integrated computational software, FSEC 3.0...

  18. Hierarchical simulation of large system

    NASA Technical Reports Server (NTRS)

    Saab, Daniel G.

    1991-01-01

    The main problem facing current CAD tools for VLSIs is the large amount of memory required when dealing with large systems, primarily due to the circuit representation used by most current tools. This paper discusses an approach for hierarchical switch-level simulation of digital circuits. The approach exploits the hierarchy to reduce the memory requirements of the simulation, allowing the simulation of circuits that are too large to simulate at one flat level. The approach has been implemented in a hierarchical switch-level simulator, CHAMP, which runs on a SUN workstation. The program performs mixed mode simulation: parts of the circuit can be simulated faster at a behavioral level by supplying a high level software description. CHAMP allows assignable delays, and bidirectional signal flow inside circuit blocks that are represented as transistor networks as well as across the boundaries of higher level blocks. CHAMP is also unique in that it simulates directly from the hierarchical circuit description without flattening to a single level.

  19. A large number of fast cosmological simulations

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Kazin, E.; Blake, C.

    2014-01-01

    Mock galaxy catalogs are essential tools for analyzing large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new, improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density-field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600 h^-1 Mpc on a side, which is the minimum requirement set by the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes on 216 computing cores. We have completed the 3600 simulations with a reasonable computation time of 200k core hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.
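
    The COLA idea, schematically: the particle trajectory is split into an analytic Lagrangian-perturbation-theory (LPT) part and a residual that the N-body integrator evolves, so a handful of large time steps stays accurate at large scales (schematic form only; see Tassev, Zaldarriaga and Eisenstein 2013 for the actual scheme):

    ```latex
    % Trajectory split: the code time-steps only the residual displacement
    % delta_x in the frame comoving with LPT observers:
    \mathbf{x}(t) = \mathbf{x}_{\mathrm{LPT}}(t) + \delta\mathbf{x}(t), \qquad
    \partial_t^2\, \delta\mathbf{x} = -\nabla\Phi \;-\; \partial_t^2\, \mathbf{x}_{\mathrm{LPT}}
    ```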

  1. Plasmoids formation in a laboratory and large-volume flux closure during simulations of Coaxial Helicity Injection in NSTX-U

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Fatima

    2016-10-01

    In NSTX-U, transient Coaxial Helicity Injection (CHI) is the primary method for current generation without reliance on the solenoid. A CHI discharge is generated by driving current along open field lines (the injector flux) that connect the inner and outer divertor plates on NSTX/NSTX-U, and has generated over 200 kA of toroidal current on closed flux surfaces in NSTX. Extrapolation of the concept to larger devices requires an improved understanding of the physics of flux closure and of the governing parameters that maximize the fraction of injected flux converted to useful closed flux. Here, through comprehensive resistive MHD NIMROD simulations conducted for the NSTX and NSTX-U geometries, two major new findings will be reported. First, formation of an elongated Sweet-Parker current sheet and a transition to plasmoid instability has for the first time been demonstrated by realistic global simulations. This is the first observation of plasmoid instability in a laboratory device configuration predicted by realistic MHD simulations and then supported by experimental camera images from NSTX. Second, simulations have now, for the first time, been able to show conversion of a large fraction of the injected open flux to closed flux in the NSTX-U geometry. Consistent with the experiment, simulations also show that reconnection could occur at every stage of the helicity injection phase. The influence of 3D effects, and the parameter range that supports these important new findings, are now being studied to understand the impact of toroidal magnetic field and electron temperature, both of which are projected to increase in larger ST devices. Work supported by DOE DE-SC0010565.

  2. Large volume flow-through scintillating detector

    DOEpatents

    Gritzo, Russ E.; Fowler, Malcolm M.

    1995-01-01

    A large-volume, flow-through radiation detector for use in large air flow situations, such as incinerator stacks or building air systems, comprises a plurality of flat plates made of a scintillating material arranged parallel to the air flow. Each scintillating plate has an attached light guide which transfers light generated inside the plate to an associated photomultiplier tube. The outputs of the photomultiplier tubes are connected to electronics which record any radiation and can provide an alarm if appropriate for the application.

  3. Mesoscale Ocean Large Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Pearson, Brodie; Fox-Kemper, Baylor; Bachman, Scott; Bryan, Frank

    2015-11-01

    The highest resolution global climate models (GCMs) can now resolve the largest scales of mesoscale dynamics in the ocean. This has the potential to increase the fidelity of GCMs. However, the effects of the smallest, unresolved, scales of mesoscale dynamics must still be parametrized. One such family of parametrizations is mesoscale ocean large eddy simulations (MOLES), but the effects of including MOLES in a GCM are not well understood. In this presentation, several MOLES schemes are implemented in a mesoscale-resolving GCM (CESM), and the resulting flow is compared with that produced by more traditional sub-grid parametrizations. Large eddy simulation (LES) is used to simulate flows where the largest scales of turbulent motion are resolved, but the smallest scales are not. LES has traditionally been used to study 3D turbulence, but recently it has also been applied to idealized 2D and quasi-geostrophic (QG) turbulence. The MOLES presented here are based on 2D and QG LES schemes.

  4. Safety considerations in large-volume lipoplasty.

    PubMed

    Giese, S Y

    2001-11-01

    Proper patient selection, diligent fluid management, and attention to body temperature are important safety considerations in large-volume lipoplasty (LVL). Complications related to fluid overload, lidocaine toxicity, coagulopathies, and lengthy combined surgical procedures are preventable and not directly linked to LVL technique. Benefits as well as morbidity and mortality from LVL can be weighed against risk factors such as obesity, a prediabetic condition, and/or adverse effects of weight-loss medications. The author describes how she incorporates safeguards into her LVL procedures. (Aesthetic Surg J 2001;21:545-548.).

  5. Progressive volume rendering of large unstructured grids.

    PubMed

    Callahan, Steven P; Bavoil, Louis; Pascucci, Valerio; Silva, Cláudio T

    2006-01-01

    We describe a new progressive technique that allows real-time rendering of extremely large tetrahedral meshes. Our approach uses a client-server architecture to incrementally stream portions of the mesh from a server to a client which refines the quality of the approximate rendering until it converges to a full quality rendering. The results of previous steps are re-used in each subsequent refinement, thus leading to an efficient rendering. Our novel approach keeps very little geometry on the client and works by refining a set of rendered images at each step. Our interactive representation of the dataset is efficient, light-weight, and high quality. We present a framework for the exploration of large datasets stored on a remote server with a thin client that is capable of rendering and managing full quality volume visualizations.

  6. Large mode-volume, large beta, photonic crystal laser resonator

    SciTech Connect

    Dezfouli, Mohsen Kamandar; Dignam, Marc M.

    2014-12-15

    We propose an optical resonator formed by coupling 13 L2 defects in a triangular-lattice photonic crystal slab. Using a tight-binding formalism, we optimized the coupled-defect cavity design to obtain a resonator with predicted single-mode operation, a mode volume five times that of an L2-cavity mode, and a beta factor of 0.39. The results are confirmed using finite-difference time-domain simulations. This resonator is very promising for use as a single-mode photonic crystal vertical-cavity surface-emitting laser with high saturation output power compared to a laser consisting of one of the single-defect cavities.

  7. Temporal Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. D.; Thomas, B. C.

    2004-01-01

    In 1999, Stolz and Adams unveiled a subgrid-scale model for LES based upon approximately inverting (defiltering) the spatial grid-filter operator, termed the approximate deconvolution model (ADM). Subsequently, the utility and accuracy of the ADM were demonstrated in a posteriori analyses of flows as diverse as incompressible plane-channel flow and supersonic compression-ramp flow. In a prelude to the current paper, a parameterized temporal ADM (TADM) was developed and demonstrated in both a priori and a posteriori analyses for forced, viscous Burgers flow. The development of a time-filtered variant of the ADM was motivated primarily by the desire for a unifying theoretical and computational context to encompass direct numerical simulation (DNS), large-eddy simulation (LES), and Reynolds-averaged Navier-Stokes (RANS) simulation. The resultant methodology was termed temporal LES (TLES). To permit exploration of the parameter space, however, previous analyses of the TADM were restricted to Burgers flow, and it has remained to demonstrate the TADM and TLES methodology for three-dimensional flow. For several reasons, plane-channel flow presents an ideal test case for the TADM. Among these reasons, channel flow is anisotropic, yet it lends itself to highly efficient and accurate spectral numerical methods. Moreover, channel flow has been investigated extensively by DNS, and the highly accurate database of Moser et al. exists. In the present paper, we develop a fully anisotropic TADM model and demonstrate its utility in simulating incompressible plane-channel flow at nominal values of Re_τ = 180 and Re_τ = 590 by the TLES method. The TADM model is shown to perform nearly as well as the ADM at equivalent resolution, thereby establishing TLES as a viable alternative to LES. Moreover, as the current model is suboptimal in some respects, there is considerable room to improve TLES.
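
    The core of the ADM that the temporal variant builds on, in the usual van Cittert form: the inverse of the filter G is approximated by a truncated series, and the deconvolved field is substituted into the nonlinear terms (standard ADM notation, e.g. Stolz and Adams 1999; the temporal model replaces the spatial filter with a causal time filter):

    ```latex
    % Approximate inverse of the filter G by truncated van Cittert iteration:
    Q_N = \sum_{k=0}^{N} (I - G)^k \approx G^{-1}, \qquad
    u^{\star} = Q_N \, \bar{u}
    % The subgrid term is closed by evaluating the nonlinear flux with the
    % deconvolved field u* instead of the unavailable unfiltered field u.
    ```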

  8. LARGE volume string compactifications at finite temperature

    SciTech Connect

    Anguelova, Lilia; Calò, Vincenzo; Cicoli, Michele

    2009-10-01

    We present a detailed study of the finite-temperature behaviour of the LARGE Volume type IIB flux compactifications. We show that certain moduli can thermalise at high temperatures. Despite that, their contribution to the finite-temperature effective potential is always negligible and the latter has a runaway behaviour. We compute the maximal temperature T_max, above which the internal space decompactifies, as well as the temperature T_*, which is reached after the decay of the heaviest moduli. The natural constraint T_* < T_max implies a lower bound on the allowed values of the internal volume V. We find that this restriction rules out a significant range of values corresponding to smaller volumes of the order V ~ 10^4 l_s^6, which lead to standard GUT theories. Instead, the bound favours values of the order V ~ 10^15 l_s^6, which lead to TeV-scale SUSY desirable for solving the hierarchy problem. Moreover, our result favours low-energy inflationary scenarios with density perturbations generated by a field which is not the inflaton. In such a scenario, one could achieve both inflation and TeV-scale SUSY, although gravity waves would not be observable. Finally, we pose a two-fold challenge for the solution of the cosmological moduli problem. First, we show that the heavy moduli decay before they can begin to dominate the energy density of the Universe. Hence they are not able to dilute any unwanted relics. And second, we argue that, in order to obtain thermal inflation in the closed string moduli sector, one needs to go beyond the present EFT description.

  9. SUSY's Ladder: reframing sequestering at Large Volume

    NASA Astrophysics Data System (ADS)

    Reece, Matthew; Xue, Wei

    2016-04-01

    Theories with approximate no-scale structure, such as the Large Volume Scenario, have a distinctive hierarchy of multiple mass scales in between TeV gaugino masses and the Planck scale, which we call SUSY's Ladder. This is a particular realization of Split Supersymmetry in which the same small parameter suppresses gaugino masses relative to scalar soft masses, scalar soft masses relative to the gravitino mass, and the UV cutoff or string scale relative to the Planck scale. This scenario has many phenomenologically interesting properties, and can avoid dangers including the gravitino problem, flavor problems, and the moduli-induced LSP problem that plague other supersymmetric theories. We study SUSY's Ladder using a superspace formalism that makes the mysterious cancellations in previous computations manifest. This opens the possibility of a consistent effective field theory understanding of the phenomenology of these scenarios, based on power-counting in the small ratio of string to Planck scales. We also show that four-dimensional theories with approximate no-scale structure enforced by a single volume modulus arise only from two special higher-dimensional theories: five-dimensional supergravity and ten-dimensional type IIB supergravity. This gives a phenomenological argument in favor of ten-dimensional ultraviolet physics which is different from standard arguments based on the consistency of superstring theory.

  10. Comments on large-N volume independence

    SciTech Connect

    Poppitz, Erich; Unsal, Mithat

    2010-06-02

    We study aspects of the large-N volume independence on R^3 × L^Γ, where L^Γ is a Γ-site lattice for Yang-Mills theory with adjoint Wilson fermions. We find the critical number of lattice sites above which the center-symmetry analysis on L^Γ agrees with the one on the continuum S^1. For Wilson parameter set to one and Γ ≥ 2, the two analyses agree. One-loop radiative corrections to Wilson-line masses are finite, reminiscent of the UV-insensitivity of the Higgs mass in deconstruction/Little-Higgs theories. Even for theories with Γ = 1, volume independence in QCD(adj) may be guaranteed to work by tuning one low-energy effective field theory parameter. Within the parameter space of the theory, at most three operators of the 3d effective field theory exhibit one-loop UV-sensitivity. This opens the analytical prospect of studying 4d non-perturbative physics by using lower-dimensional field theories (d = 3, in our example).

  11. Why Matter Occupies so Large a Volume?

    NASA Astrophysics Data System (ADS)

    Manoukian, E. B.

    2013-12-01

    The paper presents a rigorous treatment of the underlying quantum theory, not just in words but with the supporting technical details, of why matter occupies so large a volume and of its intimate connection with the Pauli exclusion principle as more and more matter is put together, as well as of the contraction or shrinkage of “bosonic matter” upon collapse, for which the Pauli exclusion principle is abolished. From explicit bounds derived on integrals of powers of the particle number densities, explicit bounds on the probabilities of the occurrences of the events just described are extracted. These probabilities lead one to infer the change in the “size” or extension of such matter, upon expansion or contraction respectively, as its content is increased.

  12. Large area pulsed solar simulator

    NASA Technical Reports Server (NTRS)

    Kruer, Mark A. (Inventor)

    1999-01-01

    An advanced solar simulator illuminates the surface of a very large solar array, such as one twenty feet by twenty feet in area, from a distance of about twenty-six feet with an essentially uniform intensity field of pulsed light of an intensity of one AM0, enabling the solar array to be efficiently tested with light that emulates the sun. Light modifiers sculpt a portion of the light generated by an electrically powered high-power xenon lamp and, together with direct light from the lamp, provide uniform-intensity illumination throughout the solar array, compensating for the square-law and cosine-law reduction in direct light intensity, particularly at the corner locations of the array. At any location within the array, the sum of the direct light and reflected light is essentially constant.
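
    As a quick check of why corner compensation is needed: the direct irradiance at a point offset ρ from the array center, for a lamp at distance d on the axis, falls off as cos θ / r² with r² = d² + ρ² and cos θ = d/r. A worked example with the quoted geometry (our arithmetic, not from the patent):

    ```python
    import math

    d = 26.0                      # lamp-to-array distance, feet
    rho = math.hypot(10.0, 10.0)  # center-to-corner offset of a 20 ft x 20 ft array
    r = math.hypot(d, rho)
    falloff = (d / r) * (d ** 2 / r ** 2)  # cosine law times inverse-square law
    print(f"corner receives {falloff:.2f}x the center direct intensity")  # ~0.68
    ```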

  13. Large Eddy Simulations in Astrophysics

    NASA Astrophysics Data System (ADS)

    Schmidt, Wolfram

    2015-12-01

    In this review, the methodology of large eddy simulations (LES) is introduced and applications in astrophysics are discussed. As theoretical framework, the scale decomposition of the dynamical equations for neutral fluids by means of spatial filtering is explained. For cosmological applications, the filtered equations in comoving coordinates are also presented. To obtain a closed set of equations that can be evolved in LES, several subgrid-scale models for the interactions between numerically resolved and unresolved scales are discussed, in particular the subgrid-scale turbulence energy equation model. It is then shown how model coefficients can be calculated, either by dynamic procedures or, a priori, from high-resolution data. For astrophysical applications, adaptive mesh refinement is often indispensable. It is shown that the subgrid-scale turbulence energy model allows for a particularly elegant and physically well-motivated way of preserving momentum and energy conservation in adaptive mesh refinement (AMR) simulations. Moreover, the notion of shear-improved models for inhomogeneous and non-stationary turbulence is introduced. Finally, applications of LES to turbulent combustion in thermonuclear supernovae, star formation and feedback in galaxies, and cosmological structure formation are reviewed.

  14. Efficiency calibration and coincidence summing correction for a large volume (946 cm³) LaBr3(Ce) detector: GEANT4 simulations and experimental measurements.

    PubMed

    Dhibar, M; Mankad, D; Mazumdar, I; Kumar, G Anil

    2016-12-01

    The paper describes studies of the efficiency calibration and coincidence summing correction for a 3.5″ × 6″ cylindrical LaBr3(Ce) detector. GEANT4 simulations were made with point sources, namely ⁶⁰Co, ⁹⁴Nb, ²⁴Na, ⁴⁶Sc and ²²Na. The simulated efficiencies, extracted using ⁶⁰Co, ⁹⁴Nb, ²⁴Na and ⁴⁶Sc, which emit coincident gamma rays with the same decay intensities, were corrected for coincidence summing by applying the method proposed by Vidmar et al. (2003). The method was applied, for the first time, to correcting the simulated efficiencies extracted using ²²Na, which emits coincident gamma rays with different decay intensities. The measured results obtained using ⁶⁰Co and ²²Na were found to be in good agreement with the simulated results.
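
    For intuition, the textbook summing-out relation for a two-step cascade (e.g. the two ⁶⁰Co lines): counts are lost from a full-energy peak whenever the companion gamma deposits any energy, so the apparent peak efficiency is reduced by the companion's total efficiency. This is the generic relation, not the Vidmar et al. method used in the paper, and the numbers below are hypothetical:

    ```python
    def summing_corrected_efficiency(eps_peak_apparent, eps_total_companion):
        """Recover the true full-energy-peak efficiency from the apparent one
        for a two-gamma cascade: eps_app = eps_true * (1 - eps_tot_companion)."""
        return eps_peak_apparent / (1.0 - eps_total_companion)

    # Hypothetical numbers for a large-volume detector at close geometry:
    print(summing_corrected_efficiency(0.045, 0.30))  # ~0.064
    ```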

  15. Radiation from Large Gas Volumes and Heat Exchange in Steam Boiler Furnaces

    SciTech Connect

    Makarov, A. N.

    2015-09-15

    Radiation from large cylindrical gas volumes is studied as a means of simulating the flare in steam boiler furnaces. Calculations of heat exchange in a furnace by the zonal method and by simulation of the flare with cylindrical gas volumes are described. The latter method is more accurate and yields more reliable information on heat transfer processes taking place in furnaces.

  16. Large-eddy simulation of propeller noise

    NASA Astrophysics Data System (ADS)

    Keller, Jacob; Mahesh, Krishnan

    2016-11-01

    We will discuss our ongoing work towards developing the capability to predict far field sound from the large-eddy simulation of propellers. A porous surface Ffowcs-Williams and Hawkings (FW-H) acoustic analogy, with a dynamic endcapping method (Nitzkorski and Mahesh, 2014) is developed for unstructured grids in a rotating frame of reference. The FW-H surface is generated automatically using Delaunay triangulation and is representative of the underlying volume mesh. The approach is validated for tonal trailing edge sound from a NACA 0012 airfoil. LES of flow around a propeller at design advance ratio is compared to experiment and good agreement is obtained. Results for the emitted far field sound will be discussed. This work is supported by ONR.

  17. Finite volume hydromechanical simulation in porous media.

    PubMed

    Nordbotten, Jan Martin

    2014-05-01

    Cell-centered finite volume methods are prevailing in numerical simulation of flow in porous media. However, due to the lack of cell-centered finite volume methods for mechanics, coupled flow and deformation is usually treated either by coupled finite-volume-finite element discretizations, or within a finite element setting. The former approach is unfavorable as it introduces two separate grid structures, while the latter approach loses the advantages of finite volume methods for the flow equation. Recently, we proposed a cell-centered finite volume method for elasticity. Herein, we explore the applicability of this novel method to provide a compatible finite volume discretization for coupled hydromechanic flows in porous media. We detail in particular the issue of coupling terms, and show how this is naturally handled. Furthermore, we observe how the cell-centered finite volume framework naturally allows for modeling fractured and fracturing porous media through internal boundary conditions. We support the discussion with a set of numerical examples: the convergence properties of the coupled scheme are first investigated; second, we illustrate the practical applicability of the method both for fractured and heterogeneous media.
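
    A minimal cell-centered finite-volume (two-point flux approximation) sketch for the flow half of such a coupled problem, on a 1D grid; this is illustrative only, since the paper's contribution is the matching cell-centered discretization for mechanics, which is not reproduced here:

    ```python
    import numpy as np

    def tpfa_pressure(k, dx, p_left, p_right):
        """Solve -d/dx (k dp/dx) = 0 on a 1D grid of cells with permeability k
        and Dirichlet pressures at both ends, using two-point flux
        transmissibilities (harmonic averages across interior faces)."""
        n = len(k)
        t_face = 2.0 / (dx / k[:-1] + dx / k[1:])      # interior faces
        t_left, t_right = 2.0 * k[0] / dx, 2.0 * k[-1] / dx  # boundary half-cells
        A = np.zeros((n, n)); b = np.zeros(n)
        for f, t in enumerate(t_face):   # face f sits between cells f and f+1
            A[f, f] += t; A[f+1, f+1] += t
            A[f, f+1] -= t; A[f+1, f] -= t
        A[0, 0] += t_left;   b[0] += t_left * p_left
        A[-1, -1] += t_right; b[-1] += t_right * p_right
        return np.linalg.solve(A, b)

    print(tpfa_pressure(np.array([1.0, 0.1, 1.0]), dx=1.0, p_left=1.0, p_right=0.0))
    ```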

  18. Finite volume hydromechanical simulation in porous media

    PubMed Central

    Nordbotten, Jan Martin

    2014-01-01

    Cell-centered finite volume methods are prevailing in numerical simulation of flow in porous media. However, due to the lack of cell-centered finite volume methods for mechanics, coupled flow and deformation is usually treated either by coupled finite-volume-finite element discretizations, or within a finite element setting. The former approach is unfavorable as it introduces two separate grid structures, while the latter approach loses the advantages of finite volume methods for the flow equation. Recently, we proposed a cell-centered finite volume method for elasticity. Herein, we explore the applicability of this novel method to provide a compatible finite volume discretization for coupled hydromechanic flows in porous media. We detail in particular the issue of coupling terms, and show how this is naturally handled. Furthermore, we observe how the cell-centered finite volume framework naturally allows for modeling fractured and fracturing porous media through internal boundary conditions. We support the discussion with a set of numerical examples: the convergence properties of the coupled scheme are first investigated; second, we illustrate the practical applicability of the method both for fractured and heterogeneous media. PMID:25574061

  19. Scalar excursions in large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Matheou, Georgios; Dimotakis, Paul E.

    2016-12-01

    The range of values of scalar fields in turbulent flows is bounded by their boundary values, for passive scalars, and by a combination of boundary values, reaction rates, phase changes, etc., for active scalars. The current investigation focuses on the local conservation of passive scalar concentration fields and the ability of the large-eddy simulation (LES) method to observe the boundedness of passive scalar concentrations. In practice, as a result of numerical artifacts, this fundamental constraint is often violated, with scalars exhibiting unphysical excursions. The present study characterizes passive-scalar excursions in LES of a shear flow and examines methods for diagnosis and assessment of the problem. The analysis of scalar-excursion statistics supports the main hypothesis of the current study: that unphysical scalar excursions in LES result from dispersive errors of the convection-term discretization when the subgrid-scale (SGS) model provides insufficient dissipation to produce a sufficiently smooth scalar field. In the LES runs three parameters are varied: the discretization of the convection terms, the SGS model, and grid resolution. Unphysical scalar excursions decrease as the order of accuracy of non-dissipative schemes is increased, but the improvement rate decreases with increasing order of accuracy. Two SGS models are examined, the stretched-vortex and a constant-coefficient Smagorinsky. Scalar excursions strongly depend on the SGS model. The excursions are significantly reduced when the characteristic SGS scale is set to double the grid spacing in runs with the stretched-vortex model. The maximum excursion and the volume fraction of excursions outside boundary values show opposite trends with respect to resolution: the maximum unphysical excursion increases as resolution increases, whereas the volume fraction decreases. The reason for the increase in the maximum excursion is statistical and traceable to the number of grid points (sample size
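
    Diagnosing excursions of a passive scalar that should stay in [0, 1] is straightforward; a sketch of the two statistics discussed above (the field and bounds here are illustrative stand-ins):

    ```python
    import numpy as np

    def excursion_stats(c, lo=0.0, hi=1.0):
        """Return the maximum excursion outside [lo, hi] and the volume
        fraction of grid points lying outside those physical bounds."""
        excess = np.maximum(c - hi, 0.0) + np.maximum(lo - c, 0.0)
        return excess.max(), np.mean(excess > 0.0)

    rng = np.random.default_rng(4)
    c = rng.normal(0.5, 0.25, (64, 64, 64))  # stand-in for an LES scalar field
    max_exc, vol_frac = excursion_stats(c)
    print(f"max excursion {max_exc:.3f}, fraction outside bounds {vol_frac:.3%}")
    ```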

  20. Large-volume sampling and preconcentration for trace explosives detection.

    SciTech Connect

    Linker, Kevin Lane

    2004-05-01

    A trace explosives detection system typically contains three subsystems: sample collection, preconcentration, and detection. Sample collection of trace explosives (vapor and particulate) through large volumes of airflow helps reduce sampling time while increasing the amount of dilute sample collected. Preconcentration of the collected sample before introduction into the detector improves the sensitivity of the detector because of the increase in sample concentration. By combining large-volume sample collection and preconcentration, an improvement in the detection of explosives is possible. Large-volume sampling and preconcentration is presented using a systems level approach. In addition, the engineering of large-volume sampling and preconcentration for the trace detection of explosives is explained.

  1. Large space systems technology, 1980, volume 1

    NASA Technical Reports Server (NTRS)

    Kopriver, F., III (Compiler)

    1981-01-01

    The technological and developmental efforts in support of large space systems technology are described. Three major areas of interest are emphasized: (1) technology pertinent to large antenna systems; (2) technology related to large platform systems; and (3) activities that support both antenna and platform systems.

  2. The persistence of the large volumes in black holes

    NASA Astrophysics Data System (ADS)

    Ong, Yen Chin

    2015-08-01

    Classically, black holes admit maximal interior volumes that grow asymptotically linearly in time. We show that such volumes remain large when Hawking evaporation is taken into account. Even if a charged black hole approaches the extremal limit during this evolution, its volume continues to grow, although an exactly extremal black hole does not have a "large interior". We clarify this point and discuss the implications of our results for the information loss and firewall paradoxes.
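
    For context, the "maximal interior volume" in this literature is the Christodoulou-Rovelli volume; its asymptotically linear growth for a Schwarzschild black hole of mass M reads (quoted from general knowledge of that construction, not from the abstract above):

    ```latex
    % geometric units G = c = 1, with v the advanced time
    V(v) \sim 3\sqrt{3}\,\pi M^{2}\, v , \qquad v \gg M
    ```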

  3. Volume Independence in Large Nc QCD-like Gauge Theories

    SciTech Connect

    Kovtun, Pavel; Unsal, Mithat; Yaffe, Laurence G.

    2007-02-06

    Volume independence in large N_c gauge theories may be viewed as a generalized orbifold equivalence. The reduction to zero volume (or Eguchi-Kawai reduction) is a special case of this equivalence. So is temperature independence in confining phases. A natural generalization concerns volume independence in "theory space" of quiver gauge theories. In pure Yang-Mills theory, the failure of volume independence for sufficiently small volumes (at weak coupling) due to spontaneous breaking of center symmetry, together with its validity above a critical size, nicely illustrates the symmetry realization conditions which are both necessary and sufficient for large N_c orbifold equivalence. The existence of a minimal size below which volume independence fails also applies to Yang-Mills theory with antisymmetric representation fermions [QCD(AS)]. However, in Yang-Mills theory with adjoint representation fermions [QCD(Adj)], endowed with periodic boundary conditions, volume independence remains valid down to arbitrarily small size. In sufficiently large volumes, QCD(Adj) and QCD(AS) have a large N_c "orientifold" equivalence, provided charge conjugation symmetry is unbroken in the latter theory. Therefore, via a combined orbifold-orientifold mapping, a well-defined large N_c equivalence exists between QCD(AS) in large, or infinite, volume and QCD(Adj) in arbitrarily small volume. Since asymptotically free gauge theories, such as QCD(Adj), are much easier to study (analytically or numerically) in small volume, this equivalence should allow greater understanding of large N_c QCD in infinite volume.

  4. Large Eddy Simulation of a Turbulent Jet

    NASA Technical Reports Server (NTRS)

    Webb, A. T.; Mansour, Nagi N.

    2001-01-01

    Here we present the results of a Large Eddy Simulation of a non-buoyant jet issuing from a circular orifice in a wall, and developing in neutral surroundings. The effects of the subgrid scales on the large eddies have been modeled with the dynamic large eddy simulation model applied to the fully 3D domain in spherical coordinates. The simulation captures the unsteady motions of the large-scales within the jet as well as the laminar motions in the entrainment region surrounding the jet. The computed time-averaged statistics (mean velocity, concentration, and turbulence parameters) compare well with laboratory data without invoking an empirical entrainment coefficient as employed by line integral models. The use of the large eddy simulation technique allows examination of unsteady and inhomogeneous features such as the evolution of eddies and the details of the entrainment process.

  5. Technologies for imaging neural activity in large volumes

    PubMed Central

    Ji, Na; Freeman, Jeremy; Smith, Spencer L.

    2017-01-01

    Neural circuitry has evolved to form distributed networks that act dynamically across large volumes. Collecting data from individual planes, conventional microscopy cannot sample circuitry across large volumes at the temporal resolution relevant to neural circuit function and behaviors. Here, we review emerging technologies for rapid volume imaging of neural circuitry. We focus on two critical challenges: the inertia of optical systems, which limits image speed, and aberrations, which restrict the image volume. Optical sampling time must be long enough to ensure high-fidelity measurements, but optimized sampling strategies and point spread function engineering can facilitate rapid volume imaging of neural activity within this constraint. We also discuss new computational strategies for the processing and analysis of volume imaging data of increasing size and complexity. Together, optical and computational advances are providing a broader view of neural circuit dynamics, and help elucidate how brain regions work in concert to support behavior. PMID:27571194

  6. Large-Eddy Simulation and Multigrid Methods

    SciTech Connect

    Falgout, R. D.; Naegle, S.; Wittum, G.

    2001-06-18

    A method to simulate turbulent flows with Large-Eddy Simulation on unstructured grids is presented. Two kinds of dynamic models are used to model the unresolved scales of motion and are compared with each other on different grids. This comparison illustrates the behavior of the models; in addition, adaptive grid refinement is investigated, and parallelization aspects are addressed.

  7. Large volume continuous counterflow dialyzer has high efficiency

    NASA Technical Reports Server (NTRS)

    Mandeles, S.; Woods, E. C.

    1967-01-01

    Dialyzer separates macromolecules from small molecules in large volumes of solution. It takes advantage of the high area/volume ratio in commercially available 1/4-inch dialysis tubing and maintains a high concentration gradient at the dialyzing surface by counterflow.

  8. Sparticle spectra from Large-Volume String Compactifications

    SciTech Connect

    Conlon, Joseph P.

    2007-11-20

    Large-volume models are a promising approach to stabilising moduli and generating the weak hierarchy through TeV-supersymmetry. I describe the pattern of sparticle mass spectra that arises in these models.

  9. Coherent motility measurements of biological objects in a large volume

    NASA Astrophysics Data System (ADS)

    Ebersberger, J.; Weigelt, G.; Li, Yajun

    1986-05-01

    We have performed space-time intensity cross-correlation measurements of boiling image-plane speckle interferograms to investigate the motility of a large number of small biological objects. Experiments were carried out with Artemia salina at various water temperatures. The advantage of this method is that many objects in a large volume can be measured simultaneously.

  10. Large Interface Simulation in Multiphase Flow Phenomena

    SciTech Connect

    Henriques, Aparicio; Coste, Pierre; Pigny, Sylvain; Magnaudet, Jacques

    2006-07-01

    This paper attempts to represent multiphase, multi-scale flow, filling the gap between direct numerical simulation (DNS) and averaged approaches. We present a Large Interface (LI) simulation formalism obtained after a filtering process on the local instantaneous conservation equations of the two-fluid model, which distinguishes between small-scale and large-scale contributions. The LI surface tension force is also taken into account. Small-scale dynamics call for modelling; large scales are simulated. Together with this formalism, a criterion to recognize LIs is developed. It is used in an interface recognition algorithm, which is qualified on a sloshing case and on a bubble oscillating under zero gravity. The method is applied to a bubble rising in a pool and collapsing at a free surface, and to a square-base basin experiment where splashing and sloshing at the free surface are the main break-up phenomena.

  11. Indian LSSC (Large Space Simulation Chamber) facility

    NASA Technical Reports Server (NTRS)

    Brar, A. S.; Prasadarao, V. S.; Gambhir, R. D.; Chandramouli, M.

    1988-01-01

    The Indian Space Agency has undertaken a major project to acquire in-house capability for thermal and vacuum testing of large satellites. This Large Space Simulation Chamber (LSSC) facility will be located in Bangalore and is to be operational in 1989. The facility is capable of providing 4 meter diameter solar simulation with provision to expand to 4.5 meter diameter at a later date. With such provisions as controlled variation of shroud temperatures and the availability of infrared equipment as an alternative source of thermal radiation, this facility will be amongst the finest anywhere. The major design concepts and key aspects of the LSSC facility are presented here.

  12. New Large Volume Press Beamlines at the Canadian Light Source

    NASA Astrophysics Data System (ADS)

    Mueller, H. J.; Hormes, J.; Lauterjung, J.; Secco, R.; Hallin, E.

    2013-12-01

    The Canadian Light Source, the German Research Centre for Geosciences and Western University recently agreed to establish two new large volume press (LVP) beamlines at the Canadian Light Source. As the first step, a 250-ton DIA-type LVP will be installed at the IDEAS beamline in 2014. Further development is associated with the construction of a superconducting wiggler beamline at the Brockhouse sector, where a 1750-ton DIA LVP will be installed about 2 years later. Until the completion of this wiggler beamline, the large press will be used for offline high-pressure, high-temperature experiments under simulated Earth's-mantle conditions. In addition to X-ray diffraction, all up-to-date high-pressure techniques, such as ultrasonic interferometry, deformation analysis by X-radiography, X-ray densitometry, falling-sphere viscosimetry, multi-staging, etc., will be available at both beamlines. After the required commissioning, the beamlines will be open to the worldwide user community from the geosciences, general materials science, physics, chemistry, biology, etc., based on the evaluation and ranking of submitted user proposals by an international review panel.

  13. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  14. Two-phase flows simulation in closed volume

    NASA Astrophysics Data System (ADS)

    Fedorov, A. V.; Lavruk, S. A.

    2016-10-01

    In this paper, the gas flow field is considered in model volumes that correspond to real experimental ones. In the simulations, flow fields were computed in these volumes, the flow fields in the different volumes were matched, and the velocity values along the plate that models a fuel-tank element were compared.

  15. REXOR 2 rotorcraft simulation model. Volume 1: Engineering documentation

    NASA Technical Reports Server (NTRS)

    Reaser, J. S.; Kretsinger, P. H.

    1978-01-01

    REXOR II, a nonlinear rotorcraft simulation, is described in three volumes. The first volume develops the rotorcraft mechanics and aerodynamics. The second develops and explains the computer code required to implement the equations of motion. The third volume is a user's manual and contains a description of code input/output as well as operating instructions.

  16. Simulator for Large-scale Planetary and Terrestrial Radar Sounding

    NASA Astrophysics Data System (ADS)

    Haynes, M.; Schroeder, D. M.; Duan, X.; Arumugam, D.; McMichael, J. G.; Hensley, S.; Cwik, T. A.

    2016-12-01

    We are developing a radar sounding simulation tool that can simulate radar scattering from large-scale, heterogeneous subsurfaces for all existing, proposed, and potential future planetary and terrestrial sounder missions, for Mars, Venus, Earth (e.g., atmosphere, ice sheets), Europa, Ganymede, Enceladus, or other icy planetary bodies. This tool will be the first of its kind in planetary and terrestrial radar sounding simulation to support system engineering and to test scientific observables. No extant radar simulator is capable of producing echoes with realistic phase histories, heterogeneous media propagation effects, and processing gains at the spatial scales of planetary or terrestrial radar sounding (e.g., computational subsurface volumes of 10,000s of wavelengths in three dimensions at sounding frequencies of 5-100 MHz). Today's radar point-target simulators are fast, but do not model transmission and propagation through heterogeneous dielectric media. We present progress on two simulation modules aimed at addressing different regimes of the sounding scattering problem: the Pseudo-Spectral Time-Domain (PSTD) method for scattering from shallow subsurface dielectric heterogeneities, and the Multi-layer Fast Multipole Method for scattering from deep, large-scale dielectric interfaces. We will show simulated radargrams and compare computation times for realistic radar sounding scenes. In addition, we solicit community input for this tool and outline the development path.

  17. Large-eddy simulation of compressible turbulence

    NASA Technical Reports Server (NTRS)

    Squires, Kyle D.

    1991-01-01

    The increase in the range of length scales with increasing Reynolds number limits the direct simulation of turbulent flows to relatively simple geometries and low Reynolds numbers. However, since most flows of engineering interest occur at much higher Reynolds number than is currently within the capabilities of full simulation, prediction of these flow fields can only be obtained by solving some suitably averaged set of governing equations. In the traditional Reynolds-averaged approach, the Navier-Stokes equations are averaged over time. This in turn yields correlations between various turbulence fluctuations. It is these terms, e.g. the Reynolds stresses, for which a turbulence model must be derived. Turbulence modeling of incompressible flows has received a great amount of attention in the literature. An area of research that has received comparatively less attention is the modeling of compressible turbulent flows. An approach to simulating compressible turbulence at high Reynolds numbers is through the use of Large-Eddy Simulation (LES). In LES the dependent variables are decomposed into a large-scale (resolved) component and a sub-grid scale component. It is the small-scale components of the velocity field which are presumably more homogeneous than the large scales and, therefore, more easily modeled. Thus, it seems plausible that simpler models, which should be more universal in character than those employed in second-order closure schemes, may be developed for LES of compressible turbulence. The objective of the present research, therefore, is to explore models for the Large-Eddy Simulation of compressible turbulent flows. Given the recent successes of Zeman in second-order closure modeling of compressible turbulence, model development was guided by principles employed in second-order closures.
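
    The decomposition described here is easy to visualize in one dimension: a low-pass filter splits a field into a resolved part and a sub-grid residual. A minimal sketch with a top-hat filter (the field and filter width are invented for illustration):

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter1d

    n = 256
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    u = np.sin(x) + 0.3 * np.sin(17 * x) + 0.1 * np.sin(53 * x)

    u_bar = uniform_filter1d(u, size=16, mode='wrap')  # resolved (large-scale) part
    u_sgs = u - u_bar                                  # sub-grid scale residual

    # most of the energetic large scale survives the filter; the fine-scale
    # content left in u_sgs is what an SGS model must represent
    print(np.std(u_bar), np.std(u_sgs))
    ```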

  18. Large-Volume High-Pressure Mineral Physics in Japan

    NASA Astrophysics Data System (ADS)

    Liebermann, Robert C.; Prewitt, Charles T.; Weidner, Donald J.

    American high-pressure research with large sample volumes developed rapidly in the 1950s during the race to produce synthetic diamonds. At that time the piston cylinder, girdle (or belt), and tetrahedral anvil devices were invented. However, this development essentially stopped in the late 1950s, and while the diamond anvil cell has been used extensively in the United States with spectacular success for high-pressure experiments in small sample volumes, most of the significant technological advances in large-volume devices have taken place in Japan. Over the past 25 years, these technical advances have enabled a fourfold increase in pressure, with many important investigations of the chemical and physical properties of materials synthesized at high temperatures and pressures that cannot be duplicated with any apparatus currently available in the United States.

  19. Stroke volume variation as a guide for fluid resuscitation in patients undergoing large-volume liposuction.

    PubMed

    Jain, Anil Kumar; Khan, Asma M

    2012-09-01

    Background: The potential for fluid overload in large-volume liposuction is a source of serious concern. Fluid management in these patients is controversial and governed by various formulas that have been advanced by many authors. Basically, it is the ratio of what goes into the patient to what comes out. Central venous pressure has been used to monitor fluid therapy. Dynamic parameters, such as stroke volume and pulse pressure variation, are better predictors of volume responsiveness and are superior to static indicators, such as central venous pressure and pulmonary capillary wedge pressure. Stroke volume variation was used in this study to guide fluid resuscitation and compared with resuscitation guided by an intraoperative fluid ratio of 1.2 (i.e., the Rohrich formula). Methods: Stroke volume variation was used as a guide for intraoperative fluid administration in 15 patients undergoing large-volume liposuction. In another 15 patients, fluid resuscitation was guided by an intraoperative fluid ratio of 1.2. The amounts of intravenous fluid administered in the two groups were compared. Results: The mean amount of fluid infused was 561 ± 181 ml in the stroke volume variation group and 2383 ± 1208 ml in the intraoperative fluid ratio group. The intraoperative fluid ratio calculated for the stroke volume variation group was 0.936 ± 0.084. All patients maintained hemodynamic parameters (heart rate and systolic, diastolic, and mean blood pressure). Renal and metabolic indices remained within normal limits. Conclusions: Stroke volume variation-guided fluid administration could result in an appropriate amount of intravenous fluid use in patients undergoing large-volume liposuction. Level of Evidence: Therapeutic, II.
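
    The intraoperative fluid ratio used as the comparator is, as the abstract puts it, the ratio of what goes in to what comes out. A toy helper (which inputs and outputs are tallied is an assumption here, following common large-volume liposuction practice):

    ```python
    def intraoperative_fluid_ratio(iv_fluid_ml, wetting_solution_ml, aspirate_ml):
        """Total fluid in over total fluid out; the tallied terms are assumptions."""
        return (iv_fluid_ml + wetting_solution_ml) / aspirate_ml

    # a case sitting at the 1.2 target used in the comparison group
    print(intraoperative_fluid_ratio(1000, 5000, 5000))  # -> 1.2
    ```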

  20. Large eddy simulation in the ocean

    NASA Astrophysics Data System (ADS)

    Scotti, Alberto

    2010-12-01

    Large eddy simulation (LES) is a relative newcomer to oceanography. In this review, both applications of traditional LES to oceanic flows and new oceanic LES still in an early stage of development are discussed. The survey covers LES applied to boundary layer flows, traditionally an area where LES has provided considerable insight into the physics of the flow, as well as more innovative applications, where new SGS closure schemes need to be developed. The merging of LES with large-scale models is also briefly reviewed.

  1. Large Eddy Simulation of Turbulent Combustion

    DTIC Science & Technology

    2006-03-15

    Principal Investigator: Heinz Pitsch, Flow Physics and Computation, Department of Mechanical Engineering. The indexed fragments concern the application of LES to an HCCI engine (Proceedings of the 4th Joint Meeting of the U.S. Sections of the Combustion Institute, 2005) and the transition of LES from a scientifically interesting method to use in the burners and engines found in modern, industrially relevant equipment.

  2. Large eddy simulation - The next five years

    NASA Technical Reports Server (NTRS)

    Ferziger, J. H.

    1984-01-01

    The prospect of major improvements in the performance of computers in the next five years means that large eddy simulation (LES), which has until now been strictly a research tool, may become a top-of-the-line engineering tool. In this paper, the historical development and past contributions of LES are reviewed. Then a discussion of the potential for applications of LES in new areas and of the developments needed to make LES a tool for the practicing engineer is given.

  3. Large discharge-volume, silent discharge spark plug

    DOEpatents

    Kang, Michael

    1995-01-01

    A large discharge-volume spark plug for providing self-limiting microdischarges. The apparatus includes a generally spark-plug-shaped arrangement of a pair of electrodes, where either of the two coaxial electrodes is substantially shielded by a dielectric barrier from a direct discharge from the other electrode; the unshielded electrode and the dielectric barrier form an annular volume in which self-terminating microdischarges occur when alternating high voltage is applied to the center electrode. The large area over which the discharges occur, and the large number of possible discharges within the period of an engine cycle, make the present silent discharge plasma spark plug suitable for use as an ignition source for engines. In situations where a single discharge is effective in causing ignition of the combustible gases, a conventional single-polarity, single-pulse spark plug voltage supply may be used.

  4. Large volume leukapheresis: Efficacy and safety of processing patient's total blood volume six times.

    PubMed

    Bojanic, Ines; Dubravcic, Klara; Batinic, Drago; Cepulic, Branka Golubic; Mazic, Sanja; Hren, Darko; Nemet, Damir; Labar, Boris

    2011-04-01

    Large-volume leukapheresis (LVL) differs from standard leukapheresis by increased blood flow and an altered anticoagulation regimen. An open issue is to what degree a further increase in processed blood volume is reasonable in terms of higher yields and safety. In 30 LVL procedures performed in patients with hematologic malignancies, 6 total blood volumes were processed. LVL resulted in a higher CD34+ cell yield without a change in graft quality. Although a marked platelet decrease can be expected, LVL is safe and can be recommended as the standard procedure for patients who mobilize low numbers of CD34+ cells and when high numbers of CD34+ cells are required.

  5. Population generation for large-scale simulation

    NASA Astrophysics Data System (ADS)

    Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul

    2005-05-01

    Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat [1-3]. Multi-agent simulations in particular are now commonplace in many fields [4, 5]. By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed-form solutions are difficult to solve or impossible to derive [6]. To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application [7, 8]. Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large-scale social simulations [9-11]. One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more
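
    As a toy version of the attribute-plus-relations modeling this passage describes (attribute names, distributions, and the tie probability are all invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def generate_population(n):
        """Agents as attribute vectors drawn from per-attribute distributions,
        plus a sparse random acquaintance graph as the relational layer."""
        height = rng.normal(170.0, 10.0, n)                 # cm
        weight = 0.9 * height - 80 + rng.normal(0, 8, n)    # correlated with height
        morale = rng.uniform(0.0, 1.0, n)
        ties = rng.random((n, n)) < (4.0 / n)               # ~4 ties per agent
        return {"height": height, "weight": weight, "morale": morale,
                "ties": ties | ties.T}                      # symmetric relation

    pop = generate_population(100)
    ```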

  6. Statistical Ensemble of Large Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large scale velocity fields is used to propose an ensemble averaged version of the dynamic model. This produces local model parameters that only depend on the statistical properties of the flow. An important property of the ensemble averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LES's provides statistics of the large scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time developing plane wake. It is found that the results are almost independent of the number of LES's in the statistical ensemble provided that the ensemble contains at least 16 realizations.
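
    Schematically, the ensemble-averaged dynamic procedure replaces spatial averaging of the Germano-identity contractions with averaging over realizations. In the sketch below the pointwise contractions L_ij M_ij and M_ij M_ij are taken as given (computing them from the resolved velocity fields is omitted):

    ```python
    import numpy as np

    def ensemble_dynamic_coefficient(LM, MM):
        """Local dynamic model coefficient from ensemble (not spatial) averages.
        LM, MM: arrays of shape (n_realizations, nx, ny) holding the contracted
        Germano-identity terms from each LES in the ensemble."""
        num = LM.mean(axis=0)                    # average over the ensemble axis
        den = MM.mean(axis=0)
        return num / np.maximum(den, 1e-12)      # guard against zero denominator

    # e.g. 16 realizations, the smallest ensemble found adequate above
    rng = np.random.default_rng(2)
    C = ensemble_dynamic_coefficient(rng.random((16, 32, 32)),
                                     rng.random((16, 32, 32)))
    ```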

  7. Accessibility and Analysis to NASA's New Large Volume Missions

    NASA Astrophysics Data System (ADS)

    Hausman, J.; Gangl, M.; McAuley, J.; Toaz, R., Jr.

    2016-12-01

    Each new satellite mission measures larger volumes of data than the last. This is especially true of the new NASA missions NISAR and SWOT, launching in 2020 and 2021, which will produce petabytes of data a year. A major concern is how users will be able to analyze such volumes. This presentation will show how cloud storage and analysis can accommodate multiple users' needs. While users may only need gigabytes of data for their research, the data center will need to leverage the processing power of the cloud to perform search and subsetting over the large volume of data. There is also a vast array of user types that require different tools and services to access and analyze the data. Some users need global data to run climate models, while others require small, dynamic regions with lots of analysis and transformations. There will also be a need to generate data with different inputs or correction algorithms that the project may not be able to provide, as those will be very specialized for specific regions or will evolve more quickly than the project can reprocess. By having the data and tools side by side, users will be able to access the data they require and analyze it all in one place. By placing data in the cloud, users can analyze the data there, shifting the current "download and analyze" paradigm to "log-in and analyze". The cloud will provide the processing power needed to analyze large volumes of data, subset small regions from large volumes of data, and regenerate/reformat data to the specificity each user requires.

  8. Concentration of Enteroviruses from Large Volumes of Water

    PubMed Central

    Sobsey, Mark D.; Wallis, Craig; Henderson, Marilyn; Melnick, Joseph L.

    1973-01-01

    An improved method for concentrating viruses from large volumes of clean waters is described. It was found that, by acidification, viruses in large volumes of water could be efficiently adsorbed to epoxy-fiber-glass and nitrocellulose filters in the absence of exogenously added salts. Based upon this finding, a modified version of our previously described virus concentration system was developed for virus monitoring of clean waters. In this procedure the water being tested is acidified by injection of N HCl prior to passage through a virus adsorber consisting of a fiber-glass cartridge depth filter and an epoxy-fiber-glass membrane filter in series. The adsorbed viruses are then eluted with a 1-liter volume of pH 11.5 eluent and reconcentrated by adsorption to and elution from a small epoxy-fiber-glass filter series. With this method small quantities of poliovirus in 100-gallon (378.5-liter) volumes of tapwater were concentrated nearly 40,000-fold with an average virus recovery efficiency of 77%. PMID:16349972

  9. Simulating the focal volume effect: a quantitative analysis

    NASA Astrophysics Data System (ADS)

    Scarborough, Timothy D.; Uiterwaal, Cornelis J. G. J.

    2013-12-01

    We present quantitative simulations of the focal volume effect. Intensity distributions in detection volumes with two- and three-dimensional spatial resolution are calculated. Results include an analysis of translations of these volumes within the focus along the direction of laser propagation, as well as a discussion of varying sizes of the spatially resolved volumes. We find that detection volumes less than half the 1/e full-width beam waist and less than half the Rayleigh length along the propagation direction offer an optimal compromise: intensity resolution is maintained without sacrificing peak intensity.
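
    A minimal numeric stand-in for such a calculation: the intensity of a focused Gaussian beam averaged over a detection volume half a waist wide and half a Rayleigh length long (the beam model and sizes are illustrative, not the paper's exact configuration):

    ```python
    import numpy as np

    w0, zR, I0 = 1.0, 5.0, 1.0        # waist, Rayleigh length (arbitrary units)

    def intensity(r, z):
        """I(r,z) = I0 (w0/w(z))^2 exp(-2 r^2 / w(z)^2), w(z) = w0 sqrt(1+(z/zR)^2)."""
        w = w0 * np.sqrt(1.0 + (z / zR) ** 2)
        return I0 * (w0 / w) ** 2 * np.exp(-2.0 * r ** 2 / w ** 2)

    # detection volume: radius 0.5*w0, axial extent 0.5*zR centered on the focus
    r = np.linspace(0.0, 0.5 * w0, 64)
    z = np.linspace(-0.25 * zR, 0.25 * zR, 64)
    R, Z = np.meshgrid(r, z)
    avg = np.average(intensity(R, Z), weights=R)   # cylindrical element ~ r dr dz
    print(avg / I0)    # fraction of peak intensity retained in the volume
    ```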

  10. Large eddy simulations in 2030 and beyond

    PubMed Central

    Piomelli, U

    2014-01-01

    Since its introduction, in the early 1970s, large eddy simulations (LES) have advanced considerably, and their application is transitioning from the academic environment to industry. Several landmark developments can be identified over the past 40 years, such as the wall-resolved simulations of wall-bounded flows, the development of advanced models for the unresolved scales that adapt to the local flow conditions and the hybridization of LES with the solution of the Reynolds-averaged Navier–Stokes equations. Thanks to these advancements, LES is now in widespread use in the academic community and is an option available in most commercial flow-solvers. This paper will try to predict what algorithmic and modelling advancements are needed to make it even more robust and inexpensive, and which areas show the most promise. PMID:25024415

  11. Numerical simulation of large fabric filter

    NASA Astrophysics Data System (ADS)

    Sedláček, Jan; Kovařík, Petr

    2012-04-01

    Fabric filters are used in a wide range of industrial technologies for cleaning incoming or exhaust gases. To achieve maximal efficiency of discrete-phase separation and a long lifetime of the filter hoses, it is necessary to ensure a uniform load on the filter surface and to avoid impacts of heavy, high-velocity particles on the filter hoses. The paper deals with numerical simulation of the two-phase flow field in a large fabric filter. The filter is composed of six chambers with approximately 1600 filter hoses in total. The model was simplified to one half of the filter, and the filter hose walls were substituted by porous zones. The model settings were based on experimental data, especially the filter pressure drop. Unsteady simulations with different turbulence models were performed. The flow field and particle trajectories were analyzed, and the results were compared with experimental observations.
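
    Porous-zone substitutions of this kind are typically parameterized by a resistance law fitted to the measured pressure drop. The Darcy-Forchheimer form and all coefficients below are generic assumptions, not values from the paper:

    ```python
    def pressure_gradient(v, mu=1.8e-5, rho=1.2, alpha=1e-9, C2=1e4):
        """dP/dx = (mu/alpha) v + C2 (rho/2) v^2 across the porous zone,
        with viscous resistance 1/alpha and inertial coefficient C2."""
        return mu / alpha * v + C2 * 0.5 * rho * v ** 2

    print(pressure_gradient(0.05))   # Pa/m at a 5 cm/s face velocity
    ```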

  12. Large Eddy Simulation of turbulent shear flows

    NASA Technical Reports Server (NTRS)

    Moin, P.; Mansour, N. N.; Reynolds, W. C.; Ferziger, J. H.

    1979-01-01

    The conceptual foundation underlying Large Eddy Simulation (LES) is summarized, and the numerical methods developed for simulation of the time-developing turbulent mixing layer and turbulent plane Poiseuille flow are discussed. Computational results show that the average Reynolds stress profile nearly attains the equilibrium shape which balances the downstream mean pressure gradient in the regions away from the walls. In the vicinity of the walls, viscous stresses are shown to be significant; together with the Reynolds stresses, these stresses balance the mean pressure gradient. It is stressed that the subgrid scale contribution to the total Reynolds stress is significant only in the vicinity of the walls. The continued development of LES is urged.

  13. Large Scale Quantum Simulations of Nuclear Pasta

    NASA Astrophysics Data System (ADS)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

    Complex and exotic nuclear geometries, collectively referred to as "nuclear pasta," are expected to exist naturally in the crust of neutron stars and in supernova matter. Using a set of self-consistent microscopic nuclear energy density functionals, we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 < ρ < 0.10 fm^-3 and proton fractions from 0.05 upward. These simulations, in particular, allow us to also study the role and impact of the nuclear symmetry energy on these pasta configurations. This work is supported in part by DOE Grants DE-FG02-87ER40365 (Indiana University) and DE-SC0008808 (NUCLEI SciDAC Collaboration).

  14. The Large Area Pulsed Solar Simulator (LAPSS)

    NASA Technical Reports Server (NTRS)

    Mueller, R. L.

    1993-01-01

    A Large Area Pulsed Solar Simulator (LAPSS) has been installed at JPL. It is primarily intended to be used to illuminate and measure the electrical performance of photovoltaic devices. The simulator, originally manufactured by Spectrolab, Sylmar, California, occupies an area measuring about 3 meters wide by 12 meters long. The data acquisition and data processing subsystems have been modernized. Tests on the LAPSS performance resulted in better than +/- 2 percent uniformity of irradiance at the test plane and better than +/- 0.3 percent measurement repeatability after warm-up. Glass absorption filters are used to reduce the level of ultraviolet light emitted from the xenon flash lamps. This provides a close match to standard airmass zero and airmass 1.5 spectral irradiance distributions. The 2 millisecond light pulse prevents heating of the device under test, resulting in more reliable temperature measurements. Overall, excellent electrical performance measurements have been made of many different types and sizes of photovoltaic devices.

  15. Large eddy simulations in 2030 and beyond.

    PubMed

    Piomelli, U

    2014-08-13

    Since its introduction, in the early 1970s, large eddy simulations (LES) have advanced considerably, and their application is transitioning from the academic environment to industry. Several landmark developments can be identified over the past 40 years, such as the wall-resolved simulations of wall-bounded flows, the development of advanced models for the unresolved scales that adapt to the local flow conditions and the hybridization of LES with the solution of the Reynolds-averaged Navier-Stokes equations. Thanks to these advancements, LES is now in widespread use in the academic community and is an option available in most commercial flow-solvers. This paper will try to predict what algorithmic and modelling advancements are needed to make it even more robust and inexpensive, and which areas show the most promise.

  16. Large volume multiple-path nuclear pumped laser

    NASA Technical Reports Server (NTRS)

    Hohl, F.; Deyoung, R. J. (Inventor)

    1981-01-01

    Large volumes of gas are excited by using internal high reflectance mirrors that are arranged so that the optical path crosses back and forth through the excited gaseous medium. By adjusting the external dielectric mirrors of the laser, the number of paths through the laser cavity can be varied. Output powers were obtained that are substantially higher than the output powers of previous nuclear laser systems.

  17. Colloquium: Large scale simulations on GPU clusters

    NASA Astrophysics Data System (ADS)

    Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano

    2015-06-01

    Graphics processing units (GPUs) are currently used as a cost-effective platform for computer simulations and big-data processing. Large-scale applications require that multiple GPUs work together, but the efficiency obtained with clusters of GPUs is, at times, suboptimal because the GPU features are not exploited at their best. We describe how it is possible to achieve excellent efficiency for applications in statistical mechanics, particle dynamics and networks analysis by using suitable memory access patterns and mechanisms like CUDA streams, profiling tools, etc. Similar concepts and techniques may also be applied to other problems, such as the solution of partial differential equations.

  18. Renormalization group formulation of large eddy simulation

    NASA Technical Reports Server (NTRS)

    Yakhot, V.; Orszag, S. A.

    1985-01-01

    Renormalization group (RNG) methods are applied to eliminate small scales and construct a subgrid-scale transport eddy model (SSM) for transition phenomena. The RNG and SSM procedures are shown to provide a more accurate description of viscosity near the wall than does the Smagorinsky approach, and also generate far-field turbulence viscosity values which agree well with those of previous researchers. The elimination of small scales causes the simultaneous appearance of a random force and an eddy viscosity. The RNG method permits taking these into account, along with other phenomena (such as rotation), for large-eddy simulations.

  19. Large eddy simulations of compressible magnetohydrodynamic turbulence

    NASA Astrophysics Data System (ADS)

    Grete, Philipp

    2017-02-01

    Supersonic, magnetohydrodynamic (MHD) turbulence is thought to play an important role in many processes - especially in astrophysics, where detailed three-dimensional observations are scarce. Simulations can partially fill this gap and help to understand these processes. However, direct simulations with realistic parameters are often not feasible. Consequently, large eddy simulations (LES) have emerged as a viable alternative. In LES the overall complexity is reduced by simulating only large and intermediate scales directly. The smallest scales, usually referred to as subgrid-scales (SGS), are introduced to the simulation by means of an SGS model. Thus, the overall quality of an LES with respect to properly accounting for small-scale physics crucially depends on the quality of the SGS model. While there has been a lot of successful research on SGS models in the hydrodynamic regime for decades, SGS modeling in MHD is a rather recent topic, in particular, in the compressible regime. In this thesis, we derive and validate a new nonlinear MHD SGS model that explicitly takes compressibility effects into account. A filter is used to separate the large and intermediate scales, and it is thought to mimic finite resolution effects. In the derivation, we use a deconvolution approach on the filter kernel. With this approach, we are able to derive nonlinear closures for all SGS terms in MHD: the turbulent Reynolds and Maxwell stresses, and the turbulent electromotive force (EMF). We validate the new closures both a priori and a posteriori. In the a priori tests, we use high-resolution reference data of stationary, homogeneous, isotropic MHD turbulence to compare exact SGS quantities against predictions by the closures. The comparison includes, for example, correlations of turbulent fluxes, the average dissipative behavior, and alignment of SGS vectors such as the EMF. In order to quantify the performance of the new nonlinear closure, this comparison is conducted from the
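
    The a priori testing described here has a simple generic skeleton: filter high-resolution data explicitly, form the exact SGS term, and correlate it pointwise against a closure's prediction. The box filter, random stand-in fields, and single stress component below are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def exact_sgs_stress(u, v, size=4):
        """One exact SGS stress component, tau_xy = bar(uv) - bar(u) bar(v),
        computed from resolved data with an explicit box filter."""
        bar = lambda f: uniform_filter(f, size=size, mode='wrap')
        return bar(u * v) - bar(u) * bar(v)

    rng = np.random.default_rng(1)
    u, v = rng.standard_normal((2, 64, 64))    # stand-in for DNS-quality fields
    tau_xy = exact_sgs_stress(u, v)
    # a closure's prediction would be correlated against tau_xy pointwise
    ```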

  20. EMBEDDING REALISTIC SURVEYS IN SIMULATIONS THROUGH VOLUME REMAPPING

    SciTech Connect

    Carlson, Jordan; White, Martin

    2010-10-15

    Connecting cosmological simulations to real-world observational programs is often complicated by a mismatch in geometry: while surveys often cover highly irregular cosmological volumes, simulations are customarily performed in a periodic cube. We describe a technique to remap this cube into elongated box-like shapes that are more useful for many applications. The remappings are one-to-one, volume-preserving, keep local structures intact, and involve minimal computational overhead.
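
    The construction can be sketched directly: orthogonalizing the rows of an invertible integer matrix yields the target cuboid, and each point has exactly one periodic image inside it. This brute-force version (with an assumed replica-search radius) is only illustrative; a production implementation would enumerate the candidate shifts more carefully.

    ```python
    import numpy as np
    from itertools import product

    def remap(points, M, search=2):
        """Volume-preserving, one-to-one remap of the unit periodic cube onto
        the cuboid spanned by the Gram-Schmidt orthogonalization of the rows
        of the integer matrix M (|det M| = 1)."""
        e = np.asarray(M, float)
        u = e.copy()
        u[1] -= (e[1] @ u[0]) / (u[0] @ u[0]) * u[0]
        u[2] -= (e[2] @ u[0]) / (u[0] @ u[0]) * u[0]
        u[2] -= (u[2] @ u[1]) / (u[1] @ u[1]) * u[1]
        lengths = np.linalg.norm(u, axis=1)      # cuboid dimensions; product = 1
        dirs = u / lengths[:, None]
        out = np.full_like(np.asarray(points, float), np.nan)
        for k, p in enumerate(points):
            # exactly one periodic image of p lands inside the cuboid
            for n in product(range(-search, search + 1), repeat=3):
                c = dirs @ (p + np.array(n, float))
                if np.all(c >= 0.0) and np.all(c < lengths):
                    out[k] = c
                    break
        return out, lengths

    pts = np.random.rand(200, 3)
    mapped, box = remap(pts, [[1, 1, 0], [0, 1, 0], [0, 0, 1]])  # box ~ (1.41, 0.71, 1.0)
    ```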

  1. Acute leg volume changes in weightlessness and its simulation

    NASA Technical Reports Server (NTRS)

    Thornton, William E.; Uri, John J.; Hedge, Vickie; Coleman, Eugen; Moore, Thomas P.

    1992-01-01

    Leg volume changes were studied in six subjects during 150 min of horizontal, 6 deg headdown tilt and supine immersion. Results were compared to previously obtained space flight data. It is found that, at equivalent study times, the magnitude of the leg volume changes during the simulations was less than one half that seen during space flight. Relative and absolute losses from the upper leg were greater during space flight, while relative losses were greater from the lower leg during simulations.

  2. Developing large eddy simulation for turbomachinery applications.

    PubMed

    Eastwood, Simon J; Tucker, Paul G; Xia, Hao; Klostermeier, Christian

    2009-07-28

    For jets, large eddy resolving simulations are compared for a range of numerical schemes with no subgrid scale (SGS) model and for a range of SGS models with the same scheme. There is little variation in results for the different SGS models, and it is shown that, for schemes which tend towards having dissipative elements, the SGS model can be abandoned, giving what can be termed numerical large eddy simulation (NLES). More complex geometries are investigated, including coaxial and chevron nozzle jets. A near-wall Reynolds-averaged Navier-Stokes (RANS) model is used to cover over streak-like structures that cannot be resolved. Compressor and turbine flows are also successfully computed using a similar NLES-RANS strategy. Upstream of the compressor leading edge, the RANS layer is helpful in preventing premature separation. Capturing the correct flow over the turbine is particularly challenging, but nonetheless the RANS layer is helpful. In relation to the SGS model, for the flows considered, evidence suggests issues such as inflow conditions, problem definition and transition are more influential.

  3. The Large Area Pulsed Solar Simulator (LAPSS)

    NASA Technical Reports Server (NTRS)

    Mueller, R. L.

    1994-01-01

    The Large Area Pulsed Solar Simulator (LAPSS) has been installed at JPL. It is primarily intended to be used to illuminate and measure the electrical performance of photovoltaic devices. The simulator, originally manufactured by Spectrolab, Sylmar, CA, occupies an area measuring about 3 m wide x 12 m long. The data acquisition and data processing subsystems have been modernized. Tests on the LAPSS performance resulted in better than plus or minus 2 percent uniformity of irradiance at the test plane and better than plus or minus 0.3 percent measurement repeatability after warm-up. Glass absorption filters reduce the ultraviolet light emitted from the xenon flash lamps. This results in a close match to three different standard airmass zero and airmass 1.5 spectral irradiances. The 2-ms light pulse prevents heating of the device under test, resulting in more reliable temperature measurements. Overall, excellent electrical performance measurements have been made of many different types and sizes of photovoltaic devices. Since the original printing of this publication, in 1993, the LAPSS has been operational and new capabilities have been added. This revision includes a new section relating to the installation of a method to measure the I-V curve of a solar cell or array exhibiting a large effective capacitance. Another new section has been added relating to new capabilities for plotting single and multiple I-V curves, and for archiving the I-V data and test parameters. Finally, a section has been added regarding the data acquisition electronics calibration.

  4. Large volume high-pressure cell for inelastic neutron scattering

    NASA Astrophysics Data System (ADS)

    Wang, W.; Sokolov, D. A.; Huxley, A. D.; Kamenev, K. V.

    2011-07-01

    Inelastic neutron scattering measurements typically require two orders of magnitude longer data collection times and larger sample sizes than neutron diffraction studies. Inelastic neutron scattering measurements on pressurised samples are particularly challenging since standard high-pressure apparatus restricts sample volume, attenuates the incident and scattered beams, and contributes background scattering. Here, we present the design of a large volume two-layered piston-cylinder pressure cell with optimised transmission for inelastic neutron scattering experiments. The design and the materials selected for the construction of the cell enable its safe use to a pressure of 1.8 GPa with a sample volume in excess of 400 mm^3. The design of the piston seal eliminates the need for a sample container, thus providing a larger sample volume and reduced absorption. The integrated electrical plug with a manganin pressure gauge offers an accurate measurement of pressure over the whole range of operational temperatures. The performance of the cell is demonstrated by an inelastic neutron scattering study of UGe_2.

  5. Parallel Rendering of Large Time-Varying Volume Data

    NASA Technical Reports Server (NTRS)

    Garbutt, Alexander E.

    2005-01-01

    Interactive visualization of large time-varying 3D volume datasets has been and still is a great challenge to the modern computational world. It stretches the limits of the memory capacity, the disk space, the network bandwidth and the CPU speed of a conventional computer. In this SURF project, we propose to develop a parallel volume rendering program on SGI's Prism, a cluster computer equipped with state-of-the-art graphics hardware. The proposed program combines both parallel computing and hardware rendering in order to achieve an interactive rendering rate. We use 3D texture mapping and a hardware shader to implement 3D volume rendering on each workstation. We use SGI's VisServer to enable remote rendering using Prism's graphics hardware. Last, we will integrate this new program with ParVox, a parallel distributed visualization system developed at JPL. At the end of the project, we will demonstrate remote interactive visualization using this new hardware volume renderer on JPL's Prism system using a time-varying dataset from selected JPL applications.

  6. Large volume high-pressure cell for inelastic neutron scattering.

    PubMed

    Wang, W; Sokolov, D A; Huxley, A D; Kamenev, K V

    2011-07-01

    Inelastic neutron scattering measurements typically require two orders of magnitude longer data collection times and larger sample sizes than neutron diffraction studies. Inelastic neutron scattering measurements on pressurised samples are particularly challenging since standard high-pressure apparatus restricts sample volume, attenuates the incident and scattered beams, and contributes background scattering. Here, we present the design of a large volume two-layered piston-cylinder pressure cell with optimised transmission for inelastic neutron scattering experiments. The design and the materials selected for the construction of the cell enable its safe use to a pressure of 1.8 GPa with a sample volume in excess of 400 mm^3. The design of the piston seal eliminates the need for a sample container, thus providing a larger sample volume and reduced absorption. The integrated electrical plug with a manganin pressure gauge offers an accurate measurement of pressure over the whole range of operational temperatures. The performance of the cell is demonstrated by an inelastic neutron scattering study of UGe_2.

  7. Large volume high-pressure cell for inelastic neutron scattering

    SciTech Connect

    Wang, W.; Kamenev, K. V.; Sokolov, D. A.; Huxley, A. D.

    2011-07-15

    Inelastic neutron scattering measurements typically require two orders of magnitude longer data collection times and larger sample sizes than neutron diffraction studies. Inelastic neutron scattering measurements on pressurised samples are particularly challenging since standard high-pressure apparatus restricts sample volume, attenuates the incident and scattered beams, and contributes background scattering. Here, we present the design of a large volume two-layered piston-cylinder pressure cell with optimised transmission for inelastic neutron scattering experiments. The design and the materials selected for the construction of the cell enable its safe use to a pressure of 1.8 GPa with a sample volume in excess of 400 mm^3. The design of the piston seal eliminates the need for a sample container, thus providing a larger sample volume and reduced absorption. The integrated electrical plug with a manganin pressure gauge offers an accurate measurement of pressure over the whole range of operational temperatures. The performance of the cell is demonstrated by an inelastic neutron scattering study of UGe_2.

  9. Improvement of surgical simulation using dynamic volume rendering.

    PubMed

    Radetzky, A; Schröcker, F; Auer, L M

    2000-01-01

    In recent years, great efforts have been made to develop surgical simulators for computer-assisted training. However, most of these simulators use simple models of human anatomy, which are manually created using modeling software. Medical experts, though, need to perform training directly on the patient's complex anatomy, which can be obtained, for example, from digital imaging datasets (CT, MR). A common technique to display these datasets is volume rendering; however, even with high-end hardware, only static models can be handled interactively. In surgical simulators a dynamic component is also needed, because tissues must be deformed and partially removed. With the combination of spring-mass models, improved by neuro-fuzzy systems, and the recently developed OpenGL Volumizer, surgical simulation using real-time deformable (or dynamic) volume rendering became possible. As an application example, the simulator ROBOSIM for minimally invasive neurosurgery is presented.
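
    The deformable component rests on spring-mass dynamics; a toy explicit-Euler step (illustrative only — ROBOSIM's neuro-fuzzy-tuned model is far more elaborate):

    ```python
    import numpy as np

    def step(pos, vel, springs, rest, k=50.0, m=1.0, damp=0.2, dt=1e-3):
        """One explicit-Euler step of a damped spring-mass mesh.
        springs: index pairs (i, j); rest: corresponding rest lengths."""
        force = -damp * vel
        for (i, j), L0 in zip(springs, rest):
            d = pos[j] - pos[i]
            length = np.linalg.norm(d)
            f = k * (length - L0) * d / length   # Hooke's law along the spring
            force[i] += f
            force[j] -= f
        vel = vel + dt * force / m
        return pos + dt * vel, vel

    pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    vel = np.zeros((2, 3))
    pos, vel = step(pos, vel, springs=[(0, 1)], rest=[0.8])  # spring contracts
    ```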

  10. Large-N volume independence in conformal and confining gauge theories

    NASA Astrophysics Data System (ADS)

    Ünsal, Mithat; Yaffe, Laurence G.

    2010-08-01

    Consequences of large N volume independence are examined in conformal and confining gauge theories. In the large N limit, gauge theories compactified on R^(d-k) × (S^1)^k are independent of the S^1 radii, provided the theory has unbroken center symmetry. In particular, this implies that a large N gauge theory which, on R^d, flows to an IR fixed point retains the infinite correlation length and other scale-invariant properties of the decompactified theory even when compactified on R^(d-k) × (S^1)^k. In other words, finite volume effects are 1/N suppressed. In lattice formulations of vector-like theories, this implies that numerical studies to determine the boundary between confined and conformal phases may be performed on one-site lattice models. In N = 4 supersymmetric Yang-Mills theory, the center symmetry realization is a matter of choice: the theory on R^(4-k) × (S^1)^k has a moduli space which contains points with all possible realizations of center symmetry. Large N QCD with massive adjoint fermions and one or two compactified dimensions has a rich phase structure with an infinite number of phase transitions coalescing in the zero radius limit.

  11. Large Eddy Simulation of Cirrus Clouds

    NASA Technical Reports Server (NTRS)

    Wu, Ting; Cotton, William R.

    1999-01-01

    The Regional Atmospheric Modeling System (RAMS), with mesoscale interactive nested grids, and a Large-Eddy Simulation (LES) version of RAMS, coupled to two-moment microphysics and a new two-stream radiative code, were used to investigate the dynamic, microphysical, and radiative aspects of the November 26, 1991 cirrus event. Wu (1998) describes the results of that research in full detail and is enclosed as Appendix 1. The mesoscale nested-grid simulation successfully reproduced the large-scale circulation as compared to the Mesoscale Analysis and Prediction System's (MAPS) analyses and other observations. Three cloud bands, which match nicely the three cloud lines identified in an observational study (Mace et al., 1995), are predicted on Grid #2 of the nested grids, even though the mesoscale simulation predicts a larger west-east cloud width than was observed. Large-eddy simulations were performed to study the dynamical, microphysical, and radiative processes in the 26 November 1991 FIRE II cirrus event. The LES model is based on RAMS version 3b, developed at Colorado State University. It includes a new radiation scheme developed by Harrington (1997) and a new subgrid-scale model developed by Kosovic (1996). The LES model simulated a single cloud layer for Case 1 and a two-layer cloud structure for Case 2. The simulations demonstrated that latent heat release can play a significant role in the formation and development of cirrus clouds. For the thin cirrus in Case 1, the latent heat release was insufficient for the cirrus clouds to become positively buoyant. However, in some special cases such as Case 2, positively buoyant cells can be embedded within the cirrus layers. These cells were so active that the rising updraft induced its own pressure perturbations that affected the cloud evolution. Vertical profiles of the total radiative and latent heating rates indicated that for well developed, deep, and active cirrus clouds, radiative cooling and latent

  12. Large-eddy simulations with wall models

    NASA Technical Reports Server (NTRS)

    Cabot, W.

    1995-01-01

    The near-wall viscous and buffer regions of wall-bounded flows generally require a large expenditure of computational resources to be resolved adequately, even in large-eddy simulation (LES). Often as many as 50% of the grid points in a computational domain are devoted to these regions. The dense grids that this implies also generally require small time steps for numerical stability and/or accuracy. It is commonly assumed that the inner wall layers are near equilibrium, so that the standard logarithmic law can be applied as the boundary condition for the wall stress well away from the wall, for example, in the logarithmic region, obviating the need to expend large numbers of grid points and large amounts of computational time in this region. This approach is commonly employed in LES of planetary boundary layers, and it has also been used for some simple engineering flows. In order to calculate accurately a wall-bounded flow with coarse wall resolution, one requires the wall stress as a boundary condition. The goal of this work is to determine the extent to which equilibrium and boundary layer assumptions are valid in the near-wall regions, to develop models for the inner layer based on such assumptions, and to test these modeling ideas in some relatively simple flows with different pressure gradients, such as channel flow and flow over a backward-facing step. Ultimately, models that perform adequately in these situations will be applied to more complex flow configurations, such as an airfoil.
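
    The equilibrium wall model described here amounts to inverting the log law for the friction velocity at the first off-wall grid point and returning the wall stress. A minimal sketch (the constants kappa = 0.41 and B = 5.2 are conventional choices, not taken from the text):

    ```python
    import numpy as np

    def wall_stress(U, y, nu, kappa=0.41, B=5.2, rho=1.0):
        """Solve U/u_tau = (1/kappa) ln(y u_tau / nu) + B for u_tau by
        fixed-point iteration; return tau_w = rho u_tau^2."""
        u_tau = 0.05 * U                       # initial guess
        for _ in range(50):
            u_tau = U / (np.log(y * u_tau / nu) / kappa + B)
        return rho * u_tau ** 2

    # LES-supplied velocity U at height y in the logarithmic layer
    print(wall_stress(U=10.0, y=0.05, nu=1e-5))
    ```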

  13. The Simulation of a Jumbo Jet Transport Aircraft. Volume 2: Modeling Data

    NASA Technical Reports Server (NTRS)

    Hanke, C. R.; Nordwall, D. R.

    1970-01-01

    The manned simulation of a large transport aircraft is described. Aircraft and systems data necessary to implement the mathematical model described in Volume I, together with a discussion of how these data are used in the model, are presented. The results of the real-time computations in the NASA Ames Research Center Flight Simulator for Advanced Aircraft are shown and compared to flight test data and to the results obtained in a training simulator known to be satisfactory.

  14. The Combination of Tissue Dissection and External Volume Expansion Generates Large Volumes of Adipose Tissue.

    PubMed

    He, Yunfan; Dong, Ziqing; Xie, Gan; Zhou, Tao; Lu, Feng

    2017-04-01

    Noninvasive external volume expansion devices have been applied to stimulate nonsurgical breast enlargement in clinical settings. Although previous results demonstrate the capacity of external volume expansion to increase the number of adipocytes, this strategy alone is insufficient to reconstruct soft-tissue defects or increase breast mass. The authors combined a minimally invasive tissue dissection method with external volume expansion to generate large volumes of adipose tissue. In vitro, various densities of adipose-derived stem cells were prepared to evaluate the relation between cell contact and cell proliferation. In vivo, dorsal adipose tissue of rabbits was thoroughly dissected and the external volume expansion device was applied to maintain the released state. External volume expansion without tissue dissection served as the control. In the dissection group, the generated adipose tissue volume was much larger than that in the control group at all time points. A larger number of proliferating cells appeared in the dissection samples than in the control samples at the early stage after tissue dissection. At low cell density, adipose-derived stem cells proliferated at a higher rate than at high cell density. Protein expression analysis revealed that cell proliferation was mediated by a similar mechanism both in vivo and in vitro, involving the release of cell contact inhibition and Hippo/Yes-associated protein pathway activation. Adipose tissue dissection releases cell-to-cell contacts and induces adipose-derived stem cell proliferation. Preexpanded adipose-derived stem cells undergo adipogenesis in the adipogenic environment created by external volume expansion, leading to better adipose regeneration compared with the control.

  15. Large eddy simulation of cavitating flows

    NASA Astrophysics Data System (ADS)

    Gnanaskandan, Aswin; Mahesh, Krishnan

    2014-11-01

    Large eddy simulation on unstructured grids is used to study hydrodynamic cavitation. The multiphase medium is represented using a homogeneous equilibrium model that assumes thermal equilibrium between the liquid and the vapor phase. Surface tension effects are ignored, and the governing equations are the compressible Navier-Stokes equations for the liquid/vapor mixture along with a transport equation for the vapor mass fraction. A characteristic-based filtering scheme is developed to handle shocks and material discontinuities in non-ideal gases and mixtures. A TVD filter is applied as a corrector step in a predictor-corrector approach, with the predictor scheme being non-dissipative and symmetric. The method is validated for canonical one-dimensional flows and leading-edge cavitation over a hydrofoil, and applied to study sheet-to-cloud cavitation over a wedge. This work is supported by the Office of Naval Research.
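    Under the homogeneous equilibrium assumption, both phases share pressure, velocity, and temperature, and the mixture behaves as a single compressible fluid whose properties follow from the transported vapor mass fraction. A back-of-the-envelope sketch (the water/vapor constants below are illustrative assumptions, not the paper's equation of state):

        def mixture_state(Y_v, rho_l=998.0, rho_v=0.017, c_l=1480.0, c_v=423.0):
            # Mixture density from the phase densities and vapor mass fraction Y_v
            rho = 1.0 / (Y_v / rho_v + (1.0 - Y_v) / rho_l)
            alpha = Y_v * rho / rho_v            # vapor volume fraction
            # Wood's formula for the mixture sound speed: even a trace of vapor
            # drives c far below that of either pure phase, which is one reason
            # a shock-capturing (characteristic-based) filter is needed.
            inv_rho_c2 = alpha / (rho_v * c_v**2) + (1.0 - alpha) / (rho_l * c_l**2)
            c = (1.0 / (rho * inv_rho_c2)) ** 0.5
            return rho, alpha, c

    For example, mixture_state(1e-5) already yields a mixture sound speed of only a few meters per second, so locally supersonic flow and shocks arise at modest velocities.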

  16. Autonomic Closure for Large Eddy Simulation

    NASA Astrophysics Data System (ADS)

    King, Ryan; Hamlington, Peter; Dahm, Werner J. A.

    2015-11-01

    A new autonomic subgrid-scale closure has been developed for large eddy simulation (LES). The approach poses a supervised learning problem that captures nonlinear, nonlocal, and nonequilibrium turbulence effects without specifying a predefined turbulence model. By solving a regularized optimization problem on test filter scale quantities, the autonomic approach identifies a nonparametric function that represents the best local relation between subgrid stresses and resolved state variables. The optimized function is then applied at the grid scale to determine unknown LES subgrid stresses by invoking scale similarity in the inertial range. A priori tests of the autonomic approach on homogeneous isotropic turbulence show that the new approach is amenable to powerful optimization and machine learning methods and is successful for a wide range of filter scales in the inertial range. In these a priori tests, the autonomic closure substantially improves upon the dynamic Smagorinsky model in capturing the instantaneous, statistical, and energy transfer properties of the subgrid stress field.
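    The heart of the method is an ordinary regularized least-squares problem posed at the test-filter scale and then reused, by scale similarity, at the grid scale. A toy stand-in, assuming a linear-in-features representation (the paper's nonparametric function, feature set, and regularization are richer than this):

        import numpy as np

        def autonomic_stress(V_test, tau_test, V_grid, lam=1e-3):
            # V_test: (n_samples, n_feat) resolved-state features at the test scale
            # tau_test: (n_samples,) one component of the test-scale subgrid stress
            # Ridge-regularized fit of the local stress/state relation
            n_feat = V_test.shape[1]
            w = np.linalg.solve(V_test.T @ V_test + lam * np.eye(n_feat),
                                V_test.T @ tau_test)
            # Scale similarity in the inertial range: apply the same relation
            # to grid-scale features to estimate the unknown subgrid stress.
            return V_grid @ w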

  17. Large eddy simulation applications in gas turbines.

    PubMed

    Menzies, Kevin

    2009-07-28

    The gas turbine presents significant challenges to any computational fluid dynamics technique. The combination of a wide range of flow phenomena with complex geometry is difficult to model in the context of Reynolds-averaged Navier-Stokes (RANS) solvers. We review the potential for large eddy simulation (LES) in modelling the flow in the different components of the gas turbine during a practical engineering design cycle. We show that while LES has demonstrated considerable promise for reliable prediction of many flows in the engine that are difficult for RANS, it is not a panacea, and considerable application challenges remain. However, for many flows, especially those dominated by shear-layer mixing such as in combustion chambers and exhausts, LES has demonstrated a clear superiority over RANS for moderately complex geometries, although at significantly higher cost, which will remain an issue in making the calculations relevant within the design cycle.

  18. Large eddy simulation of trailing edge noise

    NASA Astrophysics Data System (ADS)

    Keller, Jacob; Nitzkorski, Zane; Mahesh, Krishnan

    2015-11-01

    Noise generation is an important engineering constraint on many marine vehicles. A significant portion of the noise comes from propellers and rotors, specifically due to flow interactions at the trailing edge. Large eddy simulation is used to investigate the noise produced by a turbulent 45-degree beveled trailing edge and a NACA 0012 airfoil. A porous-surface Ffowcs Williams and Hawkings acoustic analogy is combined with a dynamic endcapping method to compute the sound. This methodology allows the impact of incident flow noise versus the total noise to be assessed. LES results for the 45-degree beveled trailing edge are compared to experiment at M = 0.1 and Re_c = 1.9x10^6. The effect of boundary layer thickness on sound production is investigated by computing with both the experimental boundary layer thickness and a thinner boundary layer. Direct numerical simulation results for the NACA 0012 are compared to available data at M = 0.4 and Re_c = 5.0x10^4 for both the hydrodynamic field and the acoustic field. Sound intensities and directivities are investigated and compared. Finally, some of the physical mechanisms of far-field noise generation, common to the two configurations, are discussed. Supported by the Office of Naval Research.

  19. Geometric Measures of Large Biomolecules: Surface, Volume and Pockets

    PubMed Central

    Mach, Paul; Koehl, Patrice

    2011-01-01

    Geometry plays a major role in our attempt to understand the activity of large molecules. For example, surface area and volume are used to quantify the interactions between these molecules and the water surrounding them in implicit solvent models. In addition, the detection of pockets serves as a starting point for predictive studies of biomolecule-ligand interactions. The alpha shape theory provides an exact and robust method for computing these geometric measures. Several implementations of this theory are currently available. We show, however, that these implementations fail on very large macromolecular systems. We show that these difficulties are not theoretical; rather, they are related to the architecture of current computers, which rely on the use of cache memory to speed up calculation. By rewriting the algorithms that implement the different steps of the alpha shape theory such that we enforce locality, we show that we can remediate these cache problems; the corresponding code, UnionBall, has an apparent O(n) behavior over a large range of values of n (up to tens of millions), where n is the number of atoms. As an example, it takes 136 seconds with UnionBall to compute the contribution of each atom to the surface area and volume of a viral capsid with more than five million atoms on a commodity PC. UnionBall includes functions for computing the surface area and volume of the intersection of two, three and four spheres that are fully detailed in an appendix. UnionBall is available as OpenSource software. PMID:21823134
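    The appendix-level primitives are closed-form volumes of sphere intersections, combined by inclusion-exclusion over the alpha complex. The two-sphere (lens) term is standard and short enough to quote; a sketch follows (UnionBall's actual routines also cover the three- and four-sphere cases):

        import math

        def lens_volume(r1, r2, d):
            """Volume of the intersection of two spheres whose centers are d apart."""
            if d >= r1 + r2:                   # spheres do not overlap
                return 0.0
            if d <= abs(r1 - r2):              # smaller sphere fully inside the larger
                r = min(r1, r2)
                return 4.0 / 3.0 * math.pi * r**3
            return (math.pi * (r1 + r2 - d)**2 *
                    (d*d + 2.0*d*(r1 + r2) - 3.0*(r1 - r2)**2)) / (12.0 * d)

    As a check, two equal spheres of radius r at center distance d = r give 5*pi*r^3/12, the textbook result.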

  20. Electrolyte and plasma enzyme analyses during large-volume liposuction.

    PubMed

    Lipschitz, Avron H; Kenkel, Jeffrey M; Luby, Maureen; Sorokin, Evan; Rohrich, Rod J; Brown, Spencer A

    2004-09-01

    Substantial fluid shifts occur during liposuction as wetting solution is infiltrated subcutaneously and fat is evacuated, causing potential electrolyte imbalances. In the porcine model of large-volume liposuction, plasma aspartate aminotransferase and alanine transaminase levels were elevated following liposuction. These results raised concerns about possible mechanical injury and/or lidocaine-induced hepatocellular toxicity in a clinical setting. The first objective of this human study was to explore the effect of the liposuction procedure on electrolyte balance. The second objective was to determine whether elevated plasma aminotransferase levels were observed subsequent to large-volume liposuction. Five female volunteers underwent three-stage, ultrasound-assisted liposuction. Blood samples were collected perioperatively. Plasma levels of sodium, potassium, venous carbon dioxide, blood urea nitrogen, chloride, and creatinine were determined. Liver function analyte levels were measured, including albumin, total protein, aspartate aminotransferase, alanine transaminase, alkaline phosphatase, gamma-glutamyl transpeptidase, and total bilirubin. To further define intracellular enzyme release, creatine kinase levels were measured. Mild hyponatremia was evident postoperatively (134 to 136 mmol/liter) in four patients. Hypokalemia was evident intraoperatively in all subjects (mean ± SEM, 3.3 ± 0.16 mmol/liter; range, 3.0 to 3.4 mmol/liter). Hypoalbuminemia and hypoproteinemia were observed throughout the study (baseline: 2.9 ± 0.2 g/dl; range, 2.6 to 3.5 g/dl), decreasing by 10 to 40 percent 24 hours postoperatively (2.0 ± 0.2 g/dl; range, 1.7 to 2.1 g/dl). Aspartate aminotransferase, alanine transaminase, and creatine kinase levels were significantly elevated after the procedure (190 ± 47.1 U/liter, 50 ± 7.7 U/liter, and 11,219 ± 2556.7 U/liter, respectively) (p < 0.01). Release of antidiuretic hormone and even mildly hypotonic intravenous fluid

  1. Simulation of large acceptance LINAC for muons

    SciTech Connect

    Miyadera, H; Kurennoy, S; Jason, A J

    2010-01-01

    There has been a recent need for muon accelerators not only for future Neutrino Factories and Muon Colliders but also for other applications in industry and medicine. We carried out simulations of a large-acceptance muon linac based on a new "mixed buncher/acceleration" concept. The linac can accept pions/muons from a production target with large acceptance and accelerate muons without any beam cooling, which makes the initial section of the muon-linac system very compact. The linac has a high impact on the Neutrino Factory and Muon Collider (NF/MC) scenario, since the 300-m injector section can be replaced by a muon linac of only 10-m length. The current design of the linac consists of the following components: an independent 805-MHz cavity structure with a 6- or 8-cm-radius aperture window; injection of a broad range of pion/muon energies, 10-100 MeV, and acceleration to 150-200 MeV. Further acceleration of the muon beam is relatively easy since the beam is already bunched.

  2. Large-scale Intelligent Transportation Systems simulation

    SciTech Connect

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large-scale problems. A novel feature of our design is that vehicles will be represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.
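    The "vehicles as autonomous processes" idea is easy to picture with a toy agent loop; everything below (the three-node network, greedy routing on TMC link times, the congestion feedback) is a hypothetical illustration, not the prototype's actual code:

        class Vehicle:
            # An autonomous agent: picks its own route, reacting to the
            # link travel times broadcast by the Traffic Management Center.
            def __init__(self, origin, dest):
                self.node, self.dest = origin, dest

            def step(self, link_times, neighbors):
                if self.node == self.dest:
                    return
                # Greedy local choice: take the currently fastest outgoing link
                self.node = min(neighbors[self.node],
                                key=lambda n: link_times[self.node, n])

        neighbors = {0: [1, 2], 1: [2]}                    # tiny road network
        link_times = {(0, 1): 5.0, (0, 2): 9.0, (1, 2): 3.0}
        fleet = [Vehicle(0, 2) for _ in range(4)]
        for _ in range(3):                                 # simulation ticks
            for v in fleet:
                v.step(link_times, neighbors)
            # TMC-style feedback: a heavily used link gets slower
            link_times[0, 1] += 0.5 * sum(v.node == 1 for v in fleet)

    In the actual prototype each such agent would be an independent process, which is what makes the design map naturally onto MPP hardware.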

  3. Sensitivity technologies for large scale simulation.

    SciTech Connect

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large-scale optimization, uncertainty quantification, reduced-order modeling, and error estimation. Our research focused on developing tools, algorithms, and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing code and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time-domain decomposition algorithms, and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady-state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady-state internal flows subject to convection-diffusion. Real-time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint-based transient solution. In addition, we investigated time-domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first
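    The payoff of the adjoint approach is easiest to see on a steady linear model problem: direct differentiation costs one extra linear solve per parameter, whereas a single adjoint solve delivers the sensitivity of one objective to every parameter. A generic sketch of that bookkeeping (not taken from the report's codes):

        import numpy as np

        def adjoint_sensitivities(A, dA_dp, b, c):
            """dJ/dp for the objective J = c^T u with state equation A(p) u = b.

            dA_dp is a list of dA/dp_i matrices, one per parameter p_i.
            """
            u = np.linalg.solve(A, b)         # one forward (state) solve
            lam = np.linalg.solve(A.T, c)     # one adjoint solve, total
            # Differentiating A u = b gives du/dp_i = -A^{-1} (dA/dp_i) u,
            # hence dJ/dp_i = -lam^T (dA/dp_i) u for every parameter at once.
            return np.array([-(lam @ (dAi @ u)) for dAi in dA_dp])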

  4. Large eddy simulations of laminar separation bubble

    NASA Astrophysics Data System (ADS)

    Cadieux, Francois

    The flow over blades and airfoils at moderate angles of attack and Reynolds numbers ranging from ten thousand to a few hundred thousand undergoes separation due to the adverse pressure gradient generated by surface curvature. In many cases, the separated shear layer then transitions to turbulence and reattaches, closing off a recirculation region -- the laminar separation bubble. To avoid body-fitted mesh generation problems and numerical issues, an equivalent problem for flow over a flat plate is formulated by imposing boundary conditions that lead to a pressure distribution and Reynolds number that are similar to those on airfoils. Spalart & Strelets (2000) tested a number of Reynolds-averaged Navier-Stokes (RANS) turbulence models for a laminar separation bubble flow over a flat plate. Although results with the Spalart-Allmaras turbulence model were encouraging, none of the turbulence models tested reliably recovered time-averaged direct numerical simulation (DNS) results. The purpose of this work is to assess whether large eddy simulation (LES) can more accurately and reliably recover DNS results using drastically reduced resolution -- on the order of 1% of DNS resolution, which is commonly achievable for LES of turbulent channel flows. LES of a laminar separation bubble flow over a flat plate are performed using a compressible sixth-order finite-difference code and two incompressible pseudo-spectral Navier-Stokes solvers at resolutions corresponding to approximately 3% and 1% of the chosen DNS benchmark by Spalart & Strelets (2000). The finite-difference solver is found to be dissipative due to the use of a stability-enhancing filter. Its numerical dissipation is quantified and found to be comparable to the average eddy viscosity of the dynamic Smagorinsky model, making it difficult to separate the effects of filtering versus those of explicit subgrid-scale modeling. The negligible numerical dissipation of the pseudo-spectral solvers allows an unambiguous

  5. Effect of large volume paracentesis on plasma volume--a cause of hypovolemia

    SciTech Connect

    Kao, H.W.; Rakov, N.E.; Savage, E.; Reynolds, T.B.

    1985-05-01

    Large volume paracentesis, while effectively relieving symptoms in patients with tense ascites, has been generally avoided due to reports of complications attributed to an acute reduction in intravascular volume. Measurements of plasma volume in these subjects have been by indirect methods and have not uniformly confirmed hypovolemia. We have prospectively evaluated 18 patients (20 paracenteses) with tense ascites and peripheral edema due to chronic liver disease undergoing 5-liter paracentesis for relief of symptoms. Plasma volume pre- and postparacentesis was assessed by a ¹²⁵I-labeled human serum albumin dilution technique as well as by the change in hematocrit and postural blood pressure difference. No significant change in serum sodium, urea nitrogen, hematocrit or postural systolic blood pressure difference was noted at 24 or 48 hr after paracentesis. Serum creatinine at 24 hr after paracentesis was unchanged, but a small yet statistically significant increase was noted at 48 hr postparacentesis. Plasma volume changed -2.7% (n = 6, not statistically significant) during the first 24 hr and -2.8% (n = 12, not statistically significant) during the 0- to 48-hr period. No complications from paracentesis were noted. These results suggest that 5-liter paracentesis for relief of symptoms is safe in patients with tense ascites and peripheral edema from chronic liver disease.

  6. Flight Simulation Model Exchange. Volume 2; Appendices

    NASA Technical Reports Server (NTRS)

    Murri, Daniel G.; Jackson, E. Bruce

    2011-01-01

    The NASA Engineering and Safety Center Review Board sponsored an assessment of the draft Standard, Flight Dynamics Model Exchange Standard, BSR/ANSI-S-119-201x (S-119) that was conducted by simulation and guidance, navigation, and control engineers from several NASA Centers. The assessment team reviewed the conventions and formats spelled out in the draft Standard and the actual implementation of two example aerodynamic models (a subsonic F-16 and the HL-20 lifting body) encoded in the Extensible Markup Language grammar. During the implementation, the team kept records of lessons learned and provided feedback to the American Institute of Aeronautics and Astronautics Modeling and Simulation Technical Committee representative. This document contains the appendices to the main report.

  7. Flight Simulation Model Exchange. Volume 1

    NASA Technical Reports Server (NTRS)

    Murri, Daniel G.; Jackson, E. Bruce

    2011-01-01

    The NASA Engineering and Safety Center Review Board sponsored an assessment of the draft Standard, Flight Dynamics Model Exchange Standard, BSR/ANSI-S-119-201x (S-119) that was conducted by simulation and guidance, navigation, and control engineers from several NASA Centers. The assessment team reviewed the conventions and formats spelled out in the draft Standard and the actual implementation of two example aerodynamic models (a subsonic F-16 and the HL-20 lifting body) encoded in the Extensible Markup Language grammar. During the implementation, the team kept records of lessons learned and provided feedback to the American Institute of Aeronautics and Astronautics Modeling and Simulation Technical Committee representative. This document contains the results of the assessment.

  8. Volumetric leak detection in large underground storage tanks. Volume 1

    SciTech Connect

    Starr, J.W.; Wise, R.F.; Maresca, J.W.

    1991-08-01

    A set of experiments was conducted to determine whether volumetric leak detection systems presently used to test underground storage tanks (USTs) up to 38,000 L (10,000 gal) in capacity could meet EPA's regulatory standards for tank tightness and automatic tank gauging systems when used to test tanks up to 190,000 L (50,000 gal) in capacity. The experiments, conducted on two partially filled 190,000-L (50,000-gal) USTs at Griffiss Air Force Base in upstate New York during late August 1990, showed that a system's performance in large tanks depends primarily on the accuracy of the temperature compensation, which is inversely proportional to the volume of product in the tank. Errors in temperature compensation that were negligible in tests in small tanks were important in large tanks. The experiments further suggest that a multiple-test strategy is also required.
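    The volume dependence comes down to arithmetic: thermal expansion of the stored product produces an apparent flow rate beta*V*dT/dt, so the same uncompensated temperature drift yields five times the apparent leak in a 190,000-L tank as in a 38,000-L one. A sketch, with an expansion coefficient typical of gasoline assumed purely for illustration:

        def apparent_leak_rate_l_per_h(volume_l, drift_c_per_h, beta=0.0012):
            # Thermal expansion masquerading as a leak: dV/dt = beta * V * dT/dt
            return beta * volume_l * drift_c_per_h

        # A 0.001 C/h compensation error that is negligible in a 38,000-L tank
        # scales directly with product volume in a 190,000-L tank:
        for v in (38_000, 190_000):
            print(v, apparent_leak_rate_l_per_h(v, 0.001))  # 0.0456 vs 0.228 L/h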

  9. Large Eddy Simulation of Powered Fontan Hemodynamics

    PubMed Central

    Delorme, Y.; Anupindi, K.; Kerlo, A.E.; Shetty, D.; Rodefeld, M.; Chen, J.; Frankel, S.

    2012-01-01

    Children born with univentricular heart disease typically must undergo three open heart surgeries within the first 2–3 years of life to eventually establish the Fontan circulation. In that case the single working ventricle pumps oxygenated blood to the body and blood returns to the lungs flowing passively through the Total Cavopulmonary Connection (TCPC) rather than being actively pumped by a subpulmonary ventricle. The TCPC is a direct surgical connection between the superior and inferior vena cava and the left and right pulmonary arteries. We have postulated that a mechanical pump inserted into this circulation providing a 3–5 mmHg pressure augmentation will reestablish bi-ventricular physiology serving as a bridge-to-recovery, bridge-to-transplant or destination therapy as a “biventricular Fontan” circulation. The Viscous Impeller Pump (VIP) has been proposed by our group as such an assist device. It is situated in the center of the 4-way TCPC intersection and spins pulling blood from the vena cavae and pushing it into the pulmonary arteries. We hypothesized that Large Eddy Simulation (LES) using high-order numerical methods is needed to capture unsteady powered and unpowered Fontan hemodynamics. Inclusion of a mechanical pump into the CFD further complicates matters due to the need to account for rotating machinery. In this study, we focus on predictions from an in-house high-order LES code (WenoHemo™) for unpowered and VIP-powered idealized TCPC hemodynamics with quantitative comparisons to Stereoscopic Particle Imaging Velocimetry (SPIV) measurements. Results are presented for both instantaneous flow structures and statistical data. Simulations show good qualitative and quantitative agreement with measured data. PMID:23177085

  10. Large eddy simulation of powered Fontan hemodynamics.

    PubMed

    Delorme, Y; Anupindi, K; Kerlo, A E; Shetty, D; Rodefeld, M; Chen, J; Frankel, S

    2013-01-18

    Children born with univentricular heart disease typically must undergo three open heart surgeries within the first 2-3 years of life to eventually establish the Fontan circulation. In that case the single working ventricle pumps oxygenated blood to the body and blood returns to the lungs flowing passively through the Total Cavopulmonary Connection (TCPC) rather than being actively pumped by a subpulmonary ventricle. The TCPC is a direct surgical connection between the superior and inferior vena cava and the left and right pulmonary arteries. We have postulated that a mechanical pump inserted into this circulation providing a 3-5 mmHg pressure augmentation will reestablish bi-ventricular physiology serving as a bridge-to-recovery, bridge-to-transplant or destination therapy as a "biventricular Fontan" circulation. The Viscous Impeller Pump (VIP) has been proposed by our group as such an assist device. It is situated in the center of the 4-way TCPC intersection and spins pulling blood from the vena cavae and pushing it into the pulmonary arteries. We hypothesized that Large Eddy Simulation (LES) using high-order numerical methods is needed to capture unsteady powered and unpowered Fontan hemodynamics. Inclusion of a mechanical pump into the CFD further complicates matters due to the need to account for rotating machinery. In this study, we focus on predictions from an in-house high-order LES code (WenoHemo(TM)) for unpowered and VIP-powered idealized TCPC hemodynamics with quantitative comparisons to Stereoscopic Particle Imaging Velocimetry (SPIV) measurements. Results are presented for both instantaneous flow structures and statistical data. Simulations show good qualitative and quantitative agreement with measured data.

  11. Large scale simulations of Brownian suspensions

    NASA Astrophysics Data System (ADS)

    Viera, Marc Nathaniel

    Particle suspensions occur in a wide variety of natural and engineered materials. Some examples are colloids, polymers, paints, and slurries. These materials exhibit complex behavior owing to the forces which act among the particles and are transmitted through the fluid medium. Depending on the application, particle sizes range from large macroscopic particles of 100 μm down to smaller colloidal particles in the range of 10 nm to 1 μm. Particles of this size interact through interparticle forces such as electrostatic and van der Waals forces, as well as hydrodynamic forces transmitted through the fluid medium. Additionally, the particles are subjected to random thermal fluctuations in the fluid, giving rise to Brownian motion. The central objective of our research is to develop efficient numerical algorithms for the large-scale dynamic simulation of particle suspensions. While previous methods have incurred a computational cost of O(N^3), where N is the number of particles, we have developed a novel algorithm capable of solving this problem in O(N ln N) operations. This has allowed us to perform dynamic simulations with up to 64,000 particles and Monte Carlo realizations of up to 1 million particles. Our algorithm follows a Stokesian dynamics formulation by evaluating many-body hydrodynamic interactions using a far-field multipole expansion combined with a near-field lubrication correction. The breakthrough O(N ln N) scaling is obtained by employing a Particle-Mesh-Ewald (PME) approach whereby near-field interactions are evaluated directly and far-field interactions are evaluated using a grid-based velocity computed with FFTs. This approach is readily extended to include the effects of Brownian motion. For interacting particles, the fluctuation-dissipation theorem requires that the individual Brownian forces satisfy a correlation based on the N-body resistance tensor R. The accurate modeling of these forces requires the computation of a matrix square root R^{1/2} for matrices up
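    The fluctuation-dissipation requirement quoted above fits in a few lines; the dense Cholesky factorization used here for clarity is precisely the O(N^3) step that the fast methods of this work must avoid (iterative square-root approximations are the usual remedy):

        import numpy as np

        def brownian_forces(R, kT, dt, rng):
            # Correlated Brownian forces with covariance <F F^T> = (2 kT / dt) R,
            # generated from a matrix square root of the resistance tensor R.
            # Cholesky is one valid square root, but costs O(N^3) for dense R.
            L = np.linalg.cholesky((2.0 * kT / dt) * R)
            return L @ rng.standard_normal(R.shape[0])

        # usage sketch (values illustrative):
        # F = brownian_forces(R, kT=4.1e-21, dt=1e-4, rng=np.random.default_rng(0))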

  12. Analysis of errors occurring in large eddy simulation.

    PubMed

    Geurts, Bernard J

    2009-07-28

    We analyse the effect of second- and fourth-order accurate central finite-volume discretizations on the outcome of large eddy simulations of homogeneous, isotropic, decaying turbulence at an initial Taylor-Reynolds number Re_λ = 100. We determine the implicit filter that is induced by the spatial discretization and show that a higher order discretization also induces a higher order filter, i.e. a low-pass filter that keeps a wider range of flow scales virtually unchanged. The effectiveness of the implicit filtering is correlated with the optimal refinement strategy as observed in an error-landscape analysis based on Smagorinsky's subfilter model. As a point of reference, a finite-volume method that is second-order accurate for both the convective and the viscous fluxes in the Navier-Stokes equations is used. We observe that changing to a fourth-order accurate convective discretization leads to a higher value of the Smagorinsky coefficient C_S required to achieve minimal total error at given resolution. Conversely, changing only the viscous flux discretization to fourth-order accuracy implies that optimal simulation results are obtained at lower values of C_S. Finally, a fully fourth-order discretization yields an optimal C_S that is slightly lower than the reference fully second-order method.
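    The induced low-pass filter can be visualized through the modified wavenumber of each stencil: where the effective k' falls below the true k, the scheme attenuates that scale, and the fourth-order scheme tracks k over a wider range. A sketch for standard central differences, as a simplified proxy for the finite-volume discretizations analysed in the paper:

        import numpy as np

        def modified_wavenumber(k_dx, order):
            # k'dx for central differences; k'/k < 1 marks implicitly filtered scales
            if order == 2:
                return np.sin(k_dx)
            if order == 4:
                return (8.0 * np.sin(k_dx) - np.sin(2.0 * k_dx)) / 6.0
            raise ValueError("order must be 2 or 4")

        k_dx = np.linspace(0.2, np.pi, 6)
        print(modified_wavenumber(k_dx, 2) / k_dx)   # decays early
        print(modified_wavenumber(k_dx, 4) / k_dx)   # stays near 1 longer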

  13. Improved HF Data Network Simulator. Volume 1

    DTIC Science & Technology

    1993-07-01

    ... flares - may cause HF blackouts, as can large terrestrial events such as volcanic eruptions and atomic explosions. The ionosphere exhibits a remarkable ... of the earth interacts with the solar wind, causing rapid changes in the ionosphere that are made visible in part by the aurora borealis. The effects ... backscatter - unpredictable changes in refraction from sporadic-E and F layers - excess path delays caused by non-great-circle modes propagating via ...

  14. Ultra-rapid formation of large volumes of evolved magma

    NASA Astrophysics Data System (ADS)

    Michaut, C.; Jaupart, C.

    2006-10-01

    We discuss evidence for, and evaluate the consequences of, the growth of magma reservoirs by small increments of thin (≃1-2 m) sills. For such thin units, cooling proceeds faster than the nucleation and growth of crystals, which only allows a small amount of crystallization and leads to the formation of large quantities of glass. The heat balance equation for kinetically controlled crystallization is solved numerically for a range of sill thicknesses, magma injection rates and crustal emplacement depths. Successive injections lead to the accumulation of poorly crystallized chilled magma with the properties of a solid. Temperatures increase gradually with each injection until they become large enough to allow a late phase of crystal nucleation and growth. Crystallization and latent heat release work in a positive feedback loop, leading to catastrophic heating of the magma pile, typically by 200 °C in a few decades. Large volumes of evolved melt are made available in a short time. The time to the catastrophic heating event varies as Q^-2, where Q is the average magma injection rate, and takes values in the range of 10^5-10^6 yr for typical geological magma production rates. With this mechanism, storage of large quantities of magma beneath an active volcanic center may escape detection by seismic methods.

  15. Cardiovascular simulator improvement: pressure versus volume loop assessment.

    PubMed

    Fonseca, Jeison; Andrade, Aron; Nicolosi, Denys E C; Biscegli, José F; Leme, Juliana; Legendre, Daniel; Bock, Eduardo; Lucchi, Julio Cesar

    2011-05-01

    This article presents improvements to a physical cardiovascular simulator (PCS) system. The intraventricular pressure versus intraventricular volume (PxV) loop was obtained to evaluate the performance of a pulsatile chamber mimicking the human left ventricle. The PxV loop shows heart contractility and is normally used to evaluate heart performance. In many heart diseases, the stroke volume decreases because of low heart contractility. This pathological situation must be simulated by the PCS in order to evaluate the assistance provided by a ventricular assist device (VAD). The PCS system is automatically controlled by a computer and is an auxiliary tool for the development of VAD control strategies. The PCS system follows a Windkessel model in which lumped parameters are used for cardiovascular system analysis. Peripheral resistance, arterial compliance, and fluid inertance are simulated. The simulator has an actuator with a roller screw and brushless direct current motor, and the stroke volume is regulated by the actuator displacement. Internal pressure and volume measurements are monitored to obtain the PxV loop. Left chamber internal pressure is obtained directly by a pressure transducer; internal volume, however, is obtained indirectly by a linear variable differential transformer, which senses the diaphragm displacement. Correlations between the internal volume and diaphragm position are made. LabVIEW integrates these signals and shows the pressure versus internal volume loop. The results obtained from the PCS system show PxV loops at different ventricle elastances, making possible the simulation of pathological situations. A preliminary test with a pulsatile VAD attached to the PCS system was made.

  16. Large Eddy Simulation of Transitional Boundary Layer

    NASA Astrophysics Data System (ADS)

    Sayadi, Taraneh; Moin, Parviz

    2009-11-01

    A sixth-order compact finite difference code is employed to investigate compressible Large Eddy Simulation (LES) of subharmonic transition of a spatially developing zero-pressure-gradient boundary layer at Ma = 0.2. The computational domain extends from Re_x = 10^5, where laminar blowing and suction excites the most unstable fundamental and subharmonic modes, to the fully turbulent stage at Re_x = 10.1x10^5. Numerical sponges are used in the neighborhood of external boundaries to provide non-reflective conditions. Our interest lies in the performance of the dynamic subgrid-scale (SGS) model [1] in the transition process. It is observed that in the early stages of transition the eddy viscosity is much smaller than the physical viscosity. As a result, the amplitudes of selected harmonics are in very good agreement with the experimental data [2]. The model's contribution gradually increases during the last stages of the transition process, and the dynamic eddy viscosity becomes fully active and dominant in the turbulent region. Consistent with this trend, the skin friction coefficient versus Re_x diverges from its laminar profile and converges to the turbulent profile after an overshoot. 1. Moin P. et al., Phys. Fluids A, 3(11), 2746-2757, 1991. 2. Kachanov Yu. S. et al., JFM, 138, 209-247, 1983.
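    For reference, the SGS eddy viscosity being compared with the physical viscosity above has the Smagorinsky form nu_t = (C_s Delta)^2 |S|; the dynamic procedure of [1] computes the coefficient on the fly from a test filter rather than fixing it, as in this constant-coefficient sketch:

        import numpy as np

        def smagorinsky_nu_t(grad_u, delta, C_s=0.16):
            # grad_u: (..., 3, 3) resolved velocity-gradient tensor field
            S = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))    # strain-rate tensor
            S_mag = np.sqrt(2.0 * np.einsum('...ij,...ij->...', S, S))
            return (C_s * delta)**2 * S_mag    # vanishes where the flow is laminar-like

    In laminar and early-transitional regions |S| fluctuations are weak, which is consistent with the observation above that nu_t stays well below the molecular viscosity until late in transition.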

  17. Turbulence topologies predicted using large eddy simulations

    NASA Astrophysics Data System (ADS)

    Wang, Bing-Chen; Bergstrom, Donald J.; Yin, Jing; Yee, Eugene

    In this paper, turbulence topologies related to the invariants of the resolved velocity gradient and strain rate tensors are studied based on large eddy simulation. The numerical results presented in the paper were obtained using two dynamic models, namely, the conventional dynamic model of Lilly and a recently developed dynamic nonlinear subgrid scale (SGS) model. In contrast to most of the previous research investigations which have mainly focused on isotropic turbulence, the present study examines the influence of near-wall anisotropy on the flow topologies. The SGS effect on the so-called SGS dissipation of the discriminant is examined and it is shown that the SGS stress contributes to the deviation of the flow topology of real turbulence from that of the ideal restricted Euler flow. The turbulence kinetic energy (TKE) transfer between the resolved and subgrid scales of motion is studied, and the forward and backward scatters of TKE are quantified in the invariant phase plane. Some interesting phenomenological results have also been obtained, including a wing-shaped contour pattern for the density of the resolved enstrophy generation and the near-wall dissipation shift of the peak location (mode) in the joint probability density function of the invariants of the resolved strain rate tensor. The newly observed turbulence phenomenologies are believed to be important and an effort has been made to explain them on an analytical basis.

  18. Large-Eddy Simulation of Subsonic Jets

    NASA Astrophysics Data System (ADS)

    Vuorinen, Ville; Wehrfritz, Armin; Yu, Jingzhou; Kaario, Ossi; Larmi, Martti; Boersma, Bendiks Jan

    2011-12-01

    The present study deals with the development and validation of a fully explicit, compressible Runge-Kutta-4 (RK4) Navier-Stokes solver in the open-source CFD programming environment OpenFOAM. The background motivation is to shift towards an explicit density-based solution strategy and thereby avoid using the pressure-based algorithms currently proposed in the standard OpenFOAM release for Large-Eddy Simulation (LES). This shift is considered necessary in strongly compressible flows when Ma > 0.5. Our application of interest is related to the premixing stage in direct-injection gas engines, where high injection pressures are typically utilized. First, the developed flow solver is discussed and validated. Then, the implementation of subsonic inflow conditions using a forcing region in combination with a simplified nozzle geometry is discussed and validated. After this, LES of mixing in compressible, round jets at Ma = 0.3, 0.5 and 0.65 are carried out. The Reynolds numbers of the jets correspond to Re = 6000, 10000 and 13000, respectively. Results for two meshes are presented. The results imply that the present solver produces turbulent structures, resolves a range of turbulent eddy frequencies, and gives mesh-independent results within satisfactory limits for mean flow and turbulence statistics.

  19. REXOR Rotorcraft Simulation Model. Volume 2. Computer Implementation

    DTIC Science & Technology

    1976-07-01

    ... Volume II of three volumes. ... nonlinear simulation called REXOR, and is divided into three volumes. The first volume is a development of rotorcraft mechanics and aerodynamics. ... the computation nucleus of REXOR. ACCEL gathers the information to form the generalized mass and force matrices, and controls the acceleration update.

  20. SUSY’s Ladder: Reframing sequestering at Large Volume

    SciTech Connect

    Reece, Matthew; Xue, Wei

    2016-04-07

    Theories with approximate no-scale structure, such as the Large Volume Scenario, have a distinctive hierarchy of multiple mass scales in between TeV gaugino masses and the Planck scale, which we call SUSY's Ladder. This is a particular realization of Split Supersymmetry in which the same small parameter suppresses gaugino masses relative to scalar soft masses, scalar soft masses relative to the gravitino mass, and the UV cutoff or string scale relative to the Planck scale. This scenario has many phenomenologically interesting properties, and can avoid dangers including the gravitino problem, flavor problems, and the moduli-induced LSP problem that plague other supersymmetric theories. We study SUSY's Ladder using a superspace formalism that makes the mysterious cancelations in previous computations manifest. This opens the possibility of a consistent effective field theory understanding of the phenomenology of these scenarios, based on power-counting in the small ratio of string to Planck scales. We also show that four-dimensional theories with approximate no-scale structure enforced by a single volume modulus arise only from two special higher-dimensional theories: five-dimensional supergravity and ten-dimensional type IIB supergravity. As a result, this gives a phenomenological argument in favor of ten dimensional ultraviolet physics which is different from standard arguments based on the consistency of superstring theory.

  1. SUSY’s Ladder: Reframing sequestering at Large Volume

    DOE PAGES

    Reece, Matthew; Xue, Wei

    2016-04-07

    Theories with approximate no-scale structure, such as the Large Volume Scenario, have a distinctive hierarchy of multiple mass scales in between TeV gaugino masses and the Planck scale, which we call SUSY's Ladder. This is a particular realization of Split Supersymmetry in which the same small parameter suppresses gaugino masses relative to scalar soft masses, scalar soft masses relative to the gravitino mass, and the UV cutoff or string scale relative to the Planck scale. This scenario has many phenomenologically interesting properties, and can avoid dangers including the gravitino problem, flavor problems, and the moduli-induced LSP problem that plague other supersymmetric theories. We study SUSY's Ladder using a superspace formalism that makes the mysterious cancelations in previous computations manifest. This opens the possibility of a consistent effective field theory understanding of the phenomenology of these scenarios, based on power-counting in the small ratio of string to Planck scales. We also show that four-dimensional theories with approximate no-scale structure enforced by a single volume modulus arise only from two special higher-dimensional theories: five-dimensional supergravity and ten-dimensional type IIB supergravity. As a result, this gives a phenomenological argument in favor of ten dimensional ultraviolet physics which is different from standard arguments based on the consistency of superstring theory.

  2. High density three-dimensional localization microscopy across large volumes

    PubMed Central

    Legant, Wesley R.; Shao, Lin; Grimm, Jonathan B.; Brown, Timothy A.; Milkie, Daniel E.; Avants, Brian B.; Lavis, Luke D.; Betzig, Eric

    2016-01-01

    Extending three-dimensional (3D) single-molecule localization microscopy away from the coverslip and into thicker specimens will greatly broaden its biological utility. However, localizing molecules in 3D with high precision in such samples, while simultaneously achieving the extreme labeling densities required for high resolution of densely crowded structures, is challenging due to the limitations both of conventional imaging modalities and of conventional labeling techniques. Here, we combine lattice light sheet microscopy with newly developed, freely diffusing, cell-permeable chemical probes with targeted affinity towards either DNA, intracellular membranes, or the plasma membrane. We use this combination to perform high-localization-precision, ultra-high-labeling-density, multicolor localization microscopy in samples up to 20 microns thick, including dividing cells and the neuromast organ of a zebrafish embryo. We also demonstrate super-resolution correlative imaging with protein-specific photoactivatable fluorophores, providing a mutually compatible, single-platform alternative to correlative light-electron microscopy over large volumes. PMID:26950745

  3. Large volume water sprays for dispersing warm fogs

    NASA Astrophysics Data System (ADS)

    Keller, V. W.; Anderson, B. J.; Burns, R. A.; Lala, G. G.; Meyer, M. B.

    A new method for dispersing warm fogs that impede visibility and alter schedules is described. The method uses large-volume recycled water sprays to create curtains of falling drops through which the fog is processed by the ambient wind and spray-induced air flow; the fog droplets are removed by coalescence/rainout. The efficiency of this fog droplet removal process depends on the size spectra of the spray drops, and the optimum spray drop size is calculated to be between 0.3 and 1.0 mm in diameter. Water spray tests were conducted in order to determine the drop size spectra and temperature response of sprays produced by commercially available fire-fighting nozzles, and nozzle array tests were utilized to study air flow patterns and the thermal properties of the overall system. The initial test data reveal that the fog-dispersal procedure is effective.

  4. Large space telescope, phase A. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Phase A study of the Large Space Telescope (LST) is reported. The study defines an LST concept based on the broad mission guidelines provided by the Office of Space Science (OSS), the scientific requirements developed by OSS with the scientific community, and an understanding of long-range NASA planning current at the time the study was performed. The LST is an unmanned astronomical observatory facility, consisting of an optical telescope assembly (OTA), scientific instrument package (SIP), and a support systems module (SSM). The report consists of five volumes; it describes the constraints and trade-off analyses that were performed to arrive at a reference design for each system and for the overall LST configuration. A low-cost design approach was followed in the Phase A study. This resulted in the use of standard spacecraft hardware, the provision for maintenance at the black-box level, growth potential in systems designs, and the sharing of shuttle maintenance flights with other payloads.

  5. Large volume water sprays for dispersing warm fogs

    NASA Technical Reports Server (NTRS)

    Keller, V. W.; Anderson, B. J.; Burns, R. A.; Lala, G. G.; Meyer, M. B.

    1986-01-01

    A new method for dispersing warm fogs that impede visibility and alter schedules is described. The method uses large-volume recycled water sprays to create curtains of falling drops through which the fog is processed by the ambient wind and spray-induced air flow; the fog droplets are removed by coalescence/rainout. The efficiency of this fog droplet removal process depends on the size spectra of the spray drops, and the optimum spray drop size is calculated to be between 0.3 and 1.0 mm in diameter. Water spray tests were conducted in order to determine the drop size spectra and temperature response of sprays produced by commercially available fire-fighting nozzles, and nozzle array tests were utilized to study air flow patterns and the thermal properties of the overall system. The initial test data reveal that the fog-dispersal procedure is effective.

  6. Multisystem organ failure after large volume injection of castor oil.

    PubMed

    Smith, Silas W; Graber, Nathan M; Johnson, Rudolph C; Barr, John R; Hoffman, Robert S; Nelson, Lewis S

    2009-01-01

    We report a case of multisystem organ failure after large volume subcutaneous injection of castor oil for cosmetic enhancement. An unlicensed practitioner injected 500 mL of castor oil bilaterally to the hips and buttocks of a 28-year-old male to female transsexual. Immediate local pain and erythema were followed by abdominal and chest pain, emesis, headache, hematuria, jaundice, and tinnitus. She presented to an emergency department 12 hours postinjection. Persistently hemolyzed blood samples complicated preliminary laboratory analysis. She rapidly deteriorated despite treatment and developed fever, tachycardia, hemolysis, thrombocytopenia, hepatitis, respiratory distress, and anuric renal failure. An infectious diseases evaluation was negative. After intensive supportive care, including mechanical ventilation and hemodialysis, she was discharged 11 days later, requiring dialysis for an additional 1.5 months. Castor oil absorption was inferred from recovery of the Ricinus communis biomarker, ricinine, in the patient's urine (41 ng/mL). Clinicians should anticipate multiple complications after unapproved methods of cosmetic enhancement.

  7. Striped Bass, Morone saxatilis, egg incubation in large volume jars

    USGS Publications Warehouse

    Harper, C.J.; Wrege, B.M.; Jeffery, Isely J.

    2010-01-01

    The standard McDonald jar was compared with a large-volume jar for striped bass, Morone saxatilis, egg incubation. The McDonald jar measured 16 cm in diameter by 45 cm in height and had a volume of 6 L. The experimental jar measured 0.4 m in diameter by 1.3 m in height and had a volume of 200 L. The hypothesis is that there is no difference in percent survival of fry hatched in experimental jars compared with McDonald jars. Striped bass brood fish were collected from the Coosa River and spawned using the dry-spawn method of fertilization. Four McDonald jars were stocked with approximately 150 g of eggs each. Post-hatch survival was estimated at 48, 96, and 144 h. Stocking rates resulted in an average egg loading rate (±1 SE) in McDonald jars of 21.9 ± 0.03 eggs/mL and in experimental jars of 10.9 ± 0.57 eggs/mL. The major finding of this study was that average fry survival was 37.3 ± 4.49% for McDonald jars and 34.2 ± 3.80% for experimental jars. Although survival in experimental jars was slightly less than in McDonald jars, the effect of container volume on survival to 48 h (F = 6.57; df = 1,5; P > 0.05), 96 h (F = 0.02; df = 1,4; P > 0.89), and 144 h (F = 3.50; df = 1,4; P > 0.13) was not statistically significant. Mean survival between replicates ranged from 14.7 to 60.1% in McDonald jars and from 10.1 to 54.4% in experimental jars. No effect of initial stocking rate on survival (t = 0.06; df = 10; P > 0.95) was detected. Experimental jars allowed for incubation of a greater number of eggs in less than half the floor space of McDonald jars. As hatchery production is often limited by space or water supply, experimental jars offer an alternative to extending spawning activities, thereby reducing labor and operating costs. As survival was similar to McDonald jars, the experimental jar is suitable for striped bass egg incubation. © World Aquaculture Society 2010.

  8. Monte Carlo simulation of large electron fields

    PubMed Central

    Faddegon, Bruce A; Perl, Joseph; Asai, Makoto

    2010-01-01

    Two Monte Carlo systems, EGSnrc and Geant4, the latter with two different “physics lists,” were used to calculate dose distributions in large electron fields used in radiotherapy. Source and geometry parameters were adjusted to match calculated results to measurement. Both codes were capable of accurately reproducing the measured dose distributions of the 6 electron beams available on the accelerator. Depth penetration matched the average measured with a diode and parallel-plate chamber to 0.04 cm or better. Calculated depth dose curves agreed to 2% with diode measurements in the buildup region, although for the lower beam energies there was a discrepancy of up to 5% in this region when calculated results are compared to parallel-plate measurements. Dose profiles at the depth of maximum dose matched to 2-3% in the central 25 cm of the field, corresponding to the field size of the largest applicator. A 4% match was obtained outside the central region. The discrepancy observed in the bremsstrahlung tail in published results that used EGS4 is no longer evident. Simulations with the different codes and physics lists used different source energies, incident beam angles, thicknesses of the primary foils, and distance between the primary and secondary foil. The true source and geometry parameters were not known with sufficient accuracy to determine which parameter set, including the energy of the source, was closest to the truth. These results underscore the requirement for experimental benchmarks of depth penetration and electron scatter for beam energies and foils relevant to radiotherapy. PMID:18296775

  9. Simulating Operation of a Large Turbofan Engine

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Frederick, Dean K.; DeCastro, Jonathan

    2008-01-01

    The Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) is a computer program for simulating transient operation of a commercial turbofan engine that can generate as much as 90,000 lb (0.4 MN) of thrust. It includes a power-management system that enables simulation of open- or closed-loop engine operation over a wide range of thrust levels throughout the full range of flight conditions. C-MAPSS provides the user with a set of tools for performing open- and closed-loop transient simulations and comparison of linear and non-linear models throughout its operating envelope, in an easy-to-use graphical environment.

  10. Large eddy simulation of soot evolution in an aircraft combustor

    NASA Astrophysics Data System (ADS)

    Mueller, Michael E.; Pitsch, Heinz

    2013-11-01

    An integrated kinetics-based Large Eddy Simulation (LES) approach for soot evolution in turbulent reacting flows is applied to the simulation of a Pratt & Whitney aircraft gas turbine combustor, and the results are analyzed to provide insights into the complex interactions of the hydrodynamics, mixing, chemistry, and soot. The integrated approach includes detailed models for soot, combustion, and the unresolved interactions between soot, chemistry, and turbulence. The soot model is based on the Hybrid Method of Moments and detailed descriptions of soot aggregates and the various physical and chemical processes governing their evolution. The detailed kinetics of jet fuel oxidation and soot precursor formation is described with the Radiation Flamelet/Progress Variable model, which has been modified to account for the removal of soot precursors from the gas-phase. The unclosed filtered quantities in the soot and combustion models, such as source terms, are closed with a novel presumed subfilter PDF approach that accounts for the high subfilter spatial intermittency of soot. For the combustor simulation, the integrated approach is combined with a Lagrangian parcel method for the liquid spray and state-of-the-art unstructured LES technology for complex geometries. Two overall fuel-to-air ratios are simulated to evaluate the ability of the model to make not only absolute predictions but also quantitative predictions of trends. The Pratt & Whitney combustor is a Rich-Quench-Lean combustor in which combustion first occurs in a fuel-rich primary zone characterized by a large recirculation zone. Dilution air is then added downstream of the recirculation zone, and combustion continues in a fuel-lean secondary zone. The simulations show that large quantities of soot are formed in the fuel-rich recirculation zone, and, furthermore, the overall fuel-to-air ratio dictates both the dominant soot growth process and the location of maximum soot volume fraction. At the higher fuel

  11. Volume visualization of multiple alignment of large genomic DNA

    SciTech Connect

    Shah, Nameeta; Dillard, Scott E.; Weber, Gunther H.; Hamann, Bernd

    2005-07-25

    Genomes of hundreds of species have been sequenced to date, and many more are being sequenced. As more and more sequence data sets become available, and as the challenge of comparing these massive "billion-basepair DNA sequences" becomes substantial, so does the need for more powerful tools supporting the exploration of these data sets. Similarity score data used to compare aligned DNA sequences are inherently one-dimensional. One-dimensional (1D) representations of these data do not effectively utilize screen real estate. As a result, tools using 1D representations are incapable of providing an informative overview of extremely large data sets. We present a technique to arrange 1D data in 3D space, allowing us to apply state-of-the-art interactive volume visualization techniques for data exploration. We demonstrate our technique using multi-million-basepair-long aligned DNA sequence data and compare it with traditional 1D line plots. The results show that our technique is superior in providing an overview of entire data sets. Our technique, coupled with 1D line plots, results in effective multi-resolution visualization of very large aligned sequence data sets.
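    The central trick, laying the inherently 1D similarity track out in 3D so that interactive volume rendering applies, can be sketched as a boustrophedon raster; the layout below is an illustrative mapping, not necessarily the one used in the paper:

        import numpy as np

        def embed_track_in_volume(scores, nx=256, ny=256):
            # Pack a 1D similarity-score track into an (nz, ny, nx) volume,
            # reversing alternate rows and planes (boustrophedon) so that
            # positions adjacent along the genome stay adjacent in 3D.
            scores = np.asarray(scores, dtype=np.float32)
            nz = -(-scores.size // (nx * ny))                  # ceil division
            flat = np.pad(scores, (0, nz * ny * nx - scores.size))
            vol = flat.reshape(nz, ny, nx)
            vol[:, 1::2, :] = vol[:, 1::2, ::-1]               # snake the rows
            vol[1::2, :, :] = vol[1::2, ::-1, :]               # snake the planes
            return vol

    A 256x256 slab holds about 65,000 positions per plane, so a multi-million-basepair track fits in a few dozen planes that a volume renderer can display at once.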

  12. Computer simulation of preflight blood volume reduction as a countermeasure to fluid shifts in space flight

    NASA Technical Reports Server (NTRS)

    Simanonok, K. E.; Srinivasan, R.; Charles, J. B.

    1992-01-01

    Fluid shifts in weightlessness may cause a central volume expansion, activating reflexes to reduce the blood volume. Computer simulation was used to test the hypothesis that preadaptation of the blood volume prior to exposure to weightlessness could counteract the central volume expansion due to fluid shifts and thereby attenuate the circulatory and renal responses that result in large losses of fluid from body water compartments. The Guyton Model of Fluid, Electrolyte, and Circulatory Regulation was modified to simulate the six-degree head-down tilt that is frequently used as an experimental analog of weightlessness in bedrest studies. Simulation results show that preadaptation of the blood volume by a procedure resembling a blood donation immediately before head-down bedrest is beneficial in damping the physiologic responses to fluid shifts and reducing body fluid losses. After ten hours of head-down tilt, blood volume after preadaptation is higher than control for 20 to 30 days of bedrest. Preadaptation also produces potentially beneficial higher extracellular volume and total body water for 20 to 30 days of bedrest.

  13. Simulation of large systems with neural networks

    SciTech Connect

    Paez, T.L.

    1994-09-01

    Artificial neural networks (ANNs) have been shown capable of simulating the behavior of complex, nonlinear systems, including structural systems. Under certain circumstances, it is desirable to simulate structures that are analyzed with the finite element method. For example, when we perform a probabilistic analysis with the Monte Carlo method, we usually perform numerous (hundreds or thousands of) repetitions of a response simulation with different input and system parameters to estimate the chance of specific response behaviors. In such applications, efficiency in computing the response is critical, and response simulation with ANNs can be valuable. However, finite element analyses of complex systems involve models with tens or hundreds of thousands of degrees of freedom, and ANNs are practically limited to simulations that involve far fewer variables. This paper develops a technique for reducing the amount of information required to characterize the response of a general structure. We show how the reduced information can be used to train a recurrent ANN. The trained ANN can then be used to simulate the reduced behavior of the original system, and the reduction transformation can be inverted to provide a simulation of the original system. A numerical example is presented.
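
    A schematic of the reduce-train-invert loop described above might look as follows; this is a hedged sketch with synthetic data, and a linear one-step autoregressive model stands in for the paper's recurrent ANN:

      import numpy as np

      # Synthetic "full-order" response: 1000 DOFs driven by 2 latent modes.
      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 10.0, 500)
      latent = np.column_stack([np.sin(2 * t), np.cos(3 * t)])
      mixing = rng.normal(size=(2, 1000))
      full = latent @ mixing + 0.01 * rng.normal(size=(500, 1000))

      # 1) Reduce: project the response onto its dominant SVD modes.
      U, s, Vt = np.linalg.svd(full, full_matrices=False)
      r = 2
      reduced = full @ Vt[:r].T                       # (time, r)

      # 2) Train a one-step predictor on the reduced coordinates
      #    (a linear stand-in for the recurrent ANN).
      A, *_ = np.linalg.lstsq(reduced[:-1], reduced[1:], rcond=None)

      # 3) Simulate in reduced space, then invert the reduction transformation.
      z, traj = reduced[0], [reduced[0]]
      for _ in range(len(t) - 1):
          z = z @ A
          traj.append(z)
      reconstructed = np.array(traj) @ Vt[:r]         # back to all 1000 DOFs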

  14. Monte Carlo Simulations for Dosimetry in Prostate Radiotherapy with Different Intravesical Volumes and Planning Target Volume Margins

    PubMed Central

    Lv, Wei; Yu, Dong; He, Hengda; Liu, Qian

    2016-01-01

    In prostate radiotherapy, the influence of bladder volume variation on the dose absorbed by the target volume and organs at risk is significant and difficult to predict. In addition, the resolution of a typical medical image is insufficient for visualizing the bladder wall, which makes it more difficult to precisely evaluate the dose to the bladder wall. This simulation study aimed to quantitatively investigate the relationship between the dose received by organs at risk and the intravesical volume in prostate radiotherapy. The high-resolution Visible Chinese Human phantom and the finite element method were used to construct 10 pelvic models with specific intravesical volumes ranging from 100 ml to 700 ml to represent bladders of patients with different bladder filling capacities during radiotherapy. This series of models was utilized in six-field coplanar 3D conformal radiotherapy simulations with different planning target volume (PTV) margins. Each organ’s absorbed dose was calculated using the Monte Carlo method. The obtained bladder wall displacements during bladder filling were consistent with reported clinical measurements. The radiotherapy simulation revealed a linear relationship between the dose to non-targeted organs and the intravesical volume and indicated that a 10-mm PTV margin for a large bladder and a 5-mm PTV margin for a small bladder reduce the effective dose to the bladder wall to similar degrees. However, larger bladders were associated with evident protection of the intestines. Detailed dosimetry results can be used by radiation oncologists to create more accurate, individual water preload protocols according to the patient’s anatomy and bladder capacity. PMID:27441944

  15. Stochastic Large Eddy Simulation of Geostrophic Turbulence

    NASA Astrophysics Data System (ADS)

    Nadiga, B.; Livescu, D.; McKay, C. Q.

    2005-05-01

    Results are presented of (fine-scale) eddy-resolving simulations of different instances of turbulent quasi-geostrophic ocean circulation. A stochastic model for the effects of neglected subgrid degrees of freedom in coarse-scale simulations is proposed, and the results are compared with the fine-scale simulation results as well as with existing models. As a precursor to the introduction of the models, we also study various aspects of the nonlinear rectification of stochastic forcing in quasi-geostrophic models of ocean circulation.

  16. Parallel runway requirement analysis study. Volume 2: Simulation manual

    NASA Technical Reports Server (NTRS)

    Ebrahimi, Yaghoob S.; Chun, Ken S.

    1993-01-01

    This document is a user manual for operating the PLAND_BLUNDER (PLB) simulation program. The simulation is based on two aircraft approaching parallel runways independently, using parallel Instrument Landing System (ILS) equipment during Instrument Meteorological Conditions (IMC). If an aircraft deviates from its assigned localizer course toward the opposite runway, this constitutes a blunder that could endanger the aircraft on the adjacent path. The worst case is a blundering aircraft that is unable to recover and continues toward the adjacent runway. PLAND_BLUNDER is a Monte Carlo-type simulation that models the events and aircraft positions during such a blunder. The model simulates two aircraft performing parallel ILS approaches using Instrument Flight Rules (IFR) or visual procedures. PLB uses a simple movement model and control law in three dimensions (X, Y, Z). The parameters of the simulation inputs and outputs are defined in this document along with a sample of the statistical analysis. This document is the second volume of a two-volume set. Volume 1 describes the application of the PLB to the analysis of close parallel runway operations.
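
    A toy version of such a Monte Carlo blunder study, with invented geometry and distributions rather than the PLB movement model and control law, samples blunder parameters and records the resulting miss distance:

      import numpy as np

      RUNWAY_SEP = 1035.0   # m, assumed lateral spacing of the parallel paths
      rng = np.random.default_rng(1)

      def miss_distances(n_trials=100_000):
          """Sample blunder headings and recovery delays; return the lateral
          separation remaining when the blunderer recovers (0 = paths met)."""
          angle = np.radians(rng.uniform(10.0, 30.0, n_trials))  # blunder heading
          delay = rng.exponential(8.0, n_trials)                 # s to recovery
          speed = 70.0                                           # m/s, approach
          closure = speed * np.sin(angle) * delay   # lateral distance closed
          return np.maximum(RUNWAY_SEP - closure, 0.0)

      miss = miss_distances()
      print("P(separation < 150 m) =", np.mean(miss < 150.0))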

  17. Description and characterization of a novel method for partial volume simulation in software breast phantoms.

    PubMed

    Chen, Feiyu; Bakic, Predrag R; Maidment, Andrew D A; Jensen, Shane T; Shi, Xiquan; Pokrajac, David D

    2015-10-01

    A modification to our previous simulation of breast anatomy is proposed to improve the quality of simulated x-ray projection images. Image quality is affected by the voxel size of the simulation: large voxels can cause notable spatial quantization artifacts, while small voxels extend the generation time and increase the memory requirements. An improvement in image quality is achievable without reducing voxel size by simulating partial volume averaging, i.e., allowing voxels that contain more than one simulated tissue type. The linear x-ray attenuation coefficient of each voxel is then the sum of the tissue attenuation coefficients weighted by the voxel subvolume occupied by each tissue type. A local planar approximation of the boundary surface is employed. In the two-material case, the partial volume in each voxel is computed by decomposition into up to four simple geometric shapes. In the three-material case, by application of the Gauss-Ostrogradsky theorem, the 3D partial volume problem is converted into a few simpler 2D surface area problems. We illustrate the benefits of the proposed methodology on simulated x-ray projections. An efficient encoding scheme is proposed for the type and proportion of simulated tissues in each voxel. Monte Carlo simulation was used to evaluate the quantitative error of our approximation algorithms.
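
    The weighting rule itself is a one-liner; a minimal sketch (the attenuation values are illustrative, and the subvolume fractions are assumed to come from the geometric decomposition the abstract describes):

      import numpy as np

      # Linear attenuation coefficients in 1/cm -- illustrative values only.
      MU = {"adipose": 0.456, "glandular": 0.802, "skin": 0.850}

      def voxel_mu(fractions):
          """Partial-volume attenuation: subvolume-weighted sum over tissues.
          `fractions` maps tissue name -> fraction of voxel volume (sums to 1)."""
          assert abs(sum(fractions.values()) - 1.0) < 1e-9
          return sum(MU[t] * f for t, f in fractions.items())

      # A boundary voxel that is 70% adipose and 30% glandular tissue:
      print(voxel_mu({"adipose": 0.7, "glandular": 0.3}))   # -> 0.5598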

  18. Large-volume flux closure during plasmoid-mediated reconnection in coaxial helicity injection

    SciTech Connect

    Ebrahimi, F.; Raman, R.

    2016-03-23

    A large-volume flux closure during transient coaxial helicity injection (CHI) in NSTX-U is demonstrated through resistive magnetohydrodynamics (MHD) simulations. Several major improvements, including the improved positioning of the divertor poloidal field coils, are projected to improve the CHI start-up phase in NSTX-U. Simulations in the NSTX-U configuration with constant in time coil currents show that with strong flux shaping the injected open field lines (injector flux) rapidly reconnect and form large volume of closed flux surfaces. This is achieved by driving parallel current in the injector flux coil and oppositely directed currents in the flux shaping coils to form a narrow injector flux footprint and push the injector flux into the vessel. As the helicity and plasma are injected into the device, the oppositely directed field lines in the injector region are forced to reconnect through a local Sweet-Parker type reconnection, or to spontaneously reconnect when the elongated current sheet becomes MHD unstable to form plasmoids. In these simulations for the first time, it is found that the closed flux is over 70% of the initial injector flux used to initiate the discharge. Furthermore, these results could work well for the application of transient CHI in devices that employ super conducting coils to generate and sustain the plasma equilibrium.

  19. Large-volume flux closure during plasmoid-mediated reconnection in coaxial helicity injection

    DOE Data Explorer

    Ebrahimi, F. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)]; Raman, R. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)]

    2016-04-01

    A large-volume flux closure during transient coaxial helicity injection (CHI) in NSTX-U is demonstrated through resistive magnetohydrodynamics (MHD) simulations. Several major improvements, including the improved positioning of the divertor poloidal field coils, are projected to improve the CHI start-up phase in NSTX-U. Simulations in the NSTX-U configuration with constant in time coil currents show that with strong flux shaping the injected open field lines (injector flux) rapidly reconnect and form large volume of closed flux surfaces. This is achieved by driving parallel current in the injector flux coil and oppositely directed currents in the flux shaping coils to form a narrow injector flux footprint and push the injector flux into the vessel. As the helicity and plasma are injected into the device, the oppositely directed field lines in the injector region are forced to reconnect through a local Sweet–Parker type reconnection, or to spontaneously reconnect when the elongated current sheet becomes MHD unstable to form plasmoids. In these simulations for the first time, it is found that the closed flux is over 70% of the initial injector flux used to initiate the discharge. These results could work well for the application of transient CHI in devices that employ super conducting coils to generate and sustain the plasma equilibrium.

  20. Large-volume flux closure during plasmoid-mediated reconnection in coaxial helicity injection

    DOE Data Explorer

    Ebrahimi, Fatima [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)] (ORCID:0000000331095367); Raman, Roger [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)] (ORCID:0000000220273271)

    2016-01-01

    A large-volume flux closure during transient coaxial helicity injection (CHI) in NSTX-U is demonstrated through resistive magnetohydrodynamics (MHD) simulations. Several major improvements, including the improved positioning of the divertor poloidal field coils, are projected to improve the CHI start-up phase in NSTX-U. Simulations in the NSTX-U configuration with constant in time coil currents show that with strong flux shaping the injected open field lines (injector flux) rapidly reconnect and form large volume of closed flux surfaces. This is achieved by driving parallel current in the injector flux coil and oppositely directed currents in the flux shaping coils to form a narrow injector flux footprint and push the injector flux into the vessel. As the helicity and plasma are injected into the device, the oppositely directed field lines in the injector region are forced to reconnect through a local Sweet–Parker type reconnection, or to spontaneously reconnect when the elongated current sheet becomes MHD unstable to form plasmoids. In these simulations for the first time, it is found that the closed flux is over 70% of the initial injector flux used to initiate the discharge. These results could work well for the application of transient CHI in devices that employ super conducting coils to generate and sustain the plasma equilibrium.

  1. Large-volume flux closure during plasmoid-mediated reconnection in coaxial helicity injection

    DOE PAGES

    Ebrahimi, F.; Raman, R.

    2016-03-23

    A large-volume flux closure during transient coaxial helicity injection (CHI) in NSTX-U is demonstrated through resistive magnetohydrodynamics (MHD) simulations. Several major improvements, including the improved positioning of the divertor poloidal field coils, are projected to improve the CHI start-up phase in NSTX-U. Simulations in the NSTX-U configuration with constant in time coil currents show that with strong flux shaping the injected open field lines (injector flux) rapidly reconnect and form large volume of closed flux surfaces. This is achieved by driving parallel current in the injector flux coil and oppositely directed currents in the flux shaping coils to form a narrow injector flux footprint and push the injector flux into the vessel. As the helicity and plasma are injected into the device, the oppositely directed field lines in the injector region are forced to reconnect through a local Sweet-Parker type reconnection, or to spontaneously reconnect when the elongated current sheet becomes MHD unstable to form plasmoids. In these simulations for the first time, it is found that the closed flux is over 70% of the initial injector flux used to initiate the discharge. Furthermore, these results could work well for the application of transient CHI in devices that employ super conducting coils to generate and sustain the plasma equilibrium.

  2. Large-scale mass distribution in the Illustris simulation

    NASA Astrophysics Data System (ADS)

    Haider, M.; Steinhauser, D.; Vogelsberger, M.; Genel, S.; Springel, V.; Torrey, P.; Hernquist, L.

    2016-04-01

    Observations at low redshifts thus far fail to account for all of the baryons expected in the Universe according to cosmological constraints. A large fraction of the baryons presumably resides in a thin and warm-hot medium between the galaxies, where they are difficult to observe due to their low densities and high temperatures. Cosmological simulations of structure formation can be used to verify this picture and provide quantitative predictions for the distribution of mass in different large-scale structure components. Here we study the distribution of baryons and dark matter at different epochs using data from the Illustris simulation. We identify regions of different dark matter density with the primary constituents of large-scale structure, allowing us to measure mass and volume of haloes, filaments and voids. At redshift zero, we find that 49 per cent of the dark matter and 23 per cent of the baryons are within haloes more massive than the resolution limit of 2 × 108 M⊙. The filaments of the cosmic web host a further 45 per cent of the dark matter and 46 per cent of the baryons. The remaining 31 per cent of the baryons reside in voids. The majority of these baryons have been transported there through active galactic nuclei feedback. We note that the feedback model of Illustris is too strong for heavy haloes, therefore it is likely that we are overestimating this amount. Categorizing the baryons according to their density and temperature, we find that 17.8 per cent of them are in a condensed state, 21.6 per cent are present as cold, diffuse gas, and 53.9 per cent are found in the state of a warm-hot intergalactic medium.

  3. Testing large volume water treatment and crude oil ...

    EPA Pesticide Factsheets

    EPA’s Homeland Security Research Program (HSRP) partnered with the Idaho National Laboratory (INL) to build the Water Security Test Bed (WSTB) at the INL test site outside of Idaho Falls, Idaho. The WSTB was built using an 8-inch (20 cm) diameter cement-mortar-lined drinking water pipe that had previously been taken out of service. The pipe was exhumed from the INL grounds and oriented in the shape of a small drinking water distribution system. Effluent from the pipe is captured in a lagoon. The WSTB can support drinking water distribution system research on a variety of drinking water treatment topics including biofilms, water quality, sensors, and homeland security related contaminants. Because the WSTB is constructed of real drinking water distribution system pipes, research can be conducted under conditions similar to those in a real drinking water system. In 2014, the WSTB pipe was experimentally contaminated with Bacillus globigii spores, a non-pathogenic surrogate for the pathogenic B. anthracis, and then decontaminated using chlorine dioxide. In 2015, the WSTB was used to perform the following experiments: • Four mobile disinfection technologies were tested for their ability to disinfect large volumes of biologically contaminated “dirty” water from the WSTB. B. globigii spores acted as the biological contaminant. The four technologies evaluated included: (1) Hayward Saline C™ 6.0 Chlorination System, (2) Advanced Oxidation Process (A

  4. An innovative piston corer for large-volume sediment samples.

    PubMed

    Gallmetzer, Ivo; Haselmair, Alexandra; Stachowitsch, Michael; Zuschin, Martin

    2016-11-01

    Coring is one of several standard procedures to extract sediments and their faunas from open marine, estuarine, and limnic environments. Achieving sufficiently deep penetration, obtaining large sediment volumes in single deployments, and avoiding sediment loss upon retrieval remain problematic. We developed a piston corer with a diameter of 16 cm that penetrates down to 1.5 m in a broad range of soft bottom types, yields sufficient material for multiple analyses, and prevents sediment loss with a specially designed hydraulic core catcher. A novel extrusion system enables very precise slicing and preserves the original sediment stratification by keeping the liners upright. The corer has moderate purchase costs and a robust, simple design that allows deployment from relatively small vessels such as those available at most marine science institutions. It can easily be operated by two to three researchers rather than by specially trained technicians. In the northern Adriatic Sea, the corer successfully extracted more than 50 cores from sediments ranging from fine mud to coarse sand, at water depths from 3 to 45 m. The initial evaluation of the cores demonstrated their usefulness for studies of faunal sequences along with heavy metal, nutrient, and pollutant analyses. Their length is particularly suited for historical ecological work requiring sedimentary and faunal sequences to reconstruct benthic communities over the last millennia.

  5. Study of Hydrokinetic Turbine Arrays with Large Eddy Simulation

    NASA Astrophysics Data System (ADS)

    Sale, Danny; Aliseda, Alberto

    2014-11-01

    Marine renewable energy is advancing towards commercialization, including electrical power generation from ocean, river, and tidal currents. The focus of this work is to develop numerical simulations capable of predicting the power generation potential of hydrokinetic turbine arrays; this includes analysis of unsteady and averaged flow fields, turbulence statistics, and unsteady loadings on turbine rotors and support structures due to interaction with rotor wakes and ambient turbulence. The governing equations of large-eddy simulation (LES) are solved using a finite-volume method, and the presence of turbine blades is approximated by the actuator-line method, in which hydrodynamic forces are projected onto the flow field as a body force. The actuator-line approach captures helical wake formation, including vortex shedding from individual blades, and the effects of drag and vorticity generation from the rough seabed surface are accounted for by wall models. This LES framework was used to replicate a previous flume experiment consisting of three hydrokinetic turbines tested under various operating conditions and array layouts. Predictions of the power generation, velocity deficit, and turbulence statistics in the wakes are compared between the LES and experimental datasets.
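
    The body-force projection at the heart of the actuator-line method is usually a normalized Gaussian kernel; a hedged one-dimensional sketch (smearing width, blade sampling, and force values invented):

      import numpy as np

      EPS = 2.0  # Gaussian smearing width (m); typically tied to grid spacing

      def project_line_force(x_grid, x_actuator, f_actuator):
          """Spread point forces from actuator-line sample points onto the
          flow grid with a normalized Gaussian kernel (1D for clarity)."""
          body_force = np.zeros_like(x_grid)
          for xa, fa in zip(x_actuator, f_actuator):
              kernel = np.exp(-((x_grid - xa) / EPS) ** 2) / (EPS * np.sqrt(np.pi))
              body_force += fa * kernel
          return body_force

      x = np.linspace(0.0, 100.0, 501)
      actuator_pts = np.linspace(40.0, 60.0, 20)   # blade sampled at 20 points
      forces = np.full(20, -50.0)                  # N per point, toy thrust
      f_body = project_line_force(x, actuator_pts, forces)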

  6. Progress in the Variational Multiscale Formulation of Large Eddy Simulation

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Oberai, Assad

    2007-11-01

    In the variational multiscale (VMS) formulation of large eddy simulation, subgrid models are introduced in the variational (or weak) formulation of the Navier-Stokes equations, and a-priori scale separation is accomplished using projection operators to create coarse and fine scales. This separation also leads to two sets of evolution equations: one for the coarse scales and another for the fine scales. The coarse-scale equations are solved numerically, while the fine-scale equations are solved analytically to obtain an expression for the fine scales in terms of the coarse scales and hence achieve closure. To date, the VMS formulation has led to accurate results in the simulation of canonical turbulent flow problems. It has been implemented using spectral, finite element, and finite volume methods. In this talk, for the incompressible Navier-Stokes equations, we will present some new ideas for modeling the fine scales within the context of the VMS formulation and discuss their impact on the coarse-scale solution. We will present a simple residual-based approximation for the fine scales that accurately models the cross-stress term and demonstrate that when this term is augmented with an eddy viscosity model for the Reynolds stress, a new mixed model is obtained. The application of these ideas will be illustrated through simple numerical examples.
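
    The residual-based fine-scale approximation mentioned above is commonly written as an algebraic proportionality between the fine scales and the coarse-scale residual; in a standard (not necessarily the authors' exact) form,

      u' \approx -\tau \, \mathcal{R}(\bar{u}),

    where \bar{u} is the coarse-scale solution, \mathcal{R}(\bar{u}) is the residual of the Navier-Stokes equations evaluated at the coarse scales, and \tau is an algebraic stabilization parameter with dimensions of time. Substituting this u' into the coarse-scale equation generates the cross-stress model referred to in the abstract.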

  7. The UPSCALE project: a large simulation campaign

    NASA Astrophysics Data System (ADS)

    Mizielinski, Matthew; Roberts, Malcolm; Vidale, Pier Luigi; Schiemann, Reinhard; Demory, Marie-Estelle; Strachan, Jane

    2014-05-01

    The development of a traceable hierarchy of HadGEM3 global climate models, based upon the Met Office Unified Model, at resolutions from 135 km to 25 km, now allows the impact of resolution on the mean state, variability, and extremes of climate to be studied in a robust fashion. In 2011 we successfully obtained a single-year grant of 144 million core hours of supercomputing time from the PRACE organization to run ensembles of 27-year atmosphere-only (HadGEM3-A GA3.0) climate simulations at 25 km resolution, as used in present global weather forecasting, on HERMIT at HLRS. Through 2012 the UPSCALE project (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) ran over 650 years of simulation at resolutions of 25 km (N512), 60 km (N216), and 135 km (N96) to look at the value of high-resolution climate models in the study of both present climate and a potential future climate scenario based on RCP8.5. Over 400 TB of data were produced using HERMIT, with additional simulations run on HECToR (UK supercomputer) and MONSooN (Met Office NERC Supercomputing Node). The data generated were transferred to the JASMIN super-data cluster, hosted by STFC CEDA in the UK, where analysis facilities are allowing rapid scientific exploitation of the data set. Many groups across the UK and Europe are already taking advantage of these facilities, and we welcome approaches from other interested scientists. This presentation will briefly cover the following points: purpose and requirements of the UPSCALE project and facilities used; technical implementation and hurdles (model porting and optimisation, automation, numerical failures, data transfer); ensemble specification; and current analysis projects and access to the data set. A full description of UPSCALE and the data set generated has been submitted to Geoscientific Model Development, with overview information available from http://proj.badc.rl.ac.uk/upscale.

  8. Large Eddy Simulation of Supersonic Inlet Flows

    DTIC Science & Technology

    1998-04-01

    Authors: Parviz Moin and Sanjiva K. Lele, Stanford University, Mechanical Engineering, Flow Physics & Computation Division, Stanford, CA 94305-3030.

  9. Simulating stochastic dynamics using large time steps.

    PubMed

    Corradini, O; Faccioli, P; Orland, H

    2009-12-01

    We present an approach to investigate the long-time stochastic dynamics of multidimensional classical systems in contact with a heat bath. When the potential energy landscape is rugged, the kinetics displays a decoupling of short- and long-time scales, and both molecular dynamics and Monte Carlo (MC) simulations are generally inefficient. Using a field-theoretic approach, we perform analytically the average over the short-time stochastic fluctuations. This way, we obtain an effective theory that generates the same long-time dynamics as the original theory but has a lower time-resolution power. Such an approach is used to develop an improved version of the MC algorithm, which is particularly suitable for investigating the dynamics of rare conformational transitions. In the specific case of molecular systems at room temperature, we show that elementary integration time steps used to simulate the effective theory can be chosen a factor of approximately 100 larger than those used in the original theory. Our results are illustrated and tested on a simple system characterized by a rugged energy landscape.
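
    For orientation, the baseline dynamics being coarse-grained is overdamped Langevin motion; a minimal Euler-Maruyama integrator on a toy double-well landscape (all parameters invented) shows the small-time-step regime the effective theory is built to escape:

      import numpy as np

      KT = 1.0      # thermal energy (assumed units)
      GAMMA = 1.0   # friction coefficient

      def force(x):
          """Toy rugged landscape: double well V(x) = (x^2 - 1)^2."""
          return -4.0 * x * (x * x - 1.0)

      def langevin(x0, dt, n_steps, rng):
          """Euler-Maruyama integration of overdamped Langevin dynamics."""
          x = np.empty(n_steps)
          x[0] = x0
          noise = np.sqrt(2.0 * KT * dt / GAMMA)
          for i in range(1, n_steps):
              x[i] = x[i - 1] + force(x[i - 1]) * dt / GAMMA + noise * rng.normal()
          return x

      traj = langevin(x0=-1.0, dt=1e-3, n_steps=100_000,
                      rng=np.random.default_rng(2))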

  10. Simulation Design: Engaging Large Groups of Nurse Practitioner Students.

    PubMed

    Garnett, Susan; Weiss, Josie A; Winland-Brown, Jill E

    2015-09-01

    Little has been written about using human patient simulation to teach primary care management to large groups of nurse practitioner (NP) students. This article describes an innovative design for simulated clinical experiences based on a game show format. This large-group design was conceived as a way to overcome several challenges, particularly limited faculty resources, to integrating simulation into NP education. Progressive variations evolved from this foundation, including the use of observer-participant groups; initial and follow-up visits on the same simulated patient; and mentor-mentee collaborations. Student comments, while consistently positive about the simulated clinical experiences, have been used to guide revisions to strengthen the simulation program. The innovative large-group design enabled faculty to use simulation to enhance students' skills in primary care management. Faculties with similar challenges might find these strategies useful to replicate or adapt. Copyright 2015, SLACK Incorporated.

  11. Large-Eddy Simulation of Wind-Plant Aerodynamics: Preprint

    SciTech Connect

    Churchfield, M. J.; Lee, S.; Moriarty, P. J.; Martinez, L. A.; Leonardi, S.; Vijayakumar, G.; Brasseur, J. G.

    2012-01-01

    In this work, we present results of a large-eddy simulation of the 48 multi-megawatt turbines composing the Lillgrund wind plant. Turbulent inflow wind is created by performing an atmospheric boundary layer precursor simulation and turbines are modeled using a rotating, variable-speed actuator line representation. The motivation for this work is that few others have done wind plant large-eddy simulations with a substantial number of turbines, and the methods for carrying out the simulations are varied. We wish to draw upon the strengths of the existing simulations and our growing atmospheric large-eddy simulation capability to create a sound methodology for performing this type of simulation. We have used the OpenFOAM CFD toolbox to create our solver.

  12. Floating substructure flexibility of large-volume 10MW offshore wind turbine platforms in dynamic calculations

    NASA Astrophysics Data System (ADS)

    Borg, Michael; Melchior Hansen, Anders; Bredmose, Henrik

    2016-09-01

    Designing floating substructures for the next generation of 10 MW and larger wind turbines has introduced new challenges in capturing relevant physical effects in dynamic simulation tools. In achieving technically and economically optimal floating substructures, structural flexibility may increase to the point where it becomes relevant to include it alongside the standard rigid-body substructure modes, which are typically described through linear radiation-diffraction theory. This paper describes a method for including substructural flexibility in aero-hydro-servo-elastic dynamic simulations for large-volume substructures, including wave-structure interactions, to form the basis for deriving sectional loads and stresses within the substructure. The method is applied to a case study to illustrate the implementation and relevance. It is found that the flexible mode is significantly excited in an extreme event, indicating an increase in predicted substructure internal loads.

  13. An experimental study on the excitation of large volume airguns in a small volume body of water

    NASA Astrophysics Data System (ADS)

    Wang, Baoshan; Yang, Wei; Yuan, Songyong; Guo, Shijun; Ge, Hongkui; Xu, Ping; Chen, Yong

    2010-12-01

    A large volume airgun array is effective in generating seismic waves and is extensively used in large bodies of water such as oceans, lakes, and reservoirs. So far, the application of large volume airguns has been limited by the distribution of such large bodies of water. This paper reports an attempt to utilize large volume airguns in a small body of water as a seismic source for seismotectonic studies. We carried out a field experiment in Mapaoquan pond, Fangshan district, Beijing, during the period 25-30 May 2009. Bolt LL1500 airguns, each with a volume of 2000 in³, the largest commercial airguns available today, were used in this experiment. We tested the excitation of the airgun array with one or two guns. The airgun array was placed 7-11 m below the water's surface. The far- and near-field seismic motions induced by the airgun source were recorded by a 100 km long seismic profile composed of 16 portable seismometers and a 100 m long strong-motion seismograph profile, respectively. The following conclusions can be drawn from this experiment. First, it is feasible to excite large volume airguns in a small body of water. Second, seismic signals from a single shot of one airgun can be recognized at offsets up to 15 km. Taking advantage of high source repeatability, we stacked records from 128 shots to enhance the signal-to-noise ratio, and direct P-waves can be easily identified at an offset of ~50 km in the stacked records. Third, no detectable damage to fish or near-field constructions was caused by the airgun shots. These results suggest that large volume airguns excited in small bodies of water can be used as a routinely operated seismic source for mid-scale (tens of kilometres) subsurface exploration and monitoring under various running conditions.
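
    The stacking step exploits the source repeatability noted above: averaging N repeatable shots suppresses incoherent noise by roughly a factor of sqrt(N), about 11 for the 128 shots mentioned. A minimal sketch with synthetic records:

      import numpy as np

      rng = np.random.default_rng(3)
      n_shots, n_samples = 128, 2000
      t = np.linspace(0.0, 1.0, n_samples)
      signal = np.sin(2 * np.pi * 8 * t) * np.exp(-5 * t)   # toy source wavelet
      records = signal + rng.normal(0.0, 5.0, (n_shots, n_samples))

      stack = records.mean(axis=0)   # align-and-average over repeatable shots

      # Noise measured in the late, essentially signal-free part of the trace:
      print(records[0, 1500:].std() / stack[1500:].std())   # ~ sqrt(128) ~ 11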

  14. Feasibility of large volume tumor ablation using multiple-mode strategy with fast scanning method: A numerical study

    NASA Astrophysics Data System (ADS)

    Wu, Hao; Shen, Guofeng; Qiao, Shan; Chen, Yazhu

    2017-03-01

    Sonication with a fast scanning method can generate homogeneous lesions without complex planning. But when the target region is large, switching the focus too fast reduces local heat accumulation, and the margin of the region may not be ablated. Furthermore, a high blood perfusion rate reduces the maximum volume that can be ablated. Therefore, the fast scanning method alone may not be applicable to large tumor volumes. To expand the therapy scope, this study combines the fast scanning method with a multiple-mode strategy. Through simulation and experiment, the feasibility of this new strategy is evaluated and analyzed.
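
    The perfusion effect noted above is conventionally captured by the Pennes bioheat equation, in which perfusion enters as a distributed heat sink competing with the acoustic deposition (a standard model; the abstract does not state the paper's exact formulation):

      \rho c \, \frac{\partial T}{\partial t} = k \, \nabla^2 T - \omega_b \rho_b c_b (T - T_a) + Q,

    where \omega_b is the blood perfusion rate, T_a the arterial temperature, and Q the absorbed acoustic power density. A larger \omega_b drains heat faster, which is why high perfusion shrinks the volume that a fast-scanned focus can hold at ablative temperatures.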

  15. An Ultrascalable Solution to Large-scale Neural Tissue Simulation

    PubMed Central

    Kozloski, James; Wagner, John

    2011-01-01

    Neural tissue simulation extends requirements and constraints of previous neuronal and neural circuit simulation methods, creating a tissue coordinate system. We have developed a novel tissue volume decomposition, and a hybrid branched cable equation solver. The decomposition divides the simulation into regular tissue blocks and distributes them on a parallel multithreaded machine. The solver computes neurons that have been divided arbitrarily across blocks. We demonstrate thread, strong, and weak scaling of our approach on a machine with more than 4000 nodes and up to four threads per node. Scaling synapses to physiological numbers had little effect on performance, since our decomposition approach generates synapses that are almost always computed locally. The largest simulation included in our scaling results comprised 1 million neurons, 1 billion compartments, and 10 billion conductance-based synapses and gap junctions. We discuss the implications of our ultrascalable Neural Tissue Simulator, and with our results estimate requirements for a simulation at the scale of a human brain. PMID:21954383
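
    The hybrid branched cable solver mentioned above integrates the one-dimensional cable equation on each neurite segment, with branching entering through coupling conditions at branch points; in the standard form (generic, not the simulator's exact discretization),

      c_m \, \frac{\partial V}{\partial t} = \frac{a}{2 R_i} \, \frac{\partial^2 V}{\partial x^2} - I_{ion}(V) + I_{syn},

    where a is the neurite radius, R_i the axial resistivity, c_m the membrane capacitance per unit area, and I_{ion}, I_{syn} the ionic and synaptic membrane current densities.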

  16. Large-Scale Hybrid Dynamic Simulation Employing Field Measurements

    SciTech Connect

    Huang, Zhenyu; Guttromson, Ross T.; Hauer, John F.

    2004-06-30

    Simulation and measurements are the two primary ways for power engineers to gain understanding of system behaviors and thus accomplish tasks in system planning and operation. Many well-developed simulation tools are available in today's market. On the other hand, large amounts of measured data can be obtained from traditional SCADA systems and currently fast-growing phasor networks. However, simulation and measurement remain two separate worlds, and there is a need to combine their advantages. In view of this, this paper proposes the concept of hybrid dynamic simulation, which opens up traditional simulation by providing entry points for measurements. A method is presented to implement hybrid simulation with PSLF/PSDS. Test studies show the validity of the proposed hybrid simulation method. Applications of such hybrid simulation include system event playback, model validation, and software validation.

  17. Large Eddy Simulation of Homogeneous Rotating Turbulence

    NASA Technical Reports Server (NTRS)

    Squires, Kyle D.; Mansour, Nagi N.; Cambon, Claude; Chasnov, Jeffrey R.; Kutler, Paul (Technical Monitor)

    1994-01-01

    Study of turbulent flows in rotating reference frames has proven to be one of the more challenging areas of turbulence research. The large number of theoretical, experimental, and computational studies performed over the years has demonstrated that the effect of solid-body rotation on turbulent flows is subtle and remains exceedingly difficult to predict. Because of the complexities associated with non-homogeneous turbulence, it is worthwhile to examine the effect of steady system rotation on the evolution of an initially isotropic turbulent flow. The assumption of statistical homogeneity considerably simplifies analysis and computation; calculation of homogeneous turbulence is further motivated since it possesses the essential physics found in more complex rotating flows. The principal objective of the present study has therefore been to increase our fundamental understanding of turbulent flows in rotating reference frames through an examination of the asymptotic state of homogeneous rotating turbulence, particularly as to the existence of an asymptotic state which is self-similar. Knowledge of an asymptotic similarity state permits prediction of the ultimate statistical evolution of the flow without requiring detailed knowledge of the complex, and not well understood, non-linear transfer processes. Aside from the examination of possible similarity states in rotating turbulence, a further interest in this study has been the degree to which solid-body rotation induces a two-dimensional state in an initially isotropic flow.

  18. Vermont Yankee simulator qualification: large-break LOCA

    SciTech Connect

    Loomis, J.N.; Fernandez, R.T.

    1987-01-01

    Yankee Atomic Electric Company (YAEC) has developed simulator benchmark capabilities for the Seabrook, Maine Yankee, and Vermont Yankee Nuclear Power Station (VYNPS) simulators. The goal is to establish that each simulator has a satisfactory real-time response for different scenarios that will enhance operator training. Vermont Yankee purchased a full-scope plant simulator for the VYNPS, a boiling water reactor (BWR-4) with a Mark I containment. The following seven benchmark cases were selected by YAEC and VYNPC to supplement the Simulator Acceptance Test Program: (1) control rod swap; (2) partial reactor scram; (3) recirculation pump trip; (4) main steam isolation valve (MSIV) closure without scram; (5) main steamline break; (6) small-break loss-of-coolant accident (LOCA); and (7) large-break LOCA. Five simulator benchmark sessions have been completed. Each session identified simulator capabilities and limitations that needed correction. This paper discusses results from the latest large-break LOCA case.

  19. Large-Eddy Simulation of Wind-Plant Aerodynamics

    SciTech Connect

    Churchfield, M. J.; Lee, S.; Moriarty, P. J.; Martinez, L. A.; Leonardi, S.; Vijayakumar, G.; Brasseur, J. G.

    2012-01-01

    In this work, we present results of a large-eddy simulation of the 48 multi-megawatt turbines composing the Lillgrund wind plant. Turbulent inflow wind is created by performing an atmospheric boundary layer precursor simulation, and turbines are modeled using a rotating, variable-speed actuator line representation. The motivation for this work is that few others have done large-eddy simulations of wind plants with a substantial number of turbines, and the methods for carrying out the simulations are varied. We wish to draw upon the strengths of the existing simulations and our growing atmospheric large-eddy simulation capability to create a sound methodology for performing this type of simulation. We used the OpenFOAM CFD toolbox to create our solver. The simulated time-averaged power production of the turbines in the plant agrees well with field observations, except for the sixth turbine and beyond in each wind-aligned column; the power produced by each of those turbines is overpredicted by 25-40%. A direct comparison between simulated and field data is difficult because we simulate one wind direction with a speed and turbulence intensity characteristic of Lillgrund, whereas the field observations were taken over a year of varying conditions. The simulation shows a significant 60-70% decrease in the performance of the turbines behind the front row in this plant, which has a spacing of 4.3 rotor diameters in this direction. The overall plant efficiency is well predicted. This work shows the importance of using local grid refinement to simultaneously capture the meter-scale details of the turbine wake and the kilometer-scale turbulent atmospheric structures. Although this work illustrates the power of large-eddy simulation in producing a time-accurate solution, it required about one million processor-hours, showing the significant cost of large-eddy simulation.

  20. The 1980 Large space systems technology. Volume 2: Base technology

    NASA Technical Reports Server (NTRS)

    Kopriver, F., III (Compiler)

    1981-01-01

    Technology pertinent to large antenna systems, technology related to large space platform systems, and base technology applicable to both antenna and platform systems are discussed. Design studies, structural testing results, and theoretical applications are presented with accompanying validation data. A total systems approach including controls, platforms, and antennas is presented as a cohesive, programmatic plan for large space systems.

  1. Numerical simulations of volume holographic imaging system resolution characteristics

    NASA Astrophysics Data System (ADS)

    Sun, Yajun; Jiang, Zhuqing; Liu, Shaojie; Tao, Shiquan

    2009-05-01

    Because of the Bragg selectivity of volume holographic gratings, a volume holographic imaging (VHI) system can optically segment the object space. In this paper, properties of point-source diffraction imaging in terms of the point-spread function (PSF) are investigated, and the depth and lateral resolution characteristics of a VHI system are numerically simulated. The results show that the observed diffracted field changes markedly with displacement in the z direction and is nearly unchanged with displacement in the x and y directions. The dependence of the diffracted imaging field on the z-displacement provides a way to obtain 3-D images by VHI.

  2. Numerical simulation of the decay of swirling flow in a constant volume engine simulator

    SciTech Connect

    Cloutman, L.D.

    1986-05-01

    The KIVA and COYOTE computer programs were used to simulate the decay of turbulent swirling flow in a constant-volume combustion bomb. The results are in satisfactory agreement with measurements of both swirl velocity and temperature. Predictions of secondary flows and suggestions for future research are also presented. 14 refs., 15 figs.

  3. An Adaptive Multiscale Finite Element Method for Large Scale Simulations

    DTIC Science & Technology

    2015-09-28

    AFRL-AFOSR-VA-TR-2015-0305: An Adaptive Multiscale Generalized Finite Element Method for Large Scale Simulations. Carlos Duarte, University of Illinois at Urbana-Champaign.

  4. Climate Simulations with an Isentropic Finite Volume Dynamical Core

    SciTech Connect

    Chen, Chih-Chieh; Rasch, Philip J.

    2012-04-15

    This paper discusses the impact of changing the vertical coordinate from a hybrid pressure to a hybrid-isentropic coordinate within the finite volume dynamical core of the Community Atmosphere Model (CAM). Results from a 20-year climate simulation using the new model coordinate configuration are compared to control simulations produced by the Eulerian spectral and FV dynamical cores of CAM, which both use a pressure-based (σ-p) coordinate. The same physical parameterization package is employed in all three dynamical cores. The isentropic modeling framework significantly alters the simulated climatology and has several desirable features. The revised model produces a better representation of heat transport processes in the atmosphere, leading to much improved atmospheric temperatures. We show that the isentropic model is very effective in reducing the long-standing cold temperature bias in the upper troposphere and lower stratosphere, a deficiency shared by most climate models. The warmer upper troposphere and stratosphere seen in the isentropic model reduces the global coverage of high clouds, which is in better agreement with observations. The isentropic model also shows improvements in the simulated wintertime mean sea-level pressure field in the northern hemisphere.

  5. Development of large volume double ring penning plasma discharge source for efficient light emissions.

    PubMed

    Prakash, Ram; Vyas, Gheesa Lal; Jain, Jalaj; Prajapati, Jitendra; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana

    2012-12-01

    In this paper, the development of a large volume double ring Penning plasma discharge source for efficient light emissions is reported. The developed Penning discharge source consists of two cylindrical end cathodes of stainless steel with radius 6 cm and a gap of 5.5 cm between them, which are fitted in the top and bottom flanges of the vacuum chamber. Two stainless steel anode rings with thickness 0.4 cm and inner diameter 6.45 cm, separated by 2 cm, are kept at the discharge centre. Neodymium (Nd2Fe14B) permanent magnets are physically inserted behind the cathodes to produce a nearly uniform magnetic field of ~0.1 T at the center. Experiments and simulations have been performed for single and double anode ring configurations using a helium gas discharge, which infer that the double ring configuration gives better light emission in the large volume Penning plasma discharge arrangement. Optical emission spectroscopy measurements are used to complement the observations. The spectral line-ratio technique is utilized to determine the electron plasma density. The estimated electron plasma density in the double ring configuration is ~2 × 10¹¹ cm⁻³, around one order of magnitude larger than that of the single ring arrangement.

  6. Development of large volume double ring penning plasma discharge source for efficient light emissions

    SciTech Connect

    Prakash, Ram; Vyas, Gheesa Lal; Jain, Jalaj; Prajapati, Jitendra; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana

    2012-12-15

    In this paper, the development of a large volume double ring Penning plasma discharge source for efficient light emissions is reported. The developed Penning discharge source consists of two cylindrical end cathodes of stainless steel with radius 6 cm and a gap of 5.5 cm between them, which are fitted in the top and bottom flanges of the vacuum chamber. Two stainless steel anode rings with thickness 0.4 cm and inner diameter 6.45 cm, separated by 2 cm, are kept at the discharge centre. Neodymium (Nd2Fe14B) permanent magnets are physically inserted behind the cathodes to produce a nearly uniform magnetic field of ~0.1 T at the center. Experiments and simulations have been performed for single and double anode ring configurations using a helium gas discharge, which infer that the double ring configuration gives better light emission in the large volume Penning plasma discharge arrangement. Optical emission spectroscopy measurements are used to complement the observations. The spectral line-ratio technique is utilized to determine the electron plasma density. The estimated electron plasma density in the double ring configuration is ~2 × 10¹¹ cm⁻³, around one order of magnitude larger than that of the single ring arrangement.

  7. Testbed for large volume surveillance through distributed fusion and resource management

    NASA Astrophysics Data System (ADS)

    Valin, Pierre; Guitouni, Adel; Bossé, Éloi; Wehn, Hans; Yates, Richard; Zwick, Harold

    2007-04-01

    DRDC Valcartier has initiated, through a PRECARN partnership project, the development of an advanced simulation testbed for evaluating the effectiveness of Network Enabled Operations in a coastal large-volume surveillance situation. The main focus of this testbed is to study concepts like distributed information fusion, dynamic resource and network configuration management, and self-synchronising units and agents. This article presents the requirements, design, and first implementation builds, and reports on some preliminary results. The testbed makes it possible to model distributed nodes performing information fusion, dynamic resource management planning and scheduling, as well as configuration management, given multiple constraints on the resources and their communications networks. Two situations are simulated: cooperative and non-cooperative target search. A cooperative surface target behaves in ways to be detected (and rescued), while an elusive target attempts to avoid detection. The current simulation consists of a networked set of surveillance assets including aircraft (UAVs, helicopters, maritime patrol aircraft) and ships. These assets have electro-optical and infrared sensors, and scanning and imaging radar capabilities. Since full data sharing over datalinks is not feasible, own-platform data fusion must be simulated to evaluate the implementation and performance of distributed information fusion. A special emphasis is put on higher-level fusion concepts using knowledge-based rules, with level 1 fusion already providing tracks. Surveillance platform behavior is also simulated in order to evaluate different dynamic resource management algorithms. Additionally, communication networks are modeled to simulate different information exchange concepts. The testbed allows the evaluation of a range of control strategies, from independent platform search, through various levels of platform collaboration, up to centralized control of search platforms.

  8. Exact-Differential Large-Scale Traffic Simulation

    SciTech Connect

    Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios; Perumalla, Kalyan S

    2015-01-01

    Analyzing large-scale traffic by simulation requires repeated execution with various patterns of scenarios or parameters. Such repeated execution introduces substantial redundancy, because the change from one scenario to the next is usually minor, for example, blocking a single road or changing the speed limit on a few roads. In this paper, we propose a new redundancy reduction technique, called exact-differential simulation, which simulates only the changed portions of a scenario in later executions while producing exactly the same results as a full simulation. The paper consists of two main efforts: (i) the key idea and algorithm of exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of exact-differential simulation. In experiments on a Tokyo traffic simulation, exact-differential simulation reduces elapsed time by a factor of 7.26 on average, and by a factor of 2.26 even in the worst case, compared with running the whole simulation.

  9. Large Scale Simulations of the Kinetic Ising Model

    NASA Astrophysics Data System (ADS)

    Münkel, Christian

    We present Monte Carlo simulation results for the dynamical critical exponent z of the two- and three-dimensional kinetic Ising model. The z-values were calculated from the relaxation of the magnetization from an ordered state into the equilibrium state at Tc for very large systems with up to 169984² and 3072³ spins. To our knowledge, these are the largest Ising systems simulated to date. We also report the successful simulation of very large lattices on a massively parallel MIMD computer, with high speedups of approximately 1000 and an efficiency of about 0.93.
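
    The measured relaxation is that of single-spin-flip dynamics started from an ordered state; a minimal serial 2D version (tiny lattice, no parallelism, Metropolis rates assumed) looks like:

      import numpy as np

      def metropolis_sweep(spins, beta, rng):
          """One lattice sweep of single-spin-flip Metropolis dynamics."""
          L = spins.shape[0]
          for _ in range(L * L):
              i, j = rng.integers(0, L, 2)
              nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                    spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
              dE = 2.0 * spins[i, j] * nb        # energy cost of flipping
              if dE <= 0 or rng.random() < np.exp(-beta * dE):
                  spins[i, j] *= -1

      rng = np.random.default_rng(4)
      L, beta_c = 64, np.log(1 + np.sqrt(2)) / 2   # 2D critical coupling
      spins = np.ones((L, L), dtype=int)           # ordered start
      magnetization = []
      for sweep in range(200):
          metropolis_sweep(spins, beta_c, rng)
          magnetization.append(spins.mean())       # relaxation toward equilibrium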

  10. Large volume liquid helium relief device verification apparatus for the alpha magnetic spectrometer

    NASA Astrophysics Data System (ADS)

    Klimas, Richard John; McIntyre, P.; Colvin, John; Zeigler, John; Van Sciver, Steven; Ting, Samuel

    2012-06-01

    Here we present details of an experiment for verifying the liquid helium vessel relief device for the Alpha Magnetic Spectrometer-02 (AMS-02). The relief device utilizes a series of rupture discs designed to open in the event of a vacuum failure of the AMS-02 cryogenic system. A failure of this type is classified as a catastrophic loss of insulating vacuum accident. This apparatus differs from other approaches in the size of the test volumes used. The verification apparatus consists of a 250 liter vessel for the test quantity of liquid helium, located inside a vacuum-insulated vessel. A large-diameter valve is suddenly opened to simulate the loss of insulating vacuum in a repeatable manner. Pressure and temperature vs. time data are presented and discussed in the context of the AMS-02 hardware configuration.

  11. New material model for simulating large impacts on rocky bodies

    NASA Astrophysics Data System (ADS)

    Tonge, A.; Barnouin, O.; Ramesh, K.

    2014-07-01

    Large impact craters on an asteroid can provide insights into its internal structure. These craters can expose material from the interior of the body at the impact site [e.g., 1]; additionally, the impact sends stress waves throughout the body, which interrogate the asteroid's interior. Through a complex interplay of processes, such impacts can result in a variety of motions, the consequence of which may appear as lineaments that are exposed over all or portions of the asteroid's surface [e.g., 2,3]. While analytic, scaling, and heuristic arguments can provide some insight into general phenomena on asteroids, interpreting the results of a specific impact event, or series of events, on a specific asteroid geometry generally necessitates the use of computational approaches that can solve for the stress and displacement history resulting from an impact event. These computational approaches require a constitutive model for the material, which relates the deformation history of a small material volume to the average force on the boundary of that material volume. In this work, we present a new material model that is suitable for simulating the failure of rocky materials during impact events. This material model is similar to the model discussed in [4]. The new material model incorporates dynamic sub-scale crack interactions through a micro-mechanics-based damage model, thermodynamic effects through the use of a Mie-Gruneisen equation of state, and granular flow of the fully damaged material. The granular flow model includes dilatation resulting from the mutual interaction of small fragments of material (grains) as they are forced to slide and roll over each other and includes a P-α type porosity model to account for compaction of the granular material in a subsequent impact event. The micro-mechanics-based damage model provides a direct connection between the flaw (crack) distribution in the material and the rate-dependent strength. By connecting the rate
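
    The Mie-Gruneisen equation of state mentioned above is commonly written relative to a reference Hugoniot (a standard form; the abstract does not specify the paper's exact reference curve):

      p = p_H(\rho) + \Gamma(\rho) \, \rho \, \left[ e - e_H(\rho) \right],

    where p_H and e_H are the pressure and specific internal energy on the reference Hugoniot at density \rho, and \Gamma is the Gruneisen parameter relating pressure to internal energy off the reference curve.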

  12. Lobe Emplacement of a Large-Volume, Evolved lava flow: Large-scale Pahoehoe

    NASA Astrophysics Data System (ADS)

    Semple, A. M.; Gregg, T.; Bonnichsen, B.; Godchaux, M.

    2006-12-01

    The Bruneau-Jarbidge eruptive center (BJEC) in southwestern Idaho is responsible for more than 10 large-volume lava flows ranging from a few km³ to >200 km³. These Miocene flows have high SiO2 contents of between 70 and 75 wt% and range in thickness from a few tens of meters to 200 m. Well exposed in deep canyon walls, these flows typically display massive, columnar-jointed interiors which give way to marginal outcrops with more lobate upper surfaces and more irregular jointing. Also observed at the most distal reaches are sub-circular outcrops about 6-15 m across exposed in the canyon walls. These sub-circular outcrops display a specific jointing pattern and are inferred to be the cross-sections of individual flow lobes. The lobes tend to display a massive exterior rind 1-1.5 m thick with crude jointing perpendicular to the outside. Inside this massive exterior is an area of densely jointed rock, where the joints are roughly concentric to the exterior rind and are 1-4 cm thick. Not always present is a massive center with crude radial jointing. This pattern of jointing probably results from the passage of the rhyolite lavas under a solidified carapace, with the sub-concentric jointing caused by lava shearing between the mobile lobe interior and the exterior carapace. In this way, the emplacement of these lavas appears to be similar to that of pahoehoe, in which lava advances by lobes or toes protruding from the flow front, with coalescence of the flow lobes in the flow interior.

  13. A finite volume model simulation for the Broughton Archipelago, Canada

    NASA Astrophysics Data System (ADS)

    Foreman, M. G. G.; Czajko, P.; Stucchi, D. J.; Guo, M.

    A finite volume circulation model is applied to the Broughton Archipelago region of British Columbia, Canada and used to simulate the three-dimensional velocity, temperature, and salinity fields that are required by a companion model for sea lice behaviour, development, and transport. The absence of a high-resolution atmospheric model necessitated the installation of nine weather stations throughout the region and the development of a simple data assimilation technique that accounts for topographic steering in interpolating/extrapolating the measured winds to the entire model domain. The circulation model is run for the period March 13 to April 3, 2008; correlation coefficients between observed and modelled currents, comparisons between modelled and observed tidal harmonics, and root-mean-square differences between observed and modelled temperatures and salinities all show generally good agreement. The importance of wind forcing in the near-surface circulation, differences between this simulation and one computed with another model, the effects of bathymetric smoothing on channel velocities, further improvements necessary for this model to accurately simulate conditions in May and June, and the implication of near-surface current patterns at a critical location in the 'migration corridor' of wild juvenile salmon are also discussed.

  14. Modification of a very large thermal-vacuum test chamber for ionosphere and plasmasphere simulation

    NASA Technical Reports Server (NTRS)

    Pearson, O. L.

    1978-01-01

    No large-volume chamber existed which could simulate the ion and electron environment of near-earth space. A very large thermal-vacuum chamber was modified to provide for the manipulation of the test volume magnetic field and for the generation and monitoring of plasma. Plasma densities of 1 million particles per cu cm were generated in the chamber where a variable magnetic flux density of up to 0.00015 T (1.5 gauss) was produced. Plasma temperature, density, composition, and visual effects were monitored, and plasma containment and control were investigated. Initial operation of the modified chamber demonstrated a capability satisfactory for a wide variety of experiments and hardware tests which require an interaction with the plasma environment. Potential for improving the quality of the simulation exists.

  15. RADON DIAGNOSTIC MEASUREMENT GUIDANCE FOR LARGE BUILDINGS - VOLUME 2. APPENDICES

    EPA Science Inventory

    The report discusses the development of radon diagnostic procedures and mitigation strategies applicable to a variety of large non-residential buildings commonly found in Florida. The investigations document and evaluate the nature of radon occurrence and entry mechanisms for rad...

  16. Large space telescope, phase A. Volume 5: Support systems module

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The development and characteristics of the support systems module for the Large Space Telescope are discussed. The following systems are described: (1) thermal control, (2) electrical, (3) communication and data handling, (4) attitude control system, and (5) structural features. Analyses of maintainability and reliability considerations are included.

  17. Large space telescope, phase A. Volume 4: Scientific instrument package

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The design and characteristics of the scientific instrument package for the Large Space Telescope are discussed. The subjects include: (1) general scientific objectives, (2) package system analysis, (3) scientific instrumentation, (4) imaging photoelectric sensors, (5) environmental considerations, and (6) reliability and maintainability.

  18. Large space telescope, phase A. Volume 3: Optical telescope assembly

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The development and characteristics of the optical telescope assembly for the Large Space Telescope are discussed. The systems considerations are based on mission-related parameters and optical equipment requirements. Information is included on: (1) structural design and analysis, (2) thermal design, (3) stabilization and control, (4) alignment, focus, and figure control, (5) electronic subsystem, and (6) scientific instrument design.

  20. Large Eddy Simulation of Pollen Transport in the Atmospheric Boundary Layer

    NASA Astrophysics Data System (ADS)

    Chamecki, Marcelo; Meneveau, Charles; Parlange, Marc B.

    2007-11-01

    The development of genetically modified crops and questions about cross-pollination and contamination of natural plant populations have heightened the importance of understanding wind dispersal of airborne pollen. The main objective of this work is to simulate the dispersal of pollen grains in the atmospheric surface layer using large eddy simulation. Pollen concentrations are simulated by an advection-diffusion equation that includes gravitational settling. Of great importance is the specification of the bottom boundary conditions, which characterize the pollen source over the canopy and the deposition process everywhere else. The velocity field is discretized using a pseudospectral approach. However, applying the same discretization scheme to the pollen equation generates unphysical solutions (i.e., negative concentrations). The finite-volume bounded scheme SMART is therefore used for the pollen equation. A conservative interpolation scheme was developed to determine the velocity field on the finite-volume surfaces. The implementation is validated against field experiments of point-source and area-field releases of pollen.
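
    A bounded finite-volume update is the key to the non-negativity mentioned above. The sketch below uses first-order upwind fluxes, the simplest bounded scheme, purely to illustrate the idea; the paper's SMART scheme is a higher-order bounded variant, and all names here are illustrative.

```python
import numpy as np

def advect_settle_step(c, v_face, dz, dt):
    """One explicit first-order upwind step for dc/dt + d(v c)/dz = 0,
    where v is the resolved vertical velocity minus the settling speed.

    c      : cell-centred pollen concentrations, shape (n,)
    v_face : net velocity on the n+1 cell faces
    Upwind is the simplest bounded scheme: c stays non-negative whenever
    |v| dt / dz <= 1, which is the property that motivated SMART here.
    """
    ghost_lo = np.concatenate(([0.0], c))   # cell value below each face
    ghost_hi = np.concatenate((c, [0.0]))   # cell value above each face
    flux = np.where(v_face > 0.0, v_face * ghost_lo, v_face * ghost_hi)
    return c - dt / dz * (flux[1:] - flux[:-1])
```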

  1. Large-volume leukaphereses may be more efficient than standard-volume leukaphereses for collection of peripheral blood progenitor cells.

    PubMed

    Passos-Coelho, J L; Machado, M A; Lúcio, P; Leal-Da-Costa, F; Silva, M R; Parreira, A

    1997-10-01

    To overcome the need for multiple leukaphereses to collect enough PBPC for autologous transplantation, large-volume leukaphereses (LVL) are used to process multiple blood volumes per session. We compared the efficiency of CD34+ cell collection by LVL (n = 63; median blood volumes processed 11.1) with that of standard-volume leukaphereses (SVL) (n = 38; median blood volumes processed 1.9). To achieve this in patients with different peripheral blood concentrations of CD34+ cells, we analyzed the ratio of CD34+ cells collected per unit of blood volume processed, divided by the number of CD34+ cells in the total blood volume at the beginning of apheresis. For LVL, 30% (9%-323%) of circulating CD34+ cells were collected per blood volume, compared with 42% (7%-144%) for SVL (p = 0.02). However, in LVL patients, peripheral blood CD34+ cells/L decreased by a median of 54% during LVL (similar data for SVL were not available). The numbers of CD34+ cells collected per blood volume processed after 4 and 8 blood volumes and at the end of LVL were 0.32 (0.01-2.05), 0.24 (0.01-1.68), and 0.22 (0.01-2.40) x 10^6 CD34+ cells/kg, respectively (p = 0.0007), despite the 54% decrease in peripheral blood CD34+ cells/L throughout LVL. A median 66% decrease in the platelet count was also observed during LVL. Thus, LVL may be more efficient than SVL for PBPC collection, allowing, in most patients, the collection in one LVL of sufficient PBPC to support autologous transplantation.
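
    The collection-efficiency ratio described above is straightforward to compute. The sketch below, with invented illustrative numbers (not data from the paper), shows the normalization.

```python
def collection_efficiency(cd34_collected, blood_volumes_processed,
                          cd34_per_litre_start, total_blood_volume_l):
    """Fraction of the circulating CD34+ pool collected per blood volume
    processed, i.e. the normalisation used in the study. All variable
    names and numbers here are illustrative, not data from the paper.
    """
    circulating_pool = cd34_per_litre_start * total_blood_volume_l
    per_volume = cd34_collected / blood_volumes_processed
    return per_volume / circulating_pool

# A hypothetical LVL session: 11.1 blood volumes processed, 3.3e8 CD34+
# cells collected, 2e7 CD34+ cells/L at the start, 5 L total blood volume
# -> roughly 30% of the circulating pool collected per blood volume.
print(collection_efficiency(3.3e8, 11.1, 2.0e7, 5.0))
```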

  2. Constrained Large Eddy Simulation of Separated Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Xia, Zhenhua; Shi, Yipeng; Wang, Jianchun; Xiao, Zuoli; Yang, Yantao; Chen, Shiyi

    2011-11-01

    Constrained Large-eddy Simulation (CLES) has been recently proposed to simulate turbulent flows with massive separation. Different from traditional large eddy simulation (LES) and hybrid RANS/LES approaches, the CLES simulates the whole flow domain by large eddy simulation while enforcing a RANS Reynolds stress constraint on the subgrid-scale (SGS) stress models in the near-wall region. Algebraic eddy-viscosity models and one-equation Spalart-Allmaras (S-A) model have been used to constrain the Reynolds stress. The CLES approach is validated a posteriori through simulation of flow past a circular cylinder and periodic hill flow at high Reynolds numbers. The simulation results are compared with those from RANS, DES, DDES and other available hybrid RANS/LES methods. It is shown that the capability of the CLES method in predicting separated flows is comparable to that of DES. Detailed discussions are also presented about the effects of the RANS models as constraint in the near-wall layers. Our results demonstrate that the CLES method is a promising alternative towards engineering applications.

  3. REIONIZATION ON LARGE SCALES. I. A PARAMETRIC MODEL CONSTRUCTED FROM RADIATION-HYDRODYNAMIC SIMULATIONS

    SciTech Connect

    Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.

    2013-10-20

    We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048^3 dark matter particles, 2048^3 gas cells, and 17 billion adaptive rays in an L = 100 Mpc h^-1 box, we show that the density and reionization redshift fields are highly correlated on large scales (> 1 Mpc h^-1). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and the correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (> 2 Gpc h^-1) in order to make mock observations and theoretical predictions.
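
    The bias-filtering step lends itself to a short sketch: apply a scale-dependent bias b(k) to a density field in Fourier space and map the filtered field to a reionization-redshift field. The bias form and all parameter values below are illustrative placeholders, not the fitted function of the paper.

```python
import numpy as np

def zreion_from_density(delta, box_size, z_mean=8.0, b0=0.6, k0=0.2, alpha=1.0):
    """Apply a scale-dependent linear bias b(k) = b0 / (1 + k/k0)**alpha to a
    cubic density-contrast field in Fourier space, then map the filtered
    field to a reionization-redshift field via z = z_mean + (1 + z_mean) * dz.
    """
    n = delta.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k = np.sqrt(kx**2 + ky**2 + kz**2)
    bias = b0 / (1.0 + k / k0) ** alpha
    delta_z = np.fft.ifftn(bias * np.fft.fftn(delta)).real
    return z_mean + (1.0 + z_mean) * delta_z
```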

  4. Simulating Longitudinal Brain MRIs with Known Volume Changes and Realistic Variations in Image Intensity

    PubMed Central

    Khanal, Bishesh; Ayache, Nicholas; Pennec, Xavier

    2017-01-01

    This paper presents a simulator tool that can simulate large databases of visually realistic longitudinal MRIs with known volume changes. The simulator is based on a previously proposed biophysical model of brain deformation due to atrophy in AD. In this work, we propose a novel way of reproducing realistic intensity variation in longitudinal brain MRIs, which is inspired by an approach used for the generation of synthetic cardiac sequence images. This approach combines a deformation field obtained from the biophysical model with a deformation field obtained by a non-rigid registration of two images. The combined deformation field is then used to simulate a new image with specified atrophy from the first image, but with the intensity characteristics of the second image. This makes it possible to generate the realistic variations present in real longitudinal time-series of images, such as the independence of noise between two acquisitions and the potential presence of variable acquisition artifacts. Various options available in the simulator software are briefly explained in this paper. In addition, the software is released as an open-source repository. The availability of the software allows researchers to produce tailored databases of images with ground-truth volume changes; we believe this will help in developing more robust brain morphometry tools. Additionally, we believe that the scientific community can also use the software to further experiment with the proposed model, and add more complex models of brain deformation and atrophy generation. PMID:28381986
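
    The core mechanism, composing the model-derived and registration-derived deformation fields before warping, can be sketched as follows. This is a simplified illustration (linear interpolation, voxel units, an assumed array layout), not the released software's implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose_and_warp(img, disp_model, disp_reg):
    """Warp `img` with the composition of two displacement fields:
    disp_model from the biophysical atrophy model and disp_reg from a
    non-rigid registration of two real scans, both of shape (3, nx, ny, nz)
    in voxel units. Composition: phi(x) = x + u_model(x) + u_reg(x + u_model(x)).
    """
    grid = np.indices(img.shape).astype(float)       # identity map
    mid = grid + disp_model                          # x + u_model(x)
    u_reg_at_mid = np.stack([map_coordinates(disp_reg[i], mid, order=1)
                             for i in range(3)])     # resample u_reg there
    phi = mid + u_reg_at_mid
    return map_coordinates(img, phi, order=1)        # intensity resampling
```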

  5. Simulating Longitudinal Brain MRIs with Known Volume Changes and Realistic Variations in Image Intensity.

    PubMed

    Khanal, Bishesh; Ayache, Nicholas; Pennec, Xavier

    2017-01-01

    This paper presents a simulator tool that can simulate large databases of visually realistic longitudinal MRIs with known volume changes. The simulator is based on a previously proposed biophysical model of brain deformation due to atrophy in AD. In this work, we propose a novel way of reproducing realistic intensity variation in longitudinal brain MRIs, which is inspired by an approach used for the generation of synthetic cardiac sequence images. This approach combines a deformation field obtained from the biophysical model with a deformation field obtained by a non-rigid registration of two images. The combined deformation field is then used to simulate a new image with specified atrophy from the first image, but with the intensity characteristics of the second image. This makes it possible to generate the realistic variations present in real longitudinal time-series of images, such as the independence of noise between two acquisitions and the potential presence of variable acquisition artifacts. Various options available in the simulator software are briefly explained in this paper. In addition, the software is released as an open-source repository. The availability of the software allows researchers to produce tailored databases of images with ground-truth volume changes; we believe this will help in developing more robust brain morphometry tools. Additionally, we believe that the scientific community can also use the software to further experiment with the proposed model, and add more complex models of brain deformation and atrophy generation.

  6. Real-time visualization of large volume datasets on standard PC hardware.

    PubMed

    Xie, Kai; Yang, Jie; Zhu, Y M

    2008-05-01

    In the medical area, interactive three-dimensional volume visualization of large volume datasets is a challenging task. One of the major challenges in graphics processing unit (GPU)-based volume rendering algorithms is the limited size of texture memory imposed by current GPU architecture. We attempt to overcome this limitation by rendering only visible parts of large CT datasets. In this paper, we present an efficient, high-quality volume rendering algorithm using GPUs for rendering large CT datasets at interactive frame rates on standard PC hardware. We subdivide the volume dataset into uniformly sized blocks and take advantage of combinations of early ray termination, empty-space skipping and visibility culling to accelerate the whole rendering process and render visible parts of volume data. We have implemented our volume rendering algorithm for a large volume dataset of dimensions 512 x 304 x 1878 (visible female), and achieved real-time performance (i.e., 3-4 frames per second) on a Pentium 4 2.4 GHz PC equipped with an NVIDIA GeForce 6600 graphics card (256 MB video memory). This method can be used as a 3D visualization tool of large CT datasets for doctors or radiologists.
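
    Front-to-back compositing with early ray termination, one of the accelerations listed above, can be sketched for a single ray as follows. The block subdivision and visibility-culling metadata of the actual algorithm are omitted, and the per-sample alpha test below only mimics empty-space skipping.

```python
import numpy as np

def inside(volume, pos):
    """True while the sample position lies within the volume bounds."""
    return bool(np.all(pos >= 0) and np.all(pos < np.array(volume.shape[:3])))

def cast_ray(volume, origin, direction, step=1.0,
             opacity_threshold=0.95, empty_alpha=0.05):
    """Front-to-back compositing along one ray through an (nx, ny, nz, 4)
    array of classified RGBA voxels (nearest-neighbour sampling)."""
    color, alpha = np.zeros(3), 0.0
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    while inside(volume, pos):
        r, g, b, a = volume[tuple(pos.astype(int))]
        if a > empty_alpha:                      # skip nearly empty samples
            color += (1.0 - alpha) * a * np.array([r, g, b])
            alpha += (1.0 - alpha) * a
            if alpha >= opacity_threshold:       # early ray termination
                break
        pos += step * d
    return color, alpha
```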

  7. Rayleigh-Taylor mixing: direct numerical simulation and implicit large eddy simulation

    NASA Astrophysics Data System (ADS)

    Youngs, David L.

    2017-07-01

    Previous research into three-dimensional numerical simulation of self-similar mixing due to Rayleigh-Taylor instability is summarized. A range of numerical approaches has been used: direct numerical simulation, implicit large eddy simulation and large eddy simulation with an explicit model for sub-grid-scale dissipation. However, few papers have made direct comparisons between the various approaches. The main purpose of the current paper is to give comparisons of direct numerical simulations and implicit large eddy simulations using the same computational framework. Results are shown for four test cases: (i) single-mode Rayleigh-Taylor instability, (ii) self-similar Rayleigh-Taylor mixing, (iii) three-layer mixing and (iv) a tilted-rig Rayleigh-Taylor experiment. It is found that both approaches give similar results for the high-Reynolds number behavior. Direct numerical simulation is needed to assess the influence of finite Reynolds number.

  8. Large-scale simulations of complex physical systems

    NASA Astrophysics Data System (ADS)

    Belić, A.

    2007-04-01

    Scientific computing has become a tool as vital as experimentation and theory for dealing with scientific challenges of the twenty-first century. Large scale simulations and modelling serve as heuristic tools in a broad problem-solving process. High-performance computing facilities make possible the first step in this process - a view of new and previously inaccessible domains in science and the building up of intuition regarding the new phenomenology. The final goal of this process is to translate this newly found intuition into better algorithms and new analytical results. In this presentation we give an outline of the research themes pursued at the Scientific Computing Laboratory of the Institute of Physics in Belgrade regarding large-scale simulations of complex classical and quantum physical systems, and present recent results obtained in the large-scale simulations of granular materials and path integrals.

  9. Large-scale simulations of complex physical systems

    SciTech Connect

    Belic, A.

    2007-04-23

    Scientific computing has become a tool as vital as experimentation and theory for dealing with scientific challenges of the twenty-first century. Large scale simulations and modelling serve as heuristic tools in a broad problem-solving process. High-performance computing facilities make possible the first step in this process - a view of new and previously inaccessible domains in science and the building up of intuition regarding the new phenomenology. The final goal of this process is to translate this newly found intuition into better algorithms and new analytical results. In this presentation we give an outline of the research themes pursued at the Scientific Computing Laboratory of the Institute of Physics in Belgrade regarding large-scale simulations of complex classical and quantum physical systems, and present recent results obtained in the large-scale simulations of granular materials and path integrals.

  10. Evaluation of Large Volume SrI2(Eu) Scintillator Detectors

    SciTech Connect

    Sturm, B W; Cherepy, N J; Drury, O B; Thelin, P A; Fisher, S E; Magyar, A F; Payne, S A; Burger, A; Boatner, L A; Ramey, J O; Shah, K S; Hawrami, R

    2010-11-18

    There is an ever-increasing demand for gamma-ray detectors which can achieve good energy resolution, high detection efficiency, and room-temperature operation. We are working to address each of these requirements through the development of large volume SrI2(Eu) scintillator detectors. In this work, we have evaluated a variety of SrI2 crystals with volumes >10 cm^3. The goal of this research was to examine the causes of energy resolution degradation for larger detectors and to determine what can be done to mitigate these effects. Testing both packaged and unpackaged detectors, we have consistently achieved better resolution with the packaged detectors. Using a collimated gamma-ray source, it was determined that better energy resolution for the packaged detectors is correlated with better light collection uniformity. A number of packaged detectors were fabricated and tested, and the best spectroscopic performance was achieved for a 3% Eu doped crystal with an energy resolution of 2.93% FWHM at 662 keV. Simulations of SrI2(Eu) crystals were also performed to better understand the light transport physics in scintillators and are reported. This study has important implications for the development of SrI2(Eu) detectors for national security purposes.

  11. Computation and volume rendering of large-scale EOF coherent modes in rotating turbulent flow data

    NASA Astrophysics Data System (ADS)

    Ostrouchov, G.; Pugmire, D.; Rosenberg, D. L.; Chen, W.; Pouquet, A.

    2013-12-01

    The computation of empirical orthogonal functions (EOF) is used to extract major coherent modes of variability in spatio-temporal data. We explore the computation of EOF in three spatial dimensions over time and present the result with volume rendering software. To accomplish this, we use an HPC extension of the R language, pbdR (see r-pbd.org), that we embed in the VisIt visualization system. VisIt provides parallel data reader capability as well as the volume rendering ability to present the computed EOFs. The data we consider derives from direct numerical simulation on a grid of 2048^3 points of rapidly rotating turbulent flows that are forced at intermediate scales. Injection of energy at these scales at small Rossby number (~0.04) leads to a direct cascade of energy to small scales, and an inverse cascade to large scales. We will use pbdR to examine the spatio-temporal interactions and ergodicity of waves and turbulent eddies in these flows.
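
    The EOF computation itself reduces to a singular value decomposition of the space-time anomaly matrix; the sketch below is a serial numpy analogue of the distributed pbdR computation described above.

```python
import numpy as np

def compute_eofs(data, n_modes=3):
    """EOFs of a space-time field via SVD.

    data : array of shape (n_time, n_space), e.g. a 3D grid flattened per
    snapshot. Returns spatial modes, explained-variance fractions, and
    principal-component time series.
    """
    anomalies = data - data.mean(axis=0)            # remove the time mean
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    variance_frac = s**2 / np.sum(s**2)
    eofs = vt[:n_modes]                             # spatial patterns
    pcs = u[:, :n_modes] * s[:n_modes]              # time coefficients
    return eofs, variance_frac[:n_modes], pcs
```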

  12. Controlled multibody dynamics simulation for large space structures

    NASA Technical Reports Server (NTRS)

    Housner, J. M.; Wu, S. C.; Chang, C. W.

    1989-01-01

    The multibody dynamics discipline and dynamic simulation in control-structure interaction (CSI) design are discussed. The use, capabilities, and architecture of the Large Angle Transient Dynamics (LATDYN) code as a simulation tool are explained. A generic joint body with various types of hinge connections; finite element and element coordinate systems; results of a flexible beam spin-up on a plane; mini-mast deployment; space crane and robotic slewing manipulations; a potential CSI test article; and multibody benchmark experiments are also described.

  13. Large-Eddy Simulation of turbine wake in complex terrain

    NASA Astrophysics Data System (ADS)

    Berg, J.; Troldborg, N.; Sørensen, N. N.; Patton, E. G.; Sullivan, P. P.

    2017-05-01

    We present Large-Eddy Simulation results of a turbine wake in realistic complex terrain with slopes above 0.5. By comparing simulations with and without the wind turbine we can estimate the induction factor, a, and we show how the presence of a strong recirculation zone in the terrain dictates the positioning of the wake. This last finding is in contrast to what would happen in gentle terrain with no substantial increase of turbulent kinetic energy in the terrain-induced wakes.

  14. Use of cryopumps on large space simulation systems

    NASA Technical Reports Server (NTRS)

    Mccrary, L. E.

    1980-01-01

    The need for clean, oil-free space simulation systems has driven the development of large, clean pumping systems. Optically dense liquid nitrogen baffles over diffusion pumps prevent backstreaming to a large extent, but do not preclude contamination from accidents or a control failure. Turbomolecular pumps or ion pumps achieve oil-free systems but are only practical for relatively small chambers. Large cryopumps that do achieve clean pumping of very large chambers were developed and checked out. These pumps can be used as the original pumping system or can be retrofitted as a replacement for existing diffusion pumps.

  15. High Fidelity Simulations of Large-Scale Wireless Networks

    SciTech Connect

    Onunkwo, Uzoma; Benz, Zachary

    2015-11-01

    The worldwide proliferation of wirelessly connected devices continues to accelerate. There are tens of billions of wireless links across the planet, with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue, and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround times. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES approaches to poor scaling (e.g., the OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating the communication overhead associated with synchronizations. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia's simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia's current highly-regarded capabilities in large-scale emulations have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static and (b) the nodes have fixed locations.

  16. Large Eddy Simulations and Turbulence Modeling for Film Cooling

    NASA Technical Reports Server (NTRS)

    Acharya, Sumanta

    1999-01-01

    The objective of the research is to perform Direct Numerical Simulations (DNS) and Large Eddy Simulations (LES) of the film cooling process, and to evaluate and improve advanced forms of the two-equation turbulence models for turbine blade surface flow analysis. The DNS/LES were used to resolve the large eddies within the flow field near the coolant jet location. The work involved code development and application of the developed codes to film cooling problems. Five different codes were developed and utilized to perform this research. This report presents a summary of the development of the codes and their applications to analyze the turbulence properties at locations near coolant injection holes.

  17. Applications of large eddy simulation methods to gyrokinetic turbulence

    SciTech Connect

    Bañón Navarro, A.; Happel, T.; Teaca, B.; Jenko, F.; Hammett, G. W.; Collaboration: ASDEX Upgrade Team

    2014-03-15

    The large eddy simulation (LES) approach—solving numerically the large scales of a turbulent system and accounting for the small-scale influence through a model—is applied to nonlinear gyrokinetic systems that are driven by a number of different microinstabilities. Comparisons between modeled, lower resolution, and higher resolution simulations are performed for an experimental measurable quantity, the electron density fluctuation spectrum. Moreover, the validation and applicability of LES is demonstrated through a series of diagnostics based on the free energetics of the system.

  18. Properties of shallow convection from Large-Eddy simulations

    NASA Astrophysics Data System (ADS)

    Denby, Leif; Herzog, Michael

    2017-04-01

    Utilizing large-eddy simulations (LES) of isolated individual convective clouds in an idealised conditionally unstable atmosphere and large-domain LES of radiative-convective equilibrium (RCE) based on the RICO measuring campaign (Rauber et al. 2007), vertical profiles of individual clouds and statistical properties of the cloud ensemble have been extracted and compared against predictions by a 1D entraining parcel model and against the cloud-ensemble model of the CCFM convection scheme (Wagner and Graf 2010), which comprises a solution of a Lotka-Volterra population dynamics system. For the simulations of isolated clouds, agreement with the entraining parcel model could be achieved when the simulations were carried out with 2D axisymmetry and the entrainment rate was prescribed using an entrainment profile estimated from LES with a passive tracer (in place of the traditional Morton-Turner entrainment rate parameterisation); this agreement was not achieved when comparing against 3D simulations. When the entraining parcel model was integrated using the horizontal-mean environment profile of the RCE simulation (i.e., the vertical profile as would be predicted by a climate model), it was not possible to reproduce the variation in cloud-top height seen in the RCE simulation, even when the entrainment rate was greatly increased. However, if the near-environment of a convective cloud was used as the environmental profile, the variation in cloud-top height was reproduced (by varying the cloud-base state variables within values extracted from the RCE simulation). This indicates that the near-cloud environment is significantly different from the horizontal-mean environment and must be taken into account if the effect of entrainment is to be correctly captured in parameterisations of convection. Finally, the size distribution of convective clouds extracted from the RCE simulation showed qualitative agreement with predictions of CCFM's spectrum model.
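
    The entraining parcel model referred to above can be sketched in a few lines. This is a dry, minimal version (no moist thermodynamics or condensate loading), meant only to show how a tracer-derived entrainment profile enters the parcel equation.

```python
import numpy as np

def entraining_parcel_temperature(z, t_env, eps, t_cb):
    """Dry entraining parcel: dT_p/dz = -g/cp - eps(z) * (T_p - T_env(z)).

    z     : heights (m), ascending
    t_env : environmental temperature at each z (K)
    eps   : fractional entrainment rate at each z (1/m), e.g. estimated
            from a passive tracer as in the abstract
    t_cb  : parcel temperature at cloud base (K)
    """
    g, cp = 9.81, 1004.0
    t_p = np.empty_like(z, dtype=float)
    t_p[0] = t_cb
    for k in range(1, z.size):
        dz = z[k] - z[k - 1]
        dtdz = -g / cp - eps[k - 1] * (t_p[k - 1] - t_env[k - 1])
        t_p[k] = t_p[k - 1] + dz * dtdz
    return t_p
```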

  19. Large-volume protein crystal growth for neutron macromolecular crystallography

    DOE PAGES

    Ng, Joseph D.; Baird, James K.; Coates, Leighton; ...

    2015-03-30

    Neutron macromolecular crystallography (NMC) is the prevailing method for the accurate determination of the positions of H atoms in macromolecules. As neutron sources are becoming more available to general users, finding means to optimize the growth of protein crystals to sizes suitable for NMC is extremely important. Historically, much has been learned about growing crystals for X-ray diffraction. However, owing to new-generation synchrotron X-ray facilities and sensitive detectors, protein crystal sizes as small as in the nano-range have become adequate for structure determination, lessening the necessity to grow large crystals. Here, some of the approaches, techniques and considerations for the growth of crystals to significant dimensions that are now relevant to NMC are revisited. We report that these include experimental strategies utilizing solubility diagrams, ripening effects, classical crystallization techniques, microgravity and theoretical considerations.

  20. Large-volume protein crystal growth for neutron macromolecular crystallography

    SciTech Connect

    Ng, Joseph D.; Baird, James K.; Coates, Leighton; Garcia-Ruiz, Juan M.; Hodge, Teresa A.; Huang, Sijay

    2015-03-30

    Neutron macromolecular crystallography (NMC) is the prevailing method for the accurate determination of the positions of H atoms in macromolecules. As neutron sources are becoming more available to general users, finding means to optimize the growth of protein crystals to sizes suitable for NMC is extremely important. Historically, much has been learned about growing crystals for X-ray diffraction. However, owing to new-generation synchrotron X-ray facilities and sensitive detectors, protein crystal sizes as small as in the nano-range have become adequate for structure determination, lessening the necessity to grow large crystals. Here, some of the approaches, techniques and considerations for the growth of crystals to significant dimensions that are now relevant to NMC are revisited. We report that these include experimental strategies utilizing solubility diagrams, ripening effects, classical crystallization techniques, microgravity and theoretical considerations.

  1. Large-volume protein crystal growth for neutron macromolecular crystallography

    PubMed Central

    Ng, Joseph D.; Baird, James K.; Coates, Leighton; Garcia-Ruiz, Juan M.; Hodge, Teresa A.; Huang, Sijay

    2015-01-01

    Neutron macromolecular crystallography (NMC) is the prevailing method for the accurate determination of the positions of H atoms in macromolecules. As neutron sources are becoming more available to general users, finding means to optimize the growth of protein crystals to sizes suitable for NMC is extremely important. Historically, much has been learned about growing crystals for X-ray diffraction. However, owing to new-generation synchrotron X-ray facilities and sensitive detectors, protein crystal sizes as small as in the nano-range have become adequate for structure determination, lessening the necessity to grow large crystals. Here, some of the approaches, techniques and considerations for the growth of crystals to significant dimensions that are now relevant to NMC are revisited. These include experimental strategies utilizing solubility diagrams, ripening effects, classical crystallization techniques, microgravity and theoretical considerations. PMID:25849493

  2. Random forest classification of large volume structures for visuo-haptic rendering in CT images

    NASA Astrophysics Data System (ADS)

    Mastmeyer, Andre; Fortmeier, Dirk; Handels, Heinz

    2016-03-01

    For patient-specific voxel-based visuo-haptic rendering of CT scans of the liver area, the fully automatic segmentation of large volume structures such as skin, soft tissue, lungs and intestine (risk structures) is important. Using a machine learning based approach, several existing segmentations from 10 segmented gold-standard patients are learned by random decision forests, individually and collectively. The core of this paper is feature selection and the application of the learned classifiers to a new patient data set. In a leave-some-out cross-validation, the obtained full volume segmentations are compared to the gold-standard segmentations of the untrained patients. The proposed classifiers use a multi-dimensional feature space to estimate the hidden truth, instead of relying on clinical standard threshold- and connectivity-based methods. The results of our efficient whole-body section classification are multi-label maps of the considered tissues. For visuo-haptic simulation, other small volume structures would have to be segmented additionally. We also take a look at such structures (liver vessels). In an experimental leave-some-out study of 10 patients, the proposed method performs much more efficiently than state-of-the-art methods. In two variants of the leave-some-out experiments we obtain best mean DICE ratios of 0.79, 0.97, 0.63 and 0.83 for skin, soft tissue, hard bone and risk structures. Liver structures are segmented with DICE 0.93 for the liver, 0.43 for blood vessels and 0.39 for bile vessels.
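
    A minimal stand-in for the voxel-classification pipeline, using scikit-learn's random forest on synthetic per-voxel features and scoring with the Dice overlap; the feature choices and labels here are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dice(pred_mask, true_mask):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    return 2.0 * inter / (pred_mask.sum() + true_mask.sum())

# Synthetic stand-ins for per-voxel feature vectors (e.g. intensity,
# smoothed intensity, position) and tissue labels from gold standards.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 4))
y_train = (X_train[:, 0] + 0.3 * X_train[:, 1] > 0).astype(int)
X_test = rng.normal(size=(2000, 4))
y_test = (X_test[:, 0] + 0.3 * X_test[:, 1] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
print("DICE:", dice(clf.predict(X_test) == 1, y_test == 1))
```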

  3. Toward Improved Support for Loosely Coupled Large Scale Simulation Workflows

    SciTech Connect

    Boehm, Swen; Elwasif, Wael R; Naughton, III, Thomas J; Vallee, Geoffroy R

    2014-01-01

    High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations, and where time-to-solution is an untenable pacing issue. This paper describes the challenges related to the support of fine-grained launch capabilities that are necessary for the execution of loosely coupled large scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report on initial results from early testing on systems at Oak Ridge National Laboratory.

  4. An instrument for collecting discrete large-volume water samples suitable for ecological studies of microorganisms

    NASA Astrophysics Data System (ADS)

    Wommack, K. Eric; Williamson, Shannon J.; Sundbergh, Arthur; Helton, Rebekah R.; Glazer, Brian T.; Portune, Kevin; Craig Cary, S.

    2004-11-01

    Microbiological investigations utilizing molecular genetic approaches to characterize microbial communities can require large volume water samples, tens to hundreds of liters. The requirement for large volume samples can be especially challenging in deep-sea hydrothermal vent environments of the oceanic ridge system. By and large, studies of these environments rely on deep submergence vehicles. However, collection of large volume (>100 L) water samples adjacent to the benthos is not feasible due to weight considerations. To address the technical difficulty of collecting large volume water samples from hydrothermal diffuse flow environments, a semi-autonomous large-volume water sampler (LVWS) was designed. The LVWS is capable of reliably collecting and bringing to the surface 120 L water samples from diffuse flow environments. Microscopy, molecular genetic and chemical analyses of water samples taken from 9°N East Pacific Rise are shown to demonstrate the utility of the LVWS for studies of near-benthos environments. To our knowledge this is the first report of virioplankton abundance within diffuse-flow waters of a deep-sea hydrothermal vent environment. Because of its simple design and relatively low cost, the LVWS should be applicable to a variety of studies which require large-volume water samples collected immediately adjacent to the benthos.

  5. Methods for molecular interactions and large-scale simulations

    NASA Astrophysics Data System (ADS)

    Jeon, Byoungseon

    Molecular Dynamics (MD) is one of the powerful methods for studying the complexity of large ensembles of particles in various states of matter. This thesis describes work in advancing selective applications of computational molecular dynamics. First, the detailed interaction between methyl-thiol molecules and a Au(111) surface is investigated through extensive state-of-the-art first principles calculations. The quantum simulation results are used to fit a classical many-body surface potential, which can be conveniently implemented into MD simulations of alkane-thiol ensembles on a Au(111) surface. Also a coarse-grained MD code is developed, and the effect of thiol densities and alkane-chain lengths on self-assembled monolayers is examined. Second, ultracold neutral plasmas with open boundary are investigated with all pair-wise calculations, parallel TREE, and a mean field potential. Using two-component plasma (TCP) analysis and large-scale parallel processing, simulations of realistically large configurations are conducted. In addition to TCP, the mean field theory facilitates the simple description of background electrons, and full scale simulations of ultracold plasma evolution are presented. Finally, two-temperature systems of two-component plasmas with extremely high density and temperatures are examined for thermal mixing and equilibration between the components. Electrostatic interactions are evaluated with periodic boundary conditions, and bare/reduced ion mass simulations are conducted for the balance between numerical efficiency and reliability of simulations. These examples of development and applications of MD methods, such as first-principles calculations, force-field development, efficient algorithm implementation, and large-scale molecular simulations, have provided many valuable experiences in the dynamics and energetics of molecular systems. They have also provided specific new studies and results that are valuable to the communities of surface self

  6. Main characteristics of the Large Space Simulation (LSS)

    NASA Technical Reports Server (NTRS)

    Brinkmann, P. W.

    1984-01-01

    The Large Space Simulator at the European Space Research and Technology Centre (ESTEC) is described. The facility enables mechanical and thermal tests on large satellites. The chamber will be equipped with a collimated solar beam of 6 meter diameter. Infrared equipment is available as an alternative or complementary source of thermal radiation. Controlled variation of shroud temperatures can be utilized for thermal testing or temperature cycling of hardware. The basic concept and major design aspects of the facility are presented.

  7. Computational fluid dynamics simulations of particle deposition in large-scale, multigenerational lung models.

    PubMed

    Walters, D Keith; Luke, William H

    2011-01-01

    Computational fluid dynamics (CFD) has emerged as a useful tool for the prediction of airflow and particle transport within the human lung airway. Several published studies have demonstrated the use of Eulerian finite-volume CFD simulations coupled with Lagrangian particle tracking methods to determine local and regional particle deposition rates in small subsections of the bronchopulmonary tree. However, the simulation of particle transport and deposition in large-scale models encompassing more than a few generations is less common, due in part to the sheer size and complexity of the human lung airway. Highly resolved, fully coupled flowfield solution and particle tracking in the entire lung, for example, is currently an intractable problem and will remain so for the foreseeable future. This paper adopts a previously reported methodology for simulating large-scale regions of the lung airway (Walters, D. K., and Luke, W. H., 2010, "A Method for Three-Dimensional Navier-Stokes Simulations of Large-Scale Regions of the Human Lung Airway," ASME J. Fluids Eng., 132(5), p. 051101), which was shown to produce results similar to fully resolved geometries using approximate, reduced geometry models. The methodology is extended here to particle transport and deposition simulations. Lagrangian particle tracking simulations are performed in combination with Eulerian simulations of the airflow in an idealized representation of the human lung airway tree. Results using the reduced models are compared with those using the fully resolved models for an eight-generation region of the conducting zone. The agreement between fully resolved and reduced geometry simulations indicates that the new method can provide an accurate alternative for large-scale CFD simulations while potentially reducing the computational cost of these simulations by several orders of magnitude.
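
    The Lagrangian side of such simulations ultimately rests on integrating a drag law per particle. The sketch below integrates Stokes drag plus gravity with explicit Euler for a single particle; the parameters are generic (a 4 micron unit-density sphere in air), not those of the paper, and the time step must resolve the relaxation time tau.

```python
import numpy as np

def track_particle(x0, v0, fluid_vel, dt, n_steps, dp=4e-6, rho_p=1000.0,
                   mu=1.8e-5, g=(0.0, 0.0, -9.81)):
    """Explicit-Euler tracking of one spherical particle under Stokes drag
    and gravity. fluid_vel(x) returns the local air velocity at x."""
    tau = rho_p * dp**2 / (18.0 * mu)          # particle relaxation time
    x = np.asarray(x0, dtype=float)
    v = np.asarray(v0, dtype=float)
    g = np.asarray(g, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        a = (fluid_vel(x) - v) / tau + g       # Stokes drag + gravity
        v = v + dt * a
        x = x + dt * v
        path.append(x.copy())
    return np.array(path)

# e.g. a uniform 1 m/s axial flow (dt = 5e-6 s resolves tau ~ 5e-5 s):
# path = track_particle([0, 0, 0], [0, 0, 0],
#                       lambda x: np.array([1.0, 0.0, 0.0]),
#                       dt=5e-6, n_steps=2000)
```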

  8. Science and engineering of large scale socio-technical simulations.

    SciTech Connect

    Barrett, C. L.; Eubank, S. G.; Marathe, M. V.; Mortveit, H. S.; Reidys, C. M.

    2001-01-01

    Computer simulation is a computational approach whereby global system properties are produced as dynamics by direct computation of interactions among representations of local system elements. A mathematical theory of simulation consists of an account of the formal properties of sequential evaluation and composition of interdependent local mappings. When certain local mappings and their interdependencies can be related to particular real-world objects and interdependencies, it is common to compute the interactions to derive a symbolic model of the global system made up of the corresponding interdependent objects. The formal mathematical and computational account of the simulation provides a particular kind of theoretical explanation of the global system properties and, therefore, insight into how to engineer a complex system to exhibit those properties. This paper considers the mathematical foundations and engineering principles necessary for building large scale simulations of socio-technical systems. Examples of such systems are urban regional transportation systems, the national electrical power markets and grids, the world-wide Internet, vaccine design and deployment, theater war, etc. These systems are composed of large numbers of interacting human, physical and technological components. Some components adapt and learn, exhibit perception, interpretation, reasoning, deception, cooperation and noncooperation, and have economic motives as well as the usual physical properties of interaction. The systems themselves are large and the behavior of socio-technical systems is tremendously complex. The state of affairs for these kinds of systems is characterized by very little satisfactory formal theory, a good deal of very specialized knowledge of subsystems, and a dependence on experience-based practitioners' art. However, these systems are vital and require policy, control, design, implementation and investment. Thus there is motivation to improve the ability to

  9. Mathematical simulation of power conditioning systems. Volume 1: Simulation of elementary units. Report on simulation methodology

    NASA Technical Reports Server (NTRS)

    Prajous, R.; Mazankine, J.; Ippolito, J. C.

    1978-01-01

    Methods and algorithms used for the simulation of elementary power conditioning units (buck, boost, and buck-boost converters, as well as shunt PWM) are described. Definitions are given of similar converters and reduced parameters. The various parts of the simulation to be carried out are dealt with: local stability, corrective networks, measurements of input-output impedance, and global stability. A simulation example is given.
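
    For the buck unit, for example, such a simulation amounts to switching between two linear state-space circuits at the PWM rate. A minimal sketch, assuming an ideal synchronous switch and illustrative component values:

```python
import numpy as np

def simulate_buck(vin=12.0, duty=0.5, L=100e-6, C=100e-6, R=10.0,
                  fsw=50e3, dt=1e-7, t_end=5e-3):
    """Ideal synchronous buck converter, simulated by switching between its
    two linear state-space circuits once per PWM subinterval.
    States: inductor current iL, capacitor (output) voltage vC.
    """
    period = 1.0 / fsw
    iL, vC, t = 0.0, 0.0, 0.0
    samples = []
    while t < t_end:
        vsw = vin if (t % period) < duty * period else 0.0  # PWM node voltage
        diL = (vsw - vC) / L          # inductor:  L diL/dt = vsw - vC
        dvC = (iL - vC / R) / C       # capacitor: C dvC/dt = iL - vC/R
        iL += dt * diL
        vC += dt * dvC
        samples.append((t, iL, vC))
        t += dt
    return np.array(samples)  # vC settles near duty * vin (6 V here)
```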

  10. Search for spin coupled WIMPs with the large volume NaI(Tl) scintillators

    NASA Astrophysics Data System (ADS)

    Yoshida, S.; Ejiri, H.; Fushimi, K.; Hayashi, K.; Kishimoto, T.; Kudomi, N.; Kume, K.; Kuramoto, H.; Matsuoka, K.; Ohsumi, H.; Takahisa, K.; Tsujimoto, Y.; Umehara, S.

    2001-06-01

    The cold dark matter search has been carried out at the Oto Cosmo Observatory with the large-volume NaI(Tl) scintillators of ELEGANT V. New limits on WIMPs could be obtained by analysis of the annual modulation.

  11. Large-Scale Simulation of Nuclear Reactors: Issues and Perspectives

    SciTech Connect

    Merzari, Elia; Obabko, Aleks; Fischer, Paul; Halford, Noah; Walker, Justin; Siegel, Andrew; Yu, Yiqi

    2015-01-01

    Numerical simulation has been an intrinsic part of nuclear engineering research since its inception. In recent years a transition is occurring toward predictive, first-principle-based tools such as computational fluid dynamics. Even with the advent of petascale computing, however, such tools still have significant limitations. In the present work some of these issues, and in particular the presence of massive multiscale separation, are discussed, as well as some of the research conducted to mitigate them. Petascale simulations at high fidelity (large eddy simulation/direct numerical simulation) were conducted with the massively parallel spectral element code Nek5000 on a series of representative problems. These simulations shed light on the requirements of several types of simulation: (1) axial flow around fuel rods, with particular attention to wall effects; (2) natural convection in the primary vessel; and (3) flow in a rod bundle in the presence of spacing devices. The focus of the work presented here is on the lessons learned and the requirements to perform these simulations at exascale. Additional physical insight gained from these simulations is also emphasized.

  12. Numerical simulations of large-scale detonation tests in the RUT facility by the LES model.

    PubMed

    Zbikowski, Mateusz; Makarov, Dmitriy; Molkov, Vladimir

    2010-09-15

    The LES model based on the progress variable equation and the gradient method to simulate propagation of the reaction front within the detonation wave, recently verified against the ZND theory, is tested in this study against two large-scale experiments in the RUT facility. The facility was a 27.6 m x 6.3 m x 6.55 m compartment with complex three-dimensional geometry. Experiments with 20% and 25.5% hydrogen-air mixtures and different locations of direct detonation initiation were simulated. The sensitivity of the 3D simulations to control volume size and type was tested and found to be stringent compared to the planar detonation case. The maximum simulated pressure peak was found to be lower than the theoretical von Neumann spike value for the planar detonation and larger than the Chapman-Jouguet pressure, indicating that it is more challenging to keep the numerical reaction zone behind the leading front of the numerical shock for curved fronts with large control volumes. The simulations demonstrated agreement with the experimental data.
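
    The gradient method referred to above closes the progress-variable equation with a source proportional to |grad c|, so the front propagates at the prescribed speed regardless of its numerical thickness. A minimal sketch, with illustrative names and a uniform 3D grid assumed:

```python
import numpy as np

def reaction_source(c, rho_u, s_front, dx):
    """Progress-variable source by the gradient method: mass burnt per unit
    volume and time = rho_u * S * |grad c|, where rho_u is the unburnt
    density and s_front the prescribed front speed.
    """
    gx, gy, gz = np.gradient(c, dx)
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    return rho_u * s_front * grad_mag
```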

  13. Time simulation of flutter with large stiffness changes

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay; Wieseman, Carol D.

    1992-01-01

    Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.
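
    The time-marching step with a structural change can be sketched as switching between two precomputed propagators; this is a generic illustration of the state-space idea, not the paper's code.

```python
import numpy as np
from scipy.linalg import expm

def simulate_with_stiffness_change(A1, A2, x0, dt, n_steps, switch_step):
    """March a linear aeroelastic state-space model x' = A x in time,
    swapping the system matrix when the structural change (e.g. a
    tip-ballast release) occurs. The modal basis stays fixed, as in the
    method above; only coupling terms inside A differ between A1 and A2.
    """
    phi1, phi2 = expm(A1 * dt), expm(A2 * dt)  # exact one-step propagators
    x = np.asarray(x0, dtype=float)
    history = [x.copy()]
    for k in range(n_steps):
        x = (phi1 if k < switch_step else phi2) @ x
        history.append(x.copy())
    return np.array(history)
```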

  14. Time simulation of flutter with large stiffness changes

    NASA Technical Reports Server (NTRS)

    Karpel, M.; Wieseman, C. D.

    1992-01-01

    Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.

  15. Toward the large-eddy simulation of compressible turbulent flows

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Hussaini, M. Y.; Speziale, C. G.; Zang, T. A.

    1990-01-01

    New subgrid-scale models for the large-eddy simulation of compressible turbulent flows are developed and tested based on the Favre-filtered equations of motion for an ideal gas. A compressible generalization of the linear combination of the Smagorinsky model and scale-similarity model, in terms of Favre-filtered fields, is obtained for the subgrid-scale stress tensor. An analogous thermal linear combination model is also developed for the subgrid-scale heat flux vector. The two dimensionless constants associated with these subgrid-scale models are obtained by correlating with the results of direct numerical simulations of compressible isotropic turbulence performed on a 96^3 grid using Fourier collocation methods. Extensive comparisons between the direct and modeled subgrid-scale fields are provided in order to validate the models. A large-eddy simulation of the decay of compressible isotropic turbulence (conducted on a coarse 32^3 grid) is shown to yield results that are in excellent agreement with the fine grid direct simulation. Future applications of these compressible subgrid-scale models to the large-eddy simulation of more complex supersonic flows are discussed briefly.
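
    The Smagorinsky half of such a linear-combination model is compact enough to sketch. The version below is the incompressible form on a uniform grid (the paper's models are the compressible, Favre-filtered generalizations), and the scale-similarity part, which needs an explicit test filter, is omitted.

```python
import numpy as np

def smagorinsky_viscosity(u, v, w, dx, cs=0.16):
    """Smagorinsky eddy viscosity nu_t = (cs * dx)^2 |S| on a uniform grid,
    where |S| = sqrt(2 S_ij S_ij) is the strain-rate magnitude."""
    dudx, dudy, dudz = np.gradient(u, dx)
    dvdx, dvdy, dvdz = np.gradient(v, dx)
    dwdx, dwdy, dwdz = np.gradient(w, dx)
    s11, s22, s33 = dudx, dvdy, dwdz            # diagonal strain rates
    s12 = 0.5 * (dudy + dvdx)                   # off-diagonal strain rates
    s13 = 0.5 * (dudz + dwdx)
    s23 = 0.5 * (dvdz + dwdy)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + s33**2
                           + 2.0 * (s12**2 + s13**2 + s23**2)))
    return (cs * dx) ** 2 * s_mag
```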

  16. Large eddy simulation of the atmosphere on various scales.

    PubMed

    Cullen, M J P; Brown, A R

    2009-07-28

    Numerical simulations of the atmosphere are routinely carried out on various scales for purposes ranging from weather forecasts for local areas a few hours ahead to forecasts of climate change over periods of hundreds of years. Almost without exception, these forecasts are made with space/time-averaged versions of the governing Navier-Stokes equations and laws of thermodynamics, together with additional terms representing internal and boundary forcing. The calculations are a form of large eddy modelling, because the subgrid-scale processes have to be modelled. In the global atmospheric models used for long-term predictions, the primary method is implicit large eddy modelling, using discretization to perform the averaging, supplemented by specialized subgrid models, where there is organized small-scale activity, such as in the lower boundary layer and near active convection. Smaller scale models used for local or short-range forecasts can use a much smaller averaging scale. This allows some of the specialized subgrid models to be dropped in favour of direct simulations. In research mode, the same models can be run as a conventional large eddy simulation only a few orders of magnitude away from a direct simulation. These simulations can then be used in the development of the subgrid models for coarser resolution models.

  17. Double-Auction Market Simulation Software for Very Large Classes

    ERIC Educational Resources Information Center

    Ironside, Brian; Joerding, Wayne; Kuzyk, Pat

    2004-01-01

    The authors provide a version of a double-auction market simulation designed for classes too large for most computer labs to accommodate in one sitting. Instead, students play the game from remote computers, wherever they may be and at any time during a given time period specified by the instructor. When the window of time expires, students can…

  18. NASA's Large-Eddy Simulation Research for Jet Noise Applications

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2009-01-01

    Research into large-eddy simulation (LES) for application to jet noise is described. The LES efforts include in-house code development and application at NASA Glenn along with NASA Research Announcement sponsored work at Stanford University and Florida State University. Details of the computational methods used and sample results for jet flows are provided.

  19. Mind the gap: a guideline for large eddy simulation.

    PubMed

    George, William K; Tutkun, Murat

    2009-07-28

    This paper briefly reviews some of the fundamental ideas of turbulence as they relate to large eddy simulation (LES). Of special interest is how our thinking about the so-called 'spectral gap' has evolved over the past decade, and what this evolution implies for LES applications.

  20. Toward the large-eddy simulations of compressible turbulent flows

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Hussaini, M. Y.; Speziale, C. G.; Zang, T. A.

    1987-01-01

    New subgrid-scale models for the large-eddy simulation of compressible turbulent flows are developed based on the Favre-filtered equations of motion for an ideal gas. A compressible generalization of the linear combination of the Smagorinsky model and scale-similarity model (in terms of Favre-filtered fields) is obtained for the subgrid-scale stress tensor. An analogous thermal linear combination model is also developed for the subgrid-scale heat flux vector. The three dimensionless constants associated with these subgrid-scale models are obtained by correlating with the results of direct numerical simulations of compressible isotropic turbulence performed on a 96^3 grid using Fourier collocation methods. Extensive comparisons between the direct and modeled subgrid-scale fields are provided in order to validate the models. Future applications of these compressible subgrid-scale models to the large-eddy simulation of supersonic aerodynamic flows are discussed briefly.

  1. Large scale simulations of the great 1906 San Francisco earthquake

    NASA Astrophysics Data System (ADS)

    Nilsson, S.; Petersson, A.; Rodgers, A.; Sjogreen, B.; McCandless, K.

    2006-12-01

    As part of a multi-institutional simulation effort, we present large scale computations of the ground motion during the great 1906 San Francisco earthquake using a new finite difference code called WPP. The material data base for northern California provided by USGS together with the rupture model by Song et al. is demonstrated to lead to a reasonable match with historical data. In our simulations, the computational domain covered 550 km by 250 km of northern California down to 40 km depth, so a 125 m grid size corresponds to about 2.2 billion grid points. To accommodate these large grids, the simulations were run on 512-1024 processors on one of the supercomputers at Lawrence Livermore National Lab. A wavelet compression algorithm enabled storage of time-dependent volumetric data. Nevertheless, the first 45 seconds of the earthquake still generated 1.2 TByte of disk space and the 3-D post processing was done in parallel.
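
    The grid-point count follows directly from the quoted domain and spacing; a quick back-of-the-envelope check (domain values from the abstract):

        # Domain and spacing quoted above: 550 km x 250 km x 40 km at 125 m.
        nx, ny, nz = 550_000 // 125, 250_000 // 125, 40_000 // 125
        points = nx * ny * nz
        print(nx, ny, nz, f"{points:.2e}")   # 4400 2000 320 -> ~2.8e9 points
        # Slightly above the ~2.2 billion quoted, so the production grid
        # presumably trimmed the domain or padded differently.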

  2. Sand tank experiment of a large volume biodiesel spill

    NASA Astrophysics Data System (ADS)

    Scully, K.; Mayer, K. U.

    2015-12-01

    Although petroleum hydrocarbon releases in the subsurface have been well studied, the impacts of subsurface releases of highly degradable alternative fuels, including biodiesel, are not as well understood. One concern is the generation of CH4, which may lead to explosive conditions in underground structures. In addition, the biodegradation of biodiesel consumes O2 that would otherwise be available for the degradation of petroleum hydrocarbons that may be present at a site. Until now, biodiesel biodegradation in the vadose zone has not been examined in detail, despite being critical to understanding the full impact of a release. This research involves a detailed study of a laboratory release of 80 L of biodiesel applied at the surface into a large sand tank to examine the progress of biodegradation reactions. The experiment will monitor the onset and temporal evolution of CH4 generation to provide guidance for site monitoring needs following a biodiesel release to the subsurface. Three CO2 and CH4 flux chambers have been deployed for long-term monitoring of gas emissions. CO2 fluxes have increased in all chambers over the 126 days since the start of the experiment. The highest CO2 effluxes are found directly above the spill and have increased from < 0.5 μmol m^-2 s^-1 to ~3.8 μmol m^-2 s^-1, indicating an increase in microbial activity. There were no measurable CH4 fluxes 126 days into the experiment. Sensors were emplaced to continuously measure O2, CO2, moisture content, matric potential, EC, and temperature. In response to the release, CO2 levels have increased across all sensors, from an average value of 0.1% to 0.6% 126 days after the start of the experiment, indicating the rapid onset of biodegradation. The highest CO2 values observed from samples taken in the gas ports were 2.5%. Average O2 concentrations have decreased from 21% to 17% 126 days after the start of the experiment. O2 levels in the bottom central region of the sand tank declined to approximately 12%.

  3. Large Variations in Ice Volume During the Middle Eocene "Doubthouse"

    NASA Astrophysics Data System (ADS)

    Dawber, C. F.; Tripati, A. K.

    2008-12-01

    The onset of glacial conditions in the Cenozoic is widely held to have begun ~34 million years ago, coincident with the Eocene-Oligocene boundary1. Warm and high-pCO2 'greenhouse' intervals such as the Eocene are generally thought to be ice-free2. Yet the sequence stratigraphic record supports the occurrence of high-frequency sea-level change of tens of meters in the Middle and Late Eocene3, and large calcite and seawater δ18O excursions (~0.5-1.0 permil) have been reported in foraminifera from open ocean sediments4. As a result, the Middle Eocene is often considered the intermediary "doubthouse". The extent of continental ice during the 'doubthouse' is controversial, with estimates of glacioeustatic sea level fall ranging from 30 to 125 m2,3,5. We present a new δ18Osw reconstruction for Ocean Drilling Program (ODP) Site 1209 in the tropical Pacific Ocean. It is the first continuous high-resolution record for an open-ocean site that is not directly influenced by changes in the carbonate compensation depth, which enables us to circumvent many of the limitations of existing records. Our record shows increases of 0.8 ± 0.2 (1 s.e.) permil and 1.1 ± 0.2 permil at ~44-45 and ~42-41 Ma respectively, which suggests glacioeustatic sea level variations of ~90 m during the Middle Eocene. Modelling studies have shown that fully glaciating Antarctica during the Eocene should drive a change in seawater δ18O (δ18Osw) of 0.45 permil, and lower sea level by ~55 m6. Our results therefore support significant ice storage in both the Northern and Southern Hemisphere during the Middle Eocene 'doubthouse'. 1. Miller, Kenneth G. et al., 1990, Eocene-Oligocene sea-level changes in the New Jersey coastal plain linked to the deep-sea record. Geological Society of America Bulletin 102, 331-339. 2. Pagani, M. et al., 2005, Marked decline in atmospheric carbon dioxide concentrations during the Paleogene. Science 309 (5734), 600-603. 3. Browning, J., Miller, K., and Pak, D., 1996, Global implications
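
    The inference from δ18Osw to sea level rests on a linear scaling; anchoring it to the modelling estimate quoted above (0.45 permil for ~55 m of sea level) gives a quick consistency check:

        # Linear delta18O_sw -> sea-level scaling, anchored to the modelling
        # estimate cited above (0.45 permil ~ 55 m for a glaciated Antarctica).
        M_PER_PERMIL = 55.0 / 0.45

        for excursion in (0.8, 1.1):   # permil, the two reported shifts
            print(f"{excursion:.1f} permil -> ~{excursion * M_PER_PERMIL:.0f} m")
        # 0.8 -> ~98 m and 1.1 -> ~134 m, i.e. the same order as the ~90 m
        # quoted; the published estimate presumably also folds in temperature
        # and calibration effects.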

  4. Design and Analysis of A Beacon-Less Routing Protocol for Large Volume Content Dissemination in Vehicular Ad Hoc Networks

    PubMed Central

    Hu, Miao; Zhong, Zhangdui; Ni, Minming; Baiocchi, Andrea

    2016-01-01

    Large volume content dissemination is pursued by the growing number of high quality applications for Vehicular Ad hoc NETworks (VANETs), e.g., the live road surveillance service and the video-based overtaking assistant service. For the highly dynamic vehicular network topology, beacon-less routing protocols have been proven to be efficient in achieving a balance between the system performance and the control overhead. However, to the authors’ best knowledge, the routing design for large volume content has not been well considered in the previous work, which will introduce new challenges, e.g., the enhanced connectivity requirement for a radio link. In this paper, a link Lifetime-aware Beacon-less Routing Protocol (LBRP) is designed for large volume content delivery in VANETs. Each vehicle makes the forwarding decision based on the message header information and its current state, including the speed and position information. A semi-Markov process analytical model is proposed to evaluate the expected delay in constructing one routing path for LBRP. Simulations show that the proposed LBRP scheme outperforms the traditional dissemination protocols in providing a low end-to-end delay. The analytical model is shown to exhibit a good match on the delay estimation with Monte Carlo simulations, as well. PMID:27809285

  5. Design and Analysis of A Beacon-Less Routing Protocol for Large Volume Content Dissemination in Vehicular Ad Hoc Networks.

    PubMed

    Hu, Miao; Zhong, Zhangdui; Ni, Minming; Baiocchi, Andrea

    2016-11-01

    Large volume content dissemination is pursued by the growing number of high quality applications for Vehicular Ad hoc NETworks (VANETs), e.g., the live road surveillance service and the video-based overtaking assistant service. For the highly dynamic vehicular network topology, beacon-less routing protocols have been proven to be efficient in achieving a balance between the system performance and the control overhead. However, to the authors' best knowledge, the routing design for large volume content has not been well considered in the previous work, which will introduce new challenges, e.g., the enhanced connectivity requirement for a radio link. In this paper, a link Lifetime-aware Beacon-less Routing Protocol (LBRP) is designed for large volume content delivery in VANETs. Each vehicle makes the forwarding decision based on the message header information and its current state, including the speed and position information. A semi-Markov process analytical model is proposed to evaluate the expected delay in constructing one routing path for LBRP. Simulations show that the proposed LBRP scheme outperforms the traditional dissemination protocols in providing a low end-to-end delay. The analytical model is shown to exhibit a good match on the delay estimation with Monte Carlo simulations, as well.
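
    Beacon-less protocols of this family usually resolve the next hop by contention: every neighbour that overhears a packet starts a timer that shortens with its geographic progress toward the destination, and the first to fire rebroadcasts while suppressing the rest. The sketch below shows only that generic timer rule; LBRP's actual metric additionally weighs the expected link lifetime computed from speed and position.

        import math

        def contention_delay(my_pos, sender_pos, dest_pos,
                             max_delay=0.1, radio_range=300.0):
            """Generic beacon-less forwarding timer (seconds): more geographic
            progress toward the destination means a shorter wait."""
            def dist(a, b):
                return math.hypot(a[0] - b[0], a[1] - b[1])
            progress = dist(sender_pos, dest_pos) - dist(my_pos, dest_pos)
            if progress <= 0.0:
                return None   # behind the sender: not a candidate forwarder
            return max_delay * (1.0 - min(progress, radio_range) / radio_range)

        # The vehicle with the most progress waits least and forwards first.
        print(contention_delay((250.0, 0.0), (0.0, 0.0), (1000.0, 0.0)))  # ~0.017
        print(contention_delay((100.0, 0.0), (0.0, 0.0), (1000.0, 0.0)))  # ~0.067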

  6. Statistics of LES Simulations of Large Wind Farms

    NASA Astrophysics Data System (ADS)

    Juhl Andersen, Søren; Nørkær Sørensen, Jens; Mikkelsen, Robert; Ivanell, Stefan

    2016-09-01

    Numerous large eddy simulations of large wind farms are performed using the actuator line method, fully coupled to the aero-elastic code Flex5. The higher-order moments of the flow field inside large wind farms are examined in order to determine a representative reference velocity. The statistical moments appear to collapse, and hence the turbulence inside large wind farms can potentially be scaled accordingly. The thrust coefficient is estimated using two different reference velocities and the generic CT expression of Frandsen. A reference velocity derived from the power production is shown to give very good agreement and furthermore enables accurate estimation of the thrust force using only the steady CT curve, even for very short time samples. Finally, the effective turbulence inside large wind farms and the equivalent loads are examined.
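
    The idea of a power-derived reference velocity can be sketched as follows: invert the steady power curve for the wind speed implied by the produced power, then reuse the steady CT curve at that speed. The curve shapes and constants below are illustrative placeholders, not the paper's data.

        import numpy as np

        # Hypothetical steady turbine curves, tabulated against wind speed (m/s).
        speeds = np.linspace(4, 25, 50)
        cp = np.clip(0.45 - 0.001 * (speeds - 10) ** 2, 0.05, None)   # power coeff.
        ct = np.clip(0.80 - 0.020 * (speeds - 8), 0.10, None)         # thrust coeff.

        RHO, AREA = 1.225, np.pi * 63.0**2   # air density, rotor disc (R = 63 m)

        def u_ref_from_power(power):
            """Invert P = 0.5 rho A Cp(U) U^3 for U by table lookup."""
            p_curve = 0.5 * RHO * AREA * cp * speeds**3   # monotone in U here
            return np.interp(power, p_curve, speeds)

        def thrust_from_power(power):
            u = u_ref_from_power(power)
            return 0.5 * RHO * AREA * np.interp(u, speeds, ct) * u**2

        print(thrust_from_power(2.0e6))  # thrust (N) implied by 2 MW of production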

  7. Large-Eddy Simulations of Flows in Complex Terrain

    NASA Astrophysics Data System (ADS)

    Kosovic, B.; Lundquist, K. A.

    2011-12-01

    Large-eddy simulation as a methodology for the numerical simulation of turbulent flows was first developed to study turbulent flows in the atmosphere by Lilly (1967). The first LES were carried out by Deardorff (1970), who used these simulations to study atmospheric boundary layers. Ever since, LES has been extensively used to study canonical atmospheric boundary layers, in most cases flat-plate boundary layers under the assumption of horizontal homogeneity. Carefully designed LES of canonical convective and neutrally stratified, and more recently stably stratified, atmospheric boundary layers have contributed significantly to a better understanding of these flows and their parameterizations in large-scale models. These simulations were often carried out using codes specifically designed and developed for large-eddy simulations of horizontally homogeneous flows with periodic lateral boundary conditions. Recent developments in multi-scale numerical simulation of atmospheric flows enable numerical weather prediction (NWP) codes such as ARPS (Chow and Street, 2009), COAMPS (Golaz et al., 2009) and the Weather Research and Forecasting (WRF) model to be used nearly seamlessly across a wide range of atmospheric scales, from synoptic down to the turbulent scales of atmospheric boundary layers. Before we can carry out multi-scale simulations of atmospheric flows with confidence, NWP codes must be validated for accurate performance in simulating flows over complex or inhomogeneous terrain. We therefore validate WRF-LES for simulations of flows over complex terrain using data from the Askervein Hill (Taylor and Teunissen, 1985, 1987) and METCRAX (Whiteman et al., 2008) field experiments. WRF's nesting capability is employed with a one-way nested inner domain that includes the complex terrain representation, while the coarser outer nest is used to spin up fully developed atmospheric boundary layer turbulence and thus accurately represent inflow to the inner domain. LES of a

  8. Plasma volume losses during simulated weightlessness in women

    SciTech Connect

    Drew, H.; Fortney, S.; La France, N.; Wagner, H.N. Jr.

    1985-05-01

    Six healthy women not using oral contraceptives underwent two 11-day intervals of complete bedrest (BR), with the BR periods separated by 4 weeks of ambulatory control. Change in plasma volume (PV) was monitored during BR to test the hypothesis that these women would show a smaller decrease in PV than that reported in similarly stressed men, due to the water-retaining effects of the female hormones. Bedrest periods were timed to coincide with opposing stages of the menstrual cycle in each woman. The menstrual cycle was divided into 4 separate stages: early follicular, ovulatory, early luteal, and late luteal phases. The percent decrease of PV was consistent for each woman who began BR while in stage 1, 3 or 4 of the menstrual cycle. However, the women who began in stage 2 showed a transient attenuation in PV loss. Overall, PV changes seen in women during BR were similar to those reported for men. The water-retaining effects of menstrual hormones were evident only during the high-estrogen ovulatory stage. The authors conclude that the protective effects of menstrual hormones on PV losses under simulated weightless conditions appear to be only small and transient.

  9. Maestro: an orchestration framework for large-scale WSN simulations.

    PubMed

    Riliskis, Laurynas; Osipov, Evgeny

    2014-03-18

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such tasks. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.

  10. Maestro: An Orchestration Framework for Large-Scale WSN Simulations

    PubMed Central

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such tasks. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  11. Finecasting for renewable energy with large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Jonker, Harmen; Verzijlbergh, Remco

    2016-04-01

    We present results of a single, continuous Large-Eddy Simulation of actual weather conditions during the timespan of a full year, made possible through recent computational developments (Schalkwijk et al, MWR, 2015). The simulation is coupled to a regional weather model in order to provide an LES dataset that is representative of the daily weather of the year 2012 around Cabauw, the Netherlands. This location is chosen such that LES results can be compared with both the regional weather model and observations from the Cabauw observational supersite. The run was made possible by porting our Large-Eddy Simulation program to run completely on the GPU (Schalkwijk et al, BAMS, 2012). GPU adaptation allows us to reach much improved time-to-solution ratios (i.e. simulation speedup versus real time). As a result, one can perform runs with a much longer timespan than previously feasible. The dataset resulting from the LES run provides many avenues for further study. First, it can provide a more statistical approach to boundary-layer turbulence than the more common case-studies by simulating a diverse but representative set of situations, as well as the transition between situations. This has advantages in designing and evaluating parameterizations. In addition, we discuss the opportunities of high-resolution forecasts for the renewable energy sector, e.g. wind and solar energy production.

  12. Domain nesting for multi-scale large eddy simulation

    NASA Astrophysics Data System (ADS)

    Fuka, Vladimir; Xie, Zheng-Tong

    2016-04-01

    The need to simulate city-scale areas (O(10 km)) with high resolution within street canyons in certain areas of interest necessitates different grid resolutions in different parts of the simulated area. General-purpose computational fluid dynamics codes typically employ unstructured refined grids, while mesoscale meteorological models more often employ nesting of computational domains. ELMM is a large eddy simulation model for the atmospheric boundary layer. It employs orthogonal uniform grids, and for this reason domain nesting was chosen as the approach for simulations across multiple scales. Domains are implemented as sets of MPI processes which communicate with each other as in a normal non-nested run, but also with processes from another (outer/inner) domain. It should be stressed that the solution of time-steps in the outer and inner domains must be synchronized, so that the processes do not have to wait for the completion of their boundary conditions. This can be achieved by assigning an appropriate number of CPUs to each domain, which also yields high efficiency. When nesting is applied for large eddy simulation, the inner domain receives inflow boundary conditions which lack the turbulent motions not represented by the outer grid. ELMM remedies this by optionally adding turbulent fluctuations to the inflow using the efficient method of Xie and Castro (2008). The spatial scale of these fluctuations lies in the subgrid-scale range of the outer grid, and their intensity is estimated from the subgrid turbulent kinetic energy of the outer grid.
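
    Generators of the Xie and Castro (2008) type produce inflow fluctuations with prescribed correlation scales by filtering random numbers; the time correlation is carried by an exponentially weighted recurrence. A one-dimensional sketch of that recurrence follows (assuming the exp(-πΔt/2T) weighting of the original method; the spatial correlation step is omitted here):

        import numpy as np

        rng = np.random.default_rng(0)

        def correlated_inflow(n_steps, dt, t_lagrangian, sigma):
            """Exponentially time-correlated fluctuation series, as used to seed
            turbulence at a nested-domain inflow (1-D sketch of the idea)."""
            a = np.exp(-np.pi * dt / (2.0 * t_lagrangian))   # correlation weight
            b = np.sqrt(1.0 - a * a)                          # preserves variance
            u = np.empty(n_steps)
            u[0] = sigma * rng.standard_normal()
            for n in range(1, n_steps):
                u[n] = a * u[n - 1] + b * sigma * rng.standard_normal()
            return u

        fluct = correlated_inflow(10_000, dt=0.1, t_lagrangian=2.0, sigma=0.5)
        print(fluct.std())  # ~0.5, with ~2 s integral time scale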

  13. Publicly Releasing a Large Simulation Dataset with NDS Labs

    NASA Astrophysics Data System (ADS)

    Goldbaum, Nathan

    2016-03-01

    Optimally, all publicly funded research should be accompanied by the tools, code, and data necessary to fully reproduce the analysis performed in journal articles describing the research. This ideal can be difficult to attain, particularly when dealing with large (>10 TB) simulation datasets. In this lightning talk, we describe the process of publicly releasing a large simulation dataset to accompany the submission of a journal article. The simulation was performed using Enzo, an open source, community-developed N-body/hydrodynamics code and was analyzed using a wide range of community-developed tools in the scientific Python ecosystem. Although the simulation was performed and analyzed using an ecosystem of sustainably developed tools, we enable sustainable science using our data by making it publicly available. Combining the data release with the NDS Labs infrastructure provides a substantial amount of added value, including web-based access to analysis and visualization using the yt analysis package through an IPython notebook interface. In addition, we are able to accompany the paper submission to the arXiv preprint server with links to the raw simulation data as well as interactive real-time data visualizations that readers can explore on their own or share with colleagues during journal club discussions. It is our hope that the value added by these services will substantially increase the impact and readership of the paper.
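
    As an illustration, a reader of such a release can load a snapshot with yt and render it in a few lines (the dataset path below is hypothetical):

        import yt

        # Load one snapshot of the released Enzo simulation (hypothetical path).
        ds = yt.load("DD0046/DD0046")

        # Slice the gas density through the domain and save an image.
        slc = yt.SlicePlot(ds, "z", ("gas", "density"))
        slc.save("density_slice.png")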

  14. Toward large eddy simulation of turbulent flow over an airfoil

    NASA Technical Reports Server (NTRS)

    Choi, Haecheon

    1993-01-01

    The flow field over an airfoil contains several distinct flow characteristics, e.g. laminar, transitional, turbulent boundary layer flow, flow separation, unstable free shear layers, and a wake. This diversity of flow regimes taxes the presently available Reynolds averaged turbulence models. Such models are generally tuned to predict a particular flow regime, and adjustments are necessary for the prediction of a different flow regime. Similar difficulties are likely to emerge when the large eddy simulation technique is applied with the widely used Smagorinsky model. This model has not been successful in correctly representing different turbulent flow fields with a single universal constant and has an incorrect near-wall behavior. Germano et al. (1991) and Ghosal, Lund & Moin have developed a new subgrid-scale model, the dynamic model, which is very promising in alleviating many of the persistent inadequacies of the Smagorinsky model: the model coefficient is computed dynamically as the calculation progresses rather than input a priori. The model has been remarkably successful in prediction of several turbulent and transitional flows. We plan to simulate turbulent flow over a '2D' airfoil using the large eddy simulation technique. Our primary objective is to assess the performance of the newly developed dynamic subgrid-scale model for computation of complex flows about aircraft components and to compare the results with those obtained using the Reynolds average approach and experiments. The present computation represents the first application of large eddy simulation to a flow of aeronautical interest and a key demonstration of the capabilities of the large eddy simulation technique.
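
    In the usual notation, the dynamic procedure applies a coarser test filter (hat) to the grid-filtered (bar) field and fits the Smagorinsky coefficient from the resolved stresses via the Germano identity, in Lilly's least-squares form:

        L_{ij} = \widehat{\bar{u}_i \bar{u}_j} - \hat{\bar{u}}_i \hat{\bar{u}}_j ,
        \qquad
        M_{ij} = 2 \Delta^2 \left( \widehat{|\bar{S}| \bar{S}_{ij}} - \alpha^2 |\hat{\bar{S}}| \hat{\bar{S}}_{ij} \right) ,
        \qquad
        C_s^2 = \frac{\langle L_{ij} M_{ij} \rangle}{\langle M_{ij} M_{ij} \rangle}

    where α is the test-to-grid filter-width ratio and the angle brackets denote an averaging (or, in Ghosal, Lund and Moin's variational formulation, a constrained minimization) that keeps the computed coefficient well behaved.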

  15. Model consistency in large eddy simulation of turbulent channel flows

    NASA Technical Reports Server (NTRS)

    Piomelli, Ugo; Ferziger, Joel H.; Moin, Parviz

    1988-01-01

    Combinations of filters and subgrid scale stress models for large eddy simulation of the Navier-Stokes equations are examined by a priori tests and numerical simulations. The structure of the subgrid scales is found to depend strongly on the type of filter used, and consistency between model and filter is essential to ensure accurate results. The implementation of consistent combinations of filter and model gives more accurate turbulence statistics than those obtained in previous investigations in which the models were chosen independently from the filter. Results and limitations of the a priori test are discussed. The effect of grid refinement is also examined.

  16. Large-eddy simulation of turbulent sheared convection

    NASA Astrophysics Data System (ADS)

    Sykes, R. I.; Henn, D. S.

    1989-04-01

    A series of large-eddy simulations of free and sheared convective flow between moving flat plates is presented. Results for free convection are compared with laboratory data. The ratio of friction velocity to the convective velocity scale is identified as an important parameter in sheared convective flow, determining the formation of longitudinal rolls. Rolls are found for ratios greater than 0.35, with aspect ratio decreasing as this parameter increases. It is shown that, in this regime, two-dimensional simulations with a proper choice of roll orientation and turbulence length-scale can produce correct velocity variances and roll aspect ratio.

  17. Laminar flow transition: A large-eddy simulation approach

    NASA Technical Reports Server (NTRS)

    Biringen, S.

    1982-01-01

    A vectorized, semi-implicit code was developed for the solution of the time-dependent, three-dimensional equations of motion in plane Poiseuille flow by the large-eddy simulation technique. The code is tested by comparing results with those obtained from solutions of the Orr-Sommerfeld equation. Comparisons indicate that the finite differences employed along the cross-stream direction act as an implicit filter. This removes the need for explicit filtering along this direction (where a nonhomogeneous mesh is used) in simulations of laminar flow transition to turbulence, in which small-scale turbulence is accounted for by a subgrid-scale turbulence model.

  18. Large-eddy simulation of trans- and supercritical injection

    NASA Astrophysics Data System (ADS)

    Müller, H.; Niedermeier, C. A.; Jarczyk, M.; Pfitzner, M.; Hickel, S.; Adams, N. A.

    2016-07-01

    In a joint effort to develop a robust numerical tool for the simulation of injection, mixing, and combustion in liquid rocket engines at high pressure, a real-gas thermodynamics model has been implemented into two computational fluid dynamics (CFD) codes, the density-based INCA and a pressure-based version of OpenFOAM. As a part of the validation process, both codes have been used to perform large-eddy simulations (LES) of trans- and supercritical nitrogen injection. Despite the different code architecture and the different subgrid scale turbulence modeling strategy, both codes yield similar results. The agreement with the available experimental data is good.
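
    Real-gas thermodynamics models of this kind typically replace the ideal-gas law with a cubic equation of state. The sketch below evaluates the Peng-Robinson pressure for nitrogen as a stand-in; the abstract does not say which cubic form the two codes use, so treat this as illustrative:

        import numpy as np

        R = 8.314462618  # universal gas constant, J/(mol K)

        def peng_robinson_pressure(T, v, Tc=126.19, pc=3.3958e6, omega=0.0372):
            """p(T, v) from the Peng-Robinson EOS; defaults are nitrogen's
            critical constants and acentric factor. v is molar volume, m^3/mol."""
            a = 0.45724 * R**2 * Tc**2 / pc
            b = 0.07780 * R * Tc / pc
            kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
            alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc))) ** 2
            return R * T / (v - b) - a * alpha / (v * v + 2.0 * b * v - b * b)

        # Near-critical nitrogen: strong departure from the ideal-gas pressure.
        T, v = 130.0, 1.0e-4
        print(peng_robinson_pressure(T, v), R * T / v)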

  19. Model consistency in large eddy simulation of turbulent channel flows

    NASA Technical Reports Server (NTRS)

    Piomelli, Ugo; Ferziger, Joel H.; Moin, Parviz

    1988-01-01

    Combinations of filters and subgrid scale stress models for large eddy simulation of the Navier-Stokes equations are examined by a priori tests and numerical simulations. The structure of the subgrid scales is found to depend strongly on the type of filter used, and consistency between model and filter is essential to ensure accurate results. The implementation of consistent combinations of filter and model gives more accurate turbulence statistics than those obtained in previous investigations in which the models were chosen independently from the filter. Results and limitations of the a priori test are discussed. The effect of grid refinement is also examined.

  20. Simulating the large-scale structure of HI intensity maps

    SciTech Connect

    Seehars, Sebastian; Paranjape, Aseem; Witzemann, Amadeus; Refregier, Alexandre; Amara, Adam; Akeret, Joel

    2016-03-01

    Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide field simulations of HI intensity maps based on N-body simulations of a 2.6 Gpc/h box with 2048^3 particles (particle mass 1.6 × 10^11 M_⊙/h). Using a conditional mass function to populate the simulated dark matter density field with halos below the mass resolution of the simulation (10^8 M_⊙/h < M_halo < 10^13 M_⊙/h), we assign HI to those halos according to a phenomenological halo to HI mass relation. The simulations span a redshift range of 0.35 ≲ z ≲ 0.9 in redshift bins of width Δz ≈ 0.05 and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects and redshift space distortions on the angular clustering of HI. Focusing on the autocorrelations of the maps, we apply and compare several estimators for the angular power spectrum and its covariance. We verify that these estimators agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.
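
    A phenomenological halo-to-HI mass relation of the kind referred to is typically a power law with mass cutoffs; the functional form and parameters below are hypothetical placeholders, not the paper's calibration:

        import numpy as np

        def hi_mass(m_halo, alpha=0.7, norm=6.0e8, m_min=1e9, m_max=1e13):
            """Toy halo-to-HI relation: power law with an exponential low-mass
            cutoff and a hard high-mass cap. Masses in M_sun/h; all parameter
            values are illustrative."""
            m = np.asarray(m_halo, dtype=float)
            mhi = norm * (m / 1e10) ** alpha * np.exp(-m_min / m)
            return np.where(m < m_max, mhi, 0.0)

        halos = np.logspace(8, 13, 6)   # halo masses spanning the resolved range
        print(hi_mass(halos))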

  1. Simulation of large-scale rule-based models

    SciTech Connect

    Hlavacek, William S; Monine, Michael I; Colvin, Joshua; Faeder, James

    2008-01-01

    Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogeneous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine if a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language (BNGL), which is useful for modeling protein-protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of STOCHSIM. DYNSTOC differs from STOCHSIM by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at .
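
    The null-event idea reduces to a few lines: advance time at a fixed upper-bound rate, sample a molecule at random, and fire a matching rule only with probability propensity/bound, counting everything else as a null event that merely consumes time. The two-state rule set below is a toy illustration, not BioNetGen syntax:

        import random

        def null_event_ssa(molecules, rules, rate_max, t_end):
            """Null-event stochastic simulation. `rules` maps a molecule state
            to (propensity, new_state); rate_max bounds every propensity."""
            t = 0.0
            while t < t_end:
                t += random.expovariate(rate_max * len(molecules))
                i = random.randrange(len(molecules))
                applicable = rules.get(molecules[i])
                if applicable is None:
                    continue  # null event: no rule matches the sampled molecule
                propensity, new_state = applicable
                if random.random() < propensity / rate_max:
                    molecules[i] = new_state  # accepted: fire the rule
            return molecules

        # Two-state phosphorylation toy: U -> P at rate 0.3, P -> U at rate 0.1.
        pool = ["U"] * 100
        out = null_event_ssa(pool, {"U": (0.3, "P"), "P": (0.1, "U")},
                             rate_max=0.5, t_end=50.0)
        print(out.count("P"))  # ~75 at equilibrium (0.3 / (0.3 + 0.1))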

  2. Large-scale large eddy simulation of nuclear reactor flows: Issues and perspectives

    DOE PAGES

    Merzari, Elia; Obabko, Aleks; Fischer, Paul; ...

    2016-11-03

    Numerical simulation has been an intrinsic part of nuclear engineering research since its inception. In recent years a transition is occurring toward predictive, first-principle-based tools such as computational fluid dynamics. Even with the advent of petascale computing, however, such tools still have significant limitations. In the present work some of these issues, and in particular the presence of massive multiscale separation, are discussed, as well as some of the research conducted to mitigate them. Petascale simulations at high fidelity (large eddy simulation/direct numerical simulation) were conducted with the massively parallel spectral element code Nek5000 on a series of representative problems. These simulations shed light on the requirements of several types of simulation: (1) axial flow around fuel rods, with particular attention to wall effects; (2) natural convection in the primary vessel; and (3) flow in a rod bundle in the presence of spacing devices. Finally, the focus of the work presented here is on the lessons learned and the requirements to perform these simulations at exascale. Additional physical insight gained from these simulations is also emphasized.

  3. Large-scale large eddy simulation of nuclear reactor flows: Issues and perspectives

    SciTech Connect

    Merzari, Elia; Obabko, Aleks; Fischer, Paul; Halford, Noah; Walker, Justin; Siegel, Andrew; Yu, Yiqi

    2016-11-03

    Numerical simulation has been an intrinsic part of nuclear engineering research since its inception. In recent years a transition is occurring toward predictive, first-principle-based tools such as computational fluid dynamics. Even with the advent of petascale computing, however, such tools still have significant limitations. In the present work some of these issues, and in particular the presence of massive multiscale separation, are discussed, as well as some of the research conducted to mitigate them. Petascale simulations at high fidelity (large eddy simulation/direct numerical simulation) were conducted with the massively parallel spectral element code Nek5000 on a series of representative problems. These simulations shed light on the requirements of several types of simulation: (1) axial flow around fuel rods, with particular attention to wall effects; (2) natural convection in the primary vessel; and (3) flow in a rod bundle in the presence of spacing devices. Finally, the focus of the work presented here is on the lessons learned and the requirements to perform these simulations at exascale. Additional physical insight gained from these simulations is also emphasized.

  4. Large-scale microstructural simulation of load-adaptive bone remodeling in whole human vertebrae.

    PubMed

    Badilatti, Sandro D; Christen, Patrik; Levchuk, Alina; Marangalou, Javad Hazrati; van Rietbergen, Bert; Parkinson, Ian; Müller, Ralph

    2016-02-01

    Identification of individuals at risk of bone fractures remains challenging despite recent advances in bone strength assessment. In particular, the future degradation of the microstructure and load adaptation has been disregarded. Bone remodeling simulations have so far been restricted to small-volume samples. Here, we present a large-scale framework for predicting microstructural adaptation in whole human vertebrae. The load-adaptive bone remodeling simulations include estimations of appropriate bone loading of three load cases as boundary conditions with microfinite element analysis. Homeostatic adaptation of whole human vertebrae over a simulated period of 10 years is achieved with changes in bone volume fraction (BV/TV) of less than 5%. Evaluation on subvolumes shows that simplifying boundary conditions reduces the ability of the system to maintain trabecular structures when keeping remodeling parameters unchanged. By rotating the loading direction, adaptation toward new loading conditions could be induced. This framework shows the possibility of using large-scale bone remodeling simulations toward a more accurate prediction of microstructural changes in whole human bones.

  5. Center-stabilized Yang-Mills Theory: Confinement and Large N Volume Independence

    SciTech Connect

    Unsal, Mithat; Yaffe, Laurence G.

    2008-03-21

    We examine a double trace deformation of SU(N) Yang-Mills theory which, for large N and large volume, is equivalent to unmodified Yang-Mills theory up to O(1/N^2) corrections. In contrast to the unmodified theory, large N volume independence is valid in the deformed theory down to arbitrarily small volumes. The double trace deformation prevents the spontaneous breaking of center symmetry which would otherwise disrupt large N volume independence in small volumes. For small values of N, if the theory is formulated on R^3 × S^1 with a sufficiently small compactification size L, then an analytic treatment of the non-perturbative dynamics of the deformed theory is possible. In this regime, we show that the deformed Yang-Mills theory has a mass gap and exhibits linear confinement. Increasing the circumference L or number of colors N decreases the separation of scales on which the analytic treatment relies. However, there are no order parameters which distinguish the small and large radius regimes. Consequently, for small N the deformed theory provides a novel example of a locally four-dimensional pure gauge theory in which one has analytic control over confinement, while for large N it provides a simple fully reduced model for Yang-Mills theory. The construction is easily generalized to QCD and other QCD-like theories.

  6. Process control of large-scale finite element simulation software

    SciTech Connect

    Spence, P.A.; Weingarten, L.I.; Schroder, K.; Tung, D.M.; Sheaffer, D.A.

    1996-02-01

    We have developed a methodology for coupling large-scale numerical codes with process control algorithms. Closed-loop simulations were demonstrated using the Sandia-developed finite element thermal code TACO and the commercially available finite element thermal-mechanical code ABAQUS. This new capability enables us to use computational simulations for designing and prototyping advanced process-control systems. By testing control algorithms on simulators before building and testing hardware, enormous time and cost savings can be realized. The need for a closed-loop simulation capability was demonstrated in a detailed design study of a rapid-thermal-processing reactor under development by CVC Products Inc. Using a thermal model of the RTP system as a surrogate for the actual hardware, we were able to generate response data needed for controller design. We then evaluated the performance of both the controller design and the hardware design by using the controller to drive the finite element model. The controlled simulations provided data on wafer temperature uniformity as a function of ramp rate, temperature sensor locations, and controller gain. This information, which is critical to reactor design, cannot be obtained from typical open-loop simulations.
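
    Closing the loop in this way means the controller's output drives the simulation input at every step. A toy version follows, with a first-order thermal model standing in for the finite element code and a PI controller; all gains and constants are illustrative:

        # Toy closed-loop simulation: a PI controller drives a first-order
        # thermal model standing in for the finite element code.
        def thermal_model(T, power, dt, tau=20.0, gain=0.5, T_amb=300.0):
            """One explicit Euler step of dT/dt = (T_amb + gain*power - T)/tau."""
            return T + dt * (T_amb + gain * power - T) / tau

        def run_closed_loop(setpoint=1100.0, dt=0.1, steps=3000, kp=40.0, ki=2.0):
            T, integral = 300.0, 0.0
            for _ in range(steps):
                error = setpoint - T
                integral += error * dt
                power = max(0.0, kp * error + ki * integral)  # heater can't cool
                T = thermal_model(T, power, dt)
            return T

        print(run_closed_loop())  # settles near the 1100 K setpoint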

  7. Two-fluid biasing simulations of the large plasma device

    NASA Astrophysics Data System (ADS)

    Fisher, Dustin M.; Rogers, Barrett N.

    2017-02-01

    External biasing of the Large Plasma Device (LAPD) and its impact on plasma flows and turbulence are explored for the first time in 3D simulations using the Global Braginskii Solver code. Without external biasing, the LAPD plasma spontaneously rotates in the ion diamagnetic direction. The application of a positive bias increases the plasma rotation in the simulations, which show the emergence of a coherent Kelvin Helmholtz (KH) mode outside of the cathode edge with poloidal mode number m ≃ 6 . Negative biasing reduces the rotation in the simulations, which exhibit KH turbulence modestly weaker than but otherwise similar to unbiased simulations. Biasing either way, but especially positively, forces the plasma potential inside the cathode edge to a spatially constant, KH-stable profile, leading to a more quiescent core plasma than the unbiased case. A moderate increase in plasma confinement and an associated steepening of the profiles are seen in the biasing runs. The simulations thus show that the application of external biasing can improve confinement while also driving a Kelvin-Helmholtz instability. Ion-neutral collisions have only a weak effect in the biased or unbiased simulations.

  8. Large woody debris in a second-growth central Appalachian hardwood stand: volume, composition, and dynamics

    Treesearch

    M. B. Adams; T. M. Schuler; W. M. Ford; J. N. Kochenderfer

    2003-01-01

    We estimated the volume of large woody debris in a second-growth stand and evaluated the importance of periodic windstorms as disturbances in creating large woody debris. This research was conducted on a reference watershed (Watershed 4) on the Fernow Experimental Forest in West Virginia. The 38-ha stand on Watershed 4 was clearcut around 1911 and has been undisturbed...

  9. Large-eddy simulation using the finite element method

    SciTech Connect

    McCallen, R.C.; Gresho, P.M.; Leone, J.M. Jr.; Kollmann, W.

    1993-10-01

    In a large-eddy simulation (LES) of turbulent flows, the large-scale motion is calculated explicitly, while the small-scale motion is modeled (i.e., approximated with semi-empirical relations). Typically, finite difference or spectral numerical schemes are used to generate an LES; the use of finite element methods (FEM) has been far less prominent. In this study, we demonstrate that FEM in combination with LES provides a viable tool for the study of turbulent, separating channel flows, specifically the flow over a two-dimensional backward-facing step. The combination of these methodologies brings together the advantages of each: LES provides a high degree of accuracy with a minimum of empiricism for turbulence modeling, and FEM provides a robust way to simulate flow in very complex domains of practical interest. Such a combination should prove very valuable to the engineering community.

  10. Large Eddy Simulation of Cryogenic Injection Processes at Supercritical Pressure

    NASA Technical Reports Server (NTRS)

    Oefelein, Joseph C.; Garcia, Roberto (Technical Monitor)

    2002-01-01

    This paper highlights results from the first of a series of hierarchical simulations aimed at assessing the modeling requirements for application of the large eddy simulation technique to cryogenic injection and combustion processes in liquid rocket engines. The focus is on liquid-oxygen-hydrogen coaxial injectors at a condition where the liquid-oxygen is injected at a subcritical temperature into a supercritical environment. For this situation a diffusion dominated mode of combustion occurs in the presence of exceedingly large thermophysical property gradients. Though continuous, these gradients approach the behavior of a contact discontinuity. Significant real gas effects and transport anomalies coexist locally in colder regions of the flow, with ideal gas and transport characteristics occurring within the flame zone. The current focal point is on the interfacial region between the liquid-oxygen core and the coaxial hydrogen jet where the flame anchors itself.

  11. Large Eddy Simulations of Severe Convection Induced Turbulence

    NASA Technical Reports Server (NTRS)

    Ahmad, Nash'at; Proctor, Fred

    2011-01-01

    Convective storms can pose a serious risk to aviation operations since they are often accompanied by turbulence, heavy rain, hail, icing, lightning, strong winds, and poor visibility. They can cause major delays in air traffic due to the re-routing of flights, and by disrupting operations at the airports in the vicinity of the storm system. In this study, the Terminal Area Simulation System is used to simulate five different convective events ranging from a mesoscale convective complex to isolated storms. The occurrence of convection induced turbulence is analyzed from these simulations. The validation of model results with the radar data and other observations is reported and an aircraft-centric turbulence hazard metric calculated for each case is discussed. The turbulence analysis showed that large pockets of significant turbulence hazard can be found in regions of low radar reflectivity. Moderate and severe turbulence was often found in building cumulus turrets and overshooting tops.

  12. Lightweight computational steering of very large scale molecular dynamics simulations

    SciTech Connect

    Beazley, D.M.; Lomdahl, P.S.

    1996-09-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.

  13. Large eddy simulation of a wing-body junction flow

    NASA Astrophysics Data System (ADS)

    Ryu, Sungmin; Emory, Michael; Campos, Alejandro; Duraisamy, Karthik; Iaccarino, Gianluca

    2014-11-01

    We present numerical simulations of the wing-body junction flow experimentally investigated by Devenport & Simpson (1990). Wall-junction flows are common in engineering applications, but the relevant flow physics close to the corner region is not well understood. Moreover, the performance of turbulence models for the body-junction case is not well characterized. Motivated by these insufficiently investigated aspects, we have numerically investigated the case with Reynolds-averaged Navier-Stokes (RANS) and large eddy simulation (LES) approaches. The Vreman model applied for the LES and the SST k-ω model for the RANS simulation are validated, focusing on the ability to predict turbulence statistics near the junction region. Moreover, a sensitivity study of the form of the Vreman model will also be presented. This work is funded under NASA Cooperative Agreement NNX11AI41A (Technical Monitor Dr. Stephen Woodruff).

  14. Time-Domain Filtering for Spatial Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1997-01-01

    An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.
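
    A causal time-domain filter of the kind this approach relies on can be realized as a running exponential average; a discrete sketch follows (the exponential kernel here is illustrative, not necessarily the paper's choice):

        import numpy as np

        def time_filtered(u, dt, t_filter):
            """Causal exponential time filter: ubar_n = (1-a) ubar_{n-1} + a u_n,
            with the weight a set by the filter width t_filter. This plays the
            role that a spatial filter plays in conventional LES."""
            a = dt / (t_filter + dt)
            ubar = np.empty_like(u)
            ubar[0] = u[0]
            for n in range(1, len(u)):
                ubar[n] = (1.0 - a) * ubar[n - 1] + a * u[n]
            return ubar

        t = np.linspace(0.0, 10.0, 2001)
        signal = np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.sin(2 * np.pi * 8.0 * t)
        smooth = time_filtered(signal, dt=t[1] - t[0], t_filter=0.25)
        # The 8 Hz content is strongly damped; the 0.5 Hz carrier passes through.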

  15. LPI Simulations over an entire speckle volume with the PIC code Z3

    NASA Astrophysics Data System (ADS)

    Still, C. H.; Lasinski, B. F.; Langdon, A. B.

    2002-11-01

    The 2-D particle-in-cell code Zohar has long been a primary tool for modeling parametric laser-plasma instabilities. The latest incarnation of this tool is a 3-D fully kinetic massively parallel code, dubbed Z3. As a capability demonstration, we used Z3 on 512 processors of the ASCI White machine to perform a milestone calculation, simulating an entire f/4 speckle volume (25λ0 × 25λ0 × 153λ0) for a 3ω Gaussian beam at 7 × 10^16 W/cm^2. We present results including evidence of vigorous Raman scatter in the forward and near-forward directions at high T_e. We discuss the calculation and the challenges inherent in performing such a large simulation (3.5 × 10^8 cells, 7.6 × 10^9 particles). We compare the results obtained in 3-D with 1-D and 2-D calculations.

  16. A high resolution finite volume method for efficient parallel simulation of casting processes on unstructured meshes

    SciTech Connect

    Kothe, D.B.; Turner, J.A.; Mosso, S.J.; Ferrell, R.C.

    1997-03-01

    We discuss selected aspects of a new parallel three-dimensional (3-D) computational tool for the unstructured mesh simulation of Los Alamos National Laboratory (LANL) casting processes. This tool, known as Telluride, draws upon robust, high resolution finite volume solutions of metal alloy mass, momentum, and enthalpy conservation equations to model the filling, cooling, and solidification of LANL castings. We briefly describe the current Telluride physical models and solution methods, then detail our parallelization strategy as implemented with Fortran 90 (F90). This strategy has yielded straightforward and efficient parallelization on distributed and shared memory architectures, aided in large part by the new parallel libraries JTpack9O for Krylov-subspace iterative solution methods and PGSLib for efficient gather/scatter operations. We illustrate our methodology and current capabilities with source code examples and parallel efficiency results for a LANL casting simulation.

  17. Evaluation of Cloud, Grid and HPC resources for big volume and variety of RCM simulations

    NASA Astrophysics Data System (ADS)

    Blanco, Carlos; Cofino, Antonio S.; Fernández, Valvanuz; Fernández, Jesús

    2016-04-01

    Cloud, Grid and High Performance Computing have changed the accessibility and availability of computing resources for Earth Science research communities, especially for the Regional Climate Model (RCM) community. These paradigms are modifying the way RCM applications are executed. By using these technologies, the number, variety and complexity of experiments and resources used by RCM simulations are increasing substantially. But although computational capacity is increasing, the traditional applications and tools used by the community are not adequate to manage this large volume and variety of experiments and computing resources. In this contribution, we evaluate the challenges of executing RCMs on Grid, Cloud and HPC resources and how to tackle them. For this purpose, the WRF model is used as a well-known representative application for RCM simulations. Grid and Cloud infrastructures provided by EGI's VOs (esr, earth.vo.ibergrid and fedcloud.egi.eu) are evaluated, as well as HPC resources from the PRACE infrastructure and institutional clusters. As a solution to those challenges we use the WRF4G framework, which provides a good framework for managing a large volume and variety of computing resources for climate simulation experiments. This work is partially funded by the "Programa de Personal Investigador en Formación Predoctoral" from Universidad de Cantabria, co-funded by the Regional Government of Cantabria.

  18. Biofidelic Human Activity Modeling and Simulation with Large Variability

    DTIC Science & Technology

    2014-11-25

    Report AFRL-RH-WP-TR-2014-0137: Biofidelic Human Activity Modeling and Simulation with Large Variability; John Camp, Darrell Lochtefeld; Air Force Research Laboratory, Wright-Patterson Air Force Base, OH 45433. Reviewed and approved for release; copies may be obtained from the Defense Technical Information Center (DTIC) (http://www.dtic.mil).

  19. Large-Eddy Simulations of Dust Devils and Convective Vortices

    NASA Astrophysics Data System (ADS)

    Spiga, Aymeric; Barth, Erika; Gu, Zhaolin; Hoffmann, Fabian; Ito, Junshi; Jemmett-Smith, Bradley; Klose, Martina; Nishizawa, Seiya; Raasch, Siegfried; Rafkin, Scot; Takemi, Tetsuya; Tyler, Daniel; Wei, Wei

    2016-11-01

    In this review, we address the use of numerical computations called Large-Eddy Simulations (LES) to study dust devils, and the more general class of atmospheric phenomena they belong to (convective vortices). We describe the main elements of the LES methodology. We review the properties, statistics, and variability of dust devils and convective vortices resolved by LES in both terrestrial and Martian environments. The current challenges faced by modelers using LES for dust devils are also discussed in detail.

  20. High Speed Networking and Large-scale Simulation in Geodynamics

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Gary, Patrick; Seablom, Michael; Truszkowski, Walt; Odubiyi, Jide; Jiang, Weiyuan; Liu, Dong

    2004-01-01

    Large-scale numerical simulation has been one of the most important approaches for understanding global geodynamical processes. In this approach, peta-scale floating point operations (pflops) are often required to carry out a single physically-meaningful numerical experiment. For example, to model convective flow in the Earth's core and generation of the geomagnetic field (geodynamo), simulation for one magnetic free-decay time (approximately 15000 years) with a modest resolution of 150 in three spatial dimensions would require approximately 0.2 pflops. If such a numerical model is used to predict geomagnetic secular variation over decades and longer, with e.g. an ensemble Kalman filter assimilation approach, approximately 30 (and perhaps more) independent simulations of similar scales would be needed for one data assimilation analysis. Obviously, such a simulation would require an enormous computing resource that exceeds the capacity of any single facility currently at our disposal. One solution is to utilize a very fast network (e.g. 10Gb optical networks) and available middleware (e.g. Globus Toolkit) to allocate available but often heterogeneous resources for such large-scale computing efforts. At NASA GSFC, we are experimenting with such an approach by networking several clusters for geomagnetic data assimilation research. We shall present our initial testing results at the meeting.

  1. Simulation requirements for the Large Deployable Reflector (LDR)

    NASA Technical Reports Server (NTRS)

    Soosaar, K.

    1984-01-01

    Simulation tools for the large deployable reflector (LDR) are discussed. These tools are often of the transfer-function variety. Transfer functions, however, are inadequate to represent time-varying systems with multiple control systems of overlapping bandwidths and multi-input, multi-output features. Frequency-domain approaches are useful design tools, but a full-up simulation is also needed. Because the high-frequency, multi-degree-of-freedom components encountered would demand a dedicated computer, non-real-time simulation is preferred. Large numerical analysis software programs are useful only to receive inputs and provide outputs to the next block, and should be kept out of the direct simulation loop. The following blocks make up the simulation. The thermal model block is a classical, non-steady-state heat transfer program. The quasistatic block deals with problems associated with rigid-body control of reflector segments. The steady state block assembles data into equations of motion and dynamics. A differential ray trace is obtained to establish the change in wave aberrations. The observation scene is described. The focal plane module converts the photon intensity impinging on it into electron streams or into permanent film records.

  2. High Speed Networking and Large-scale Simulation in Geodynamics

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Gary, Patrick; Seablom, Michael; Truszkowski, Walt; Odubiyi, Jide; Jiang, Weiyuan; Liu, Dong

    2004-01-01

    Large-scale numerical simulation has been one of the most important approaches for understanding global geodynamical processes. In this approach, peta-scale floating point operations (pflops) are often required to carry out a single physically-meaningful numerical experiment. For example, to model convective flow in the Earth's core and generation of the geomagnetic field (geodynamo), simulation for one magnetic free-decay time (approximately 15000 years) with a modest resolution of 150 in three spatial dimensions would require approximately 0.2 pflops. If such a numerical model is used to predict geomagnetic secular variation over decades and longer, with e.g. an ensemble Kalman filter assimilation approach, approximately 30 (and perhaps more) independent simulations of similar scales would be needed for one data assimilation analysis. Obviously, such a simulation would require an enormous computing resource that exceeds the capacity of any single facility currently at our disposal. One solution is to utilize a very fast network (e.g. 10Gb optical networks) and available middleware (e.g. Globus Toolkit) to allocate available but often heterogeneous resources for such large-scale computing efforts. At NASA GSFC, we are experimenting with such an approach by networking several clusters for geomagnetic data assimilation research. We shall present our initial testing results at the meeting.

  3. Cosmological fluid mechanics with adaptively refined large eddy simulations

    NASA Astrophysics Data System (ADS)

    Schmidt, W.; Almgren, A. S.; Braun, H.; Engels, J. F.; Niemeyer, J. C.; Schulz, J.; Mekuria, R. R.; Aspden, A. J.; Bell, J. B.

    2014-06-01

    We investigate turbulence generated by cosmological structure formation by means of large eddy simulations using adaptive mesh refinement. In contrast to the widely used implicit large eddy simulations, which resolve a limited range of length-scales and treat the effect of turbulent velocity fluctuations below the grid scale solely by numerical dissipation, we apply a subgrid-scale model for the numerically unresolved fraction of the turbulence energy. For simulations with adaptive mesh refinement, we utilize a new methodology that allows us to adjust the scale-dependent energy variables in such a way that the sum of resolved and unresolved energies is globally conserved. We test our approach in simulations of randomly forced turbulence, a gravitationally bound cloud in a wind, and the Santa Barbara cluster. To treat inhomogeneous turbulence, we introduce an adaptive Kalman filtering technique that separates turbulent velocity fluctuations on resolved length-scales from the non-turbulent bulk flow. From the magnitude of the fluctuating component and the subgrid-scale turbulence energy, a total turbulent velocity dispersion of several 100 km s-1 is obtained for the Santa Barbara cluster, while the low-density gas outside the accretion shocks is nearly devoid of turbulence. The energy flux through the turbulent cascade and the dissipation rate predicted by the subgrid-scale model correspond to dynamical time-scales around 5 Gyr, independent of numerical resolution.

  4. Nuclear Engine System Simulation (NESS). Volume 1: Program user's guide

    NASA Technical Reports Server (NTRS)

    Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.

    1993-01-01

    A Nuclear Thermal Propulsion (NTP) engine system design analysis tool is required to support current and future Space Exploration Initiative (SEI) propulsion and vehicle design studies. Currently available NTP engine design models are those developed during the NERVA program in the 1960s and early 1970s, and are either highly specific to that design or are modifications of current liquid propulsion system design models. To date, liquid-engine-based NTP design models lack integrated design of key NTP engine features in the areas of reactor, shielding, multi-propellant capability, and multi-redundant pump feed fuel systems. Additionally, since the SEI effort is in the initial development stage, a robust, verified NTP analysis design tool could be of great use to the community. This effort developed an NTP engine system design analysis program (tool), known as the Nuclear Engine System Simulation (NESS) program, to support ongoing and future engine system and stage design study efforts. In this effort, Science Applications International Corporation's (SAIC) NTP version of the Expanded Liquid Engine Simulation (ELES) program was modified extensively to include Westinghouse Electric Corporation's near-term solid-core reactor design model. The ELES program has extensive capability to conduct preliminary system design analysis of liquid rocket systems and vehicles. The program is modular in nature and is versatile in terms of modeling state-of-the-art component and system options as discussed. The Westinghouse reactor design model, which was integrated into the NESS program, is based on the near-term solid-core ENABLER NTP reactor design concept. This program is now capable of accurately modeling (characterizing) a complete near-term solid-core NTP engine system in great detail, for a number of design options, in an efficient manner. The following discussion summarizes the overall analysis methodology, key assumptions, and capabilities associated with the NESS program.

  5. Optimal simulations of ultrasonic fields produced by large thermal therapy arrays using the angular spectrum approach.

    PubMed

    Zeng, Xiaozheng; McGough, Robert J

    2009-05-01

    The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and the numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters.
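
    For orientation, the sketch below shows the textbook angular spectrum calculation the paper evaluates: decompose an input pressure plane into plane waves with a 2D FFT, multiply by the spectral propagator, and invert. It is a generic illustration, not the authors' code; the grid, frequency, and piston source are arbitrary choices.

        import numpy as np

        # Propagate a 2D input pressure plane to distance z (homogeneous medium).
        def angular_spectrum(p0, dx, z, f=1e6, c=1500.0):
            k = 2*np.pi*f/c                                  # wavenumber in water
            n = p0.shape[0]
            kx = 2*np.pi*np.fft.fftfreq(n, d=dx)
            kx2, ky2 = np.meshgrid(kx**2, kx**2, indexing="ij")
            kz = np.sqrt((k**2 - kx2 - ky2).astype(complex)) # evanescent -> imaginary
            P = np.fft.fft2(p0)                              # plane-wave decomposition
            return np.fft.ifft2(P * np.exp(1j*kz*z))         # propagate, recompose

        n, dx = 256, 1.5e-4                               # ~lambda/10 sampling at 1 MHz
        x = (np.arange(n) - n/2)*dx
        X, Y = np.meshgrid(x, x)
        p0 = np.where(X**2 + Y**2 < (5e-3)**2, 1.0, 0.0)  # 5 mm circular piston
        p = angular_spectrum(p0, dx, z=0.05)              # field 5 cm from the source
        print("peak |p| at z = 5 cm:", np.abs(p).max())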

  6. Micro Blowing Simulations Using a Coupled Finite-Volume Lattice-Boltzmann LES Approach

    NASA Technical Reports Server (NTRS)

    Menon, S.; Feiz, H.

    1990-01-01

    Three dimensional large-eddy simulations (LES) of single and multiple jet-in-cross-flow (JICF) are conducted using the 19-bit Lattice Boltzmann Equation (LBE) method coupled with a conventional finite-volume (FV) scheme. In this coupled LBE-FV approach, the LBE-LES is employed to simulate the flow inside the jet nozzles while the FV-LES is used to simulate the crossflow. The key application of this technique is the study of the micro blowing technique (MBT) for drag control, similar to the recent experiments at NASA/GRC. It is necessary to resolve the flow inside the micro-blowing and suction holes with high resolution without being restricted by the FV time-step limit. The coupled LBE-FV-LES approach achieves this objective in a computationally efficient manner. A single jet in crossflow case is used for validation purposes, and the results are compared with experimental data and a full LBE-LES simulation. Good agreement with data is obtained. Subsequently, MBT over a flat plate with a porosity of 25% is simulated using 9 jets in a compressible cross flow at a Mach number of 0.4. It is shown that MBT suppresses the near-wall vortices and reduces the skin friction by up to 50 percent. This is in good agreement with experimental data.
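
    To make the lattice-Boltzmann half of the coupling concrete, the sketch below implements a bare-bones 2D (D2Q9) BGK collide-and-stream update on a periodic grid; the work above uses a 19-velocity 3D lattice, so this is only an illustration of the core algorithm, with arbitrary parameters.

        import numpy as np

        # Bare-bones D2Q9 BGK lattice-Boltzmann update on a periodic grid.
        w = np.array([4/9] + [1/9]*4 + [1/36]*4)          # quadrature weights
        e = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],
                      [1,1],[-1,1],[-1,-1],[1,-1]])       # discrete velocities
        nx = ny = 64
        tau = 0.6                                         # relaxation time
        f = np.ones((9, nx, ny)) * w[:, None, None]       # fluid at rest

        def step(f):
            rho = f.sum(axis=0)                           # density
            u = np.einsum("qi,qxy->ixy", e, f) / rho      # velocity
            usq = (u**2).sum(axis=0)
            for q in range(9):                            # BGK collision
                eu = e[q, 0]*u[0] + e[q, 1]*u[1]
                feq = w[q]*rho*(1 + 3*eu + 4.5*eu**2 - 1.5*usq)
                f[q] += -(f[q] - feq)/tau
            for q in range(9):                            # streaming (periodic)
                f[q] = np.roll(np.roll(f[q], e[q, 0], axis=0), e[q, 1], axis=1)
            return f

        for _ in range(10):
            f = step(f)
        print("mass conserved:", bool(np.isclose(f.sum(), nx*ny)))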

  7. Parallel cluster labeling for large-scale Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Flanigan, M.; Tamayo, P.

    1995-02-01

    We present an optimized version of a cluster labeling algorithm previously introduced by the authors. This algorithm is well suited for large-scale Monte Carlo simulations of spin models using cluster dynamics on parallel computers with large numbers of processors. The algorithm divides physical space into rectangular cells which are assigned to processors and combines a serial local labeling procedure with a relaxation process across nearest-neighbor processors. By controlling overhead and reducing inter-processor communication this method attains good computational speed-up and efficiency. Large systems of up to 65536^2 spins have been simulated at updating speeds of 11 nanosecs/site (90.7 × 10^6 spin updates/sec) using state-of-the-art supercomputers. In the second part of the article we use the cluster algorithm to study the relaxation of magnetization and energy on large Ising models using Swendsen-Wang dynamics. We found evidence that exponential and power law factors are present in the relaxation process, as has been proposed by Hackl et al. The variation of the power-law exponent λM taken at face value indicates that the value of ZM falls in the interval 0.31-0.49 for the time interval analysed and appears to vanish asymptotically.
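
    The serial local labeling step that such parallel schemes build on can be written as a short union-find pass over the lattice bonds. The sketch below joins all like-spin nearest-neighbour pairs deterministically; Swendsen-Wang dynamics would instead activate each such bond with probability 1 - exp(-2*beta*J). This illustrates the building block, not the authors' parallel algorithm.

        import numpy as np

        def find(parent, i):                    # union-find with path halving
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        def label_clusters(spins):
            L = spins.shape[0]
            parent = np.arange(L*L)
            for x in range(L):
                for y in range(L):
                    for dx, dy in ((1, 0), (0, 1)):      # right/down bonds, periodic
                        xn, yn = (x + dx) % L, (y + dy) % L
                        if spins[x, y] == spins[xn, yn]: # like-spin bond -> union
                            parent[find(parent, x*L + y)] = find(parent, xn*L + yn)
            return np.array([find(parent, i)
                             for i in range(L*L)]).reshape(L, L)

        spins = np.random.choice([-1, 1], size=(32, 32))
        print("clusters:", np.unique(label_clusters(spins)).size)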

  8. Direct Molecular Simulation of Gradient-Driven Diffusion of Large Molecules using Constant Pressure

    SciTech Connect

    Heffelfinger, G.S.; Thompson, A.P.

    1998-12-23

    Dual control volume grand canonical molecular dynamics (DCV-GCMD) is a boundary-driven non-equilibrium molecular dynamics technique for simulating gradient driven diffusion in multi-component systems. Two control volumes are established at opposite ends of the simulation box. Constant temperature and chemical potential of diffusing species are imposed in the control volumes. This results in stable chemical potential gradients and steady-state diffusion fluxes in the region between the control volumes. We present results and detailed analysis for a new constant-pressure variant of the DCV-GCMD method in which one of the diffusing species for which a steady-state diffusion flux exists does not have to be inserted or deleted. Constant temperature, pressure and chemical potential of all diffusing species except one are imposed in the control volumes. The constant-pressure method can be applied to situations in which insertion and deletion of large molecules would be prohibitively difficult. As an example, we used the method to simulate diffusion in a binary mixture of spherical particles with a 2:1 size ratio. Steady-state diffusion fluxes of both diffusing species were established. The constant-pressure diffusion coefficients agreed closely with the results of the standard constant-volume calculations. In addition, we show how the concentration, chemical potential and flux profiles can be used to calculate local binary and Maxwell-Stefan diffusion coefficients. In the case of the 2:1 size ratio mixture, we found that the binary diffusion coefficients were asymmetric and composition dependent, whereas the Maxwell-Stefan diffusion coefficients changed very little with composition and were symmetric. This last result verified that the Gibbs-Duhem relation was satisfied locally, thus validating the assumption of local equilibrium.
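
    Once the steady-state flux and the concentration profile between the control volumes are measured, a local Fick diffusion coefficient follows directly from J = -D dc/dx. The sketch below shows that extraction on a synthetic linear profile; all numbers are invented for illustration.

        import numpy as np

        x = np.linspace(0.0, 10.0, 50)      # distance between control volumes (nm)
        c = 2.0 - 0.12*x                    # sampled concentration profile (nm^-3)
        J = 0.06                            # measured steady-state flux (nm^-2 ps^-1)
        dcdx = np.gradient(c, x)            # local concentration gradient
        D = -J / dcdx                       # Fick's first law: J = -D dc/dx
        print("local D estimate (nm^2/ps):", D.mean())   # 0.5 for this profile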

  9. Large eddy simulation of a high aspect ratio combustor

    NASA Astrophysics Data System (ADS)

    Kirtas, Mehmet

    The present research investigates the details of mixture preparation and combustion in a two-stroke, small-scale research engine with a numerical methodology based on the large eddy simulation (LES) technique. A major motivation to study such small-scale engines is their potential use in applications requiring portable power sources with high power density. The investigated research engine has a rectangular planform with a thickness very close to the quenching limits of typical hydrocarbon fuels. As such, the combustor has a high aspect ratio (defined as the ratio of surface area to volume) that makes it different from conventional engines, which typically have small aspect ratios to avoid intense heat losses from the combustor in the bulk flame propagation period. In most other aspects, this engine involves all the main characteristics of traditional reciprocating engines. Previous experimental work has identified some major design problems and demonstrated the feasibility of cyclic combustion in the high aspect ratio combustor. Because of the difficulty of carrying out experimental studies in such small devices, resolving all flow structures and completely characterizing the flame propagation have been enormously challenging tasks. The numerical methodology developed in this work attempts to complement these previous studies by providing a complete evolution of the flow variables. Results of the present study demonstrated the strengths of the proposed methodology in revealing the physical processes occurring in typical operation of the high aspect ratio combustor. For example, in the scavenging phase, the dominant flow structure is a tumble vortex that forms due to the high velocity reactant jet (premixed) interacting with the walls of the combustor. Since the scavenging phase is a long process (about three quarters of the whole cycle), the impact of the vortex on mixture preparation for the next combustion phase is substantial. LES gives the complete evolution of this flow

  10. Large Eddy Simulation in the Computation of Jet Noise

    NASA Technical Reports Server (NTRS)

    Mankbadi, R. R.; Goldstein, M. E.; Povinelli, L. A.; Hayder, M. E.; Turkel, E.

    1999-01-01

    Noise can be predicted by solving the full (time-dependent) compressible Navier-Stokes equations (FCNSE) with the computational domain extended to the far field. The fluctuating near field of the jet produces propagating pressure waves that generate far-field sound, and the fluctuating flow field as a function of time is needed in order to calculate sound from first principles. Extending the computational domain to the far field, however, is not feasible: at the high Reynolds numbers of technological interest, turbulence has a large range of scales, and direct numerical simulation (DNS) cannot capture the small scales of turbulence. Since the large scales are more efficient than the small scales in radiating sound, the emphasis is thus on calculating the sound radiated by the large scales.

  11. Exposing earth surface process model simulations to a large audience

    NASA Astrophysics Data System (ADS)

    Overeem, I.; Kettner, A. J.; Borkowski, L.; Russell, E. L.; Peddicord, H.

    2015-12-01

    The Community Surface Dynamics Modeling System (CSDMS) represents a diverse group of >1300 scientists who develop and apply numerical models to better understand the Earth's surface. CSDMS has a mandate to make the public more aware of model capabilities and therefore started sharing state-of-the-art surface process modeling results with large audiences. One platform to reach audiences outside the science community is through museum displays on 'Science on a Sphere' (SOS). Developed by NOAA, SOS is a giant globe, linked with computers and multiple projectors, that can display data and animations on a sphere. CSDMS has developed and contributed model simulation datasets for the SOS system since 2014, including hydrological processes, coastal processes, and human interactions with the environment. Model simulations of a hydrological and sediment transport model (WBM-SED) illustrate global river discharge patterns. WAVEWATCH III simulations have been specifically processed to show the impacts of hurricanes on ocean waves, with focus on hurricane Katrina and superstorm Sandy. A large world dataset of dams built over the last two centuries gives an impression of the profound influence of humans on water management. Given the exposure of SOS, CSDMS aims to contribute at least 2 model datasets a year, and will soon provide displays of global river sediment fluxes and changes of the sea-ice-free season along the Arctic coast. Over 100 facilities worldwide show these numerical model displays to an estimated 33 million people every year. Dataset storyboards and teacher follow-up materials associated with the simulations are developed to address common core science K-12 standards. CSDMS dataset documentation aims to make people aware of the fact that they look at numerical model results, that underlying models have inherent assumptions and simplifications, and that limitations are known. CSDMS contributions aim to familiarize large audiences with the use of numerical

  12. Large-scale simulations of layered double hydroxide nanocomposite materials

    NASA Astrophysics Data System (ADS)

    Thyveetil, Mary-Ann

    Layered double hydroxides (LDHs) have the ability to intercalate a multitude of anionic species. Atomistic simulation techniques such as molecular dynamics have provided considerable insight into the behaviour of these materials. We review these techniques and recent algorithmic advances which considerably improve the performance of MD applications. In particular, we discuss how the advent of high performance computing and computational grids has allowed us to explore large scale models with considerable ease. Our simulations have been heavily reliant on computational resources on the UK's NGS (National Grid Service), the US TeraGrid and the Distributed European Infrastructure for Supercomputing Applications (DEISA). In order to utilise computational grids we rely on grid middleware to launch, computationally steer and visualise our simulations. We have integrated the RealityGrid steering library into the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) [1], which has enabled us to perform remote computational steering and visualisation of molecular dynamics simulations on grid infrastructures. We also use the Application Hosting Environment (AHE) [2] in order to launch simulations on remote supercomputing resources, and we show that data transfer rates between local clusters and supercomputing resources can be considerably enhanced by using optically switched networks. We perform large scale molecular dynamics simulations of Mg2Al-LDHs intercalated with either chloride ions or a mixture of DNA and chloride ions. The systems exhibit undulatory modes, which are suppressed in smaller scale simulations, caused by the collective thermal motion of atoms in the LDH layers. Thermal undulations provide elastic properties of the system including the bending modulus, Young's moduli and Poisson's ratios. To explore the interaction between LDHs and DNA, we use molecular dynamics techniques to perform simulations of double stranded, linear and plasmid DNA up

  13. Parallel continuous simulated tempering and its applications in large-scale molecular simulations

    SciTech Connect

    Zang, Tianwu; Yu, Linglin; Zhang, Chong; Ma, Jianpeng

    2014-07-28

    In this paper, we introduce a parallel continuous simulated tempering (PCST) method for enhanced sampling in studying large complex systems. It mainly inherits the continuous simulated tempering (CST) method of our previous studies [C. Zhang and J. Ma, J. Chem. Phys. 130, 194112 (2009); C. Zhang and J. Ma, J. Chem. Phys. 132, 244101 (2010)], while adopting the spirit of parallel tempering (PT), or the replica exchange method, by employing multiple copies with different temperature distributions. Differing from conventional PT methods, despite the large stride of the total temperature range, the PCST method requires very few copies of simulations, typically 2-3 copies, yet it is still capable of maintaining a high rate of exchange between neighboring copies. Furthermore, in the PCST method, the size of the system does not dramatically affect the number of copies needed, because the exchange rate is independent of the total potential energy, thus providing an enormous advantage over conventional PT methods in studying very large systems. The sampling efficiency of PCST was tested on the two-dimensional Ising model, a Lennard-Jones liquid and an all-atom folding simulation of a small globular protein, trp-cage, in explicit solvent. The results demonstrate that the PCST method significantly improves sampling efficiency compared with other methods and is particularly effective in simulating systems with long relaxation or correlation times. We expect the PCST method to be a good alternative to parallel tempering methods in simulating large systems such as phase transitions and the dynamics of macromolecules in explicit solvent.
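
    For contrast with PCST's energy-independent exchange rate, the sketch below shows the standard parallel-tempering swap test between two neighbouring copies, whose acceptance probability min(1, exp[(beta_i - beta_j)(E_i - E_j)]) does depend on the total energies; the numbers are invented.

        import math, random

        def attempt_swap(beta_i, beta_j, E_i, E_j):
            """Metropolis criterion for exchanging configurations i and j."""
            delta = (beta_i - beta_j) * (E_i - E_j)
            return delta >= 0.0 or random.random() < math.exp(delta)

        # two copies at nearby temperatures with instantaneous energies (made up)
        if attempt_swap(1.0/300.0, 1.0/320.0, -1250.0, -1210.0):
            print("swap accepted")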

  14. Refined universal laws for hull volumes and perimeters in large planar maps

    NASA Astrophysics Data System (ADS)

    Guitter, Emmanuel

    2017-07-01

    We consider ensembles of planar maps with two marked vertices at distance k from each other, and look at the closed line separating these vertices and lying at distance d from the first one (d < k). This line divides the map into two components: the hull at distance d, which corresponds to the part of the map lying on the same side as the first vertex, and its complement. The number of faces within the hull is called the hull volume, and the length of the separating line the hull perimeter. We study the statistics of the hull volume and perimeter for arbitrary d and k in the limit of infinitely large planar quadrangulations, triangulations and Eulerian triangulations. More precisely, we consider situations where both d and k become large with the ratio d/k remaining finite. For infinitely large maps, two regimes may be encountered: either the hull has a finite volume and its complement is infinitely large, or the hull itself has an infinite volume and its complement is of finite size. We compute the probability for the map to be in either regime as a function of d/k, as well as a number of universal statistical laws for the hull perimeter and volume when maps are conditioned to be in one regime or the other.

  15. Large-scale lattice-Boltzmann simulations over lambda networks

    NASA Astrophysics Data System (ADS)

    Saksena, R.; Coveney, P. V.; Pinning, R.; Booth, S.

    Amphiphilic molecules are of immense industrial importance, mainly due to their tendency to align at interfaces in a solution of immiscible species, e.g., oil and water, thereby reducing surface tension. Depending on the concentration of amphiphiles in the solution, they may assemble into a variety of morphologies, such as lamellae, micelles, sponge and cubic bicontinuous structures exhibiting non-trivial rheological properties. The main objective of this work is to study the rheological properties of very large, defect-containing gyroidal systems (of up to 1024^3 lattice sites) using the lattice-Boltzmann method. Memory requirements for the simulation of such large lattices exceed that available to us on most supercomputers, and so we use MPICH-G2/MPIg to investigate geographically distributed domain decomposition simulations across HPCx in the UK and TeraGrid in the US. Use of MPICH-G2/MPIg requires the port-forwarder to work with the grid middleware on HPCx. Data from the simulations are streamed to a high performance visualisation resource at UCL (London) for rendering and visualisation. Presented at Lighting the Blue Touchpaper for UK e-Science - Closing Conference of ESLEA Project, March 26-28 2007, The George Hotel, Edinburgh, UK.

  16. Tool Support for Parametric Analysis of Large Software Simulation Systems

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony

    2008-01-01

    The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space, and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test-cases, automatically generated from models (e.g., UML, Simulink, Stateflow) improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
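
    The n-factor combinatorial idea above can be illustrated in a few lines: enumerate every pair of parameter values that must appear together, then greedily accept random candidate cases until all pairs are covered. The parameters below are hypothetical, and the greedy packing is far cruder than what production tools do.

        import itertools, random

        params = {"mass": [0.8, 1.0, 1.2],          # hypothetical parameters
                  "thrust": ["low", "nominal", "high"],
                  "sensor_noise": [0.01, 0.1],
                  "controller": ["A", "B"]}
        names = list(params)
        uncovered = {((a, va), (b, vb))
                     for a, b in itertools.combinations(names, 2)
                     for va in params[a] for vb in params[b]}
        cases = []
        while uncovered:
            case = {n: random.choice(params[n]) for n in names}
            newly = {pair for pair in uncovered
                     if all(case[k] == v for k, v in pair)}
            if newly:                        # keep cases that cover new pairs
                cases.append(case)
                uncovered -= newly
        full = len(list(itertools.product(*params.values())))
        print(f"{len(cases)} pairwise cases vs {full} exhaustive combinations")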

  17. Large Eddy Simulation of High-Speed, Premixed Ethylene Combustion

    NASA Technical Reports Server (NTRS)

    Ramesh, Kiran; Edwards, Jack R.; Chelliah, Harsha; Goyne, Christopher; McDaniel, James; Rockwell, Robert; Kirik, Justin; Cutler, Andrew; Danehy, Paul

    2015-01-01

    A large-eddy simulation / Reynolds-averaged Navier-Stokes (LES/RANS) methodology is used to simulate premixed ethylene-air combustion in a model scramjet designed for dual mode operation and equipped with a cavity for flameholding. A 22-species reduced mechanism for ethylene-air combustion is employed, and the calculations are performed on a mesh containing 93 million cells. Fuel plumes injected at the isolator entrance are processed by the isolator shock train, yielding a premixed fuel-air mixture at an equivalence ratio of 0.42 at the cavity entrance plane. A premixed flame is anchored within the cavity and propagates toward the opposite wall. Near complete combustion of ethylene is obtained. The combustor is highly dynamic, exhibiting a large-scale oscillation in global heat release and mass flow rate with a period of about 2.8 ms. Maximum heat release occurs when the flame front reaches its most downstream extent, as the flame surface area is larger. Minimum heat release is associated with flame propagation toward the cavity and occurs through a reduction in core flow velocity that is correlated with an upstream movement of the shock train. Reasonable agreement between simulation results and available wall pressure, particle image velocimetry, and OH-PLIF data is obtained, but it is not yet clear whether the system-level oscillations seen in the calculations are actually present in the experiment.

  18. Simulation of large-scale multitarget tracking scenarios using GPUs

    NASA Astrophysics Data System (ADS)

    Dinath, Yusuf; Tharmarasa, R.; Meger, Eric; Valin, Pierre; Kirubarajan, T.

    2012-06-01

    The increased availability of Graphical Processing Units (GPUs) in personal computers has made parallel programming worthwhile, but not necessarily easier. This paper will take advantage of the power of a GPU, in conjunction with the Central Processing Unit (CPU), in order to simulate target trajectories for large-scale scenarios, such as wide-area maritime or ground surveillance. The idea is to simulate the motion of tens of thousands of targets using a GPU by formulating an optimization problem that maximizes the throughput. To do this, the proposed algorithm is provided with input data that describes how the targets are expected to behave, path information (e.g., roadmaps, shipping lanes), and available computational resources. Then, it is possible to break down the algorithm into parts that are done in the CPU versus those sent to the GPU. The ultimate goal is to compare processing times of the algorithm with a GPU in conjunction with a CPU to those of the standard algorithms running on the CPU alone. In this paper, the optimization formulation for utilizing the GPU, simulation results on scenarios with a large number of targets and conclusions are provided.

  19. Large meteoroid's impact damage: review of available impact hazard simulators

    NASA Astrophysics Data System (ADS)

    Moreno-Ibáñez, M.; Gritsevich, M.; Trigo-Rodríguez, J. M.

    2016-01-01

    The damage caused by meter-sized meteoroids encountering the Earth is expected to be severe. Meter-sized objects in heliocentric orbits can release energies higher than 10^8 J, either in the upper atmosphere through an energetic airblast or, if they reach the surface, through an impact that may create a crater, provoke an earthquake or start a tsunami. A limited variety of cases has been observed in the recent past (e.g. Tunguska, Carancas or Chelyabinsk). Hence, our knowledge has to be constrained with the help of theoretical studies and numerical simulations. There are several simulation programs which aim to forecast the impact consequences of such events. We have tested them using the recent case of the Chelyabinsk superbolide. In particular, Chelyabinsk belongs to the ten- to hundred-meter-sized objects which constitute the main source of risk to Earth, given the current difficulty in detecting them in advance. Furthermore, it was a well-documented case, thus allowing us to properly check the accuracy of the studied simulators. As we present, these open simulators provide a first approximation of the impact consequences. However, all of them fail to accurately determine the caused damage. We explain the observed discrepancies between the observed and simulated consequences with the following consideration: the large number of unknown properties of the potential impacting meteoroid, the atmospheric conditions, the flight dynamics and the uncertainty in the impact point itself hinder any modelling task. This difficulty can be partially overcome by reducing the number of unknowns using dimensional analysis and scaling laws. Although the description of the physical processes associated with atmospheric entry could still be improved further, we conclude that such an approach would significantly improve the efficiency of the simulators.

  20. Low energy prompt gamma-ray tests of a large volume BGO detector.

    PubMed

    Naqvi, A A; Kalakada, Zameer; Al-Anezi, M S; Raashid, M; Khateeb-ur-Rehman; Maslehuddin, M; Garwan, M A

    2012-01-01

    Tests of a large volume Bismuth Germanate (BGO) detector were carried out to detect low energy prompt gamma-rays from boron- and cadmium-contaminated water samples using a portable neutron generator-based Prompt Gamma Neutron Activation Analysis (PGNAA) setup. In spite of strong interference between the sample- and the detector-associated prompt gamma-rays, an excellent agreement has been observed between the experimental and calculated yields of the prompt gamma-rays, indicating successful application of the large volume BGO detector in the PGNAA analysis of bulk samples using low energy prompt gamma-rays. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Production of large resonant plasma volumes in microwave electron cyclotron resonance ion sources

    DOEpatents

    Alton, Gerald D.

    1998-01-01

    Microwave injection methods for enhancing the performance of existing electron cyclotron resonance (ECR) ion sources. The methods are based on the use of high-power diverse frequency microwaves, including variable-frequency, multiple-discrete-frequency, and broadband microwaves. The methods effect large resonant "volume" ECR regions in the ion sources. The creation of these large ECR plasma volumes permits coupling of more microwave power into the plasma, resulting in the heating of a much larger electron population to higher energies, the effect of which is to produce higher charge state distributions and much higher intensities within a particular charge state than possible in present ECR ion sources.

  2. Production of large resonant plasma volumes in microwave electron cyclotron resonance ion sources

    DOEpatents

    Alton, G.D.

    1998-11-24

    Microwave injection methods are disclosed for enhancing the performance of existing electron cyclotron resonance (ECR) ion sources. The methods are based on the use of high-power diverse frequency microwaves, including variable-frequency, multiple-discrete-frequency, and broadband microwaves. The methods effect large resonant "volume" ECR regions in the ion sources. The creation of these large ECR plasma volumes permits coupling of more microwave power into the plasma, resulting in the heating of a much larger electron population to higher energies, the effect of which is to produce higher charge state distributions and much higher intensities within a particular charge state than possible in present ECR ion sources. 5 figs.

  3. Large-volume en-bloc staining for electron microscopy-based connectomics

    PubMed Central

    Hua, Yunfeng; Laserstein, Philip; Helmstaedter, Moritz

    2015-01-01

    Large-scale connectomics requires dense staining of neuronal tissue blocks for electron microscopy (EM). Here we report a large-volume dense en-bloc EM staining protocol that overcomes the staining gradients, which so far substantially limited the reconstructable volumes in three-dimensional (3D) EM. Our protocol provides densely reconstructable tissue blocks from mouse neocortex sized at least 1 mm in diameter. By relaxing the constraints on precise topographic sample targeting, it makes the correlated functional and structural analysis of neuronal circuits realistic. PMID:26235643

  4. Large breast compressions: Observations and evaluation of simulations

    SciTech Connect

    Tanner, Christine; White, Mark; Guarino, Salvatore; Hall-Craggs, Margaret A.; Douek, Michael; Hawkes, David J.

    2011-02-15

    Purpose: Several methods have been proposed to simulate large breast compressions such as those occurring during x-ray mammography. However, the evaluation of these methods against real data is rare. The aim of this study is to learn more about the deformation behavior of breasts and to assess a simulation method. Methods: Magnetic resonance (MR) images of 11 breasts before and after applying a relatively large in vivo compression in the medial direction were acquired. Nonrigid registration was employed to study the deformation behavior. Optimal material properties for finite element modeling were determined and their prediction performance was assessed. The realism of simulated compressions was evaluated by comparing the breast shapes on simulated and real mammograms. Results: Following image registration, 19 breast compressions from 8 women were studied. An anisotropic deformation behavior, with a reduced elongation in the anterior-posterior direction and an increased stretch in the inferior-superior direction was observed. Using finite element simulations, the performance of isotropic and transverse isotropic material models to predict the displacement of internal landmarks was compared. Isotropic materials reduced the mean displacement error of the landmarks from 23.3 to 4.7 mm, on average, after optimizing material properties with respect to breast surface alignment and image similarity. Statistically significantly smaller errors were achieved with transverse isotropic materials (4.1 mm, P=0.0045). Homogeneous material models performed substantially worse (transverse isotropic: 5.5 mm; isotropic: 6.7 mm). Of the parameters varied, the amount of anisotropy had the greatest influence on the results. Optimal material properties varied less when grouped by patient rather than by compression magnitude (mean: 0.72 vs 1.44). Employing these optimal materials for simulating mammograms from ten MR breast images of a different cohort resulted in more realistic breast

  5. Mechanistic simulation of normal-tissue damage in radiotherapy—implications for dose-volume analyses

    NASA Astrophysics Data System (ADS)

    Rutkowska, Eva; Baker, Colin; Nahum, Alan

    2010-04-01

    A radiobiologically based 3D model of normal tissue has been developed in which complications are generated when the tissue is 'irradiated'. The aim is to provide insight into the connection between dose-distribution characteristics, different organ architectures and complication rates beyond that obtainable with simple DVH-based analytical NTCP models. In this model the organ consists of a large number of functional subunits (FSUs), populated by stem cells which are killed according to the LQ model. A complication is triggered if the density of FSUs in any 'critical functioning volume' (CFV) falls below some threshold. The (fractional) CFV determines the organ architecture and can be varied continuously from small (series-like behaviour) to large (parallel-like). A key feature of the model is its ability to account for the spatial dependence of dose distributions. Simulations were carried out to investigate correlations between dose-volume parameters and the incidence of 'complications' using different pseudo-clinical dose distributions. Correlations between dose-volume parameters and outcome depended on characteristics of the dose distributions and on organ architecture. As anticipated, the mean dose and V20 correlated most strongly with outcome for a parallel organ, and the maximum dose for a serial organ. Interestingly, better correlation was obtained between the 3D computer model and the LKB model with dose distributions typical for serial organs than with those typical for parallel organs. This work links the results of dose-volume analyses to dataset characteristics typical for serial and parallel organs, and it may help investigators interpret the results from clinical studies.
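
    The stem-cell kill at the heart of such a model follows the linear-quadratic (LQ) form S = exp[-n(alpha*d + beta*d^2)]. The sketch below evaluates it for a conventional fractionation scheme and converts it into the probability that a functional subunit (FSU) is sterilized; the parameter values are illustrative, not those of the paper.

        import numpy as np

        alpha, beta = 0.3, 0.03        # LQ parameters (Gy^-1, Gy^-2), illustrative
        n_cells = 100                  # stem cells per FSU (assumed)

        def surviving_fraction(d, n_frac):
            """LQ survival after n_frac fractions of d Gy each."""
            return np.exp(-n_frac * (alpha*d + beta*d**2))

        sf = surviving_fraction(2.0, 30)          # 60 Gy in 2-Gy fractions
        p_fsu_dead = (1.0 - sf)**n_cells          # FSU dies if no stem cell survives
        print(f"cell SF = {sf:.2e}, P(FSU sterilized) = {p_fsu_dead:.4f}")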

  6. Inviscid Wall-Modeled Large Eddy Simulations for Improved Efficiency

    NASA Astrophysics Data System (ADS)

    Aikens, Kurt; Craft, Kyle; Redman, Andrew

    2015-11-01

    The accuracy of an inviscid flow assumption for wall-modeled large eddy simulations (LES) is examined because of its ability to reduce simulation costs. This assumption is not generally applicable for wall-bounded flows due to the high velocity gradients found near walls. In wall-modeled LES, however, neither the viscous near-wall region nor the viscous length scales in the outer flow are resolved. Therefore, the viscous terms in the Navier-Stokes equations have little impact on the resolved flowfield. Zero pressure gradient flat plate boundary layer results are presented for both viscous and inviscid simulations using a wall model developed previously. The results are very similar and compare favorably to those from another wall model methodology and to experimental data. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively. Future research directions are discussed, as are preliminary efforts to extend the wall model to include the effects of unresolved wall roughness. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  7. Pulsar simulations for the Fermi Large Area Telescope

    DOE PAGES

    Razzano, M.; Harding, Alice K.; Baldini, L.; ...

    2009-05-21

    Pulsars are among the prime targets for the Large Area Telescope (LAT) aboard the recently launched Fermi observatory. The LAT will study the gamma-ray Universe between 20 MeV and 300 GeV with unprecedented detail. Increasing numbers of gamma-ray pulsars are being firmly identified, yet their emission mechanisms are far from being understood. To better investigate and exploit the LAT capabilities for pulsar science, a set of new detailed pulsar simulation tools have been developed within the LAT collaboration. The structure of the pulsar simulator package (PulsarSpectrum) is presented here. Starting from photon distributions in energy and phase obtained from theoretical calculations or phenomenological considerations, gamma-rays are generated and their arrival times at the spacecraft are determined by taking into account effects such as barycentric effects and timing noise. Pulsars in binary systems also can be simulated given orbital parameters. As a result, we present how simulations can be used for generating a realistic set of gamma-rays as observed by the LAT, focusing on some case studies that show the performance of the LAT for pulsar observations.

  8. Pulsar Simulations for the Fermi Large Area Telescope

    NASA Technical Reports Server (NTRS)

    Razzano, M.; Harding, A. K.; Baldini, L.; Bellazzini, R.; Bregeon, J.; Burnett, T.; Chiang, J.; Digel, S. W.; Dubois, R.; Kuss, M. W.; hide

    2009-01-01

    Pulsars are among the prime targets for the Large Area Telescope (LAT) aboard the recently launched Fermi observatory. The LAT will study the gamma-ray Universe between 20 MeV and 300 GeV with unprecedented detail. Increasing numbers of gamma-ray pulsars are being firmly identified, yet their emission mechanisms are far from being understood. To better investigate and exploit the LAT capabilities for pulsar science, a set of new detailed pulsar simulation tools have been developed within the LAT collaboration. The structure of the pulsar simulator package (PulsarSpectrum) is presented here. Starting from photon distributions in energy and phase obtained from theoretical calculations or phenomenological considerations, gamma-rays are generated and their arrival times at the spacecraft are determined by taking into account effects such as barycentric effects and timing noise. Pulsars in binary systems also can be simulated given orbital parameters. We present how simulations can be used for generating a realistic set of gamma-rays as observed by the LAT, focusing on some case studies that show the performance of the LAT for pulsar observations.

  9. Molecular Dynamics Simulations from SNL's Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)

    DOE Data Explorer

    Plimpton, Steve; Thompson, Aidan; Crozier, Paul

    LAMMPS (http://lammps.sandia.gov/index.html) stands for Large-scale Atomic/Molecular Massively Parallel Simulator and is a code that can be used to model atoms or, as the LAMMPS website says, as a parallel particle simulator at the atomic, meso, or continuum scale. This Sandia-based website provides a long list of animations from large simulations. These were created using different visualization packages to read LAMMPS output, and each one provides the name of the PI and a brief description of the work done or visualization package used. See also the static images produced from simulations at http://lammps.sandia.gov/pictures.html The foundation paper for LAMMPS is: S. Plimpton, Fast Parallel Algorithms for Short-Range Molecular Dynamics, J Comp Phys, 117, 1-19 (1995), but the website also lists other papers describing contributions to LAMMPS over the years.

  10. Large Eddy Simulation of a Cavitating Multiphase Flow for Liquid Injection

    NASA Astrophysics Data System (ADS)

    Cailloux, M.; Helie, J.; Reveillon, J.; Demoulin, F. X.

    2015-12-01

    This paper presents a numerical method for modelling a compressible multiphase flow that involves phase transition between liquid and vapour in the context of gasoline injection. A discontinuous compressible two-fluid mixture based on a Volume of Fluid (VOF) implementation is employed to represent the phases of liquid, vapour and air. The mass transfer between phases is modelled by standard models such as Kunz or Schnerr-Sauer, but including the presence of air in the gas phase. Turbulence is modelled using a Large Eddy Simulation (LES) approach to capture unsteadiness and coherent structures. The modelling approach ultimately compares favourably with experimental data concerning the effect of cavitation on the atomisation process.

  11. Numerical methods for large eddy simulation of acoustic combustion instabilities

    NASA Astrophysics Data System (ADS)

    Wall, Clifton T.

    Acoustic combustion instabilities occur when interaction between the combustion process and acoustic modes in a combustor results in periodic oscillations in pressure, velocity, and heat release. If sufficiently large in amplitude, these instabilities can cause operational difficulties or the failure of combustor hardware. In many situations, the dominant instability is the result of the interaction between a low frequency acoustic mode of the combustor and the large scale hydrodynamics. Large eddy simulation (LES), therefore, is a promising tool for the prediction of these instabilities, since both the low frequency acoustic modes and the large scale hydrodynamics are well resolved in LES. Problems with the tractability of such simulations arise, however, due to the difficulty of solving the compressible Navier-Stokes equations efficiently at low Mach number and due to the large number of acoustic periods that are often required for such instabilities to reach limit cycles. An implicit numerical method for the solution of the compressible Navier-Stokes equations has been developed which avoids the acoustic CFL restriction, allowing for significant efficiency gains at low Mach number, while still resolving the low frequency acoustic modes of interest. In the limit of a uniform grid the numerical method causes no artificial damping of acoustic waves. New, non-reflecting boundary conditions have also been developed for use with the characteristic-based approach of Poinsot and Lele (1992). The new boundary conditions are implemented in a manner which allows for significant reduction of the computational domain of an LES by eliminating the need to perform LES in regions where one-dimensional acoustics significantly affect the instability but details of the hydrodynamics do not. These new numerical techniques have been demonstrated in an LES of an experimental combustor. The new techniques are shown to be an efficient means of performing LES of acoustic combustion
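
    The payoff of treating the acoustics implicitly can be seen from the two time-step restrictions involved: the explicit compressible step scales with 1/(|u|+c), while the convective step scales only with 1/|u|. A quick, purely illustrative comparison (numbers not taken from the thesis):

        # Explicit (acoustic) vs convective CFL time steps at low Mach number.
        dx, cfl = 1.0e-3, 0.8      # grid spacing (m) and Courant number
        u, c = 10.0, 340.0         # flow speed and sound speed (m/s), Mach ~ 0.03
        dt_acoustic = cfl * dx / (abs(u) + c)   # explicit compressible restriction
        dt_convective = cfl * dx / abs(u)       # once acoustics are implicit
        print(f"explicit dt = {dt_acoustic:.2e} s")
        print(f"implicit dt = {dt_convective:.2e} s "
              f"(~{dt_convective/dt_acoustic:.0f}x larger)")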

  12. Large-eddy simulation of turbulent circular jet flows

    SciTech Connect

    Jones, S. C.; Sotiropoulos, F.; Sale, M. J.

    2002-07-01

    This report presents a numerical method for carrying out large-eddy simulations (LES) of turbulent free shear flows and an application of the method to simulate the flow generated by a nozzle discharging into a stagnant reservoir. The objective of the study was to elucidate the complex features of the instantaneous flow field to help interpret the results of recent biological experiments in which live fish were exposed to the jet shear zone. The fish-jet experiments were conducted at the Pacific Northwest National Laboratory (PNNL) under the auspices of the U.S. Department of Energy’s Advanced Hydropower Turbine Systems program. The experiments were designed to establish critical thresholds of shear and turbulence-induced loads to guide the development of innovative, fish-friendly hydropower turbine designs.

  13. Large-eddy simulation of a plane wake

    NASA Technical Reports Server (NTRS)

    Ghosal, Sandip; Rogers, M. M.

    1994-01-01

    Previously, the theoretical development leading to the dynamic localization model (DLM) for large-eddy simulation (LES) was presented. The method has been successfully applied to isotropic turbulence, channel flow, and the flow over a backward-facing step. Here we apply the model to the computation of the temporally developing plane wake. The two main objectives of this project are: (1) use the model to perform an LES of a time developing plane wake and compare the results with direct numerical simulation (DNS) data to see if important statistical measures can be readily predicted, and to provide a relative evaluation of the several versions of the model in terms of predictive capability and cost; and (2) if the tests in (1) show that the model generates reliable predictions, then use the LES to study various aspects of the physics of turbulent wakes and mixing layers.

  14. Large-eddy simulation of transitional channel flow

    NASA Technical Reports Server (NTRS)

    Piomelli, Ugo; Zang, Thomas A.

    1990-01-01

    A large-eddy simulation (LES) of transition in plane channel flow was carried out. The LES results were compared with those of a fine direct numerical simulation (DNS), and with those of a coarse DNS that uses the same mesh as the LES but does not use a residual stress model. While at the early stages of transition LES and coarse DNS give the same results, the presence of the residual stress model was found to be necessary to accurately predict mean velocity and Reynolds stress profiles during the late stages of transition (after the second spike stage). The evolution of single Fourier modes is also predicted more accurately by the LES than by the coarse DNS. As small scales are generated, the dissipative character of the residual stress starts to correctly reproduce the energy cascade. As transition progresses, the flow approaches its fully developed turbulent state, the subgrid scales tend towards equilibrium, and the model becomes more accurate.

  15. Large-Eddy Simulation of Turbulent Wall-Pressure Fluctuations

    NASA Technical Reports Server (NTRS)

    Singer, Bart A.

    1996-01-01

    Large-eddy simulations of a turbulent boundary layer with Reynolds number based on displacement thickness equal to 3500 were performed with two grid resolutions. The computations were continued for sufficient time to obtain frequency spectra with resolved frequencies that correspond to the most important structural frequencies on an aircraft fuselage. The turbulent stresses were adequately resolved with both resolutions. Detailed quantitative analysis of a variety of statistical quantities associated with the wall-pressure fluctuations revealed similar behavior for both simulations. The primary differences were associated with the lack of resolution of the high-frequency data in the coarse-grid calculation and the increased jitter (due to the lack of multiple realizations for averaging purposes) in the fine-grid calculation. A new curve fit was introduced to represent the spanwise coherence of the cross-spectral density.

  16. Implicit large eddy simulation of shock-driven material mixing.

    PubMed

    Grinstein, F F; Gowardhan, A A; Ristorcelli, J R

    2013-11-28

    Under-resolved computer simulations are typically unavoidable in practical turbulent flow applications exhibiting extreme geometrical complexity and a broad range of length and time scales. An important unsettled issue is whether filtered-out and subgrid spatial scales can significantly alter the evolution of resolved larger scales of motion and practical flow integral measures. Predictability issues in implicit large eddy simulation of under-resolved mixing of material scalars driven by under-resolved velocity fields and initial conditions are discussed in the context of shock-driven turbulent mixing. The particular focus is on effects of resolved spectral content and interfacial morphology of initial conditions on transitional and late-time turbulent mixing in the fundamental planar shock-tube configuration.

  17. Hydrothermal fluid flow and deformation in large calderas: Inferences from numerical simulations

    USGS Publications Warehouse

    Hurwitz, S.; Christiansen, L.B.; Hsieh, P.A.

    2007-01-01

    Inflation and deflation of large calderas is traditionally interpreted as being induced by volume change of a discrete source embedded in an elastic or viscoelastic half-space, though it has also been suggested that hydrothermal fluids may play a role. To test the latter hypothesis, we carry out numerical simulations of hydrothermal fluid flow and poroelastic deformation in calderas by coupling two numerical codes: (1) TOUGH2 [Pruess et al., 1999], which simulates flow in porous or fractured media, and (2) BIOT2 [Hsieh, 1996], which simulates fluid flow and deformation in a linearly elastic porous medium. In the simulations, high-temperature water (350°C) is injected at variable rates into a cylinder (radius 50 km, height 3-5 km). A sensitivity analysis indicates that small differences in the values of permeability and its anisotropy, the depth and rate of hydrothermal injection, and the values of the shear modulus may lead to significant variations in the magnitude, rate, and geometry of ground surface displacement, or uplift. Some of the simulated uplift rates are similar to observed uplift rates in large calderas, suggesting that the injection of aqueous fluids into the shallow crust may explain some of the deformation observed in calderas.

  18. SimGen: A General Simulation Method for Large Systems.

    PubMed

    Taylor, William R

    2017-02-03

    SimGen is a stand-alone computer program that reads a script of commands to represent complex macromolecules, including proteins and nucleic acids, in a structural hierarchy that can then be viewed using an integral graphical viewer or animated through a high-level application programming interface in C++. Structural levels in the hierarchy range from α-carbon or phosphate backbones through secondary structure to domains, molecules, and multimers with each level represented in an identical data structure that can be manipulated using the application programming interface. Unlike most coarse-grained simulation approaches, the higher-level objects represented in SimGen can be soft, allowing the lower-level objects that they contain to interact directly. The default motion simulated by SimGen is a Brownian-like diffusion that can be set to occur across all levels of representation in the hierarchy. Links can also be defined between objects, which, when combined with large high-level random movements, result in an effective search strategy for constraint satisfaction, including structure prediction from predicted pairwise distances. The implementation of SimGen makes use of the hierarchic data structure to avoid unnecessary calculation, especially for collision detection, allowing it to be simultaneously run and viewed on a laptop computer while simulating large systems of over 20,000 objects. It has been used previously to model complex molecular interactions including the motion of a myosin-V dimer "walking" on an actin fibre, RNA stem-loop packing, and the simulation of cell motion and aggregation. Several extensions to this original functionality are described.

  19. Large Eddy Simulation of Vertical Axis Wind Turbine Wakes

    NASA Astrophysics Data System (ADS)

    Shamsoddin, Sina; Porté-Agel, Fernando

    2014-05-01

    In this study, large-eddy simulation (LES) is combined with a turbine model to investigate the wake behind a vertical-axis wind turbine (VAWT) in a three dimensional turbulent flow. Two methods are used to model the subgrid-scale (SGS) stresses: (a) the Smagorinsky model, and (b) the modulated gradient model. To parameterize the effects of the VAWT on the flow, two VAWT models are developed: (a) the actuator surface model (ASM), in which the time-averaged turbine-induced forces are distributed on a surface swept by the turbine blades, i.e. the actuator surface, and (b) the actuator line model (ALM), in which the instantaneous blade forces are only spatially distributed on lines representing the blades, i.e. the actuator lines. This is the first time that LES is applied and validated for simulation of VAWT wakes by using either the ASM or the ALM techniques. In both models, blade-element theory is used to calculate the lift and drag forces on the blades. The results are compared with flow measurements in the wake of a model straight-bladed VAWT, carried out in the Institut de Mécanique Statistique de la Turbulence (IMST) water channel. Different combinations of SGS models with VAWT models are studied, and a fairly good overall agreement between simulation results and measurement data is observed. In general, the ALM is found to better capture the unsteady-periodic nature of the wake and shows better agreement with the experimental data than the ASM. The modulated gradient model is also found to be a more reliable SGS stress modeling technique than the Smagorinsky model, and it yields reasonable predictions of the mean flow and turbulence characteristics of a VAWT wake using its theoretically-determined model coefficient. Keywords: Vertical-axis wind turbines (VAWTs); VAWT wake; Large-eddy simulation; Actuator surface model; Actuator line model; Smagorinsky model; Modulated gradient model
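
    For reference, the Smagorinsky closure named above computes an eddy viscosity nu_t = (Cs*Delta)^2 |S| from the resolved strain rate. The 2D sketch below evaluates it on a synthetic velocity field; the constant, filter width and field are all arbitrary choices.

        import numpy as np

        Cs = 0.16                                   # Smagorinsky constant (typical)
        x = np.linspace(0, 2*np.pi, 64, endpoint=False)
        dx = x[1] - x[0]
        delta = dx                                  # filter width = grid spacing
        X, Y = np.meshgrid(x, x, indexing="ij")
        u = np.sin(X)*np.cos(Y)                     # Taylor-Green-like test field
        v = -np.cos(X)*np.sin(Y)
        dudx, dudy = np.gradient(u, dx, axis=0), np.gradient(u, dx, axis=1)
        dvdx, dvdy = np.gradient(v, dx, axis=0), np.gradient(v, dx, axis=1)
        S11, S22, S12 = dudx, dvdy, 0.5*(dudy + dvdx)       # resolved strain rate
        Smag = np.sqrt(2.0*(S11**2 + S22**2 + 2.0*S12**2))  # |S| = sqrt(2 Sij Sij)
        nu_t = (Cs*delta)**2 * Smag                 # Smagorinsky eddy viscosity
        print("max nu_t:", nu_t.max())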

  20. Large Eddy Simulation of Engineering Flows: A Bill Reynolds Legacy.

    NASA Astrophysics Data System (ADS)

    Moin, Parviz

    2004-11-01

    The term large eddy simulation (LES) was coined by Bill Reynolds thirty years ago, when he and his colleagues pioneered the introduction of LES in the engineering community. Bill's legacy in LES features his insistence on having a proper mathematical definition of the large scale field independent of the numerical method used, and his vision for using numerical simulation output as data for research in turbulence physics and modeling, just as one would think of using experimental data. However, as an engineer, Bill was predominantly interested in the predictive capability of computational fluid dynamics and in particular LES. In this talk I will present the state of the art in large eddy simulation of complex engineering flows. Most of this technology has been developed in the Department of Energy's ASCI Program at Stanford, which was led by Bill in the last years of his distinguished career. At the core of this technology is a fully implicit non-dissipative LES code which uses unstructured grids with arbitrary elements. A hybrid Eulerian/Lagrangian approach is used for multi-phase flows, and chemical reactions are introduced through dynamic equations for mixture fraction and reaction progress variable in conjunction with flamelet tables. The predictive capability of LES is demonstrated in several validation studies in flows with complex physics and complex geometry, including flow in the combustor of a modern aircraft engine. LES in such a complex application is only possible through efficient utilization of modern parallel supercomputers, which was recognized and emphasized by Bill from the beginning. The presentation will include a brief mention of computer science efforts for efficient implementation of LES.

  1. Safety limit of large-volume hepatic radiofrequency ablation in a rat model.

    PubMed

    Ng, Kelvin K; Lam, Chi Ming; Poon, Ronnie T; Shek, Tony W; Ho, David W; Fan, Sheung Tat

    2006-03-01

    Large-volume hepatic radiofrequency ablation (RFA) has been used to treat large liver tumors, but its safety limit is unknown. This study aimed to investigate the possible systemic responses of large-volume hepatic RFA and to estimate its safety limit in normal and cirrhotic rats. Large-volume hepatic RFA causes a significant systemic inflammatory reaction. Experimental study. University teaching hospital. Using the Cool-tip RF System (Radionics, Burlington, Mass), RFA was performed for different percentages of the liver volume by weight in normal and cirrhotic Sprague-Dawley rats. Changes in concentrations of serum inflammatory markers (tumor necrosis factor alpha [TNF-alpha] and interleukin [IL] 6), functions of various end organs, and survival rates were assessed. In the normal liver groups, the concentrations of TNF-alpha and IL-6 were significantly elevated in the early postoperative period when 50% (mean +/- SD TNF-alpha concentration, 130.3 +/- 15.6 pg/mL; mean +/- SD IL-6 concentration, 163.2 +/- 12.2 pg/mL) and 60% (mean +/- SD TNF-alpha concentration, 145.7 +/- 13.0 pg/mL; mean +/- SD IL-6 concentration, 180.8 +/- 11.0 pg/mL) of the liver volume were ablated compared with the control group (mean +/- SD TNF-alpha concentration, 30.4 +/- 9.9 pg/mL, P<.001; mean +/- SD IL-6 concentration, 28.4 +/- 6.7 pg/mL, P<.001). The concentrations of TNF-alpha and IL-6 in other groups remained similar to those in the control group. Thrombocytopenia, prolonged clotting time, and interstitial pneumonitis occurred when 50% and 60% of the liver volume were ablated. The 4-week survival rates were 100%, 60%, and 0% when 40%, 50%, and 60%, respectively, of the liver volume were ablated. Similar systemic inflammatory responses and poor survival rates were observed among the cirrhotic liver groups when 30% and 40% of the liver volume were ablated. The normal rats can tolerate RFA of 40% of the liver volume with minimal morbidity and no mortality whereas the cirrhotic rats can

  2. Rapid estimate of solid volume in large tuff cores using a gas pycnometer

    SciTech Connect

    Thies, C.; Geddis, A.M.; Guzman, A.G.

    1996-09-01

    A thermally insulated, rigid-volume gas pycnometer system has been developed. The pycnometer chambers have been machined from solid PVC cylinders. Two chambers confine dry high-purity helium at different pressures. A thick-walled design ensures minimal heat exchange with the surrounding environment and a constant-volume system while expansion takes place between the chambers. The internal energy of the gas is assumed constant over the expansion. The ideal gas law is used to estimate the volume of solid material sealed in one of the chambers. Temperature is monitored continuously and incorporated into the calculation of solid volume. Temperature variation between measurements is less than 0.1 °C. The data are used to compute grain density for oven-dried Apache Leap tuff core samples. The measured volume of solid and the sample bulk volume are used to estimate porosity and bulk density. Intrinsic permeability was estimated from the porosity and measured pore surface area and is compared to in-situ measurements by the air permeability method. The gas pycnometer accommodates large core samples (0.25 m length × 0.11 m diameter) and can measure solid volumes greater than 2.20 cm³ with less than 1% error.
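
    The ideal-gas reduction behind the instrument is compact enough to state directly. Assuming an isothermal expansion from a charged reference chamber into the sample chamber (the symbols and the numbers in the example are illustrative, not the instrument's calibration), the sealed solid volume follows from a pressure balance:

      def solid_volume(p1, p2, p0, v_ref, v_cell):
          """Solid volume from an isothermal ideal-gas expansion between a
          reference chamber (v_ref, charged to p1) and a sample chamber
          (v_cell, initially at p0), equilibrating at p2:
              p1*v_ref + p0*(v_cell - v_s) = p2*(v_ref + v_cell - v_s)
          """
          return v_cell - v_ref * (p1 - p2) / (p2 - p0)

      # Illustrative numbers (kPa and cm^3): yields a 1000 cm^3 solid.
      print(solid_volume(p1=300.0, p2=180.0, p0=100.0, v_ref=1000.0, v_cell=2500.0))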

  3. Sampling artifact in volume weighted velocity measurement. II. Detection in simulations and comparison with theoretical modeling

    NASA Astrophysics Data System (ADS)

    Zheng, Yi; Zhang, Pengjie; Jing, Yipeng

    2015-02-01

    Measuring the volume weighted velocity power spectrum suffers from a severe systematic error due to imperfect sampling of the velocity field from the inhomogeneous distribution of dark matter particles/halos in simulations or galaxies with velocity measurement. This "sampling artifact" depends on both the mean particle number density n̄_P and the intrinsic large scale structure (LSS) fluctuation in the particle distribution. (1) We report robust detection of this sampling artifact in N-body simulations. It causes ~12% underestimation of the velocity power spectrum at k = 0.1 h/Mpc for samples with n̄_P = 6×10⁻³ (Mpc/h)⁻³. This systematic underestimation increases with decreasing n̄_P and increasing k. Its dependence on the intrinsic LSS fluctuations is also robustly detected. (2) All of these findings are expected based upon our theoretical modeling in paper I [P. Zhang, Y. Zheng, and Y. Jing, Sampling artifact in volume weighted velocity measurement. I. Theoretical modeling, arXiv:1405.7125]. In particular, the leading order theoretical approximation agrees quantitatively well with the simulation result for n̄_P ≳ 6×10⁻⁴ (Mpc/h)⁻³. Furthermore, we provide an ansatz to take high order terms into account. It improves the model accuracy to ≲1% at k ≲ 0.1 h/Mpc over 3 orders of magnitude in n̄_P and over typical LSS clustering from z = 0 to z = 2. (3) The sampling artifact is determined by the deflection D field, which is straightforwardly available in both simulations and data of galaxy velocity. Hence the sampling artifact in the velocity power spectrum measurement can be self-calibrated within our framework. By applying such self-calibration in simulations, it is promising to determine the real large scale velocity bias of 10¹³ M⊙ halos with ~1% accuracy, and that of lower mass halos with better accuracy. (4) In contrast to suppressing the velocity power spectrum at large scale, the sampling artifact causes an overestimation of the velocity

  4. Large-volume diamond cells for neutron diffraction above 90 GPa

    SciTech Connect

    Boehler, Reinhard; Guthrie, Malcolm; Molaison, Jamie J; Moreira Dos Santos, Antonio F; Sinogeikin, Stanislav; Machida, Shinichi; Pradhan, Neelam; Tulk, Christopher A

    2013-01-01

    Quantitative high pressure neutron-diffraction measurements have traditionally required large sample volumes of at least 25 mm³ due to limited neutron flux. Therefore, pressures in these experiments have been limited to below 25 GPa. In comparison, for X-ray diffraction, sample volumes in conventional diamond cells for pressures up to 100 GPa have been less than 1×10⁻⁴ mm³. Here, we report a new design of strongly supported conical diamond anvils for neutron diffraction that has reached 94 GPa with a sample volume of 2×10⁻² mm³, a 100-fold increase. This sample volume is sufficient to measure full neutron-diffraction patterns of D2O ice to this pressure at the high flux Spallation Neutrons and Pressure beamline at the Oak Ridge National Laboratory. This provides an almost fourfold extension of the previous pressure regime for such measurements.

  5. Periumbilical fat graft: a new resource to replace large volume in the orbit.

    PubMed

    Medel, Ramon; Vasquez, LuzMaria

    2014-10-01

    To describe the technique we use to obtain a fat graft from the periumbilical area to replace volume in our patients requiring total or partial orbital volume restoration or replacement. Under local anaesthesia, a one-piece fat autograft is obtained from one of the quadrants of the periumbilical zone through a 10- to 15-mm incision at the umbilicus edge. The excised adipose tissue contains connective tracts, with medium and small vessels showing discretely thickened walls and preserved endothelium, more blood cells, and fewer dead cells. Fat grafts are the ideal fillers for patients requiring orbital volume replacement. The periumbilical fat graft technique we describe is simple, safe and fast; the learning curve is shallow, and the results are gratifying both in the replaced volume and in the donor area, which is left with an invisible scar. The amount of fat that can be obtained with this technique through a minimal incision can be large enough.

  6. Large-eddy simulation of a turbulent mixing layer

    NASA Technical Reports Server (NTRS)

    Mansour, N. N.; Ferziger, J. H.; Reynolds, W. C.

    1978-01-01

    The three-dimensional, time-dependent (incompressible) vorticity equations were used to numerically simulate the decay of isotropic box turbulence and time-developing mixing layers. The vorticity equations were spatially filtered to define the large-scale turbulence field, and the subgrid-scale turbulence was modeled. A general method was developed to show numerical conservation of momentum, vorticity, and energy. The terms that arise from filtering the equations were treated (for both periodic boundary conditions and no-stress boundary conditions) in a fast and accurate way by using fast Fourier transforms. Use of vorticity as the principal variable is shown to produce results equivalent to those obtained by use of the primitive-variable equations.

  7. Simulated behaviour of large scale SCI rings and tori

    SciTech Connect

    Cha, Hojung; Knowles, A.; Daniel, R. Jr.

    1993-09-01

    SCI (Scalable Coherent Interface) is a new IEEE standard for a high-speed interconnect in parallel processors. It is attracting interest because of its high bandwidth (1 GB/sec/link) and low latency. The default SCI topology is a ring, which does not scale well to large numbers of processors. This paper uses stochastic and trace-driven simulations to compare the performance of SCI-based parallel computers with a ring topology to those based on a torus topology. We also look at the effects of varying some of the internals of the SCI components.

  8. Large eddy simulation of the flow in a transpired channel

    NASA Technical Reports Server (NTRS)

    Piomelli, Ugo; Moin, Parviz; Ferziger, Joel

    1989-01-01

    The flow in a transpired channel has been computed by large eddy simulation. The numerical results compare very well with experimental data. Blowing decreases the wall shear stress and enhances turbulent fluctuations, while suction has the opposite effect. The wall layer thickness normalized by the local wall shear velocity and kinematic viscosity increases on the blowing side of the channel and decreases on the suction side. Suction causes more rapid decay of the spectra, larger mean streak spacing and higher two-point correlations. On the blowing side, the wall layer structures lie at a steeper angle to the wall, whereas on the suction side this angle is shallower.

  9. Contrail Formation in Aircraft Wakes Using Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Paoli, R.; Helie, J.; Poinsot, T. J.; Ghosal, S.

    2002-01-01

    In this work we analyze the issue of the formation of condensation trails ("contrails") in the near field of an aircraft wake. The basic configuration consists of an engine exhaust jet interacting with a wing-tip trailing vortex. The procedure adopted relies on a mixed Eulerian/Lagrangian two-phase flow approach; a simple microphysics model for ice growth has been used to couple the ice and vapor phases. Large eddy simulations have been carried out at a realistic flight Reynolds number to evaluate the effects of turbulent mixing and wake vortex dynamics on ice-growth characteristics and vapor thermodynamic properties.
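
    As a concrete, if simplified, illustration of such a microphysics coupling, the following sketch integrates a generic diffusion-limited deposition law for a single ice crystal; the growth law, diffusivity, and vapor densities are textbook-style assumptions, not the model actually used in the paper.

      import numpy as np

      def ice_growth_rate(r, rho_v, rho_sat, D=2.2e-5):
          """dm/dt = 4*pi*r*D*(rho_v - rho_sat): diffusion-limited deposition
          onto a spherical ice particle (SI units; a common minimal model)."""
          return 4.0 * np.pi * r * D * (rho_v - rho_sat)

      rho_ice, r, dt = 917.0, 1e-6, 1e-3        # ice density, 1 um seed, 1 ms step
      for _ in range(1000):                     # grow for one second
          m = rho_ice * (4.0 / 3.0) * np.pi * r ** 3
          m += ice_growth_rate(r, rho_v=6e-3, rho_sat=5e-3) * dt
          r = (3.0 * m / (4.0 * np.pi * rho_ice)) ** (1.0 / 3.0)
      print(r)                                  # radius after 1 s of deposition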

  10. Large perturbation flow field analysis and simulation for supersonic inlets

    NASA Technical Reports Server (NTRS)

    Varner, M. O.; Martindale, W. R.; Phares, W. J.; Kneile, K. R.; Adams, J. C., Jr.

    1984-01-01

    An analysis technique for simulation of supersonic mixed compression inlets with large flow field perturbations is presented. The approach is based upon a quasi-one-dimensional inviscid unsteady formulation which includes engineering models of unstart/restart, bleed, bypass, and geometry effects. Numerical solution of the governing time dependent equations of motion is accomplished through a shock capturing finite difference algorithm, of which five separate approaches are evaluated. Comparison with experimental supersonic wind tunnel data is presented to verify the present approach for a wide range of transient inlet flow conditions.

  11. On integrating large eddy simulation and laboratory turbulent flow experiments.

    PubMed

    Grinstein, Fernando F

    2009-07-28

    Critical issues involved in large eddy simulation (LES) experiments relate to the treatment of unresolved subgrid-scale flow features and the required initial and boundary condition supergrid-scale modelling. The inherently intrusive nature of both LES and laboratory experiments is noted in this context. Flow characterization issues become very challenging in validation and computational laboratory studies, where potential sources of discrepancies between predictions and measurements need to be clearly evaluated and controlled. A special focus of the discussion is devoted to turbulent initial condition issues.

  12. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, Cyrus K.; Steinberger, Craig J.

    1990-01-01

    This research involves the implementation of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program to extend the present capabilities of this method was initiated for the treatment of chemically reacting flows. In the DNS efforts, the focus is on detailed investigations of the effects of compressibility, heat release, and non-equilibrium kinetics modeling in high speed reacting flows. Emphasis was on the simulations of simple flows, namely homogeneous compressible flows and temporally developing high speed mixing layers.

  13. Parallel finite element simulation of large ram-air parachutes

    NASA Astrophysics Data System (ADS)

    Kalro, V.; Aliabadi, S.; Garrard, W.; Tezduyar, T.; Mittal, S.; Stein, K.

    1997-06-01

    In the near future, large ram-air parachutes are expected to provide the capability of delivering 21-ton payloads from altitudes as high as 25,000 ft. In the development, testing, and evaluation of these parachutes, the size of the parachute needed and the deployment stages involved make high-performance computing (HPC) simulations a desirable alternative to costly airdrop tests. Although computational simulations based on realistic, 3D, time-dependent models will continue to be a major computational challenge, advanced finite element simulation techniques recently developed for this purpose and the execution of these techniques on HPC platforms are significant steps in the direction to meet this challenge. In this paper, two approaches for analysis of the inflation and gliding of ram-air parachutes are presented. In one of the approaches the point-mass flight mechanics equations are solved with the time-varying drag and lift areas obtained from empirical data. This approach is limited to parachutes with configurations similar to those for which data are available. The other approach is 3D finite element computations based on the Navier-Stokes equations governing the airflow around the parachute canopy and Newton's law of motion governing the 3D dynamics of the canopy, with the forces acting on the canopy calculated from the simulated flow field. At the earlier stages of canopy inflation the parachute is modelled as an expanding box, whereas at the later stages, as it expands, the box transforms into a parafoil and glides. These finite element computations are carried out on the massively parallel supercomputers CRAY T3D and Thinking Machines CM-5, typically with millions of coupled, non-linear finite element equations solved simultaneously at every time step or pseudo-time step of the simulation.

  14. A Large Motion Suspension System for Simulation of Orbital Deployment

    NASA Technical Reports Server (NTRS)

    Straube, T. M.; Peterson, L. D.

    1994-01-01

    This paper describes the design and implementation of a vertical degree-of-freedom suspension system which provides a constant-force off-load condition to counter gravity over large displacements. By accommodating motions up to one meter for structures weighing up to 100 pounds, the system is useful for experiments which simulate the on-orbit deployment of spacecraft components. A unique aspect of this system is the combination of a large-stroke passive off-load device augmented by electromotive torque-actuated force feedback. The active force feedback has the effect of reducing breakaway friction by an order of magnitude over the passive system alone. The paper describes the development of the suspension hardware and the feedback control algorithm. Experiments were performed to verify the suspension system's ability to provide a gravity off-load as well as its effect on the modal characteristics of a test article.

  15. Synthetic turbulence, fractal interpolation, and large-eddy simulation.

    PubMed

    Basu, Sukanta; Foufoula-Georgiou, Efi; Porté-Agel, Fernando

    2004-08-01

    Fractal interpolation has been proposed in the literature as an efficient way to construct closure models for the numerical solution of coarse-grained Navier-Stokes equations. It is based on synthetically generating a scale-invariant subgrid-scale field and analytically evaluating its effects on large resolved scales. In this paper, we propose an extension of previous work by developing a multiaffine fractal interpolation scheme and demonstrate that it preserves not only the fractal dimension but also the higher-order structure functions and the non-Gaussian probability density function of the velocity increments. Extensive a priori analyses of atmospheric boundary layer measurements further reveal that this multiaffine closure model has the potential for satisfactory performance in large-eddy simulations. The pertinence of this newly proposed methodology in the case of passive scalars is also discussed.
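
    A minimal sketch of the basic idea follows, using plain single-parameter fractal (midpoint-insertion) interpolation rather than the multiaffine scheme proposed in the paper; the stretching parameter, refinement depth, and sample signal are all illustrative.

      import numpy as np

      def fractal_interpolate(signal, levels=3, d=0.7, rng=None):
          """Refine a coarse 1D signal by iterated affine midpoint insertion
          with stretching parameter |d| < 1 and random signs; a sketch of
          fractal interpolation, not the authors' multiaffine scheme."""
          rng = rng or np.random.default_rng()
          u = np.asarray(signal, dtype=float)
          for _ in range(levels):
              mid = 0.5 * (u[:-1] + u[1:])                  # linear midpoint
              signs = rng.choice([-1.0, 1.0], size=mid.size)
              mid += signs * d * (u[1:] - u[:-1])           # affine "stretch"
              out = np.empty(2 * u.size - 1)
              out[0::2], out[1::2] = u, mid                 # interleave
              u = out
          return u

      coarse = np.array([0.0, 1.0, 0.3, -0.5, 0.2])
      print(fractal_interpolate(coarse, levels=2).size)     # 17 refined points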

  16. Microstructure from simulated Brownian suspension flows at large shear rate

    NASA Astrophysics Data System (ADS)

    Morris, Jeffrey F.; Katyal, Bhavana

    2002-06-01

    Pair microstructure of concentrated Brownian suspensions in simple-shear flow is studied by sampling of configurations from dynamic simulations by the Stokesian Dynamics technique. Simulated motions are three dimensional with periodic boundary conditions to mimic an infinitely extended suspension. Hydrodynamic interactions through Newtonian fluid and Brownian motion are the only physical influences upon the motion of the monodisperse hard-sphere particles. The dimensionless parameters characterizing the suspension are the particle volume fraction and Péclet number, defined, respectively, as φ = (4π/3)na³ with n the number density and a the sphere radius, and Pe = 6πηγ̇a³/kT with η the fluid viscosity, γ̇ the shear rate, and kT the thermal energy. The majority of the results reported are from simulations at Pe = 1000; results of simulations at Pe = 1, 25, and 100 are also reported for φ = 0.3 and φ = 0.45. The pair structure is characterized by the pair distribution function, g(r) = P1|1(r)/n, where P1|1(r) is the conditional probability of finding a pair at a separation vector r. The structure under strong shearing exhibits an accumulation of pair probability at contact, and angular distortion (from spherical symmetry at Pe = 0), with both effects increasing with Pe. Flow simulations were performed at Pe = 1000 for eight volume fractions in the range 0.2 ⩽ φ ⩽ 0.585. For φ = 0.2-0.3, the pair structure at contact, g(|r| = 2) ≡ g(2), is found to exhibit a single region of strong correlation, g(2) ≫ 1, at points around the axis of compression, with a particle-deficient wake in the extensional zones. A qualitative change in microstructure is observed between φ = 0.3 and φ = 0.37. For φ ⩾ 0.37, the maximum g(2) lies at points in the shear plane nearly on the x axis of the bulk simple shear flow U_x = γ̇y, while at smaller φ, the maximum g(2) lies near the compressional axis; long-range string ordering is not observed. For φ = 0.3 and φ = 0.45, g(2) ~ Pe^0.7 for 1 ⩽ Pe ⩽ 1000, a
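
    The pair distribution function itself is straightforward to estimate from stored configurations. Below is a minimal estimator for a periodic cubic box, normalized so that an ideal gas gives g ≈ 1; the random coordinates merely stand in for simulation output.

      import numpy as np

      def pair_distribution(pos, box, nbins=100, rmax=None):
          """g(r) via a pair-distance histogram under the minimum-image
          convention, normalized by the ideal-gas pair count per shell."""
          n = pos.shape[0]
          rmax = box / 2.0 if rmax is None else rmax
          d = pos[:, None, :] - pos[None, :, :]
          d -= box * np.round(d / box)                      # minimum image
          r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, k=1)]
          hist, edges = np.histogram(r, bins=nbins, range=(0.0, rmax))
          shell = (4.0 / 3.0) * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
          ideal = 0.5 * n * (n - 1) * shell / box ** 3      # ideal-gas counts
          return 0.5 * (edges[1:] + edges[:-1]), hist / ideal

      rng = np.random.default_rng(1)
      r_mid, g = pair_distribution(rng.uniform(0.0, 10.0, (500, 3)), box=10.0)
      print(g.mean())                                       # ~1 for random points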

  17. High Energy Performance Tests of Large Volume LaBr₃:Ce Detector

    SciTech Connect

    Naqvi, A.A.; Gondal, M.A.; Khiari, F.Z.; Dastageer, M.A.; Maslehuddin, M.M.; Al-Amoudi, O.S.B.

    2015-07-01

    High energy prompt gamma ray tests of a large volume cylindrical 100 mm × 100 mm (height × diameter) LaBr₃:Ce detector were carried out using a portable neutron generator-based Prompt Gamma Neutron Activation Analysis (PGNAA) setup. In this study prompt gamma-ray yields were measured from water samples contaminated with toxic elements such as nickel, chromium and mercury compounds, with gamma ray energies up to 10 MeV. The experimental yields of prompt gamma-rays from the toxic elements were compared with the results of Monte Carlo calculations. In spite of its higher intrinsic background due to its larger volume, an excellent agreement between the experimental and calculated yields of high energy gamma-rays from Ni, Cr and Hg samples has been achieved for the large volume LaBr₃:Ce detector.

  18. A universal and flexible theodolite-camera system for making accurate measurements over large volumes

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaohu; Zhu, Zhaokun; Yuan, Yun; Li, Lichun; Sun, Xiangyi; Yu, Qifeng; Ou, Jianliang

    2012-11-01

    Typically, optical measurement systems can achieve high accuracy over a limited volume, or cover a large volume with low accuracy. In this paper, we propose a universal way of integrating a camera with a theodolite to construct a theodolite-camera (TC) measurement system that can make measurements over a large volume with high accuracy. The TC inherits the advantages of high flexibility and precision from theodolite and camera, but it avoids the need to perform elaborate adjustments on the camera and theodolite. The TC provides a universal and flexible approach to the camera-on-theodolite system. We describe three types of TC based separately on: (i) a total station; (ii) a theodolite; and (iii) a general rotation frame. We also propose three corresponding calibration methods for the different TCs. Experiments have been conducted to verify the measuring accuracy of each of the three types of TC.

  19. Mechanically Cooled Large-Volume Germanium Detector Systems for Nuclear Explosion Monitoring DOENA27323-1

    SciTech Connect

    Hull, E.L.

    2006-07-28

    Compact maintenance free mechanical cooling systems are being developed to operate large volume germanium detectors for field applications. To accomplish this we are utilizing a newly available generation of Stirling-cycle mechanical coolers to operate the very largest volume germanium detectors with no maintenance. The user will be able to leave these systems unplugged on the shelf until needed. The flip of a switch will bring a system to life in ~ 1 hour for measurements. The maintenance-free operating lifetime of these detector systems will exceed 5 years. These features are necessary for remote long-duration liquid-nitrogen free deployment of large-volume germanium gamma-ray detector systems for Nuclear Explosion Monitoring. The Radionuclide Aerosol Sampler/Analyzer (RASA) will greatly benefit from the availability of such detectors by eliminating the need for liquid nitrogen at RASA sites while still allowing the very largest available germanium detectors to be reliably utilized.

  1. Evaluating lossy data compression on climate simulation data within a large ensemble

    NASA Astrophysics Data System (ADS)

    Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.; Xu, Haiying; Stolpe, Martin B.; Naveau, Phillipe; Sanderson, Ben; Ebert-Uphoff, Imme; Samarasinghe, Savini; De Simone, Francesco; Carbone, Francesco; Gencarelli, Christian N.; Dennis, John M.; Kay, Jennifer E.; Lindstrom, Peter

    2016-12-01

    High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that applying
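
    The acceptability criterion described above can be prototyped in a few lines: compress, reconstruct, and compare the reconstruction error against the ensemble's internal variability. The quantizer below is only a stand-in for a real lossy compressor (such as fpzip or zfp), and the "ensemble" is synthetic.

      import numpy as np

      rng = np.random.default_rng(2)
      # Stand-in ensemble: 30 members of a 192x288 annual-mean field.
      ensemble = 288.0 + rng.normal(0.0, 0.5, size=(30, 192, 288))

      def lossy_roundtrip(x, bits=12):
          """Emulate lossy compression by uniform quantization to `bits` bits."""
          lo, hi = x.min(), x.max()
          q = np.round((x - lo) / (hi - lo) * (2 ** bits - 1))
          return lo + q / (2 ** bits - 1) * (hi - lo)

      err = np.abs(lossy_roundtrip(ensemble[0]) - ensemble[0]).max()
      spread = ensemble.std(axis=0).mean()
      # Criterion in the spirit of the study: compression effects should not be
      # statistically distinguishable from this natural variability.
      print(f"max abs error {err:.4g} vs ensemble spread {spread:.4g}")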

  2. Evaluating lossy data compression on climate simulation data within a large ensemble

    DOE PAGES

    Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.; ...

    2016-12-07

    High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that

  3. Evaluating lossy data compression on climate simulation data within a large ensemble

    SciTech Connect

    Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.; Xu, Haiying; Stolpe, Martin B.; Naveau, Phillipe; Sanderson, Ben; Ebert-Uphoff, Imme; Samarasinghe, Savini; De Simone, Francesco; Carbone, Francesco; Gencarelli, Christian N.; Dennis, John M.; Kay, Jennifer E.; Lindstrom, Peter

    2016-12-07

    High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that applying

  4. A general method for assessing the effects of uncertainty in individual-tree volume model predictions on large-area volume estimates with a subtropical forest illustration

    Treesearch

    Ronald E. McRoberts; Paolo Moser; Laio Zimermann Oliveira; Alexander C. Vibrans

    2015-01-01

    Forest inventory estimates of tree volume for large areas are typically calculated by adding the model predictions of volumes for individual trees at the plot level, calculating the mean over plots, and expressing the result on a per unit area basis. The uncertainty in the model predictions is generally ignored, with the result that the precision of the large-area...
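
    One way to make the ignored model uncertainty concrete is a Monte Carlo propagation: redraw the model parameters, re-predict every tree, and re-aggregate. Everything below (the allometric form, parameter covariance, and tree diameters) is hypothetical, not the authors' data or method.

      import numpy as np

      rng = np.random.default_rng(0)
      beta_hat = np.array([0.05, 2.4])                  # v = b0 * dbh**b1 (toy model)
      beta_cov = np.array([[1e-6, -2e-6], [-2e-6, 1e-4]])
      dbh = rng.uniform(10.0, 50.0, size=(200, 30))     # 200 plots x 30 trees

      estimates = []
      for _ in range(2000):                             # Monte Carlo replications
          b0, b1 = rng.multivariate_normal(beta_hat, beta_cov)
          plot_totals = (b0 * dbh ** b1).sum(axis=1)    # add tree predictions per plot
          estimates.append(plot_totals.mean())          # mean over plots (per-area basis)

      # Spread of the large-area estimate attributable to parameter uncertainty.
      print(np.mean(estimates), np.std(estimates))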

  5. SparseLeap: Efficient Empty Space Skipping for Large-Scale Volume Rendering.

    PubMed

    Hadwiger, Markus; Al-Awami, Ali K; Beyer, Johanna; Agus, Marco; Pfister, Hanspeter

    2017-08-29

    Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activate a different set of objects, which makes it unfeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.
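
    The heart of the approach, leaping empty space with precomputed per-ray segments, can be caricatured without any GPU machinery. The sketch below merges a ray's ordered brick visits into non-empty segments; the actual SparseLeap pipeline builds such lists by rasterizing bounding-box geometry, which is not reproduced here.

      def nonempty_segments(ray_bricks, brick_nonempty):
          """Collapse the ordered bricks hit by one ray into merged [start, end)
          segments of occupied space, so the ray caster can leap the gaps."""
          segs, start = [], None
          for i, b in enumerate(ray_bricks):
              if brick_nonempty[b] and start is None:
                  start = i                       # entering occupied space
              elif not brick_nonempty[b] and start is not None:
                  segs.append((start, i))         # leaving occupied space
                  start = None
          if start is not None:
              segs.append((start, len(ray_bricks)))
          return segs

      occupancy = {0: False, 1: True, 2: True, 3: False, 4: True}
      print(nonempty_segments([0, 1, 2, 3, 4], occupancy))   # [(1, 3), (4, 5)]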

  6. A New Electropositive Filter for Concentrating Enterovirus and Norovirus from Large Volumes of Water - MCEARD

    EPA Science Inventory

    The detection of enteric viruses in environmental water usually requires the concentration of viruses from large volumes of water. The 1MDS electropositive filter is commonly used for concentrating enteric viruses from water but unfortunately these filters are not cost-effective...

  7. Development of a Solid Phase Extraction Method for Agricultural Pesticides in Large-Volume Water Samples

    EPA Science Inventory

    An analytical method using solid phase extraction (SPE) and analysis by gas chromatography/mass spectrometry (GC/MS) was developed for the trace determination of a variety of agricultural pesticides and selected transformation products in large-volume high-elevation lake water sa...

  8. Large Scale Information Processing System. Volume I. Compiler, Natural Language, and Information Processing.

    ERIC Educational Resources Information Center

    Peterson, Philip L.; And Others

    This volume, the first of three dealing with a number of investigations and studies into the formal structure, advanced technology and application of large scale information processing systems, is concerned with the areas of compiler languages, natural languages and information storage and retrieval. The first report is entitled "Semantics and…

  9. Large eddy simulation of incompressible turbulent channel flow

    NASA Technical Reports Server (NTRS)

    Moin, P.; Reynolds, W. C.; Ferziger, J. H.

    1978-01-01

    The three-dimensional, time-dependent primitive equations of motion were numerically integrated for the case of turbulent channel flow. A partially implicit numerical method was developed. An important feature of this scheme is that the equation of continuity is solved directly. The residual field motions were simulated through an eddy viscosity model, while the large-scale field was obtained directly from the solution of the governing equations. An important portion of the initial velocity field was obtained from the solution of the linearized Navier-Stokes equations. The pseudospectral method was used for numerical differentiation in the horizontal directions, and second-order finite-difference schemes were used in the direction normal to the walls. The large eddy simulation technique is capable of reproducing some of the important features of wall-bounded turbulent flows. The resolvable portions of the root-mean-square wall pressure fluctuations, pressure velocity-gradient correlations, and velocity pressure-gradient correlations are documented.
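
    The residual-motion closure in simulations of this kind is often an eddy viscosity of Smagorinsky type; a minimal sketch on a uniform grid follows, with the model constant, filter width, and test fields as illustrative assumptions.

      import numpy as np

      def smagorinsky_nu_t(u, v, w, dx, Cs=0.1):
          """Eddy viscosity nu_t = (Cs*dx)**2 * |S| with |S| = sqrt(2 S_ij S_ij),
          evaluated with centered differences on a uniform grid of spacing dx."""
          g = [np.gradient(f, dx) for f in (u, v, w)]   # g[i][j] = d u_i / d x_j
          S2 = sum(2.0 * (0.5 * (g[i][j] + g[j][i])) ** 2
                   for i in range(3) for j in range(3))
          return (Cs * dx) ** 2 * np.sqrt(S2)

      rng = np.random.default_rng(3)
      u, v, w = rng.standard_normal((3, 16, 16, 16))
      print(smagorinsky_nu_t(u, v, w, dx=0.1).shape)    # one nu_t value per cell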

  10. Assessment of dynamic closure for premixed combustion large eddy simulation

    NASA Astrophysics Data System (ADS)

    Langella, Ivan; Swaminathan, Nedunchezhian; Gao, Yuan; Chakraborty, Nilanjan

    2015-09-01

    Turbulent piloted Bunsen flames of stoichiometric methane-air mixtures are computed using the large eddy simulation (LES) paradigm involving an algebraic closure for the filtered reaction rate. This closure involves the filtered scalar dissipation rate of a reaction progress variable. The model for this dissipation rate involves a parameter βc representing the flame front curvature effects induced by turbulence, chemical reactions, molecular dissipation, and their interactions at the sub-grid level, suggesting that this parameter may vary with filter width, i.e., be scale-dependent. Thus, it would be ideal to evaluate this parameter dynamically in the LES. A procedure for this evaluation is discussed and assessed using direct numerical simulation (DNS) data and LES calculations. The probability density functions of βc obtained from the DNS and LES calculations are very similar when the turbulent Reynolds number is sufficiently large and when the filter width normalised by the laminar flame thermal thickness is larger than unity. Results obtained using a constant (static) value for this parameter are also used for comparative evaluation. The detailed discussion presented in this paper suggests that the dynamic procedure works well, and physical insights and reasoning are provided to explain the observed behaviour.

  11. A Method for Large Eddy Simulation of Acoustic Combustion Instabilities

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Pierce, Charles; Moin, Parviz

    2002-11-01

    A method for performing Large Eddy Simulation of acoustic combustion instabilities is presented. By extending the low Mach number pressure correction method to the case of compressible flow, a numerical method is developed in which the Poisson equation for pressure is replaced by a Helmholtz equation. The method avoids the acoustic CFL condition by using implicit time advancement, leading to large efficiency gains at low Mach number. The method also avoids artificial damping of acoustic waves. The numerical method is attractive for the simulation of acoustic combustion instabilities, since these flows are typically at low Mach number, and the acoustic frequencies of interest are usually low. Both of these characteristics suggest the use of larger time steps than those allowed by an acoustic CFL condition. The turbulent combustion model used is the Combined Conserved Scalar/Level Set Flamelet model of Duchamp de Lageneste and Pitsch for partially premixed combustion. Comparison of LES results to the experiments of Besson et al. will be presented.

  12. A Method for Large Eddy Simulation of Acoustic Combustion Instabilities

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Moin, Parviz

    2003-11-01

    A method for performing Large Eddy Simulation of acoustic combustion instabilities is presented. By extending the low Mach number pressure correction method to the case of compressible flow, a numerical method is developed in which the Poisson equation for pressure is replaced by a Helmholtz equation. The method avoids the acoustic CFL condition by using implicit time advancement, leading to large efficiency gains at low Mach number. The method also avoids artificial damping of acoustic waves. The numerical method is attractive for the simulation of acoustic combustion instabilities, since these flows are typically at low Mach number, and the acoustic frequencies of interest are usually low. Additionally, new boundary conditions based on the work of Poinsot and Lele have been developed to model the acoustic effect of a long channel upstream of the computational inlet, thus avoiding the need to include such a channel in the computational domain. The turbulent combustion model used is the Level Set model of Duchamp de Lageneste and Pitsch for premixed combustion. Comparison of LES results to the reacting experiments of Besson et al. will be presented.

  13. Large-eddy simulation of the very stable boundary layer

    NASA Astrophysics Data System (ADS)

    Chinita, M. J.; Matheou, G.

    2016-12-01

    The stable boundary layer is ubiquitous; it typically forms at night when the ground radiatively cools, and in polar regions throughout the day. Stable stratification and the associated reduction in the energetic scales, in combination with the large anisotropy of turbulent motions, challenge numerical models. This modeling difficulty also affects large-eddy simulation (LES) methods, leading to scarce LES results for very stable conditions. In contrast, the numerical weather prediction (NWP) of convective flows has greatly benefited from the ample availability of high quality LES data. In order to overcome these limitations, a novel LES model setup is developed to enable the modeling of very stable boundary layers. A series of Ekman layer-type boundary layers at various surface cooling rates, geostrophic winds and latitudes (rotation rates) is presented. A surface temperature condition is applied in the LES. The surface heat flux is dynamically computed by resolving the surface layer, since the often-used Monin-Obukhov similarity theory cannot represent very stable conditions. Depending on the conditions, the LES gracefully transitions to a direct numerical simulation (DNS) where the flow becomes fully resolved. Two stability regimes can be discerned based on vertical profiles of the Richardson number. Overall, the model predicts that turbulence is very resilient with respect to stability. Temperature and velocity fluctuations persist even at high Richardson numbers. The nature of the fluctuations, i.e., due to turbulence/overturning or waves, is discussed. Scaling relations and spectra are also presented and discussed.

  14. Large eddy simulation of a pumped-storage reservoir

    NASA Astrophysics Data System (ADS)

    Launay, Marina; Leite Ribeiro, Marcelo; Roman, Federico; Armenio, Vincenzo

    2016-04-01

    The last decades have seen an increasing number of pumped-storage hydropower projects all over the world. Pumped-storage schemes move water between two reservoirs located at different elevations to store energy and to generate electricity following the electricity demand. The reservoirs can thus be subject to significant water-level variations occurring at the daily scale. These new cycles lead to changes in the hydraulic behaviour of the reservoirs. Sediment dynamics and sediment budgets are modified, sometimes inducing problems of erosion and deposition within the reservoirs. With the growth of computational power, the use of numerical techniques has become popular for the study of environmental processes. Among numerical techniques, Large Eddy Simulation (LES) has arisen as an alternative tool for problems characterized by complex physics and geometries. This work uses the LES-COAST code, an LES model under development in the framework of the Seditrans Project, for the simulation of an upper Alpine reservoir of a pumped-storage scheme. Simulations consider the filling (pump mode) and emptying (turbine mode) of the reservoir. The hydraulic results give a better understanding of the processes occurring within the reservoir. They are used for an assessment of the sediment transport processes and of their consequences.

  15. Large eddy simulation of unsteady lean stratified premixed combustion

    SciTech Connect

    Duwig, C.; Fureby, C.

    2007-10-15

    Premixed turbulent flame-based technologies are rapidly growing in importance, with applications to modern clean combustion devices for both power generation and aeropropulsion. However, the gain in decreasing harmful emissions might be canceled by rising combustion instabilities. Unwanted unsteady flame phenomena that might even destroy the whole device have been widely reported and are the subject of intensive studies. In the present paper, we use unsteady numerical tools for simulating an unsteady and well-documented flame. Computations were performed for nonreacting, perfectly premixed and stratified premixed cases using two different numerical codes and different large-eddy-simulation-based flamelet models. Nonreacting simulations are shown to agree well with experimental data, with the LES results capturing the mean features (symmetry breaking) as well as the fluctuation level of the turbulent flow. For reacting cases, the uncertainty induced by the time-averaging technique limited the comparisons. Given an estimate of the uncertainty, the numerical results were found to reproduce the experimental data well, in terms both of mean flow field and of fluctuation levels. In addition, it was found that despite relying on different assumptions/simplifications, both numerical tools lead to similar predictions, giving confidence in the results. Moreover, we studied the flame dynamics, and particularly the response to a periodic pulsation. We found that above a certain excitation level, the flame dynamics change and become rather insensitive to the excitation/instability amplitude. Conclusions regarding the self-growth of thermoacoustic waves are drawn.

  16. Scale-Similar Models for Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Sarghini, F.

    1999-01-01

    Scale-similar models employ multiple filtering operations to identify the smallest resolved scales, which have been shown to be the most active in the interaction with the unresolved subgrid scales. They do not assume that the principal axes of the strain-rate tensor are aligned with those of the subgrid-scale (SGS) stress tensor, and they allow the explicit calculation of the SGS energy. They can provide backscatter in a numerically stable and physically realistic manner, and predict SGS stresses in regions that are well correlated with the locations where large Reynolds stress occurs. In this paper, eddy-viscosity and mixed models, which include an eddy-viscosity part as well as a scale-similar contribution, are applied to the simulation of two flows, a high Reynolds number plane channel flow and a three-dimensional, nonequilibrium flow. The results show that simulations without models or with the Smagorinsky model are unable to predict nonequilibrium effects. Dynamic models provide an improvement of the results: the adjustment of the coefficient results in more accurate prediction of the perturbation from equilibrium. The Lagrangian-ensemble approach [Meneveau et al., J. Fluid Mech. 319, 353 (1996)] is found to be very beneficial. Models that included a scale-similar term and a dissipative one, as well as the Lagrangian ensemble averaging, gave results in the best agreement with the direct simulation and experimental data.
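
    The scale-similar ingredient is compact to state: test-filter the resolved field and form the generalized second moment. The top-hat filter and random test fields below are illustrative assumptions, not the paper's configuration.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def scale_similar_stress(u, v, size=3):
          """Scale-similar SGS stress component tau = bar(u*v) - bar(u)*bar(v),
          with bar() a top-hat test filter on a periodic grid (Bardina-type)."""
          bar = lambda f: uniform_filter(f, size=size, mode="wrap")
          return bar(u * v) - bar(u) * bar(v)

      rng = np.random.default_rng(4)
      u, v = rng.standard_normal((2, 32, 32, 32))
      print(scale_similar_stress(u, v).mean())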

  17. A Large Eddy Simulation Study for upstream wind energy conditioning

    NASA Astrophysics Data System (ADS)

    Sharma, V.; Calaf, M.; Parlange, M. B.

    2013-12-01

    The wind energy industry is increasingly focusing on optimal power extraction strategies based on layout design of wind farms and yaw alignment algorithms. Recent field studies by Mikkelsen et al. (Wind Energy, 2013) have explored the possibility of using wind lidar technology installed at hub height to anticipate incoming wind direction and strength for optimizing yaw alignment. In this work we study the benefits of using remote sensing technology for predicting the incoming flow by using large eddy simulations of a wind farm. The wind turbines are modeled using the classic actuator disk concept with rotation, together with a new algorithm that permits the turbines to adapt to varying flow directions. This allows for simulations of a more realistic atmospheric boundary layer driven by a time-varying geostrophic wind. Various simulations are performed to investigate possible improvement in power generation by utilizing upstream data. Specifically, yaw-correction of the wind-turbine is based on spatio-temporally averaged wind values at selected upstream locations. Velocity and turbulence intensity are also considered at those locations. A base case scenario with the yaw alignment varying according to wind data measured at the wind turbine's hub is also used for comparison. This reproduces the present state of the art where wind vanes and cup anemometers installed behind the rotor blades are used for alignment control.

  18. Unsteady RANS and Large Eddy simulations of multiphase diesel injection

    NASA Astrophysics Data System (ADS)

    Philipp, Jenna; Green, Melissa; Akih-Kumgeh, Benjamin

    2015-11-01

    Unsteady Reynolds Averaged Navier-Stokes (URANS) and Large Eddy Simulations (LES) of two-phase flow and evaporation of high pressure diesel injection into a quiescent, high temperature environment are investigated. Unsteady RANS and LES are turbulent flow simulation approaches used to determine complex flow fields. The latter allows for more accurate predictions of complex phenomena such as turbulent mixing and the physico-chemical processes associated with diesel combustion. In this work we investigate a high pressure diesel injection using the Euler-Lagrange method for multiphase flows as implemented in the Star-CCM+ CFD code. A dispersed liquid phase is represented by Lagrangian particles while the multi-component gas phase is solved using an Eulerian method. Results obtained from the two approaches are compared with respect to spray penetration depth and air entrainment. They are also compared with experimental data taken from the Sandia Engine Combustion Network for ``Spray A''. Characteristics of primary and secondary atomization are qualitatively evaluated for all simulation modes.

  1. Large eddy simulations of a turbulent thermal plume

    NASA Astrophysics Data System (ADS)

    Yan, Zhenghua H.

    2007-04-01

    Large eddy simulations of a three-dimensional turbulent thermal plume in an open environment have been carried out using the self-developed parallel computational fluid dynamics code SMAFS (smoke movement and flame spread) to study the thermal plume's dynamics, including its puffing, self-preserving behaviour and air entrainment. In the simulations, the sub-grid stress was modeled using both the standard Smagorinsky model and the buoyancy-modified Smagorinsky model, which were compared. The sub-grid scale (SGS) scalar flux in the filtered enthalpy transport equation was modeled based on a simple gradient transport hypothesis with a constant SGS Prandtl number. The effects of the Smagorinsky model constant and the SGS Prandtl number were examined. The computational results were compared with experimental measurements, thermal plume theory and empirical correlations, showing good agreement. It is found that both the buoyancy modification and the SGS turbulent Prandtl number have little influence on the simulation. However, the SGS model constant C_s has a significant effect on the prediction of plume spreading, although it does not much affect the prediction of puffing.

  2. Large Eddy Simulations of Colorless Distributed Combustion Systems

    NASA Astrophysics Data System (ADS)

    Abdulrahman, Husam F.; Jaberi, Farhad; Gupta, Ashwani

    2014-11-01

    The development of efficient and low-emission colorless distributed combustion (CDC) systems for gas turbine applications requires careful examination of the role of various flow and combustion parameters. Numerical simulations of CDC in a laboratory-scale combustor have been conducted to carefully examine the effects of these parameters on the CDC. The computational model is based on a hybrid modeling approach combining large eddy simulation (LES) with the filtered mass density function (FMDF) equations, solved with high order numerical methods and complex chemical kinetics. The simulated combustor operates on the principle of high temperature air combustion (HiTAC) and has been shown to significantly reduce the NOx and CO emissions while improving the reaction pattern factor and stability, without using any flame stabilizer and with low pressure drop and noise. The focus of the current work is to investigate the mixing of air and hydrocarbon fuels and the non-premixed and premixed reactions within the combustor by the LES/FMDF with reduced chemical kinetic mechanisms, for the same flow conditions and configurations investigated experimentally. The main goal is to develop better CDC systems with higher mixing and efficiency, ultra-low emission levels and optimum residence time. The computational results establish the consistency and reliability of the LES/FMDF approach and its Lagrangian-Eulerian numerical methodology.

  3. Surrogate population models for large-scale neural simulations.

    PubMed

    Tripp, Bryan P

    2015-06-01

    Because different parts of the brain have rich interconnections, it is not possible to model small parts realistically in isolation. However, it is also impractical to simulate large neural systems in detail. This article outlines a new approach to multiscale modeling of neural systems that involves constructing efficient surrogate models of populations. Given a population of neuron models with correlated activity and with specific, nonrandom connections, a surrogate model is constructed in order to approximate the aggregate outputs of the population. The surrogate model requires less computation than the neural model, but it has a clear and specific relationship with the neural model. For example, approximate spike rasters for specific neurons can be derived from a simulation of the surrogate model. This article deals specifically with neural engineering framework (NEF) circuits of leaky integrate-and-fire point neurons. Weighted sums of spikes are modeled by interpolating over latent variables in the population activity, and linear filters operate on Gaussian random variables to approximate spike-related fluctuations. It is found that the surrogate models can often closely approximate network behavior with orders-of-magnitude reduction in computational demands, although there are certain systematic differences between the spiking and surrogate models. Since individual spikes are not modeled, some simulations can be performed with much longer step sizes (e.g., 20 ms). Possible extensions to non-NEF networks and to more complex neuron models are discussed.
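
    In the same spirit, though not the article's actual construction, a surrogate output can be sketched as a deterministic function of a latent variable plus low-pass-filtered Gaussian noise standing in for spike-related fluctuations; the gain, time constant, and noise level below are invented for illustration.

      import numpy as np

      def surrogate_output(x, dt=1e-3, tau=0.02, gain=1.0, noise_sd=0.1, rng=None):
          """Deterministic response to the latent variable x(t) plus a first-order
          filtered white noise mimicking spike-related fluctuations."""
          rng = rng or np.random.default_rng()
          y, f, alpha = np.empty_like(x), 0.0, dt / tau
          for i, xi in enumerate(x):
              f += alpha * (-f + noise_sd * rng.standard_normal() / np.sqrt(dt))
              y[i] = gain * np.tanh(xi) + f     # mean response + fluctuation
          return y

      t = np.arange(0.0, 1.0, 1e-3)
      print(surrogate_output(np.sin(2 * np.pi * t)).shape)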

  4. Upgrade Of ESA Large Space Simulator For Providing Mercury Environment

    NASA Astrophysics Data System (ADS)

    Messing, Rene; Popovitch, Alexandre; Tavares, Andre; Sablerolle, Steven

    2012-07-01

    When orbiting Mercury, the BepiColombo spacecraft will have to survive direct sunlight ten times more intense than in the Earth's vicinity, and the infrared radiation from the planet's surface, which exceeds 400°C at its hottest point. In order to simulate this environment for testing the spacecraft in thermal conditions as representative as possible of those it will meet in Mercury's orbit, the ESTEC Large Space Simulator (LSS) had to be modified to provide a 10 Solar Constant (SC) illumination. The following test facility adaptations are described:
    - Investigate powerful lamps
    - Configure the LSS mirror from a 6 m to a 2.7 m-diameter light beam
    - Develop a fast flux mapping system
    - Procure a 10 SC absolute radiometer standard
    - Replace the sun simulator flux control sensors
    - Add a dedicated shroud to absorb the high flux
    - Add a levelling table to adjust heat pipes
    - Add infra-red cameras for contactless high temperature measurements.
    The facility performance during the test of one of the BepiColombo modules is reviewed.

  5. Large-timestep mover for particle simulations of arbitrarily magnetized species

    SciTech Connect

    Cohen, R.H.; Friedman, A.; Grote, D.P.; Vay, J-L.

    2007-03-26

    For self-consistent ion-beam simulations including electron motion, it is desirable to be able to follow electron dynamics accurately without being constrained by the electron cyclotron timescale. To this end, we have developed a particle advance that interpolates between full particle dynamics and drift motion. By making a proper choice of interpolation parameter, simulation particles experience physically correct parallel dynamics, drift motion, and gyroradius when the timestep is large compared to the cyclotron period, though the effective gyrofrequency is artificially low; in the opposite timestep limit, the method approaches a conventional Boris particle push. By combining this scheme with a Poisson solver that includes an interpolated form of the polarization drift in the dielectric response, the mover's utility can be extended to higher-density problems where the plasma frequency of the species being advanced exceeds its cyclotron frequency. We describe a series of tests of the mover and its application to the simulation of electron clouds in heavy-ion accelerators.
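
    The conventional limit of such a mover is the standard Boris push, which the scheme recovers at small timesteps. A textbook Boris step is sketched below in normalized units; the interpolation toward drift kinematics at large timesteps is the paper's contribution and is not reproduced here.

      import numpy as np

      def boris_push(x, v, E, B, q_m, dt):
          """One leapfrog step: half electric kick, magnetic rotation, half kick."""
          v_minus = v + 0.5 * q_m * E * dt
          t = 0.5 * q_m * B * dt                    # rotation vector
          s = 2.0 * t / (1.0 + np.dot(t, t))
          v_prime = v_minus + np.cross(v_minus, t)
          v_plus = v_minus + np.cross(v_prime, s)   # magnitude-preserving rotation
          v_new = v_plus + 0.5 * q_m * E * dt
          return x + v_new * dt, v_new

      x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
      E, B = np.zeros(3), np.array([0.0, 0.0, 1.0])
      for _ in range(100):
          x, v = boris_push(x, v, E, B, q_m=1.0, dt=0.1)
      print(np.linalg.norm(v))                      # |v| conserved (~1.0) since E = 0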

  6. The large-scale properties of simulated cosmological magnetic fields

    NASA Astrophysics Data System (ADS)

    Marinacci, Federico; Vogelsberger, Mark; Mocz, Philip; Pakmor, Rüdiger

    2015-11-01

    We perform uniformly sampled large-scale cosmological simulations including magnetic fields with the moving mesh code AREPO. We run two sets of MHD simulations: one including adiabatic gas physics only; the other featuring the fiducial feedback model of the Illustris simulation. In the adiabatic case, the magnetic field amplification follows the B ∝ ρ2/3 scaling derived from `flux-freezing' arguments, with the seed field strength providing an overall normalization factor. At high baryon overdensities the amplification is enhanced by shear flows and turbulence. Feedback physics and the inclusion of radiative cooling change this picture dramatically. In haloes, gas collapses to much larger densities and the magnetic field is amplified strongly and to the same maximum intensity irrespective of the initial seed field of which any memory is lost. At lower densities a dependence on the seed field strength and orientation, which in principle can be used to constrain models of cosmic magnetogenesis, is still present. Inside the most massive haloes magnetic fields reach values of ˜ 10-100 μG, in agreement with galaxy cluster observations. The topology of the field is tangled and gives rise to rotation measure signals in reasonable agreement with the observations. However, the rotation measure signal declines too rapidly towards larger radii as compared to observational data.

  7. Large eddy simulation of mechanical mixing in anaerobic digesters.

    PubMed

    Wu, Binxin

    2012-03-01

A comprehensive study of anaerobic digestion requires an advanced turbulence modelling technique to accurately predict mixing flow patterns, because the digestion process, which involves mass transfer between anaerobes and their substrates, depends primarily on detailed information about the fine structure of turbulence in the digesters. This study presents a large eddy simulation (LES) of mechanical agitation of non-Newtonian fluids in anaerobic digesters, in which the sliding mesh method is used to characterize the impeller rotation. The three subgrid scale (SGS) models investigated are: (i) the Smagorinsky-Lilly model, (ii) the wall-adapting local eddy-viscosity model, and (iii) the kinetic energy transport (KET) model. The simulation results show that the three SGS models produce very similar flow fields. A comparison of the simulated and measured axial velocities indicates that the LES profile shapes are in general agreement with the experimental data but differ markedly in velocity magnitude. A check of impeller power and flow numbers demonstrates that all the SGS models give excellent predictions, with the KET model performing best. Moreover, the performance of six Reynolds-averaged Navier-Stokes turbulence models is assessed and compared with the LES results.

  8. Large-Eddy simulation of pulsatile blood flow.

    PubMed

    Paul, Manosh C; Mamun Molla, Md; Roditi, Giles

    2009-01-01

Large-Eddy simulation (LES) is performed to study pulsatile blood flow through a 3D model of arterial stenosis. The model is chosen as a simple channel with a biological-type stenosis formed on the top wall. A sinusoidal non-additive type pulsation is assumed at the inlet of the model to generate time-dependent oscillating flow in the channel, and a Reynolds number of 1200, based on the channel height and the bulk velocity, is chosen in the simulations. We investigate in detail the transition-to-turbulence phenomena of the non-additive pulsatile blood flow downstream of the stenosis. Results show that the high level of flow recirculation associated with complex patterns of transient blood flow makes a significant contribution to the generation of the turbulent fluctuations found in the post-stenosis region. The importance of using LES in modelling pulsatile blood flow is also assessed in the paper through the prediction of its sub-grid scale contributions. In addition, some important results on the flow physics are obtained from the simulations; these are presented in the paper in terms of blood flow velocity, pressure distribution, vortices, shear stress, turbulent fluctuations and energy spectra, along with their relevance to the associated medical pathophysiology.

  9. Shuttle mission simulator baseline definition report, volume 1

    NASA Technical Reports Server (NTRS)

    Burke, J. F.; Small, D. E.

    1973-01-01

    A baseline definition of the space shuttle mission simulator is presented. The subjects discussed are: (1) physical arrangement of the complete simulator system in the appropriate facility, with a definition of the required facility modifications, (2) functional descriptions of all hardware units, including the operational features, data demands, and facility interfaces, (3) hardware features necessary to integrate the items into a baseline simulator system to include the rationale for selecting the chosen implementation, and (4) operating, maintenance, and configuration updating characteristics of the simulator hardware.

  10. Evaluation of Bacillus oleronius as a Biological Indicator for Terminal Sterilization of Large-Volume Parenterals.

    PubMed

    Izumi, Masamitsu; Fujifuru, Masato; Okada, Aki; Takai, Katsuya; Takahashi, Kazuhiro; Udagawa, Takeshi; Miyake, Makoto; Naruyama, Shintaro; Tokuda, Hiroshi; Nishioka, Goro; Yoden, Hikaru; Aoki, Mitsuo

    2016-01-01

In the production of large-volume parenterals in Japan, equipment and devices such as tanks, pipework, and filters used in production processes are exhaustively cleaned and sterilized, and the cleanliness of water for injection, drug materials, packaging materials, and manufacturing areas is well controlled. In this environment, the bioburden is relatively low and less heat-resistant than the microorganisms frequently used as biological indicators, such as Geobacillus stearothermophilus (ATCC 7953) and Bacillus subtilis 5230 (ATCC 35021). Consequently, the majority of large-volume parenteral solutions in Japan are manufactured under low-heat sterilization conditions of F0 < 2 min, so that loss of clarity of the solutions and formation of degradation products of their constituents are minimized. Bacillus oleronius (ATCC 700005) is listed as a biological indicator in "Guidance on the Manufacture of Sterile Pharmaceutical Products Produced by Terminal Sterilization" (guidance in Japan, issued in 2012). In this study, we investigated whether B. oleronius is an appropriate biological indicator of the efficacy of low-heat, moist-heat sterilization of large-volume parenterals. Specifically, we investigated the spore-forming ability of this microorganism in various cultivation media and measured the D-values and z-values as parameters of heat resistance. The D-values and z-values changed depending on the constituents of the large-volume parenteral products. Also, the spores of B. oleronius showed a moist-heat resistance similar to or greater than that of many of the spore-forming organisms isolated from Japanese parenteral manufacturing processes. Taken together, these results indicate that B. oleronius is suitable as a biological indicator for sterility assurance of large-volume parenteral solutions subjected to low-heat, moist-heat terminal sterilization.
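
    For readers unfamiliar with the heat-resistance parameters mentioned above, the sketch below shows the standard D-value/z-value arithmetic; the function names and the numbers are illustrative assumptions, not values from the study.

```python
def d_value_at(T, D_ref, T_ref=121.1, z=10.0):
    """D-value (minutes for a 10-fold spore reduction) at temperature T (degC),
    given the D-value at a reference temperature and the z-value (the
    temperature change that alters D tenfold). Standard definitions."""
    return D_ref * 10.0 ** ((T_ref - T) / z)

def log_reduction(t, D):
    """Log10 reduction in viable spore count after holding time t (min)."""
    return t / D

# Illustrative only: a spore with D(121.1 degC) = 0.5 min and z = 10 degC,
# held for 2 min at 115 degC.
D115 = d_value_at(115.0, D_ref=0.5)
print(f"D115 = {D115:.2f} min; log reduction = {log_reduction(2.0, D115):.2f}")
```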

  11. Large-Eddy Simulation Code Developed for Propulsion Applications

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2003-01-01

A large-eddy simulation (LES) code was developed at the NASA Glenn Research Center to provide more accurate and detailed computational analyses of propulsion flow fields. The accuracy of current computational fluid dynamics (CFD) methods is limited primarily by their inability to properly account for the turbulent motion present in virtually all propulsion flows. Because the efficiency and performance of a propulsion system are highly dependent on the details of this turbulent motion, it is critical for CFD to accurately model it. The LES code promises to give new CFD simulations an advantage over older methods by directly computing the large turbulent eddies, to correctly predict their effect on a propulsion system. Turbulent motion is a random, unsteady process whose behavior is difficult to predict through computer simulations. Current methods are based on Reynolds-Averaged Navier-Stokes (RANS) analyses that rely on models to represent the effect of turbulence within a flow field. The quality of the results depends on the quality of the model and its applicability to the type of flow field being studied. LES promises to be more accurate because it drastically reduces the amount of modeling necessary. It is the logical step toward improving turbulent flow predictions. In LES, the large-scale dominant turbulent motion is computed directly, leaving only the less significant small turbulent scales to be modeled. As part of the prediction, the LES method generates detailed information on the turbulence itself, providing important information for other applications, such as aeroacoustics. The LES code developed at Glenn for propulsion flow fields is being used to both analyze propulsion system components and test improved LES algorithms (subgrid-scale models, filters, and numerical schemes). The code solves the compressible Favre-filtered Navier-Stokes equations using an explicit fourth-order accurate numerical scheme; it incorporates a compressible form of

  12. Anatomically Detailed and Large-Scale Simulations Studying Synapse Loss and Synchrony Using NeuroBox

    PubMed Central

    Breit, Markus; Stepniewski, Martin; Grein, Stephan; Gottmann, Pascal; Reinhardt, Lukas; Queisser, Gillian

    2016-01-01

The morphology of neurons and networks plays an important role in processing electrical and biochemical signals. Based on neuronal reconstructions, which are becoming abundantly available through databases such as NeuroMorpho.org, numerical simulations of Hodgkin-Huxley-type equations, coupled to biochemical models, can be performed in order to systematically investigate the influence of cellular morphology and of the connectivity pattern in networks on the underlying function. Development in the area of synthetic neural network generation and morphology reconstruction from microscopy data has brought forth the software tool NeuGen. Coupling this morphology data (from databases, synthetic generation, or reconstruction) to the simulation platform UG 4 (which harbors a neuroscientific portfolio) and VRL-Studio has brought forth the extendible toolbox NeuroBox. NeuroBox allows users to perform numerical simulations on hybrid-dimensional morphology representations. The code basis is designed in a modular way, so that, e.g., new channel or synapse types can be added to the library. Workflows can be specified through scripts or through the VRL-Studio graphical workflow representation. Third-party tools, such as ImageJ, can be added to NeuroBox workflows. In this paper, NeuroBox is used to study the electrical and biochemical effects of synapse loss vs. synchrony in neurons, to investigate large morphology data sets within detailed biophysical simulations, and to demonstrate the capability of utilizing high-performance computing infrastructure for large-scale network simulations. Using new synapse distribution methods and Finite Volume based numerical solvers for compartment-type models, our results demonstrate how an increase in synaptic synchronization can compensate for synapse loss at the electrical and calcium level, and how detailed neuronal morphology can be integrated into large-scale network simulations. PMID:26903818

  13. Understanding Subcutaneous Tissue Pressure for Engineering Injection Devices for Large-Volume Protein Delivery.

    PubMed

    Doughty, Diane V; Clawson, Corbin Z; Lambert, William; Subramony, J Anand

    2016-07-01

Subcutaneous injection allows for self-administration of monoclonal antibodies using prefilled syringes, autoinjectors, and on-body injector devices. However, subcutaneous injections are typically limited to 1 mL because of concerns about injection pain arising from volume, viscosity, and formulation characteristics. Back pressure can serve as an indicator of changes in subcutaneous mechanical properties leading to pain during injection. The purpose of this study was to investigate subcutaneous pressures and injection site reactions as a function of injection volume and flow rate. A pressure sensor in the fluid path recorded subcutaneous pressures in the abdomen of Yorkshire swine. The subcutaneous tissue accommodates large-volume injections with little back pressure as long as low flow rates are used. A 1 mL injection in 10 seconds (360 mL/h flow rate) generated a pressure of 24.0 ± 3.4 kPa, whereas 10 mL delivered in 10 minutes (60 mL/h flow rate) generated a pressure of 7.4 ± 7.8 kPa. After the injection, the pressure decays to 0 over several seconds. The subcutaneous pressures and mechanical strain increased with increasing flow rate but not with increasing dose volume. These data are useful for the design of injection devices that mitigate back pressure and pain during subcutaneous large-volume injection. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  14. Three-dimensional large eddy simulation for jet aeroacoustics

    NASA Astrophysics Data System (ADS)

    Uzun, Ali

Future design of aircraft engines with low jet noise emissions undoubtedly needs a better understanding of noise generation in turbulent jets. Such an understanding, in turn, demands very reliable prediction tools. This study is therefore focused on developing a Computational Aeroacoustics (CAA) methodology that couples the near-field unsteady flow field data computed by a 3-D Large Eddy Simulation (LES) code with various integral acoustic formulations for the far-field noise prediction of turbulent jets. Turbulent jet simulations were performed at various Reynolds numbers. Comparisons of the jet mean flow, turbulence statistics, and jet aeroacoustics results with experimental data from jets at similar flow conditions showed reasonable agreement. The actual jet nozzle geometry was not included in the present simulations in order to keep the computational cost at manageable levels; the jet shear layers downstream of the nozzle exit were therefore modelled in an ad hoc fashion. As also observed by other researchers, the results obtained in the simulations were somewhat sensitive to the way the inflow forcing was done. The study of the effects of the eddy-viscosity-based Smagorinsky subgrid-scale (SGS) model on noise predictions shows that the Smagorinsky model suppresses the resolved-scale high-frequency noise. Simulations with filtering only suggest that treating the spatial filter as an implicit SGS model might be a good alternative. To the best of our knowledge, Lighthill's acoustic analogy was applied to a reasonably high Reynolds number jet for the first time in this study. A database larger than 1 terabyte (TB) in size was post-processed using 1160 processors in parallel on a modern supercomputing platform for this purpose. It is expected that the current CAA methodology will yield better jet noise predictions when improved SGS models for both turbulence and high-frequency noise are incorporated into the LES code and when the

  15. Program to Optimize Simulated Trajectories (POST). Volume 1: Formulation manual

    NASA Technical Reports Server (NTRS)

    Brauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.

    1975-01-01

    A general purpose FORTRAN program for simulating and optimizing point mass trajectories (POST) of aerospace vehicles is described. The equations and the numerical techniques used in the program are documented. Topics discussed include: coordinate systems, planet model, trajectory simulation, auxiliary calculations, and targeting and optimization.

  16. The terminal area simulation system. Volume 2: Verification cases

    NASA Technical Reports Server (NTRS)

    Proctor, F. H.

    1987-01-01

Numerical simulations of five case studies are presented and compared with available data in order to verify the three-dimensional version of the Terminal Area Simulation System (TASS). A spectrum of convective storm types is selected for the case studies. Included are: a High-Plains supercell hailstorm, a small and relatively short-lived High-Plains cumulonimbus, the convective storm which produced the 2 August 1985 DFW microburst, a South Florida convective complex, and a tornadic Oklahoma thunderstorm. For each of the cases the model results compared reasonably well with observed data. In the simulations of the supercell storms, many of their characteristic features were modeled, such as the hook echo, BWER, mesocyclone, gust fronts, giant persistent updraft, wall cloud, flanking-line towers, anvil and radar reflectivity overhang, and rightward veering of the storm propagation. In the simulation of the tornadic storm, a horseshoe-shaped updraft configuration and cyclic changes in storm intensity and structure were noted. The simulation of the DFW microburst agreed remarkably well with the sparse observed data: the simulated outflow rapidly expanded in a nearly symmetrical pattern and was associated with a ring vortex. A South Florida convective complex was simulated and contained updrafts and downdrafts in the form of discrete bubbles. The numerical simulations, in all cases, remained stable and bounded with no anomalous trends.

  17. Large scale simulation of red blood cell aggregation in shear flows.

    PubMed

    Xu, Dong; Kaliviotis, Efstathios; Munjiza, Ante; Avital, Eldad; Ji, Chunning; Williams, John

    2013-07-26

Aggregation of highly deformable red blood cells (RBCs) significantly affects blood flow in the human circulatory system. To investigate the effect of deformation and aggregation of RBCs in blood flow, a mathematical model has been established by coupling the interaction between the fluid and the deformable solids. The model includes a three-dimensional finite volume method solver for incompressible viscous flows, the combined finite-discrete element method for computing the deformation of the RBCs, a JKR (Johnson, Kendall and Roberts) adhesion model (Johnson et al., 1971) to account for the adhesion forces between different RBCs, and an iterative direct-forcing immersed boundary method to couple the fluid-solid interactions. The flow of 49,512 RBCs at 45% concentration under the influence of aggregating forces was examined, improving on existing knowledge of simulating the flow and structural characteristics of blood at a large scale: previous studies of this particular issue were restricted to simulating the flow of 13,000 aggregative ellipsoidal particles at a 10% concentration. The results are in excellent agreement with experimental studies. More specifically, both the experimental and the simulation results show uniform RBC distributions under high shear rates (60-100/s), whereas large aggregation structures were observed under a lower shear rate of 10/s. The statistical analysis of the simulation data also shows that the shear rate has a significant influence on both the flow velocity profiles and the frequency distribution of the RBC orientation angles.
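
    For reference, the JKR theory cited above predicts a pull-off (maximum adhesion) force of the standard textbook form

$$F_{\mathrm{pull\text{-}off}} = \frac{3}{2}\,\pi w R,$$

    where $w$ is the work of adhesion per unit area and $R$ is the reduced radius of the contacting surfaces ($1/R = 1/R_1 + 1/R_2$); the parameter values appropriate to RBC membranes are those of the study and are not reproduced here.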

  18. Nesting Large-Eddy Simulations Within Mesoscale Simulations for Wind Energy Applications

    NASA Astrophysics Data System (ADS)

    Lundquist, J. K.; Mirocha, J. D.; Chow, F. K.; Kosovic, B.; Lundquist, K. A.

    2008-12-01

With increasing demand for more accurate atmospheric simulations for wind turbine micrositing, for operational wind power forecasting, and for more reliable turbine design, simulations of atmospheric flow with resolution of tens of meters or higher are required. These time-dependent large-eddy simulations (LES) account for complex terrain and resolve individual atmospheric eddies on length scales smaller than turbine blades. These small-domain, high-resolution simulations are possible with a range of commercial and open-source software, including the Weather Research and Forecasting (WRF) model. In addition to "local" sources of turbulence within an LES domain, changing weather conditions outside the domain can also affect the flow, suggesting that a mesoscale model should provide boundary conditions to the large-eddy simulations. Nesting a large-eddy simulation within a mesoscale model requires nuanced representations of turbulence. Our group has improved the WRF model's LES capability by implementing the Nonlinear Backscatter and Anisotropy (NBA) subfilter stress model following Kosović (1997) and an explicit filtering and reconstruction technique to compute the Resolvable Subfilter-Scale (RSFS) stresses (following Chow et al., 2005). We have also implemented an immersed boundary method (IBM) in WRF to accommodate complex terrain. These new models improve WRF's LES capabilities over complex terrain and in stable atmospheric conditions. We demonstrate approaches to nesting LES within a mesoscale simulation for farms of wind turbines in hilly regions. Results are sensitive to the nesting method, indicating that care must be taken to provide appropriate boundary conditions, and to allow adequate spin-up of turbulence in the LES domain. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  19. Large-Eddy Simulation of Maritime Deep Tropical Convection

    NASA Astrophysics Data System (ADS)

    Khairoutdinov, Marat F.; Krueger, Steve K.; Moeng, Chin-Hoh; Bogenschutz, Peter A.; Randall, David A.

    2009-04-01

This study represents an attempt to apply Large-Eddy Simulation (LES) resolution to simulate deep tropical convection in near equilibrium for 24 hours over an area of about 205 × 205 km², which is comparable to that of a typical horizontal grid cell in a global climate model. The simulation is driven by large-scale thermodynamic tendencies derived from mean conditions during the GATE Phase III field experiment. The LES uses 2048 × 2048 × 256 grid points with horizontal grid spacing of 100 m and vertical grid spacing ranging from 50 m in the boundary layer to 100 m in the free troposphere. The simulation reaches a near-equilibrium deep convection regime in 12 hours. The simulated cloud field exhibits a tri-modal vertical distribution of deep, middle and shallow clouds similar to that often observed in the Tropics. A sensitivity experiment in which cold pools are suppressed by switching off the evaporation of precipitation results in much lower amounts of shallow and congestus clouds. Unlike in the benchmark LES, where new deep clouds tend to appear along the edges of spreading cold pools, the deep clouds in the no-cold-pool experiment tend to reappear at the sites of the previous deep clouds and tend to be surrounded by extensive areas of sporadic shallow clouds. The vertical velocity statistics of updraft and downdraft cores below 6 km height are compared to aircraft observations made during GATE. The comparison shows generally good agreement, and strongly suggests that the LES simulation can be used as a benchmark to represent the dynamics of tropical deep convection on scales ranging from large turbulent eddies to mesoscale convective systems. The effect of horizontal grid resolution is examined by running the same case with progressively larger grid sizes of 200, 400, 800, and 1600 m. These runs show reasonable agreement with the benchmark LES in statistics such as convective available potential energy, convective inhibition, cloud fraction

  20. The large volume radiometric calorimeter system: A transportable device to measure scrap category plutonium

    SciTech Connect

    Duff, M.F.; Wetzel, J.R.; Breakall, K.L.; Lemming, J.F.

    1987-01-01

An innovative design concept has been applied to a large volume calorimeter system. The new design permits two measuring cells to fit in a compact, nonevaporative environmental bath. The system is mounted on a cart for transportability. Samples in the power range of 0.50 to 12.0 W can be measured. The calorimeters will receive samples as large as 22.0 cm in diameter by 43.2 cm high, and smaller samples can be measured without lengthening measurement time or increasing measurement error by using specially designed sleeve adapters. This paper describes the design considerations, construction, theory, applications, and performance of the large volume calorimeter system. 2 refs., 5 figs., 1 tab.

  1. Robust large-scale parallel nonlinear solvers for simulations.

    SciTech Connect

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write
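
    A minimal dense-matrix sketch of the 'good' Broyden update discussed in this report follows; the report's limited-memory variant avoids storing the Jacobian approximation explicitly, so this is a small-scale illustration only.

```python
import numpy as np

def broyden(F, x0, tol=1e-10, max_iter=50):
    """Solve F(x) = 0 with Broyden's 'good' method: keep an approximate
    Jacobian J and apply a rank-one secant update, so the true Jacobian
    is never evaluated."""
    x = np.asarray(x0, dtype=float)
    f = F(x)
    J = np.eye(x.size)                       # crude initial Jacobian guess
    for _ in range(max_iter):
        dx = np.linalg.solve(J, -f)          # quasi-Newton step
        x_new = x + dx
        f_new = F(x_new)
        if np.linalg.norm(f_new) < tol:
            return x_new
        # rank-one secant update enforcing J_new @ dx = f_new - f
        J += np.outer((f_new - f) - J @ dx, dx) / np.dot(dx, dx)
        x, f = x_new, f_new
    return x

# Usage: intersect the unit circle with the line x0 = x1.
root = broyden(lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]]),
               [1.0, 0.5])
```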

  2. Large-Scale Numerical Simulation of Fluid Structure Interactions in Low Reynolds Number Flows

    NASA Astrophysics Data System (ADS)

    Eken, Ali; Sahin, Mehmet

    2011-11-01

A fully coupled numerical algorithm has been developed for the numerical simulation of large-scale fluid-structure interaction problems. The incompressible Navier-Stokes equations are discretized using an Arbitrary Lagrangian-Eulerian (ALE) formulation based on the side-centered unstructured finite volume method. Special attention is given to satisfying the discrete continuity equation within each element at the discrete level, as well as the Geometric Conservation Law (GCL). The linear elasticity equations are discretized within the structure domain using the Galerkin finite element method. The resulting algebraic linear equations are solved in a fully coupled form using a monolithic multigrid method. The implementation of the fully coupled iterative solvers is based on the PETSc library for improving the efficiency of the parallel code. The present numerical algorithm is initially validated for a beam in cross flow and then used to simulate the fluid-structure interaction of a membrane-wing micro aerial vehicle (MAV).

  3. BASIC Simulation Programs; Volumes III and IV. Mathematics, Physics.

    ERIC Educational Resources Information Center

    Digital Equipment Corp., Maynard, MA.

    The computer programs presented here were developed as a part of the Huntington Computer Project. They were tested on a Digital Equipment Corporation TSS-8 time-shared computer and run in a version of BASIC. Mathematics and physics programs are presented in this volume. The 20 mathematics programs include ones which review multiplication skills;…

  4. A survey of electric and hybrid vehicles simulation programs. Volume 2: Questionnaire responses

    NASA Technical Reports Server (NTRS)

    Bevan, J.; Heimburger, D. A.; Metcalfe, M. A.

    1978-01-01

    The data received in a survey conducted within the United States to determine the extent of development and capabilities of automotive performance simulation programs suitable for electric and hybrid vehicle studies are presented. The survey was conducted for the Department of Energy by NASA's Jet Propulsion Laboratory. Volume 1 of this report summarizes and discusses the results contained in Volume 2.

  5. WEST-3 wind turbine simulator development. Volume 2: Verification

    NASA Technical Reports Server (NTRS)

    Sridhar, S.

    1985-01-01

The details of a study to validate WEST-3, a new real-time wind turbine simulator developed by Paragon Pacific Inc., are presented in this report. For the validation, the MOD-0 wind turbine was simulated on WEST-3. The simulation results were compared with those obtained from previous MOD-0 simulations and with test data measured during MOD-0 operations. The study was successful in achieving the major objective of proving that WEST-3 yields results which can be used to support a wind turbine development process. The blade bending moments, peak and cyclic, from the WEST-3 simulation correlated reasonably well with the available MOD-0 data. The simulation was also able to predict the resonance phenomena observed during MOD-0 operations. Also presented in the report is a description and solution of a serious numerical instability problem encountered during the study; the problem was caused by the coupling of the rotor and power train models. The results of the study indicate that some parts of the existing WEST-3 simulation model may have to be refined for future work, specifically the aerodynamics and the procedure used to couple the rotor model with the tower and power train models.

  6. Feasibility study for a numerical aerodynamic simulation facility. Volume 1

    NASA Technical Reports Server (NTRS)

    Lincoln, N. R.; Bergman, R. O.; Bonstrom, D. B.; Brinkman, T. W.; Chiu, S. H. J.; Green, S. S.; Hansen, S. D.; Klein, D. L.; Krohn, H. E.; Prow, R. P.

    1979-01-01

    A Numerical Aerodynamic Simulation Facility (NASF) was designed for the simulation of fluid flow around three-dimensional bodies, both in wind tunnel environments and in free space. The application of numerical simulation to this field of endeavor promised to yield economies in aerodynamic and aircraft body designs. A model for a NASF/FMP (Flow Model Processor) ensemble using a possible approach to meeting NASF goals is presented. The computer hardware and software are presented, along with the entire design and performance analysis and evaluation.

  7. Shuttle mission simulator requirements report, volume 1, revision C

    NASA Technical Reports Server (NTRS)

    Burke, J. F.

    1973-01-01

    The contractor tasks required to produce a shuttle mission simulator for training crew members and ground personnel are discussed. The tasks will consist of the design, development, production, installation, checkout, and field support of a simulator with two separate crew stations. The tasks include the following: (1) review of spacecraft changes and incorporation of appropriate changes in simulator hardware and software design, and (2) the generation of documentation of design, configuration management, and training used by maintenance and instructor personnel after acceptance for each of the crew stations.

  8. Direct numerical simulation of scalar transport using unstructured finite-volume schemes

    NASA Astrophysics Data System (ADS)

    Rossi, Riccardo

    2009-03-01

An unstructured finite-volume method for direct and large-eddy simulations of scalar transport in complex geometries is presented and investigated. The numerical technique is based on a three-level fully implicit time advancement scheme and central spatial interpolation operators. The scalar variable at cell faces is obtained by a symmetric central interpolation scheme, which is formally first-order accurate, or by further employing a high-order correction term which leads to formal second-order accuracy irrespective of the underlying grid. In this framework, deferred-correction and slope-limiter techniques are introduced in order to avoid numerical instabilities in the resulting algebraic transport equation. The accuracy and robustness of the code are initially evaluated by means of basic numerical experiments where the flow field is assigned a priori. A direct numerical simulation of turbulent scalar transport in a channel flow is finally performed to validate the numerical technique against a numerical dataset established by a spectral method. In spite of the linear character of the scalar transport equation, the computed statistics and spectra of the scalar field are found to be significantly affected by the spectral properties of the interpolation schemes. Although the results show improved spectral resolution and greater spatial accuracy for the high-order operator in the analysis of basic scalar transport problems, the low-order central scheme is found superior for high-fidelity simulations of turbulent scalar transport.

  9. Large Eddy Simulation for Oscillating Airfoils with Large Pitching and Surging Motions

    NASA Astrophysics Data System (ADS)

    Kocher, Alexander; Cumming, Reed; Tran, Steven; Sahni, Onkar

    2016-11-01

    Many applications of interest involve unsteady aerodynamics due to time varying flow conditions (e.g. in the case of flapping wings, rotorcrafts and wind turbines). In this study, we formulate and apply large eddy simulation (LES) to investigate flow over airfoils at a moderate mean angle of attack with large pitching and surging motions. Current LES methodology entails three features: i) a combined subgrid scale model in the context of stabilized finite element methods, ii) local variational Germano identity (VGI) along with Lagrangian averaging, and iii) arbitrary Lagrangian-Eulerian (ALE) description over deforming unstructured meshes. Several cases are considered with different types of motions including surge only, pitch only and a combination of the two. The flow structures from these cases are analyzed and the numerical results are compared to experimental data when available.

  10. Shuttle mission simulator requirement report, volume 2, revision A

    NASA Technical Reports Server (NTRS)

    Burke, J. F.

    1973-01-01

    The training requirements of all mission phases for crews and ground support personnel are presented. The specifications are given for the design and development of the simulator, data processing systems, engine control, software, and systems integration.

  11. Smoothed particle hydrodynamics method from a large eddy simulation perspective

    NASA Astrophysics Data System (ADS)

    Di Mascio, A.; Antuono, M.; Colagrossi, A.; Marrone, S.

    2017-03-01

    The Smoothed Particle Hydrodynamics (SPH) method, often used for the modelling of the Navier-Stokes equations by a meshless Lagrangian approach, is revisited from the point of view of Large Eddy Simulation (LES). To this aim, the LES filtering procedure is recast in a Lagrangian framework by defining a filter that moves with the positions of the fluid particles at the filtered velocity. It is shown that the SPH smoothing procedure can be reinterpreted as a sort of LES Lagrangian filtering, and that, besides the terms coming from the LES convolution, additional contributions (never accounted for in the SPH literature) appear in the equations when formulated in a filtered fashion. Appropriate closure formulas are derived for the additional terms and a preliminary numerical test is provided to show the main features of the proposed LES-SPH model.
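
    As a concrete reminder of the smoothing operation the paper reinterprets as a Lagrangian LES filter, here is the generic 1-D SPH density summation with a cubic spline kernel (textbook form, not the paper's filtered equations):

```python
import numpy as np

def cubic_spline_W(r, h):
    """Standard 1-D cubic spline kernel with support 2h (Monaghan form)."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)                              # 1-D normalization
    return sigma * np.where(q < 1.0, 1.0 - 1.5*q**2 + 0.75*q**3,
                   np.where(q < 2.0, 0.25*(2.0 - q)**3, 0.0))

def sph_density(x, m, h):
    """Smoothed density at each particle: rho_i = sum_j m_j W(x_i - x_j, h)."""
    dx = x[:, None] - x[None, :]                         # pairwise separations
    return (m[None, :] * cubic_spline_W(dx, h)).sum(axis=1)
```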

  12. Landfill gas production from large landfill simulators. Final report

    SciTech Connect

    Jones, L.W.; Larson, R.J.; Malone, P.G.

    1984-08-01

Two sizes of landfill simulators (test cells), one set containing approximately 320 kg wet weight of municipal solid waste (MSW) and the other containing 2555 kg wet weight of MSW, were used to measure the amount and composition of gases produced from MSW under typical landfill conditions. The relative amounts and gas compositions follow those reported by other investigators. This study demonstrates that the conditions present in the average MSW landfill are not ideal for maximum production of methane, but that large quantities of methane can nevertheless be produced over the active decomposition period of landfilled MSW. Further studies on the effects of environmental and microbial nutritional factors on methane production in landfilled MSW are recommended.

  13. High Speed Jet Noise Prediction Using Large Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Lele, Sanjiva K.

    2002-01-01

Current methods for predicting the noise of high speed jets are largely empirical. These empirical methods are based on jet noise data gathered by varying primarily the jet flow speed and jet temperature for a fixed nozzle geometry. Efforts have been made to incorporate the noise data of co-annular (multi-stream) jets, and the changes associated with forward flight, into these empirical correlations. But ultimately these empirical methods fail to provide suitable guidance in the selection of new, low-noise nozzle designs. This motivates the development of a new class of prediction methods based on computational simulations, in an attempt to remove the empiricism of present-day noise predictions.

  14. Large Eddy Simulation of FDA’s Idealized Medical Device

    PubMed Central

    Delorme, Yann T.; Anupindi, Kameswararao; Frankel, Steven H.

    2013-01-01

A hybrid large eddy simulation (LES) and immersed boundary method (IBM) computational approach is used to make quantitative predictions of flow field statistics within the Food and Drug Administration's (FDA) idealized medical device. An in-house code, hereafter WenoHemo™, is used that combines high-order finite-difference schemes on structured staggered Cartesian grids with an IBM to facilitate flow over or through complex stationary or rotating geometries, and employs a subgrid-scale (SGS) turbulence model that more naturally handles transitional flows [2]. Predictions of velocity and wall shear stress statistics are compared with previously published experimental measurements from Hariharan et al. [6] for the four Reynolds numbers considered. PMID:24187599

  15. Large parallel cosmic string simulations: New results on loop production

    SciTech Connect

    Blanco-Pillado, Jose J.; Olum, Ken D.; Shlaer, Benjamin

    2011-04-15

    Using a new parallel computing technique, we have run the largest cosmic string simulations ever performed. Our results confirm the existence of a long transient period where a nonscaling distribution of small loops is produced at lengths depending on the initial correlation scale. As time passes, this initial population gives way to the true scaling regime, where loops of size approximately equal to one-twentieth the horizon distance become a significant component. We observe similar behavior in matter and radiation eras, as well as in flat space. In the matter era, the scaling population of large loops becomes the dominant component; we expect this to eventually happen in the other eras as well.

  16. Arc plasma simulation of the KAERI large ion source.

    PubMed

    In, S R; Jeong, S H; Kim, T S

    2008-02-01

The KAERI large ion source, developed for the KSTAR NBI system, recently produced ion beams at the 100 keV, 50 A level in the first half campaign of 2007. These results appear to represent the best performance of the present ion source at a maximum available input power of 145 kW. A slight improvement of the ion source is certainly necessary to attain the final goal of an 8 MW ion beam. First, the experimental results were analyzed to identify the causes of the insufficient beam currents. Second, a zero-dimensional simulation of the ion source plasma was carried out to identify which factors control the arc plasma and what improvements can be expected.

  17. Computing transitional flows using wall-modeled large eddy simulation

    NASA Astrophysics Data System (ADS)

    Bodart, Julien; Larsson, Johan

    2012-11-01

To be applicable to complex aerodynamic flows at realistic Reynolds numbers, large eddy simulation (LES) must be combined with a model for the inner part of the boundary layer. Aerodynamic flows are, in general, sensitive to the location of boundary-layer transition. While traditional LES can predict the transition location and process accurately, existing wall-modeled LES approaches cannot. In the present work, the behavior of the wall-model is locally adapted using a sensor in the LES-resolved part of the boundary layer. This sensor estimates whether the boundary layer is turbulent or not, in a way that does not rely on any homogeneous direction. The proposed method is validated on controlled transition scenarios for a flat-plate boundary layer, and finally applied to the flow around a multi-element airfoil at a realistic Reynolds number.

  18. Large Eddy Simulation of FDA's Idealized Medical Device.

    PubMed

    Delorme, Yann T; Anupindi, Kameswararao; Frankel, Steven H

    2013-12-01

A hybrid large eddy simulation (LES) and immersed boundary method (IBM) computational approach is used to make quantitative predictions of flow field statistics within the Food and Drug Administration's (FDA) idealized medical device. An in-house code, hereafter WenoHemo™, is used that combines high-order finite-difference schemes on structured staggered Cartesian grids with an IBM to facilitate flow over or through complex stationary or rotating geometries, and employs a subgrid-scale (SGS) turbulence model that more naturally handles transitional flows [2]. Predictions of velocity and wall shear stress statistics are compared with previously published experimental measurements from Hariharan et al. [6] for the four Reynolds numbers considered.

  19. Large-scale ground motion simulation using GPGPU

    NASA Astrophysics Data System (ADS)

    Aoi, S.; Maeda, T.; Nishizawa, N.; Aoki, T.

    2012-12-01

Huge computational resources are required to perform large-scale ground motion simulations using the 3-D finite difference method (FDM) for realistic and complex models with high accuracy. Furthermore, thousands of simulations are necessary to evaluate the variability of the assessment caused by uncertainty in the assumptions of the source models for future earthquakes. To overcome the problem of restricted computational resources, we introduced GPGPU (general-purpose computing on graphics processing units), the technique of using a GPU as an accelerator for computation traditionally conducted on the CPU. We employed the CPU version of GMS (Ground motion Simulator; Aoi et al., 2004) as the original code and implemented the GPU calculation using CUDA (Compute Unified Device Architecture). GMS is a total system for seismic wave propagation simulation based on a 3-D FDM scheme using discontinuous grids (Aoi & Fujiwara, 1999), which includes the solver as well as preprocessor tools (parameter generation tool) and postprocessor tools (filter tool, visualization tool, and so on). The computational model is decomposed in the two horizontal directions and each decomposed part is allocated to a different GPU. We evaluated the performance of our newly developed GPU version of GMS on TSUBAME2.0, one of Japan's fastest supercomputers, operated by the Tokyo Institute of Technology. First we performed a strong scaling test using a model with about 22 million grid points and achieved speed-ups of 3.2 and 7.3 times using 4 and 16 GPUs. Next, we examined a weak scaling test in which the model size (number of grid points) is increased in proportion to the degree of parallelism (number of GPUs). The result showed almost perfect linearity up to a simulation with 22 billion grid points using 1024 GPUs, where the calculation speed reached 79.7 TFlops, about 34 times faster than the CPU calculation using the same number
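
    For orientation, the work such a GPU port accelerates is a finite-difference stencil update applied over the whole grid each time step. Below is a deliberately simplified 2-D scalar-wave version in NumPy; GMS itself uses a 3-D staggered-grid scheme with discontinuous grids, and its GPU implementation is written in CUDA.

```python
import numpy as np

def wave_step(u_prev, u, c, dt, dx):
    """One explicit step of the 2-D scalar wave equation
    u_tt = c^2 (u_xx + u_yy), second order in space and time.
    np.roll imposes periodic boundaries; stability requires
    c*dt/dx <= 1/sqrt(2)."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2
    return 2.0 * u - u_prev + (c * dt)**2 * lap
```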

  20. Resonators for solid-state lasers with large-volume fundamental mode and high alignment stability

    SciTech Connect

    Magni, V.

    1986-01-01

    Resonators containing a focusing rod are thoroughly analyzed. It is shown that, as a function of the dioptric power of the rod, two stability zones of the same width exist and that the mode volume in the rod always presents a stationary point. At this point, the output power is insensitive to the focal length fluctuations, and the mode volume inside the rod is inversely proportional to the range of the input power for which the resonator is stable. The two zones are markedly different with respect to misalignment sensitivity, which is, in general, much greater in one zone than in the other. Two design procedures are presented for monomode solid-state laser resonators with large mode volume and low sensitivity both to focal length fluctuations and to misalignment.

  1. Nesting large-eddy simulations within mesoscale simulations for wind energy applications

    SciTech Connect

    Lundquist, J K; Mirocha, J D; Chow, F K; Kosovic, B; Lundquist, K A

    2008-09-08

With increasing demand for more accurate atmospheric simulations for wind turbine micrositing, for operational wind power forecasting, and for more reliable turbine design, simulations of atmospheric flow with resolution of tens of meters or higher are required. These time-dependent large-eddy simulations (LES), which resolve individual atmospheric eddies on length scales smaller than turbine blades and account for complex terrain, are possible with a range of commercial and open-source software, including the Weather Research and Forecasting (WRF) model. In addition to 'local' sources of turbulence within an LES domain, changing weather conditions outside the domain can also affect the flow, suggesting that a mesoscale model should provide boundary conditions to the large-eddy simulations. Nesting a large-eddy simulation within a mesoscale model requires nuanced representations of turbulence. Our group has improved the WRF model's LES capability by implementing the Nonlinear Backscatter and Anisotropy (NBA) subfilter stress model following Kosović (1997) and an explicit filtering and reconstruction technique to compute the Resolvable Subfilter-Scale (RSFS) stresses (following Chow et al., 2005). We have also implemented an immersed boundary method (IBM) in WRF to accommodate complex terrain. These new models improve WRF's LES capabilities over complex terrain and in stable atmospheric conditions. We demonstrate approaches to nesting LES within a mesoscale simulation for farms of wind turbines in hilly regions. Results are sensitive to the nesting method, indicating that care must be taken to provide appropriate boundary conditions, and to allow adequate spin-up of turbulence in the LES domain.

  2. Background simulations for the Large Area Detector onboard LOFT

    NASA Astrophysics Data System (ADS)

    Campana, Riccardo; Feroci, Marco; Del Monte, Ettore; Mineo, Teresa; Lund, Niels; Fraser, George W.

    2013-12-01

The Large Observatory For X-ray Timing (LOFT), currently in an assessment phase in the framework of the ESA M3 Cosmic Vision programme, is an innovative medium-class mission specifically designed to answer fundamental questions about the behaviour of matter in the very strong gravitational and magnetic fields around compact objects and in supranuclear density conditions. Having an effective area of ˜10 m² at 8 keV, LOFT will be able to measure very fast variability in X-ray fluxes and spectra with high sensitivity. A good knowledge of the in-orbit background environment is essential to assess the scientific performance of the mission and optimize the design of its main instrument, the Large Area Detector (LAD). In this paper the results of an extensive Geant-4 simulation of the instrument will be discussed, showing the main contributions to the background and the design solutions for its reduction and control. Our results show that the current LOFT/LAD design is expected to meet its scientific requirement of a background rate equivalent to 10 mCrab in 2-30 keV, achieving about 5 mCrab in the most important 2-10 keV energy band. Moreover, simulations show an anticipated modulation of the background rate as small as 10% over the orbital timescale. The intrinsically photonic origin of the largest background component also allows for efficient modelling, supported by in-flight active monitoring, so that systematic residuals can be predicted significantly better than the 1% requirement, actually meeting the 0.25% science goal.

  3. Large-eddy simulation of combustion dynamics in swirling flows

    NASA Astrophysics Data System (ADS)

    Stone, Christopher Pritchard

The impact of premixer swirl number, S, and overall fuel equivalence ratio, phi, on the stability of a model swirl-stabilized, lean-premixed gas turbine combustor has been numerically investigated using a massively parallel large-eddy simulation (LES) combustion dynamics model. Through the use of a premixed combustion model, unsteady vortex-flame and acoustic-flame interactions are captured. It is observed that for flows with swirl intensity high enough to form vortex breakdown (a phenomenon associated with a large region of reverse or recirculating flow along the axis of rotation), the measured rms pressure amplitude (p') is attenuated significantly (over a 6.6 dB reduction) compared to flows without this phenomenon. The reduced p' amplitudes are accompanied by reduced longitudinal flame-front oscillations and reduced coherence in the shed vortices. Similar p' reduction levels are achieved through changes in the operating equivalence ratio, phi. Compared to the leanest equivalence ratio simulated (phi = 0.52), p' at a stoichiometric mixture is reduced by 6.0 dB. Methodologies for active control based on modulation of the inlet swirl number S (a measure of the intensity of swirl) and phi are also investigated. Open-loop control through S variation is demonstrated for a lean mixture, with a significant reduction in the fluctuating mass flow rate and p' after a convective time delay. A partially premixed combustion model, which allows for variations in the local phi, is used to model both temporal and spatial variations in phi. It is found that the response time to changes in phi is much faster than that for changes in S. It is also shown that spatial variations in phi (unmixedness) actually lead to p' attenuation in the current combustor configuration.
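
    For reference, the swirl number S used above is conventionally defined (textbook definition, not a detail specific to this thesis) as the axial flux of tangential momentum divided by the axial flux of axial momentum scaled by the nozzle radius $R$:

$$S = \frac{\int_0^R \rho\, \bar{u}_x \bar{u}_\theta\, r^2 \, dr}{R \int_0^R \rho\, \bar{u}_x^2\, r \, dr}.$$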

  4. Large Eddy Simulation of a Turbulent Buoyant Plume

    NASA Astrophysics Data System (ADS)

    Desjardin, Paul E.

    1999-11-01

Large Eddy Simulations of a helium-air turbulent plume are conducted in order to investigate the buoyancy-induced vorticity production mechanisms of this flow. The inlet condition of the plume consists of a low-velocity (0.35 m/s), 1 m diameter helium jet emitting upwards into air. This flow configuration is chosen to best match the experimental conditions of the non-reacting helium plume experiments conducted at Sandia's FLAME facility. The compressible form of the Favre-filtered Navier-Stokes, species, and energy equations is closed using localized dynamic Smagorinsky subgrid models. Numerical integration is performed using AUSM+ flux vector splitting that employs fifth-order upwind-biased interpolating stencils, advanced in time using second-order Runge-Kutta along with pressure gradient scaling for improved temporal stability. The code uses MPI domain decomposition and is run on Sandia's ASCI Red massively parallel computer. Results from the simulations highlight the buoyancy-induced vorticity generation and entrainment properties of these flows and the effect of filter width on subgrid modeling. Comparisons to experimental data will be made whenever possible.

  5. Large-eddy simulation formulation and implementation in HYDRA

    SciTech Connect

    McCallen, R.

    1995-12-05

    This report provides the equation formulation for a large-eddy simulation (LES) approach and Smagorinsky subgrid-scale (SGS) model for incompressible flow using the finite element method (FEM). This report also outlines the model implementation in the computer code HYDRA and the results of a coding check. The check was accomplished by running simple two- and three-element problems for a specified velocity field. The values of the eddy viscosity (the coefficient of proportionality in the SGS eddy diffusion model), the SGS diffusion term, and overall diffusion term (molecular plus SGS plus balancing tensor diffusivity) were compared to known hand-calculated values. Coding checks are best done by comparing the code-calculated solution to known analytical solutions. However, with LES turbulence modeling, these analytical solutions do not exist. It is also impossible to determine that the eddy viscosity is free of coding errors when performing code validation by comparing the LES to direct numerical simulations (DNS) (i.e., fine discretization with no turbulence model) or experimental results. Therefore, the coding checks presented here for a specified velocity field are necessary.
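
    The quantity hand-checked in the report is the Smagorinsky eddy viscosity; the sketch below evaluates that closure with central differences on a 2-D periodic grid and is an illustration only, not the HYDRA finite element implementation.

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, Cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (Cs*Delta)^2 * |S| with
    |S| = sqrt(2 S_ij S_ij); Delta is taken as the grid spacing dx,
    and derivatives use central differences with periodic boundaries."""
    dudx = (np.roll(u, -1, 1) - np.roll(u, 1, 1)) / (2.0 * dx)
    dudy = (np.roll(u, -1, 0) - np.roll(u, 1, 0)) / (2.0 * dx)
    dvdx = (np.roll(v, -1, 1) - np.roll(v, 1, 1)) / (2.0 * dx)
    dvdy = (np.roll(v, -1, 0) - np.roll(v, 1, 0)) / (2.0 * dx)
    S_mag = np.sqrt(2.0*dudx**2 + 2.0*dvdy**2 + (dudy + dvdx)**2)
    return (Cs * dx)**2 * S_mag
```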

  6. Large eddy simulation for aerodynamics: status and perspectives.

    PubMed

    Sagaut, Pierre; Deck, Sébastien

    2009-07-28

The present paper provides an up-to-date survey of the use of large eddy simulation (LES) and its sequels for engineering applications related to aerodynamics. The most recent landmark achievements are presented. Two categories of problem may be distinguished according to whether or not the location of separation is triggered by the geometry. In the first case, LES can be considered a mature technique, and recent hybrid Reynolds-averaged Navier-Stokes (RANS)-LES methods do not allow for a significant increase in geometrical complexity and/or Reynolds number with respect to classical LES. When attached boundary layers have a significant impact on the global flow dynamics, the use of hybrid RANS-LES remains the principal strategy for reducing computational cost compared to LES. Another striking observation is that the level of validation is most of the time restricted to time-averaged global quantities, a detailed analysis of the flow unsteadiness being missing. Therefore, a clear need for detailed validation in the near future is identified. To this end, new issues, such as uncertainty and error quantification and modelling, will be of major importance. First results dealing with uncertainty modelling in unsteady turbulent flow simulation are presented.

  7. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2003-01-31

This paper discusses using the wavelet modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach, however, is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end-user requirements of the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases users are willing to accept approximate results if the answer comes back within a reasonable time. In other cases they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes. It may take months to generate one set of the targeted data; because of its sheer size, the data cannot be stored on disk for long and thus needs to be analyzed immediately before it is sent to tape. We integrated wavelets within AQSIM, a system that we are developing to support exploration and analyses of tera-scale size data sets. We discuss the way we utilized wavelet decomposition in our domain to facilitate compression and to answer a specific class of queries that is harder to answer with any other modeling technique. We also discuss some of the shortcomings of our implementation and how to address them.
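
    A toy version of the compress-then-query idea, using the PyWavelets package as a stand-in for the wavelet machinery inside AQSIM (the package, wavelet family, and threshold are illustrative assumptions):

```python
import numpy as np
import pywt

# A noisy 1-D simulation trace standing in for one variable's time series.
signal = np.sin(np.linspace(0.0, 8.0 * np.pi, 1024)) \
         + 0.1 * np.random.randn(1024)

# Multi-level wavelet decomposition, then hard-threshold small detail
# coefficients: the sparse coefficient set is what would be stored and
# queried approximately.
coeffs = pywt.wavedec(signal, 'db4', level=5)
compressed = [pywt.threshold(c, value=0.2, mode='hard') for c in coeffs]
approx = pywt.waverec(compressed, 'db4')   # approximate reconstruction

kept = sum(int(np.count_nonzero(c)) for c in compressed)
total = sum(c.size for c in coeffs)
print(f"kept {kept}/{total} coefficients")
```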

  8. Computer simulation of reflective volume grating holographic data storage.

    PubMed

Gombkötő, Balázs; Koppa, Pál; Sütő, Attila; Lőrincz, Emőke

    2007-07-01

    The shift selectivity of a reflective-type spherical reference wave volume hologram is investigated using a nonparaxial numerical modeling based on a multiple-thin-layer implementation of a volume integral equation. The method can be easily parallelized on multiple computers. According to the results, the falloff of the diffraction efficiency due to the readout shift shows neither Bragg zeros nor oscillation with our parameter set. This agrees with our earlier study of smaller and transmissive holograms. Interhologram cross talk of shift-multiplexed holograms is also modeled using the same method, together with sparse modulation block coding and correlation decoding of data. Signal-to-noise ratio and raw bit error rate values are calculated.

  9. The BAHAMAS project: calibrated hydrodynamical simulations for large-scale structure cosmology

    NASA Astrophysics Data System (ADS)

    McCarthy, Ian G.; Schaye, Joop; Bird, Simeon; Le Brun, Amandine M. C.

    2017-03-01

    The evolution of the large-scale distribution of matter is sensitive to a variety of fundamental parameters that characterize the dark matter, dark energy, and other aspects of our cosmological framework. Since the majority of the mass density is in the form of dark matter that cannot be directly observed, to do cosmology with large-scale structure, one must use observable (baryonic) quantities that trace the underlying matter distribution in a (hopefully) predictable way. However, recent numerical studies have demonstrated that the mapping between observable and total mass, as well as the total mass itself, are sensitive to unresolved feedback processes associated with galaxy formation, motivating explicit calibration of the feedback efficiencies. Here, we construct a new suite of large-volume cosmological hydrodynamical simulations (called BAHAMAS, for BAryons and HAloes of MAssive Systems), where subgrid models of stellar and active galactic nucleus feedback have been calibrated to reproduce the present-day galaxy stellar mass function and the hot gas mass fractions of groups and clusters in order to ensure the effects of feedback on the overall matter distribution are broadly correct. We show that the calibrated simulations reproduce an unprecedentedly wide range of properties of massive systems, including the various observed mappings between galaxies, hot gas, total mass, and black holes, and represent a significant advance in our ability to mitigate the primary systematic uncertainty in most present large-scale structure tests.

  10. A large high vacuum, high pumping speed space simulation chamber for electric propulsion

    NASA Technical Reports Server (NTRS)

    Grisnik, Stanley P.; Parkes, James E.

    1994-01-01

    Testing high power electric propulsion devices poses unique requirements on space simulation facilities. Very high pumping speeds are required to maintain high vacuum levels while handling large volumes of exhaust products. These pumping speeds are significantly higher than those available in most existing vacuum facilities. There is also a requirement for relatively large vacuum chamber dimensions to minimize facility wall/thruster plume interactions and to accommodate far field plume diagnostic measurements. A 4.57 m (15 ft) diameter by 19.2 m (63 ft) long vacuum chamber at NASA Lewis Research Center is described. The chamber utilizes oil diffusion pumps in combination with cryopanels to achieve high vacuum pumping speeds at high vacuum levels. The facility is computer controlled for all phases of operation from start-up, through testing, to shutdown. The computer control system increases the utilization of the facility and reduces the manpower requirements needed for facility operations.

  11. Parameter studies on the energy balance closure problem using large-eddy simulation

    NASA Astrophysics Data System (ADS)

    De Roo, Frederik; Banerjee, Tirtha; Mauder, Matthias

    2017-04-01

    The imbalance of the surface energy budget in eddy-covariance measurements is still an open problem. A possible cause is the presence of land surface heterogeneity. Heterogeneities of the boundary-layer scale or larger are most effective in influencing the boundary-layer turbulence, and large-eddy simulations have shown that secondary circulations within the boundary layer can affect the surface energy budget. However, the precise influence of the surface characteristics on the energy imbalance and its partitioning is still unknown. To investigate the influence of surface variables on all the components of the flux budget under convective conditions, we set up a systematic parameter study by means of large-eddy simulation. For the study we use a virtual control volume approach, and we focus on idealized heterogeneity by considering spatially variable surface fluxes. The surface fluxes vary locally in intensity, and these patches have different length scales. The main focus lies on heterogeneities at the kilometer scale and one decade smaller. For each simulation, virtual measurement towers are positioned at functionally different positions. We discriminate between locally homogeneous towers, located within land-use patches, and more heterogeneous towers, and find, among other things, that the flux divergence and the advection are strongly linearly related within each class. Furthermore, we seek correlators for the energy balance ratio and the energy residual in the simulations. Besides the expected correlation with measurable atmospheric quantities such as the friction velocity, boundary-layer depth, and temperature and moisture gradients, we have also found an unexpected correlation with the difference between sonic temperature and surface temperature. In additional simulations with a large number of virtual towers, we investigate higher-order correlations, which can be linked to secondary circulations. In a companion
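
    For readers unfamiliar with the closure metrics discussed above, the following minimal sketch computes the energy balance ratio and residual from the four standard surface energy budget terms; the function and variable names are illustrative, not taken from the paper.

```python
# A minimal sketch of the energy balance ratio and residual from the four
# standard surface energy budget terms; variable names are illustrative.
def energy_balance(H, LE, Rn, G):
    """H: sensible heat, LE: latent heat, Rn: net radiation, G: ground heat
    flux, all in W m^-2. Returns (energy balance ratio, residual)."""
    residual = (Rn - G) - (H + LE)   # unclosed energy (W m^-2)
    ebr = (H + LE) / (Rn - G)        # 1.0 means perfect closure
    return ebr, residual

ebr, res = energy_balance(H=120.0, LE=250.0, Rn=480.0, G=60.0)
print(f"EBR = {ebr:.2f}, residual = {res:.0f} W/m^2")   # EBR = 0.88, 50 W/m^2
```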

  12. Constitutive modeling of large inelastic deformation of amorphous polymers: Free volume and shear transformation zone dynamics

    NASA Astrophysics Data System (ADS)

    Voyiadjis, George Z.; Samadi-Dooki, Aref

    2016-06-01

    Due to the lack of long-range order in their molecular structure, amorphous polymers possess a considerable free volume content in their inter-molecular space. During finite deformation, these free volume holes serve as potential sites for localized permanent plastic deformation inclusions, which are called shear transformation zones (STZs). While the free volume content has been experimentally shown to increase during the course of plastic straining in glassy polymers, thermal analysis of the energy stored during deformation shows that the STZ nucleation energy decreases at large plastic strains. The evolution of the free volume, and of the STZ number density and nucleation energy, during finite straining is formulated in this paper in order to investigate the uniaxial post-yield softening-hardening behavior of glassy polymers. This study shows that the reduction of the STZ nucleation energy, which is correlated with the free volume increase, brings about the post-yield primary softening of amorphous polymers up to the steady-state strain value; the secondary hardening is a result of the increased number density of STZs, which is required for large plastic strains, while their nucleation energy stabilizes beyond the steady-state strain. The evolutions of the free volume content and STZ nucleation energy are also used to demonstrate the effect of strain rate, temperature, and the thermal history of the sample on its post-yield behavior. The results obtained from the model are compared with experimental observations on poly(methyl methacrylate) and show satisfactory agreement.

  13. The two axis motion simulator for the large space simulator at ESTEC (European Space Research and Technology Center)

    NASA Technical Reports Server (NTRS)

    Beckel, Kurt A.; Hutchison, Joop

    1988-01-01

    The Large Space Simulator at the European Space Research and Technology Center (ESTEC) has recently been equipped with a motion simulator capable of handling test items of 5 tons mass and a volume of 7 m in diameter by 7 m in length. The motion simulator has a modular set-up. It consists of a spinbox as a basic unit, on which the test article is mounted and which allows continuous rotation (spin). This spinbox can be used in two operational configurations: the spin axis is vertical to 30 degrees when mounted on a gimbal stand, and the spin axis is horizontal when mounted on a turntable-yoke combination. The turntable provides rotation within plus or minus 90 degrees. This configuration allows one to bring a test article to all possible relative positions vis-à-vis the sun vector (which is horizontal in this case). The spinbox allows fast rotation between 1 and 6 rpm or slow rotation between 1 and 25 rotations per day, as well as positioning within plus or minus 0.4 degrees accuracy.

  14. Large eddy simulation modelling of combustion for propulsion applications.

    PubMed

    Fureby, C

    2009-07-28

    Predictive modelling of turbulent combustion is important for the development of air-breathing engines, internal combustion engines, furnaces and for power generation. Significant advances in modelling non-reactive turbulent flows are now possible with the development of large eddy simulation (LES), in which the large energetic scales of the flow are resolved on the grid while modelling the effects of the small scales. Here, we discuss the use of combustion LES in predictive modelling of propulsion applications such as gas turbine, ramjet and scramjet engines. The LES models used are described in some detail and are validated against laboratory data, of which results from two cases are presented. These validated LES models are then applied to an annular multi-burner gas turbine combustor and a simplified scramjet combustor, for which some additional experimental data are available. For these cases, good agreement with the available reference data is obtained, and the LES predictions are used to elucidate the flow physics in such devices to further enhance our knowledge of these propulsion systems. Particular attention is focused on the influence of the combustion chemistry, turbulence-chemistry interaction, self-ignition, flame holding, burner-to-burner interactions and combustion oscillations.

  15. Large-eddy simulation of flow past a circular cylinder

    NASA Astrophysics Data System (ADS)

    Cheng, Wan; Pullin, Dale; Samtaney, Ravi; Zhang, Wei

    2015-11-01

    Wall-modeled, large-eddy simulations (LES) of flow past a smooth-walled circular cylinder are described. The cylinder has diameter D and spanwise extent 3D. The stretched-vortex sub-grid scale model is used away from the cylinder wall, including regions of large-scale separated flow. At the wall this is coupled directly to an extended version of the virtual-wall model (VWM) of Chung & Pullin (2009). Here the wall-adjacent flow is modeled by wall-normal integration of both components of the wall-parallel momentum equation across a thin wall layer whose thickness is small compared to that of the local boundary layer. This provides a wall-parallel, cell-scale estimate of the surface stress-vector field across the entire cylinder surface and, with further assumptions, gives a slip-velocity boundary condition for the outer-flow LES. Flow separation is captured. The LES are done with a fourth-order accurate finite-difference method with spanwise periodic boundary conditions. A third-order semi-implicit Runge-Kutta method is used for temporal discretization. The LES methodology is verified by comparison with DNS at Re_D = 3900. LES at larger Reynolds number will be discussed. Supported partially by KAUST OCRF Award No. URF/1/1394-01 and partially by NSF award CBET 1235605.
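
    A common diagnostic for such cylinder-flow LES is the Strouhal number of the vortex shedding. The sketch below estimates it from a (here synthetic) lift-coefficient time series via an FFT; it illustrates the standard definition St = f D / U, not the authors' post-processing.

```python
# A minimal sketch of estimating the shedding (Strouhal) frequency from a
# lift-coefficient time series via an FFT; the signal here is synthetic,
# built to peak near the measured St ~ 0.21 at Re_D = 3900.
import numpy as np

def strouhal(cl, dt, D, U):
    """St = f D / U, with f the dominant frequency of the lift signal."""
    spectrum = np.abs(np.fft.rfft(cl - np.mean(cl)))
    freqs = np.fft.rfftfreq(len(cl), d=dt)
    f_shed = freqs[np.argmax(spectrum[1:]) + 1]   # skip the zero mode
    return f_shed * D / U

t = np.arange(0.0, 200.0, 0.05)
cl = 0.5 * np.sin(2.0 * np.pi * 0.21 * t)         # synthetic lift signal
print("St =", round(strouhal(cl, dt=0.05, D=1.0, U=1.0), 3))   # -> 0.21
```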

  16. Simulation and experiment for large scale space structure

    NASA Astrophysics Data System (ADS)

    Sun, Hongbo; Zhou, Jian; Zha, Zuoliang

    2013-04-01

    Future space structures will be relatively large, flimsy, and lightweight. As a result, they are more easily affected or distorted by the space environment than other space structures. This study examines the structural integrity of a large-scale space structure. A new transient temperature field analysis method for the developable reflector in the on-orbit environment is presented, which simulates the physical characteristics of a developable antenna reflector with high precision. The different analyses reveal the distinct thermoelastic characteristics of the different materials. The three-dimensional multi-physics coupled transient thermal distortion equations for the antenna are formulated based on the Galerkin method. For a reflector in geosynchronous orbit, the transient temperature field results from this method are compared with those from NASA. The analysis shows that the precision of this method is high. An experimental system is established to verify the control mechanism with IEBIS and thermal sensor techniques. The shape control experiments are completed by measuring and analyzing the developable tube. Results reveal that the temperature levels of the developable antenna reflector vary greatly over the orbital period, by about ±120°, when considering solar flux, earth radiation flux, and albedo scattering flux.

  17. Large eddy simulations of in-cylinder turbulent flows.

    NASA Astrophysics Data System (ADS)

    Banaeizadeh, Araz; Afshari, Asghar; Schock, Harold; Jaberi, Farhad

    2007-11-01

    A high-order numerical model is developed and tested for large eddy simulation (LES) of turbulent flows in internal combustion (IC) engines. In this model, the filtered compressible Navier-Stokes equations in curvilinear coordinate systems are solved via a generalized high-order multi-block compact differencing scheme. The LES model has been applied to three flow configurations: (1) a fixed poppet valve in a sudden expansion, (2) a simple piston-cylinder assembly with a stationary open valve and a harmonically moving flat piston, (3) a laboratory single-cylinder engine with three moving intake and exhaust valves. The first flow configuration is considered for studying the flow around the valves in IC engines. The second flow configuration is closer to that in IC engines but is based on a single stationary intake/exhaust valve and relatively simple geometry. It is considered in this work for better understanding of the effects of the moving piston on the large-scale unsteady vortical fluid motions in the cylinder and for further validation of our LES model. The third flow configuration includes all the complexities involved in a realistic single-cylinder IC engine. The predicted flow statistics by LES show good comparison with the available experimental data.

  18. Large Eddy Simulation Study for Fluid Disintegration and Mixing

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Taskinoglu, Ezgi

    2011-01-01

    A new modeling approach is based on the concept of large eddy simulation (LES) within which the large scales are computed and the small scales are modeled. The new approach is expected to retain the fidelity of the physics while also being computationally efficient. Typically, only models for the small-scale fluxes of momentum, species, and enthalpy are used to reintroduce in the simulation the physics lost because the computation only resolves the large scales. These models are called subgrid (SGS) models because they operate at a scale smaller than the LES grid. In a previous study of thermodynamically supercritical fluid disintegration and mixing, additional small-scale terms, one in the momentum and one in the energy conservation equations, were identified as requiring modeling. These additional terms were due to the tight coupling between dynamics and real-gas thermodynamics. It was inferred that, without the additional term in the momentum equation, the high density-gradient magnitude regions, experimentally identified as a characteristic feature of these flows, would not be accurately predicted; these high density-gradient magnitude regions were experimentally shown to redistribute turbulence in the flow. It was also inferred that without the additional term in the energy equation, the heat flux magnitude could not be accurately predicted; the heat flux to the wall of combustion devices is a crucial quantity that determines necessary wall material properties. The present work involves situations where only the term in the momentum equation is important. Without this additional term in the momentum equation, neither the SGS-flux constant-coefficient Smagorinsky model nor the SGS-flux constant-coefficient Gradient model could reproduce in LES the pressure field or the high density-gradient magnitude regions; the SGS-flux constant-coefficient Scale-Similarity model was the most successful in this endeavor although not
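
    For context, the constant-coefficient Smagorinsky SGS model mentioned above computes an eddy viscosity from the resolved strain rate. The sketch below is the textbook form on a uniform grid, assuming numpy; it does not include the additional real-gas terms that are the subject of the work, and all names and values are illustrative.

```python
# A minimal sketch of the constant-coefficient Smagorinsky SGS model
# on a uniform grid: nu_t = (Cs * Delta)^2 * |S|. Textbook closure only;
# it omits the additional real-gas SGS terms studied in the work above.
import numpy as np

def smagorinsky_nu_t(u, v, w, dx, Cs=0.17):
    """Eddy viscosity field from resolved velocities on a uniform 3-D grid."""
    grads = [np.gradient(comp, dx) for comp in (u, v, w)]  # grads[i][j] = du_i/dx_j
    S2 = 0.0
    for i in range(3):
        for j in range(3):
            S_ij = 0.5 * (grads[i][j] + grads[j][i])  # strain-rate tensor
            S2 = S2 + S_ij * S_ij
    S_mag = np.sqrt(2.0 * S2)          # |S| = sqrt(2 S_ij S_ij)
    return (Cs * dx) ** 2 * S_mag      # filter width Delta taken as dx

u = v = w = np.random.rand(16, 16, 16)  # stand-in resolved velocity fields
print(smagorinsky_nu_t(u, v, w, dx=0.1).mean())
```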

  19. Geophysics Under Pressure: Large-Volume Presses Versus the Diamond-Anvil Cell

    NASA Astrophysics Data System (ADS)

    Hazen, R. M.

    2002-05-01

    Prior to 1970, the legacy of Harvard physicist Percy Bridgman dominated high-pressure geophysics. Massive presses with large-volume devices, including piston-cylinder, opposed-anvil, and multi-anvil configurations, were widely used in both science and industry to achieve a range of crustal and upper mantle temperatures and pressures. George Kennedy of UCLA was a particularly influential advocate of large-volume apparatus for geophysical research prior to his death in 1980. The high-pressure scene began to change in 1959 with the invention of the diamond-anvil cell, which was designed simultaneously and independently by John Jamieson at the University of Chicago and Alvin Van Valkenburg at the National Bureau of Standards in Washington, DC. The compact, inexpensive diamond cell achieved record static pressures and had the advantage of optical access to the high-pressure environment. Nevertheless, members of the geophysical community, who favored the substantial sample volumes, geothermally relevant temperature range, and satisfying bulk of large-volume presses, initially viewed the diamond cell with indifference or even contempt. Several factors led to a gradual shift in emphasis from large-volume presses to diamond-anvil cells in geophysical research during the 1960s and 1970s. These factors include (1) their relatively low cost at a time of fiscal restraint, (2) Alvin Van Valkenburg's new position as a Program Director at the National Science Foundation in 1964 (when George Kennedy's proposal for a National High-Pressure Laboratory was rejected), (3) the development of lasers and micro-analytical spectroscopic techniques suitable for analyzing samples in a diamond cell, and (4) the attainment of record pressures (e.g., 100 GPa in 1975 by Mao and Bell at the Geophysical Laboratory). Today, a more balanced collaborative approach has been adopted by the geophysics and mineral physics community. Many high-pressure laboratories operate a new generation of less expensive

  20. An efficient parallel algebraic multigrid method for 3D injection moulding simulation based on finite volume method

    NASA Astrophysics Data System (ADS)

    Hu, Zixiang; Zhang, Yun; Liang, Junjie; Shi, Songxin; Zhou, Huamin

    2014-07-01

    Elapsed time is always one of the most important performance measures for polymer injection moulding simulation. Solving the pressure correction equations is the most time-consuming part of mould filling simulation using the finite volume method with SIMPLE-like algorithms. Algebraic multigrid (AMG) is one of the most promising methods for this type of elliptic equation. It thus performs better than common one-level iterative methods, especially for large problems, and it is also suitable for parallel computing. However, AMG is not easy to apply, owing to its complex theory and poor generality across the large range of computational fluid dynamics applications. This paper presents a robust and efficient parallel AMG solver, A1-pAMG, for 3D mould filling simulation of injection moulding. Numerical experiments demonstrate that A1-pAMG has better parallel performance than classical AMG, and also has algorithmic scalability in the context of 3D unstructured problems.
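
    As an illustration of the AMG workflow the paper builds on, the sketch below solves a model elliptic (Poisson-type) system with classical Ruge-Stuben AMG via the PyAMG library. It shows generic AMG usage, not the A1-pAMG solver itself.

```python
# A minimal sketch of classical (Ruge-Stuben) AMG via the PyAMG library,
# applied to a model Poisson system standing in for a pressure-correction
# equation; this illustrates generic AMG usage, not A1-pAMG itself.
import numpy as np
import pyamg

A = pyamg.gallery.poisson((200, 200), format="csr")   # model elliptic operator
b = np.random.rand(A.shape[0])

ml = pyamg.ruge_stuben_solver(A)          # build the coarse-grid hierarchy
residuals = []
x = ml.solve(b, tol=1e-10, residuals=residuals)
print(f"converged in {len(residuals) - 1} cycles,",
      f"final residual {residuals[-1]:.2e}")
```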

  1. Large-aperture chirped volume Bragg grating based fiber CPA system.

    PubMed

    Liao, Kai-Hsiu; Cheng, Ming-Yuan; Flecher, Emilie; Smirnov, Vadim I; Glebov, Leonid B; Galvanauskas, Almantas

    2007-04-16

    A fiber chirped pulse amplification system at 1558 nm was demonstrated using a large-aperture volume Bragg grating stretcher and compressor made of Photo-Thermal-Refractive (PTR) glass. Such PTR glass based gratings represent a new type of pulse stretching and compressing device which is compact, monolithic and optically efficient. Furthermore, since PTR glass technology enables volume gratings with transverse apertures which are large, homogeneous and scalable, it also enables high pulse energies and powers far exceeding those achievable with other existing compact pulse-compression technologies. Additionally, the reciprocity of chirped gratings with respect to stretching and compression makes it possible to address a long-standing problem in CPA system design: stretcher-compressor dispersion mismatch.

  2. Large-aperture chirped volume Bragg grating based fiber CPA system

    NASA Astrophysics Data System (ADS)

    Liao, Kai-Hsiu; Cheng, Ming-Yuan; Flecher, Emilie; Smirnov, Vadim I.; Glebov, Leonid B.; Galvanauskas, Almantas

    2007-04-01

    A fiber chirped pulse amplification system at 1558 nm was demonstrated using a large-aperture volume Bragg grating stretcher and compressor made of Photo-Thermal-Refractive (PTR) glass. Such PTR glass based gratings represent a new type of pulse stretching and compressing device which is compact, monolithic and optically efficient. Furthermore, since PTR glass technology enables volume gratings with transverse apertures which are large, homogeneous and scalable, it also enables high pulse energies and powers far exceeding those achievable with other existing compact pulse-compression technologies. Additionally, the reciprocity of chirped gratings with respect to stretching and compression makes it possible to address a long-standing problem in CPA system design: stretcher-compressor dispersion mismatch.

  3. Assembly, operation and disassembly manual for the Battelle Large Volume Water Sampler (BLVWS)

    SciTech Connect

    Thomas, V.W.; Campbell, R.M.

    1984-12-01

    Assembly, operation and disassembly of the Battelle Large Volume Water Sampler (BLVWS) are described in detail. Step-by-step instructions for assembly, general operation and disassembly are provided to allow an operator completely unfamiliar with the sampler to successfully apply the BLVWS to his research sampling needs. The sampler permits concentration of both particulate and dissolved radionuclides from large volumes of ocean and fresh water. The water sample passes through a filtration section for particle removal, then through sorption or ion exchange beds where species of interest are removed. The sampler components which contact the water being sampled are constructed of polyvinyl chloride (PVC). The sampler has been successfully applied to many sampling needs over the past fifteen years. 9 references, 8 figures.

  4. HYBRID BRIDGMAN ANVIL DESIGN: AN OPTICAL WINDOW FOR IN-SITU SPECTROSCOPY IN LARGE VOLUME PRESSES

    SciTech Connect

    Lipp, M J; Evans, W J; Yoo, C S

    2005-07-29

    The absence of in-situ optical probes for large volume presses often limits their application to high-pressure materials research. In this paper, we present a unique anvil/optical window design for use in large volume presses, which consists of an inverted diamond anvil seated in a Bridgman-type anvil. A small cylindrical aperture through the Bridgman anvil, ending at the back of the diamond anvil, allows optical access to the sample chamber and permits direct optical spectroscopy measurements, such as ruby fluorescence (in-situ pressure) or Raman spectroscopy. The performance of this anvil design has been demonstrated by loading KBr to a pressure of 14.5 GPa.

  5. The large volume calorimeter for measuring the "pressure cooker" shipping container

    SciTech Connect

    Kasperski, P.W.; Duff, M.F.; Wetzel, J.R.; Baker, L.B.; MacMurdo, K.W.

    1991-01-01

    A precise, low wattage, large volume calorimeter system has been developed at Mound to measure two configurations of the 12081 containment vessel. This system was developed and constructed to perform verification measurements at the Savannah River Site. The calorimeter system has performance design specifications of ±0.3% error above the 2-watt level, and ±(0.03% + 0.006 watts) at power levels below 2 watts (one sigma). Data collected during performance testing shows measurement errors well within this range, even down to 0.1-watt power levels. The development of this calorimeter shows that ultra-precise measurements can be achieved on extremely large volume sample configurations. 1 ref., 5 figs.

  6. Large eddy simulations of flow past a cubic obstacle

    NASA Astrophysics Data System (ADS)

    Shah, Kishan B.

    Turbulent flow over three-dimensional obstacles is common in engineering, and understanding it is necessary for engineering design. This work is an effort to provide quantitative data on flows past three-dimensional bodies through large eddy simulation (LES). The flow over a cube mounted on a wall of a plane channel is studied. This flow exhibits characteristics common to this class of flows, including three-dimensionality of the mean flow, separation, and large-scale unsteadiness. The goals of our study are to develop subgrid scale (SGS) models suitable for complex flows, to perform LES of these flows, and to use the results to aid in the identification of dynamically significant large-scale structures. Included in this investigation are a comparison of several SGS models, including a new model, and a study of some of the mean and unsteady characteristics of the flow. Cube flow LES were performed at both low and high Reynolds number (Re_b = 3000 and 40000). The quality of the results was verified by comparing them to an experiment. All the large-scale features seen in the experiment are reproduced, and the flow patterns are consistent with kinematic constraints. The mean flow is characterized by a strong horseshoe vortex upstream of the body, an arch vortex behind the body, and vortices on the roof and the sides. The recirculation region above the roof has the strongest turbulence kinetic energy, while the arch vortex has the largest turbulent shear stresses. Although the unsteady behavior is quite complicated, there are large-scale events that occur roughly periodically, such as the motion of the horseshoe vortex and the vortices shed from the roof and the sides. Bimodal PDFs are typical of the region upstream of the obstacle, close to the wall. These are due to the existence of two distinct states of the flow. The dominant characteristic behind the obstacle is the quasi-periodic, alternate vortex shedding from the sides and the roof. The shedding frequency from

  7. Rapid Adaptive Optical Recovery of Optimal Resolution over Large Volumes

    PubMed Central

    Wang, Kai; Milkie, Dan; Saxena, Ankur; Engerer, Peter; Misgeld, Thomas; Bronner, Marianne E.; Mumm, Jeff; Betzig, Eric

    2014-01-01

    Using a de-scanned, laser-induced guide star and direct wavefront sensing, we demonstrate adaptive correction of complex optical aberrations at high numerical aperture and a 14 ms update rate. This permits us to compensate for the rapid spatial variation in aberration often encountered in biological specimens, and recover diffraction-limited imaging over large (>240 μm)³ volumes. We applied this to image fine neuronal processes and subcellular dynamics within the zebrafish brain. PMID:24727653

  8. Technical note: rapid, large-volume resuscitation at resuscitative thoracotomy by intra-cardiac catheterization

    PubMed Central

    Cawich, Shamir O; Naraynsingh, Vijay

    2016-01-01

    An emergency thoracotomy may be life-saving by achieving four goals: (i) releasing cardiac tamponade, (ii) controlling haemorrhage, (iii) allowing access for internal cardiac massage and (iv) clamping the descending aorta to isolate circulation to the upper torso in damage control surgery. We theorize that a new goal should be achieving rapid, large-volume fluid resuscitation and we describe a technique to achieve this. PMID:27887010

  9. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, C. K.; Steinberger, C. J.; Tsai, A.

    1991-01-01

    This research is involved with the implementation of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program was initiated to extend the present capabilities of this method for the treatment of chemically reacting flows, whereas in the DNS efforts, the focus was on detailed investigations of the effects of compressibility, heat release, and nonequilibrium kinetics modeling in high speed reacting flows. The efforts to date were primarily focused on simulations of simple flows, namely, homogeneous compressible flows and temporally developing high speed mixing layers. A summary of the accomplishments is provided.

  10. Scanning laser optical computed tomography system for large volume 3D dosimetry

    NASA Astrophysics Data System (ADS)

    Dekker, Kurtis H.; Battista, Jerry J.; Jordan, Kevin J.

    2017-04-01

    Stray light causes artifacts in optical computed tomography (CT) that negatively affect the accuracy of radiation dosimetry in gels or solids. Scatter effects are exacerbated by a large dosimeter volume, which is desirable for direct verification of modern radiotherapy treatment plans such as multiple-isocenter radiosurgery. The goal in this study was to design and characterize an optical CT system that achieves high accuracy primary transmission measurements through effective stray light rejection, while maintaining sufficient scan speed for practical application. We present an optical imaging platform that uses a galvanometer mirror for horizontal scanning, and a translation stage for vertical movement of a laser beam and small area detector for minimal stray light production and acceptance. This is coupled with a custom lens-shaped optical CT aquarium for parallel ray sampling of projections. The scanner images 15 cm diameter, 12 cm height cylindrical volumes at 0.33 mm resolution in approximately 30 min. Attenuation coefficients reconstructed from CT scans agreed with independent cuvette measurements within 2% for both absorbing and scattering solutions as well as small 1.25 cm diameter absorbing phantoms placed within a large, scattering medium that mimics gel. Excellent linearity between the optical CT scanner and the independent measurement was observed for solutions with between 90% and 2% transmission. These results indicate that the scanner should achieve highly accurate dosimetry of large volume dosimeters in a reasonable timeframe for clinical application to radiotherapy dose verification procedures.

  11. Scanning laser optical computed tomography system for large volume 3D dosimetry.

    PubMed

    Dekker, Kurtis H; Battista, Jerry J; Jordan, Kevin J

    2017-04-07

    Stray light causes artifacts in optical computed tomography (CT) that negatively affect the accuracy of radiation dosimetry in gels or solids. Scatter effects are exacerbated by a large dosimeter volume, which is desirable for direct verification of modern radiotherapy treatment plans such as multiple-isocenter radiosurgery. The goal in this study was to design and characterize an optical CT system that achieves high accuracy primary transmission measurements through effective stray light rejection, while maintaining sufficient scan speed for practical application. We present an optical imaging platform that uses a galvanometer mirror for horizontal scanning, and a translation stage for vertical movement of a laser beam and small area detector for minimal stray light production and acceptance. This is coupled with a custom lens-shaped optical CT aquarium for parallel ray sampling of projections. The scanner images 15 cm diameter, 12 cm height cylindrical volumes at 0.33 mm resolution in approximately 30 min. Attenuation coefficients reconstructed from CT scans agreed with independent cuvette measurements within 2% for both absorbing and scattering solutions as well as small 1.25 cm diameter absorbing phantoms placed within a large, scattering medium that mimics gel. Excellent linearity between the optical CT scanner and the independent measurement was observed for solutions with between 90% and 2% transmission. These results indicate that the scanner should achieve highly accurate dosimetry of large volume dosimeters in a reasonable timeframe for clinical application to radiotherapy dose verification procedures.
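
    The attenuation coefficients compared above follow from the Beer-Lambert law. A minimal sketch of that relation, with illustrative numbers rather than the scanner's calibration data:

```python
# A minimal sketch of the Beer-Lambert relation used in optical-CT
# dosimetry; the numbers are illustrative, not the scanner's data.
import numpy as np

def attenuation_coefficient(I, I0, path_length_cm):
    """mu = -ln(I / I0) / L, in cm^-1."""
    return -np.log(I / I0) / path_length_cm

# e.g. 2% transmission across the 15 cm diameter dosimeter
print(attenuation_coefficient(I=0.02, I0=1.0, path_length_cm=15.0))  # ~0.26 cm^-1
```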

  12. 3D cell-printing of large-volume tissues: Application to ear regeneration.

    PubMed

    Lee, Jung-Seob; Kim, Byung Soo; Seo, Dong Hwan; Park, Jeong Hun; Cho, Dong-Woo

    2017-01-17

    The three-dimensional (3D) printing of large-volume cell constructs, in a clinically relevant size, is one of the most important challenges in the field of tissue engineering. However, few studies have reported the fabrication of large-volume cell-printed constructs (LCCs). To create LCCs, appropriate fabrication conditions should be established: factors involved include fabrication time, residence time, and temperature control of the cell-laden hydrogel in the syringe to ensure high cell viability and functionality. The prolonged time required for 3D printing of LCCs can reduce cell viability and result in insufficient functionality of the construct, because the cells are exposed to a harsh environment during the printing process. In this regard, we present an advanced 3D cell-printing system composed of a clean air workstation, humidifier, and Peltier system, which provides a suitable printing environment for production of LCCs with high cell viability. We confirmed that the advanced 3D cell-printing system was capable of providing enhanced printability of hydrogels and fabricating an ear-shaped LCC with high cell viability. In vivo results for the ear-shaped LCC also showed that printed chondrocytes proliferated sufficiently and differentiated into cartilage tissue. Thus, we conclude that the advanced 3D cell-printing system is a versatile tool to create cell-printed constructs for the generation of large-volume tissues.

  13. Large N_c volume reduction and chiral random matrix theory

    NASA Astrophysics Data System (ADS)

    Lee, J. W.; Hanada, M.; Yamada, N.

    Motivated by recent progress on the understanding of the Eguchi-Kawai (EK) volume equivalence and growing interest in the conformal window, we simultaneously use large-Nc volume reduction and Chiral Random Matrix Theory (chRMT) to study the chiral symmetry breaking of four-dimensional SU(Nc) gauge theory with adjoint fermions in the large-Nc limit. Although some care is required because the chRMT limit and 't Hooft limit are not compatible in general, we show that the breakdown of chiral symmetry can be detected in large-Nc gauge theories. As a first step, we mainly focus on the quenched approximation to establish the methodology. We first confirm that heavy adjoint fermions, introduced to preserve the center symmetry, work as expected, and thanks to them the volume reduction holds. Using massless overlap fermions as a probe, we then calculate the low-lying Dirac spectrum for fermions in the adjoint representation to compare with that of chRMT, and find that chiral symmetry is indeed broken in the quenched theory.

  14. Shuttle mission simulator baseline definition report, volume 2

    NASA Technical Reports Server (NTRS)

    Dahlberg, A. W.; Small, D. E.

    1973-01-01

    The baseline definition report for the space shuttle mission simulator is presented. The subjects discussed are: (1) the general configurations, (2) motion base crew station, (3) instructor operator station complex, (4) display devices, (5) electromagnetic compatibility, (6) external interface equipment, (7) data conversion equipment, (8) fixed base crew station equipment, and (9) computer complex. Block diagrams of the supporting subsystems are provided.

  15. Analytical simulation of SPS system performance, volume 3, phase 3

    NASA Technical Reports Server (NTRS)

    Kantak, A. V.; Lindsey, W. C.

    1980-01-01

    The simulation model for the Solar Power Satellite space antenna and the associated system imperfections are described. Overall power transfer efficiency, the key performance issue, is discussed as a function of the system imperfections. Other system performance measures discussed include average power pattern, mean beam gain reduction, and pointing error.

  16. Program to Optimize Simulated Trajectories (POST). Volume 2: Utilization manual

    NASA Technical Reports Server (NTRS)

    Bauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.

    1975-01-01

    Information pertinent to users of the program to optimize simulated trajectories (POST) is presented. The input required and output available is described for each of the trajectory and targeting/optimization options. A sample input listing and resulting output are given.

  17. Shuttle mission simulator requirements report, volume 1, revision A

    NASA Technical Reports Server (NTRS)

    Burke, J. F.

    1973-01-01

    The tasks required to design, develop, produce, and field-support a shuttle mission simulator for training crew members and ground support personnel are defined. The requirements for program management, control, systems engineering, design and development are discussed, along with the design and construction standards, software design, control and display, communication and tracking, and systems integration.

  18. RSRM top hat cover simulator lightning test, volume 1

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The test sequence was to measure electric and magnetic fields induced inside a redesigned solid rocket motor case when a simulated lightning discharge strikes an exposed top hat cover simulator. The test sequence was conducted between 21 June and 17 July 1990. Thirty-six high rate-of-rise Marx generator discharges and eight high-current bank discharges were injected onto three different test article configurations. Attach points included three locations on the top hat cover simulator and two locations on the mounting bolts. Damage to the top hat cover simulator, mounting bolts, and grain cover was observed. Overall electric field levels were well below 30 kilovolts/meter. Electric field levels ranged from 184.7 to 345.9 volts/meter, and magnetic field levels were calculated from 6.921 to 39.73 amperes/meter. It is recommended that the redesigned solid rocket motor top hat cover be used in Configuration 1 or Configuration 2 as an interim lightning protection device until a lightweight cover can be designed.

  19. STAGE 64: SIMULATOR PROGRAMMING SPECIFICATIONS MANUAL. VOLUME III. DAMAGE.

    DTIC Science & Technology

    The Damage package of the STAGE Simulator is a group of six complexes which, under normal running conditions, assess damage to the following five types ... preliminary control routine. Under nonoptimal running conditions, the damage assessment is made by the five complexes at the end of each time period during which ground zero has occurred.

  20. Program to Optimize Simulated Trajectories (POST). Volume 3: Programmer's manual

    NASA Technical Reports Server (NTRS)

    Brauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.

    1975-01-01

    Information pertinent to the programmer and relating to the program to optimize simulated trajectories (POST) is presented. Topics discussed include: program structure and logic, subroutine listings and flow charts, and internal FORTRAN symbols. The POST core requirements are summarized along with program macrologic.

  1. Survey of Models/Simulations at RADC. Volume 1.

    DTIC Science & Technology

    1982-11-01

    ... developers with a laboratory tool to assist them in developing and testing countermeasure concepts and equipment to be used against enemy C3 systems ... development, operational enhancement, configuration, and/or reconfiguration. C3SAM is a tool designed to enable individuals and groups to define

  2. Large Eddy Simulation of Reacting Multiphase Flows in Complex Combustor Geometries

    NASA Astrophysics Data System (ADS)

    Apte, S.; Mahesh, K.; Iaccarino, G.; Constantinescu, G.; Ham, F.; Moin, P.

    2003-11-01

    We have developed a massively parallel computational tool (CDP) for large-eddy simulation (LES) of reacting multiphase flows in complex combustor geometries. A co-located, finite-volume scheme on unstructured grids is used to solve the low-Mach-number equations for the gaseous phase. The liquid phase is modeled by tracking a large number of computational particles in a Lagrangian framework, with models for inter-phase mass, momentum, and energy transport. Complex physical phenomena of liquid atomization, droplet deformation, drag, and evaporation are captured using advanced subgrid models. A flamelet/progress variable approach by Pierce & Moin (2001) is used to compute non-premixed turbulent combustion. A series of validation studies in coaxial and realistic gas-turbine combustor geometries are performed to test the predictive capability of the solver. Specifically, simulations of non-premixed combustion, particle-laden swirling flows, droplet vaporization in coaxial-jet combustors, and spray breakup in realistic injectors are performed, and good agreement with available experimental data is obtained. This tool is now being used to perform simulations of turbulent spray flames in a realistic Pratt & Whitney gas-turbine combustion chamber using Department of Energy computational resources under the Accelerated Strategic Computing Initiative (ASCI) project.
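
    The Lagrangian particle tracking described above rests on a point-particle momentum equation. The sketch below shows a simple version of this update, assuming pure Stokes drag; production spray solvers like the one described add finite-Reynolds-number corrections, and all names and values here are illustrative.

```python
# A minimal sketch of a Lagrangian point-particle momentum update with
# Stokes drag, dv/dt = (u - v) / tau_p, tau_p = rho_p d^2 / (18 mu).
# Names and values are illustrative; production solvers add corrections.
import numpy as np

def advance_droplet(v_p, u_gas, d, rho_p, mu_gas, dt):
    """One explicit-Euler step of the droplet momentum equation."""
    tau_p = rho_p * d**2 / (18.0 * mu_gas)   # particle response time (s)
    return v_p + dt * (u_gas - v_p) / tau_p

v = np.zeros(3)                              # droplet initially at rest
u = np.array([10.0, 0.0, 0.0])               # local gas velocity (m/s)
for _ in range(100):                         # integrate 10 ms
    v = advance_droplet(v, u, d=50e-6, rho_p=800.0, mu_gas=1.8e-5, dt=1e-4)
print("droplet velocity after 10 ms:", v)
```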

  3. Evaluation of the pressure-volume-temperature (PVT) data of water from experiments and molecular simulations since 1990

    NASA Astrophysics Data System (ADS)

    Guo, Tao; Hu, Jiawen; Mao, Shide; Zhang, Zhigang

    2015-08-01

    Since 1990, many groups of pressure-volume-temperature (PVT) data from experiments and molecular dynamics (MD) or Monte Carlo (MC) simulations have been reported for supercritical and subcritical water. In this work, fifteen groups of PVT data (253.15-4356 K and 0-90.5 GPa) are evaluated in detail with the aid of the highly accurate IAPWS-95 formulation. The evaluation gives the following results: (1) Six datasets are found to be of good accuracy. They include the simulated results based on the SPCE potential above 100 MPa and those derived from sound velocity measurements, but the simulated results below 100 MPa have large uncertainties. (2) The data from measurements with a piston-cylinder apparatus and simulations with an exp-6 potential contain large uncertainties and systematic deviations. (3) The other seven datasets show obvious systematic deviations. They include those from experiments with synthesized fluid inclusion techniques (three groups), measured velocities of sound (one group), and an automated high-pressure dilatometer (one group), and simulations with the TIP4P potential (two groups), where the simulated data based on the TIP4P potential below 200 MPa have large uncertainties. (4) The simulated data, except those below 1 GPa, agree with each other within 2-3%, and mostly within 2%. The data from fluid inclusions show similar systematic deviations, which are less than 2-5%. The data obtained with the automated high-pressure dilatometer and those derived from sound velocity measurements agree with each other within 0.3-0.6% in most cases, except for those above 10 GPa. In principle, the systematic deviations mentioned above, except for those of the simulated data below 1 GPa, can be largely eliminated or significantly reduced by appropriate corrections, and then the accuracy of the relevant data can be improved significantly. These results are very important for the improvement of experiments or simulations and the refinement and correct use of the PVT data in developing
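
    A minimal sketch of the kind of relative-deviation comparison used in such evaluations, with made-up numbers standing in for a dataset and the IAPWS-95 reference values:

```python
# A minimal sketch of a relative-deviation comparison against a reference
# equation of state; the numbers stand in for a dataset and IAPWS-95 values.
import numpy as np

def percent_deviation(v_data, v_ref):
    """100 * (V_data - V_ref) / V_ref, point by point."""
    v_data, v_ref = np.asarray(v_data), np.asarray(v_ref)
    return 100.0 * (v_data - v_ref) / v_ref

dev = percent_deviation([1.002, 0.981, 1.015], [1.000, 1.000, 1.000])
print("deviations (%):", np.round(dev, 2), "| max:", np.max(np.abs(dev)), "%")
```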

  4. Prospects of the search for neutrino bursts from supernovae with Baksan large volume scintillation detector

    NASA Astrophysics Data System (ADS)

    Petkov, V. B.

    2016-11-01

    Observing a high-statistics neutrino signal from supernova explosions in the Galaxy is a major goal of low-energy neutrino astronomy. The prospects for detecting all flavors of neutrinos and antineutrinos from a core-collapse supernova (ccSN) in operating and forthcoming large liquid scintillation detectors (LLSDs) are widely discussed now. One of the proposed LLSDs is the Baksan Large Volume Scintillation Detector (BLVSD). This detector will be installed at the Baksan Neutrino Observatory (BNO) of the Institute for Nuclear Research, Russian Academy of Sciences, at a depth of 4800 m.w.e. Low-energy neutrino astronomy is one of the main lines of research of the BLVSD.

  5. Conference on physics from large gamma-ray detec tor arrays. Volume 2: Proceedings

    NASA Astrophysics Data System (ADS)

    The conference on 'Physics from Large gamma-ray Detector Arrays' is a continuation of the series of conferences that have been organized every two years by the North American Heavy-ion Laboratories. The aim of the conference this year was to encourage discussion of the physics that can be studied with such large arrays. This volume is the collected proceedings from this conference. It discusses properties of nuclear states which can be created in heavy-ion reactions, and which can be observed via such detector systems.

  6. Large eddy simulation subgrid model for soot prediction

    NASA Astrophysics Data System (ADS)

    El-Asrag, Hossam Abd El-Raouf Mostafa

    Soot prediction in realistic systems is one of the most challenging problems in theoretical and applied combustion. Soot formation as a chemical process is very complicated and not fully understood. The major difficulty stems from the chemical complexity of the soot formation process as well as its strong coupling with the other thermochemical and fluid processes that occur simultaneously. Soot is a major byproduct of incomplete combustion, having a strong impact on the environment as well as on combustion efficiency. Therefore, innovative methods are needed to predict soot in realistic configurations in an accurate and yet computationally efficient way. In the current study, a new soot formation subgrid model is developed and reported here. The new model is designed to be used within the context of the Large Eddy Simulation (LES) framework, combined with Linear Eddy Mixing (LEM) as a subgrid combustion model. The final model can be applied equally to premixed and non-premixed flames over any required geometry and flow conditions in the free, transition, and continuum regimes. The soot dynamics are predicted using a Method of Moments approach with Lagrangian Interpolative Closure (MOMIC) for the fractional moments. Since no prior knowledge of the particle distribution is required, the model is generally applicable. The current model accounts for the basic soot transport phenomena, such as transport by molecular diffusion and thermophoretic forces. The model is first validated against experimental results for non-sooting swirling non-premixed and partially premixed flames. Next, a set of canonical premixed sooting flames are simulated, where the effects of turbulence, binary diffusivity, and C/O ratio on soot formation are studied. Finally, the model is validated against a non-premixed jet sooting flame. The effect of the flame structure on the different soot formation stages as well as the particle size distribution is described. Good results are predicted with
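
    The interpolative closure for fractional moments (MOMIC) named above can be summarized in a few lines: whole-order moments are known, and a fractional moment is obtained by polynomial interpolation in log space. A minimal sketch with an illustrative moment set, not values from the soot model itself:

```python
# A minimal sketch of interpolative closure for fractional moments:
# log(M_p) is interpolated from the whole-order moments M_0..M_k.
# The moment values are illustrative, not from the soot model above.
import numpy as np

def fractional_moment(M_whole, p):
    """Interpolate the fractional moment M_p in log space."""
    orders = np.arange(len(M_whole))
    poly = np.polyfit(orders, np.log(M_whole), deg=len(M_whole) - 1)
    return np.exp(np.polyval(poly, p))

M = [1.0, 2.0, 8.0, 64.0]                 # M_0..M_3 of a particle ensemble
print("M_1/2 ~", fractional_moment(M, 0.5))
```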

  7. Film cooling from inclined cylindrical holes using large eddy simulations

    NASA Astrophysics Data System (ADS)

    Peet, Yulia V.

    2006-12-01

    The goal of the present study is to investigate numerically the physics of the flow that occurs during film cooling from inclined cylindrical holes. Film cooling is a technique used in the gas turbine industry to reduce heat fluxes to the turbine blade surface. Large eddy simulation (LES) is performed modeling a realistic film cooling configuration, which consists of a large stagnation-type reservoir feeding an array of discrete cooling holes (film holes) flowing into a flat-plate turbulent boundary layer. A special computational methodology is developed for this problem, involving coupled simulations using multiple computational codes. A fully compressible LES code is used in the area above the flat plate, while a low-Mach-number LES code is employed in the plenum and film holes. The motivation for using different codes comes from the essential difference in the nature of the flow in these different regions. The flowfield is analyzed inside the plenum, film hole, and crossflow region. Flow inside the plenum is stagnating, except for the region close to the exit, where it accelerates rapidly to turn into the hole. The sharp radius of turning at the trailing edge of the plenum-pipe connection causes the flow to separate from the downstream wall of the film hole. After coolant injection occurs, a complex flowfield is formed, consisting of coherent vortical structures responsible for bringing hot crossflow fluid into contact with the walls of either the film hole or the blade, thus reducing cooling protection. Mean velocity and turbulent statistics are compared to experimental measurements, yielding good agreement for the mean flowfield and satisfactory agreement for the turbulence quantities. LES results are used to assess the applicability of basic assumptions of conventional eddy-viscosity turbulence models used with the Reynolds-averaged (RANS) approach, namely the isotropy of the eddy viscosity and thermal diffusivity. It is shown here that these assumptions do not hold
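
    The wall protection discussed above is conventionally quantified by the adiabatic film-cooling effectiveness. A minimal sketch of this standard definition, with made-up temperatures rather than values from the study:

```python
# A minimal sketch of the adiabatic film-cooling effectiveness,
# eta = (T_inf - T_aw) / (T_inf - T_c); the temperatures are made up.
def film_effectiveness(T_inf, T_aw, T_c):
    """T_inf: hot gas, T_aw: adiabatic wall, T_c: coolant (all in K)."""
    return (T_inf - T_aw) / (T_inf - T_c)

print(film_effectiveness(T_inf=1600.0, T_aw=900.0, T_c=600.0))  # -> 0.7
```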

  8. Large Eddy Simulation of Vertical Axis Wind Turbines

    NASA Astrophysics Data System (ADS)

    Hezaveh, Seyed Hossein

    Due to several design advantages and operational characteristics, particularly in offshore farms, vertical axis wind turbines (VAWTs) are being reconsidered as a complementary technology to horizontal axis wind turbines (HAWTs). However, considerable gaps remain in our understanding of VAWT performance, since they have been significantly less studied than HAWTs. This thesis examines the performance of isolated VAWTs based on different design parameters and evaluates their characteristics in large wind farms. An actuator line model (ALM) is implemented in an atmospheric boundary layer large eddy simulation (LES) code, with offline coupling to a high-resolution blade-scale unsteady Reynolds-averaged Navier-Stokes (URANS) model. The LES captures the turbine-to-farm-scale dynamics, while the URANS captures the blade-to-turbine-scale flow. The simulation results are found to be in good agreement with existing experimental datasets. Subsequently, a parametric study of the flow over an isolated VAWT is carried out by varying solidities, height-to-diameter aspect ratios, and tip speed ratios. The analyses of the wake area and power deficits yield an improved understanding of the evolution of VAWT wakes, which in turn enables a more informed selection of turbine designs for wind farms. One of the most important advantages of VAWTs compared to HAWTs is their potential for synergistic interactions that increase their performance when placed in close proximity. Field experiments have confirmed that, unlike HAWTs, VAWTs can enhance and increase total power production when placed near each other. Based on these experiments and using ALM-LES, we also present and test new approaches for VAWT farm configuration. We first design clusters with three turbines, then configure farms consisting of clusters of VAWTs rather than individual turbines. The results confirm that by using a cluster design, the average power density of wind farms can be increased by as much as 60% relative to regular
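
    Two non-dimensional quantities frame such parametric turbine studies: the tip-speed ratio and the power coefficient. A minimal sketch of these standard definitions, with illustrative numbers rather than the thesis data:

```python
# A minimal sketch of the two standard non-dimensional groups, with
# illustrative numbers; names and values are assumptions, not thesis data.
def tip_speed_ratio(omega, R, U):
    """lambda = omega * R / U (blade speed over wind speed)."""
    return omega * R / U

def power_coefficient(P, rho, A, U):
    """Cp = P / (0.5 * rho * A * U^3), fraction of wind power captured."""
    return P / (0.5 * rho * A * U**3)

U, R, H = 8.0, 1.2, 2.4          # wind speed (m/s), rotor radius and height (m)
A = 2.0 * R * H                  # VAWT frontal area = diameter x height
print("lambda =", tip_speed_ratio(omega=20.0, R=R, U=U))       # -> 3.0
print("Cp =", round(power_coefficient(P=600.0, rho=1.225, A=A, U=U), 2))
```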

  9. Silt motion simulation using finite volume particle method

    NASA Astrophysics Data System (ADS)

    Jahanbakhsh, E.; Vessaz, C.; Avellan, F.

    2014-03-01

    In this paper, we present a 3-D FVPM which features rectangular top-hat kernels. With this method, interaction vectors are computed exactly and efficiently. We introduce a new method to enforce the no-slip boundary condition. With this boundary enforcement, the interaction forces between fluid and wall are computed accurately. We employ the boundary force to predict the motion of rigid spherical silt particles inside the fluid. To validate the model, we simulate the 2-D sedimentation of a single particle in a viscous fluid tank and compare the results with benchmark data. The particle resolution is verified by a convergence study. We also simulate the sedimentation of two particles exhibiting drafting, kissing, and tumbling phenomena in 2-D and 3-D. We compare the results with other numerical solutions.
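
    A useful analytic check on single-particle sedimentation benchmarks like the one above is the Stokes terminal settling velocity, valid at low particle Reynolds number. A minimal sketch with illustrative silt-in-water values, not the paper's benchmark parameters:

```python
# A minimal sketch of the Stokes terminal settling velocity,
# v_t = (rho_p - rho_f) * g * d^2 / (18 mu), valid at low particle
# Reynolds number; the silt-in-water values are illustrative.
def stokes_settling_velocity(d, rho_p, rho_f, mu, g=9.81):
    """All arguments in SI units; returns v_t in m/s."""
    return (rho_p - rho_f) * g * d**2 / (18.0 * mu)

v_t = stokes_settling_velocity(d=100e-6, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3)
print(f"terminal velocity ~ {v_t * 1000:.1f} mm/s")   # ~9 mm/s
```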

  10. Shuttle vehicle and mission simulation requirements report, volume 1

    NASA Technical Reports Server (NTRS)

    Burke, J. F.

    1972-01-01

    The requirements for the space shuttle vehicle and mission simulation are developed to analyze the systems, mission, operations, and interfaces. The requirements are developed according to the following subject areas: (1) mission envelope, (2) orbit flight dynamics, (3) shuttle vehicle systems, (4) external interfaces, (5) crew procedures, (6) crew station, (7) visual cues, and (8) aural cues. Line drawings and diagrams of the space shuttle are included to explain the various systems and components.

  11. WEST-3 wind turbine simulator development. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    Sridhar, S.

    1985-01-01

    This report is a summary description of WEST-3, a new real-time wind turbine simulator developed by Paragon Pacific Inc. WEST-3 is an all-digital, fully programmable, high performance parallel processing computer. Contained in the report are descriptions of the WEST-3 hardware and software. WEST-3 consists of a network of Computational Units (CUs) working in parallel. Each CU is a custom designed high speed digital processor operating independently of other CUs. The CU, which is the main building block of the system, is described in some detail. A major contributor to the high performance of the system is the use of a unique method for transferring data among the CUs. The software aspects of WEST-3 covered in the report include the preparation of the simulation model (reformulation, scaling and normalization), and the use of the system software (Translator, Linker, Assembler and Loader). Also given is a description of the wind turbine simulation model used in WEST-3, and some sample results from a study conducted to validate the system. Finally, efforts currently underway to enhance the user friendliness of the system are outlined; these include the 32-bit floating point capability, and major improvements in system software.

  12. Large-volume paracentesis with indwelling peritoneal catheter and albumin infusion: a community hospital study

    PubMed Central

    Martin, Daniel K.; Walayat, Saqib; Jinma, Ren; Ahmed, Zohair; Ragunathan, Karthik; Dhillon, Sonu

    2016-01-01

    Background The management of ascites can be problematic. This is especially true in patients with diuretic refractory ascites who develop a tense abdomen. This often results in hypotension and decreased venous return with resulting renal failure. In this paper, we further examine the risks and benefits of utilizing an indwelling peritoneal catheter to remove large-volume ascites over a 72-h period while maintaining intravascular volume and preventing renal failure. Methods We retrospectively reviewed charts and identified 36 consecutive patients undergoing continuous large-volume paracentesis with an indwelling peritoneal catheter. At the time of drain placement, no patients had signs or laboratory parameters suggestive of spontaneous bacterial peritonitis. The patients underwent ascitic fluid removal through an indwelling peritoneal catheter and were supported with scheduled albumin throughout the duration. The catheter was used to remove up to 3 L every 8 h for a maximum of 72 h. Regular laboratory and ascitic fluid testing was performed. All patients had a clinical follow-up within 3 months after the drain placement. Results An average of 16.5 L was removed over the 72-h time frame of indwelling peritoneal catheter maintenance. The albumin infusion utilized correlated to 12 mg/L removed. The average creatinine trend improved in a statistically significant manner from 1.37 on the day of admission to 1.21 on the day of drain removal. No patients developed renal failure during the hospital course. There were no documented episodes of neutrocytic ascites or bacterial peritonitis throughout the study review. Conclusion Large-volume peritoneal drainage with an indwelling peritoneal catheter is safe and effective for patients with tense ascites. Concomitant albumin infusion allows for maintenance of renal function, and no increase in infectious complications was noted. PMID:27802853

  13. Cerebrospinal fluid volume replacement following large endoscopic anterior cranial base resection.

    PubMed

    Blount, Angela; Riley, Kristen; Cure, Joel; Woodworth, Bradford A

    2012-01-01

Large endoscopic skull-base resections often result in extensive postoperative pneumocephalus secondary to copious evacuation of cerebrospinal fluid (CSF) during the procedures. Replacing CSF lost during craniotomy with saline is a common technique in neurosurgery, but is difficult after extensive transnasal resection of the anterior cranial base because direct transnasal CSF augmentation will escape until the skull base reconstruction is sealed. The present study evaluated the effectiveness of intraoperative CSF volume replacement via lumbar drains on improving postoperative outcomes. Ten large endoscopic anterior skull-base resections (>2.5 cm) were performed from 2008 to 2011. Sellar, parasellar, and transplanum resections were excluded. Etiologies included esthesioneuroblastoma (2), squamous cell carcinoma (2), intracranial dermoid (2), adenocarcinoma (1), adenoid cystic carcinoma (1), melanoma (1), and meningioma (1). Six patients were administered preservative-free normal saline via lumbar drain during skull-base reconstruction. Data collected included volume of postoperative pneumocephalus, intravenous pain medicine requirements 24 hours after surgery, and length of hospital stay. Volume of pneumocephalus (4.78 cm³ vs 12.8 cm³, p = 0.04) and length of hospital stay (2.17 days vs 8.5 days, p = 0.03) were significantly decreased in the normal saline volume replacement group. Average intravenous pain medication requirements were reduced in the first 24 hours postoperatively (8 mg morphine vs 14 mg morphine, p = 0.25), but did not reach statistical significance. Evacuation of intracranial air by transthecal administration of saline during reconstruction of large anterior cranial base defects was an effective technique to decrease postoperative pneumocephalus and length of hospital stay. Further evaluation is warranted. Copyright © 2012 American Rhinologic Society-American Academy of Otolaryngic Allergy, LLC.

  14. Enhanced FIB-SEM systems for large-volume 3D imaging

    PubMed Central

    Xu, C Shan; Hayworth, Kenneth J; Lu, Zhiyuan; Grob, Patricia; Hassan, Ahmed M; García-Cerdán, José G; Niyogi, Krishna K; Nogales, Eva; Weinberg, Richard J; Hess, Harald F

    2017-01-01

Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) can automatically generate 3D images with superior z-axis resolution, yielding data that needs minimal image registration and related post-processing. Obstacles blocking wider adoption of FIB-SEM include slow imaging speed and lack of long-term system stability, which caps the maximum possible acquisition volume. Here, we present techniques that accelerate image acquisition while greatly improving FIB-SEM reliability, allowing the system to operate for months and generating continuously imaged volumes > 10⁶ µm³. These volumes are large enough for connectomics, where the excellent z resolution can help in tracing of small neuronal processes and accelerate the tedious and time-consuming human proofreading effort. Even higher resolution can be achieved on smaller volumes. We present example data sets from mammalian neural tissue, Drosophila brain, and Chlamydomonas reinhardtii to illustrate the power of this novel high-resolution technique to address questions in both connectomics and cell biology. DOI: http://dx.doi.org/10.7554/eLife.25916.001 PMID:28500755

  15. Enhanced FIB-SEM systems for large-volume 3D imaging

    DOE PAGES

    Xu, C. Shan; Hayworth, Kenneth J.; Lu, Zhiyuan; ...

    2017-05-13

Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) can automatically generate 3D images with superior z-axis resolution, yielding data that needs minimal image registration and related post-processing. Obstacles blocking wider adoption of FIB-SEM include slow imaging speed and lack of long-term system stability, which caps the maximum possible acquisition volume. Here, we present techniques that accelerate image acquisition while greatly improving FIB-SEM reliability, allowing the system to operate for months and generating continuously imaged volumes > 10⁶ µm³. These volumes are large enough for connectomics, where the excellent z resolution can help in tracing of small neuronal processes and accelerate the tedious and time-consuming human proofreading effort. Even higher resolution can be achieved on smaller volumes. We present example data sets from mammalian neural tissue, Drosophila brain, and Chlamydomonas reinhardtii to illustrate the power of this novel high-resolution technique to address questions in both connectomics and cell biology.

  16. The effect of fluid density and volume on the accuracy of test weighing in a simulated oral feeding situation.

    PubMed

    Dowling, Donna A; Madigan, Elizabeth; Siripul, Pulsuk

    2004-06-01

For preterm infants and infants who have difficulty with oral feeding, excessive drooling during oral feedings can result in inaccurate assessment of intake. The drooled volume is typically estimated by visual and tactile assessment of the bib. Research, however, has demonstrated that visual assessment is inaccurate. The purpose of this study was to determine the accuracy of a scale that was used for the test weighing of milk that was drooled during a study of oral feeding for preterm infants. Additionally, the effect of weighing solutions with different densities on the accuracy of test weights was examined. Descriptive, comparative design. A simulated feeding situation was performed using 3 fluids (water, Enfamil 20, and Enfamil 24) and 3 volume ranges (1 mL to 10 mL, 11 mL to 20 mL, and 21 mL to 30 mL). Data collection sessions were conducted for each of the 3 fluids using each range of volumes, for a total of 180 test weights. The research assistant performing the test weights was blinded to the preweight of the bib and the amount of fluid being applied to the bib. Differences between the actual volume applied to the bib and the volume estimated by the scale were very small, with 51% of the differences equaling 0 mL and 48% of the differences between -1 mL and 1 mL. There were significant differences in errors related to both the type of fluid (F = 25.7; df = 2; P < 0.001) and volume range (F = 12.7; df = 2; P < 0.001), as well as for the interaction between the 2 factors (F = 7.02; df = 4; P < 0.001). Water had significantly less mean error than either formula, and large volumes had significantly greater mean error than either small or medium volumes. Test weighing is an accurate method for measuring fluids of different densities and volumes in a simulation of drooling during oral feeding. The increased error with larger volumes of higher density solutions was not clinically significant. The study supports the need to consider both the accuracy of the scale

  17. A family of dynamic models for large-eddy simulation

    NASA Technical Reports Server (NTRS)

    Carati, D.; Jansen, K.; Lund, T.

    1995-01-01

    Since its first application, the dynamic procedure has been recognized as an effective means to compute rather than prescribe the unknown coefficients that appear in a subgrid-scale model for Large-Eddy Simulation (LES). The dynamic procedure is usually used to determine the nondimensional coefficient in the Smagorinsky (1963) model. In reality the procedure is quite general and it is not limited to the Smagorinsky model by any theoretical or practical constraints. The purpose of this note is to consider a generalized family of dynamic eddy viscosity models that do not necessarily rely on the local equilibrium assumption built into the Smagorinsky model. By invoking an inertial range assumption, it will be shown that the coefficients in the new models need not be nondimensional. This additional degree of freedom allows the use of models that are scaled on traditionally unknown quantities such as the dissipation rate. In certain cases, the dynamic models with dimensional coefficients are simpler to implement, and allow for a 30% reduction in the number of required filtering operations.
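
    As a concrete illustration of the dynamic procedure described above, the sketch below evaluates the standard least-squares (Germano-Lilly) coefficient C = <L_ij M_ij>/<M_ij M_ij> on a synthetic periodic velocity field. The box test filter, the random field, and all function names are illustrative assumptions for this note, not code from the paper.

```python
# Hedged sketch: least-squares evaluation of a dynamic model coefficient
# (Germano-Lilly procedure) on a synthetic velocity field.
import numpy as np
from scipy.ndimage import uniform_filter

def box_filter(f, width=2):
    """Top-hat test filter of the stated width (grid cells), periodic."""
    return uniform_filter(f, size=width, mode='wrap')

def strain_rate(u, v, w, dx):
    """Symmetric strain-rate tensor S_ij of a periodic velocity field."""
    grads = [np.gradient(c, dx) for c in (u, v, w)]  # grads[i][j] = du_i/dx_j
    S = np.empty((3, 3) + u.shape)
    for i in range(3):
        for j in range(3):
            S[i, j] = 0.5 * (grads[i][j] + grads[j][i])
    return S

def dynamic_coefficient(u, v, w, dx, alpha=2.0):
    """Volume-averaged dynamic coefficient C = <L:M> / <M:M>."""
    vel = (u, v, w)
    S = strain_rate(u, v, w, dx)
    Smag = np.sqrt(2.0 * np.einsum('ij...,ij...', S, S))
    velf = tuple(box_filter(c) for c in vel)
    Sf = strain_rate(*velf, dx)
    Sfmag = np.sqrt(2.0 * np.einsum('ij...,ij...', Sf, Sf))
    num = den = 0.0
    for i in range(3):
        for j in range(3):
            # Leonard (resolved) stress from the Germano identity
            L = box_filter(vel[i] * vel[j]) - velf[i] * velf[j]
            # model-term difference between grid and test-filter levels
            M = (dx ** 2) * (box_filter(Smag * S[i, j])
                             - alpha ** 2 * Sfmag * Sf[i, j])
            num += np.mean(L * M)
            den += np.mean(M * M)
    return num / den

rng = np.random.default_rng(0)
n, dx = 32, 1.0 / 32
u, v, w = (rng.standard_normal((n, n, n)) for _ in range(3))
print("dynamic coefficient estimate:", dynamic_coefficient(u, v, w, dx))
```

    The note's generalization keeps this same least-squares machinery but allows the coefficient to carry dimensions, e.g. by scaling the model term on the dissipation rate instead of Δ²|S|.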

  18. On the Computation of Sound by Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Piomelli, Ugo; Streett, Craig L.; Sarkar, Sutanu

    1997-01-01

The effect of the small scales on the source term in Lighthill's acoustic analogy is investigated, with the objective of determining the accuracy of large-eddy simulations when applied to studies of flow-generated sound. The distribution of the turbulent quadrupole is predicted accurately, if models that take into account the trace of the SGS stresses are used. Its spatial distribution is also correct, indicating that the low-wave-number (or frequency) part of the sound spectrum can be predicted well by LES. Filtering, however, removes the small-scale fluctuations that contribute significantly to the higher derivatives in space and time of Lighthill's stress tensor T_ij. The rms fluctuations of the filtered derivatives are substantially lower than those of the unfiltered quantities. The small scales, however, are not strongly correlated, and are not expected to contribute significantly to the far-field sound; separate modeling of the subgrid-scale density fluctuations might, however, be required in some configurations.
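
    The abstract's central observation, that filtering suppresses the high-derivative content of Lighthill's stress tensor, can be reproduced qualitatively in a few lines. The sketch below computes the double divergence of the momentum-flux part of T_ij for a synthetic 2D field before and after filtering; the field and the filter width are invented for illustration.

```python
# Hedged illustration: filtering damps the high-derivative content of
# Lighthill's source term. Synthetic 2D multi-scale field, not real data.
import numpy as np
from scipy.ndimage import gaussian_filter

n, dx = 256, 2 * np.pi / 256
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x, indexing='ij')
rng = np.random.default_rng(1)
# multi-scale synthetic velocity components (stand-in for turbulence)
u = sum(np.sin(k * X + rng.uniform(0, 2 * np.pi)) / k for k in range(1, 40))
v = sum(np.cos(k * Y + rng.uniform(0, 2 * np.pi)) / k for k in range(1, 40))

def lighthill_source(u, v, rho=1.0):
    """s = d2(rho*u_i*u_j)/dx_i dx_j, momentum-flux part of T_ij only."""
    T = [[rho * u * u, rho * u * v], [rho * v * u, rho * v * v]]
    s = np.zeros_like(u)
    for i in range(2):
        for j in range(2):
            s += np.gradient(np.gradient(T[i][j], dx, axis=i), dx, axis=j)
    return s

s_full = lighthill_source(u, v)
s_les = lighthill_source(gaussian_filter(u, 4), gaussian_filter(v, 4))
print("rms source, unfiltered:", s_full.std())
print("rms source, filtered:  ", s_les.std())  # noticeably smaller
```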

  19. Large eddy simulations of blood dynamics in abdominal aortic aneurysms.

    PubMed

    Vergara, Christian; Le Van, Davide; Quadrio, Maurizio; Formaggia, Luca; Domanin, Maurizio

    2017-09-01

    We study the effects of transition to turbulence in abdominal aortic aneurysms (AAA). The presence of transitional effects in such districts is related to the heart pulsatility and the sudden change of diameter of the vessels, and has been recorded by means of clinical measures as well as of computational studies. Here we propose, for the first time, the use of a large eddy simulation (LES) model to accurately describe transition to turbulence in realistic scenarios of AAA obtained from radiological images. To this aim, we post-process the obtained numerical solutions to assess significant quantities, such as the ensemble-averaged velocity and wall shear stress, the standard deviation of the fluctuating velocity field, and vortical structures educed via the so-called Q-criterion. The results demonstrate the suitability of the considered LES model and show the presence of significant transitional effects around the impingement region during the mid-deceleration phase. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  20. Dynamically stable implosions in a large simulation dataset

    NASA Astrophysics Data System (ADS)

    Peterson, J. Luc; Field, John; Humbird, Kelli; Brandon, Scott; Langer, Steve; Nora, Ryan; Spears, Brian

    2016-10-01

    Asymmetric implosion drive can severely impact the performance of inertial confinement fusion capsules. In particular the time-varying radiation environment produced in near-vacuum hohlraum experiments at the National Ignition Facility is thought to limit the conversion efficiency of shell kinetic energy into hotspot internal energy. To investigate the role of dynamic asymmetries in implosion behavior we have created a large database of 2D capsule implosions of varying drive amplitude, drive asymmetry and capsule gas fill that spans 13 dimensions and consists of over 60,000 individual simulations. A novel in-transit analysis scheme allowed for the real-time processing of petabytes of raw data into hundreds of terabytes of physical metrics and synthetic images, and supervised learning algorithms identified regions of parameter space that robustly produce high yield. We will discuss the first results from this dataset and explore the dynamics of implosions that produce significant yield under asymmetric drives. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, Lawrence Livermore National Security, LLC. LLNL-ABS-697262.

  1. Saturn: A large area x-ray simulation accelerator

    SciTech Connect

    Bloomquist, D.D.; Stinnett, R.W.; McDaniel, D.H.; Lee, J.R.; Sharpe, A.W.; Halbleib, J.A.; Schlitt, L.G.; Spence, P.W.; Corcoran, P.

    1987-01-01

Saturn is the result of a major metamorphosis of the Particle Beam Fusion Accelerator-I (PBFA-I) from an ICF research facility to the large-area x-ray source of the Simulation Technology Laboratory (STL) project. Renamed Saturn, for its unique multiple-ring diode design, the facility is designed to take advantage of the numerous advances in pulsed power technology made by the ICF program in recent years and much of the existing PBFA-I support system. Saturn will include significant upgrades in the energy storage and pulse-forming sections. The 36 magnetically insulated transmission lines (MITLs) that provided power flow to the ion diode of PBFA-I were replaced by a system of vertical triplate water transmission lines. These lines are connected to three horizontal triplate disks in a water convolute section. Power will flow through an insulator stack into radial MITLs that drive the three-ring diode. Saturn is designed to operate with a maximum of 750 kJ coupled to the three-ring e-beam diode with a peak power of 25 TW to provide an x-ray exposure capability of 5 × 10¹² rads/s (Si) and 5 cal/g (Au) over 500 cm².

  2. Final Report: "Large-Eddy Simulation of Anisotropic MHD Turbulence"

    SciTech Connect

    Zikanov, Oleg

    2008-06-23

    To acquire better understanding of turbulence in flows of liquid metals and other electrically conducting fluids in the presence of steady magnetic fields and to develop an accurate and physically adequate LES (large-eddy simulation) model for such flows. The scientific objectives formulated in the project proposal have been fully completed. Several new directions were initiated and advanced in the course of work. Particular achievements include a detailed study of transformation of turbulence caused by the imposed magnetic field, development of an LES model that accurately reproduces this transformation, and solution of several fundamental questions of the interaction between the magnetic field and fluid flows. Eight papers have been published in respected peer-reviewed journals, with two more papers currently undergoing review, and one in preparation for submission. A post-doctoral researcher and a graduate student have been trained in the areas of MHD, turbulence research, and computational methods. Close collaboration ties have been established with the MHD research centers in Germany and Belgium.

  3. Unphysical scalar excursions in large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Matheou, Georgios; Dimotakis, Paul

    2016-11-01

    The range of physically realizable values of passive scalar fields in any flow is bounded by their boundary values. The current investigation focuses on the local conservation of passive scalar concentration fields in turbulent flows and the ability of the large-eddy simulation (LES) method to observe the boundedness of passive scalar concentrations. In practice, as a result of numerical artifacts, this fundamental constraint is often violated with scalars exhibiting unphysical excursions. The present study characterizes passive-scalar excursions in LES of a turbulent shear flow and examines methods for error diagnosis. Typically, scalar-excursion errors are diagnosed as violations of global boundedness, i.e., detecting scalar-concentration values outside boundary/initial condition bounds. To quantify errors in mixed-fluid regions, a local scalar excursion error metric is defined with respect to the local non-diffusive limit. Analysis of such errors shows that unphysical scalar excursions in LES result from dispersive errors of the convection-term discretization where the subgrid-scale model (SGS) provides insufficient dissipation to produce a sufficiently smooth scalar field. Local scalar excursion errors are found not to be correlated with the local scalar-gradient magnitude. This work is supported by AFOSR, DOE, and Caltech.
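
    A minimal version of the global-boundedness diagnostic mentioned above, assuming a scalar bounded by [0, 1]: it reports the fraction of cells outside the bounds and the worst overshoot. The paper's local metric, defined relative to the non-diffusive limit in mixed-fluid regions, requires more context and is not reproduced here.

```python
# Hedged sketch of a global scalar-boundedness check for an LES field.
import numpy as np

def excursion_stats(c, lo=0.0, hi=1.0):
    """Fraction of cells outside [lo, hi] and the worst overshoot."""
    frac = np.count_nonzero((c < lo) | (c > hi)) / c.size
    worst = max(float((c - hi).max()), float((lo - c).max()), 0.0)
    return frac, worst

rng = np.random.default_rng(2)
c = rng.normal(0.5, 0.25, (64, 64, 64))   # stand-in LES scalar field
print("violation fraction, worst excursion:", excursion_stats(c))
```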

  4. Large Eddy Simulation of Turbulent Flow in a Ribbed Pipe

    NASA Astrophysics Data System (ADS)

    Kang, Changwoo; Yang, Kyung-Soo

    2011-11-01

    Turbulent flow in a pipe with periodically wall-mounted ribs has been investigated by large eddy simulation with a dynamic subgrid-scale model. The value of Re considered is 98,000, based on hydraulic diameter and mean bulk velocity. An immersed boundary method was employed to implement the ribs in the computational domain. The spacing of the ribs is the key parameter to produce the d-type, intermediate and k-type roughness flows. The mean velocity profiles and turbulent intensities obtained from the present LES are in good agreement with the experimental measurements currently available. Turbulence statistics, including budgets of the Reynolds stresses, were computed, and analyzed to elucidate turbulence structures, especially around the ribs. In particular, effects of the ribs are identified by comparing the turbulence structures with those of smooth pipe flow. The present investigation is relevant to the erosion/corrosion that often occurs around a protruding roughness in a pipe system. This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0008457).

  5. Simulation of fatigue crack growth under large scale yielding conditions

    NASA Astrophysics Data System (ADS)

    Schweizer, Christoph; Seifert, Thomas; Riedel, Hermann

    2010-07-01

A simple mechanism-based model for fatigue crack growth assumes a linear correlation between the cyclic crack-tip opening displacement (ΔCTOD) and the crack growth increment (da/dN). The objective of this work is to compare analytical estimates of ΔCTOD with results of numerical calculations under large scale yielding conditions and to verify the physical basis of the model by comparing the predicted and the measured evolution of the crack length in a 10%-chromium steel. The material is described by a rate-independent cyclic plasticity model with power-law hardening and Masing behavior. During the tension-going part of the cycle, nodes at the crack-tip are released such that the crack growth increment corresponds approximately to the crack-tip opening. The finite element analysis performed in ABAQUS is continued until a stabilized value of ΔCTOD is reached. The analytical model contains an interpolation formula for the J-integral, which is generalized to account for cyclic loading and crack closure. Both simulated and estimated ΔCTOD are reasonably consistent. The predicted crack length evolution is found to be in good agreement with the behavior of microcracks observed in a 10%-chromium steel.
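
    A hedged sketch of the growth law the abstract names, da/dN proportional to ΔCTOD, integrated cycle by cycle. The small-scale-yielding estimate ΔJ = ΔK²/E used here stands in for the paper's generalized interpolation formula, and every material parameter is an invented placeholder.

```python
# Hedged sketch of da/dN = beta * dCTOD with dCTOD ~ d_n * dJ / sigma_y.
# All parameter values are illustrative assumptions, not the paper's data.
import math

beta, d_n = 0.1, 0.5          # growth fraction, CTOD factor (assumed)
E, sigma_y = 180e3, 600.0     # MPa: Young's modulus, yield stress (assumed)
d_stress = 400.0              # MPa, applied stress range (assumed)
a, n = 50e-6, 0               # initial crack length (m), cycle counter

while a < 2e-3:
    dK = d_stress * math.sqrt(math.pi * a)   # MPa*sqrt(m), center crack
    dJ = dK ** 2 / E                          # cyclic J estimate, MPa*m
    dctod = d_n * dJ / sigma_y                # cyclic CTOD, m
    a += beta * dctod                         # crack advance this cycle
    n += 1
print(f"cycles to grow 50 um -> 2 mm: {n}")
```

    Because ΔJ scales with a, this law gives exponential growth in cycle count, consistent with the microcrack behavior the abstract describes qualitatively.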

  6. Large-Scale Atomistic Simulations of Material Failure

    DOE Data Explorer

Abraham, Farid [IBM Almaden Research]; Duchaineau, Mark [LLNL]; Wirth, Brian [LLNL]; Seager, Mark [LLNL]; De La Rubia, Diaz [LLNL]

    These simulations from 2000 examine the supersonic propagation of cracks and the formation of complex junction structures in metals. Eight simulations concerning brittle fracture, ductile failure, and shockless compression are available.

  7. Enabling parallel simulation of large-scale HPC network systems

    DOE PAGES

    Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; ...

    2016-04-07

Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations.

  8. Enabling parallel simulation of large-scale HPC network systems

    SciTech Connect

    Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; Carns, Philip

    2016-04-07

    Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations
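
    Underlying frameworks like ROSS is an event-queue core that can be sketched in a few lines. The toy below runs a serial discrete-event simulation of messages hopping around a ring network; the topology, latency, and workload are assumptions for illustration, not CODES models, and the optimistic parallel scheduling that ROSS adds on top is not shown.

```python
# Hedged, minimal serial discrete-event core (the kind of kernel that
# optimistic simulators parallelize). Toy ring topology and workload.
import heapq
import random

N_ROUTERS, HOP_NS = 8, 50
events = []   # (time_ns, seq, router, msg_id, hops_left)
random.seed(0)
for m in range(5):                     # inject 5 messages at random sources
    src = random.randrange(N_ROUTERS)
    hops = random.randint(1, N_ROUTERS - 1)
    heapq.heappush(events, (0, m, src, m, hops))

seq = 5                                # tie-breaker for simultaneous events
while events:
    t, _, r, msg, hops = heapq.heappop(events)
    if hops == 0:
        print(f"msg {msg} delivered at router {r} at t={t} ns")
        continue
    nxt = (r + 1) % N_ROUTERS          # ring routing: forward one hop
    heapq.heappush(events, (t + HOP_NS, seq, nxt, msg, hops - 1))
    seq += 1
```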

  9. Shuttle mission simulator. Volume 2: Requirement report, volume 2, revision C

    NASA Technical Reports Server (NTRS)

    Burke, J. F.

    1973-01-01

The requirements for space shuttle simulation which are discussed include: general requirements, program management, system engineering, design and development, crew stations, on-board computers, and systems integration. For Vol. 1, revision A see N73-22203; for Vol. 2, revision A see N73-22204.

  10. Gain characteristics of large volume CuBr laser active media

    NASA Astrophysics Data System (ADS)

    Gubarev, F. A.; Troitskiy, V. O.; Trigub, M. V.; Sukhanov, V. B.

    2011-05-01

The paper presents the experimental results on how the active additive HBr and the temperatures of the containers with CuBr influence the gain characteristics of large volume (8 cm bore, 90 cm long) CuBr laser active media with external heating of the active zone of the gas discharge tube (GDT). It has been demonstrated that an increase in the concentration of CuBr vapors results in the contraction of the gain profile of the active medium, consistent with the increase of the gain factor in the axial region of the GDT. The contraction is also promoted by HBr addition. Although we used external heating of the GDT at pump powers of 1.5 kW and less, the energy input is still not sufficient to produce effective lasing in large active volumes, as is evident from the small width of the gain profile. The maximum gain profile width under the experimental conditions (taking Pout/Pin > 2) was 3 cm; this value was obtained without the HBr additive in the active volume and with a CuBr vapor concentration significantly below the optimum corresponding to the maximum average lasing power.

  11. Nuclear EMP simulation for large-scale urban environments. FDTD for electrically large problems.

    SciTech Connect

    Smith, William S.; Bull, Jeffrey S.; Wilcox, Trevor; Bos, Randall J.; Shao, Xuan-Min; Goorley, John T.; Costigan, Keeley R.

    2012-08-13

In case of a terrorist nuclear attack in a metropolitan area, EMP measurement could provide: (1) a prompt confirmation of the nature of the explosion (chemical or nuclear) for emergency response; and (2) characterization parameters of the device (reaction history, yield) for technical forensics. However, the urban environment could affect the fidelity of the prompt EMP measurement (as well as all other types of prompt measurement): (1) the nuclear EMP wavefront would no longer be coherent, due to incoherent production, attenuation, and propagation of gammas and electrons; and (2) EMP propagation outward from the source region would undergo complicated transmission, reflection, and diffraction processes. For EMP simulation in an electrically large urban environment we use a coupled MCNP/FDTD (finite-difference time-domain Maxwell solver) approach. Because of numerical dispersion and anisotropy, FDTD tends to be limited to problems that are not 'too' large compared to the wavelengths of interest, so we use a higher-order low-dispersion, isotropic FDTD algorithm for EMP propagation.
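
    For reference, the FDTD scheme whose dispersion properties are at issue reduces in 1D to the classic Yee leapfrog update sketched below; the grid sizes, source, and material constants are arbitrary textbook choices, not the higher-order algorithm the abstract describes.

```python
# Hedged sketch: textbook 1D Yee-grid FDTD update (Ez/Hy) in vacuum.
# Coarse points-per-wavelength in such schemes causes the numerical
# dispersion that motivates higher-order algorithms.
import numpy as np

c0, nx, nt = 3e8, 400, 600
dx = 1e-2                           # 1 cm cells (arbitrary)
dt = 0.5 * dx / c0                  # Courant factor 0.5 for stability
ez = np.zeros(nx)
hy = np.zeros(nx - 1)
for n in range(nt):
    hy += dt / (4e-7 * np.pi * dx) * np.diff(ez)        # H update (mu0)
    ez[1:-1] += dt / (8.854e-12 * dx) * np.diff(hy)     # E update (eps0)
    ez[nx // 4] += np.exp(-((n - 60) / 20.0) ** 2)      # soft Gaussian source
print("peak |Ez| after propagation:", np.abs(ez).max())
```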

  12. Numerical aerodynamic simulation facility preliminary study, volume 1

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A technology forecast was established for the 1980-1985 time frame and the appropriateness of various logic and memory technologies for the design of the numerical aerodynamic simulation facility was assessed. Flow models and their characteristics were analyzed and matched against candidate processor architecture. Metrics were established for the total facility, and housing and support requirements of the facility were identified. An overview of the system is presented, with emphasis on the hardware of the Navier-Stokes solver, which is the key element of the system. Software elements of the system are also discussed.

  13. A pyramid-based approach to visual exploration of a large volume of vehicle trajectory data

    NASA Astrophysics Data System (ADS)

    Sun, Jing; Li, Xiang

    2012-12-01

Advances in positioning and wireless communication technologies make it possible to collect large volumes of trajectory data of moving vehicles in a fast and convenient fashion. These data can be applied to traffic studies. Behind this application, a methodological issue that still requires particular attention is the way these data should be spatially visualized. Trajectory data physically consist of a large number of positioning points. With the dramatic increase of data volume, it becomes a challenge to display and explore these data. Existing commercial software often employs vector-based indexing structures to facilitate the display of a large volume of points, but their performance degrades quickly when the number of points is very large, for example, tens of millions. In this paper, a pyramid-based approach is proposed. The pyramid method was originally invented to facilitate the display of raster images through a tradeoff between storage space and display time. A pyramid is a set of images at different levels with different resolutions. In this paper, we convert vector-based point data into raster data, and build a grid-based indexing structure in a 2D plane. Then, an image pyramid is built. Moreover, at the same level of a pyramid, the image is segmented into mosaics with respect to the requirements of data storage and management. Algorithms or procedures on the grid-based indexing structure, image pyramid, image segmentation, and visualization operations are given in this paper. A case study with taxi trajectory data in Shanghai is conducted. Results demonstrate that the proposed method outperforms the existing commercial software.
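
    The core of the approach, rasterizing points into a density grid and aggregating 2×2 blocks level by level, can be sketched as follows; the grid size, level cutoff, and synthetic points are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: rasterize trajectory points, then build an image pyramid
# by repeated 2x aggregation so a viewer can pick the level that matches
# the display resolution. Toy data.
import numpy as np

rng = np.random.default_rng(3)
pts = rng.random((1_000_000, 2))            # positions normalized to [0, 1)

base = 1024
grid, _, _ = np.histogram2d(pts[:, 0], pts[:, 1],
                            bins=base, range=[[0, 1], [0, 1]])
pyramid = [grid]
while pyramid[-1].shape[0] > 64:            # coarser levels: sum 2x2 blocks
    g = pyramid[-1]
    n = g.shape[0] // 2
    pyramid.append(g.reshape(n, 2, n, 2).sum(axis=(1, 3)))
for lvl, g in enumerate(pyramid):
    print(f"level {lvl}: {g.shape[0]}x{g.shape[1]}, max count {int(g.max())}")
```

    Each coarser level preserves total point counts, so any zoom level renders in time proportional to screen pixels rather than to the number of raw points.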

  14. Earth resources mission performance studies. Volume 2: Simulation results

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Simulations were made at three month intervals to investigate the EOS mission performance over the four seasons of the year. The basic objectives of the study were: (1) to evaluate the ability of an EOS type system to meet a representative set of specific collection requirements, and (2) to understand the capabilities and limitations of the EOS that influence the system's ability to satisfy certain collection objectives. Although the results were obtained from a consideration of a two sensor EOS system, the analysis can be applied to any remote sensing system having similar optical and operational characteristics. While the category related results are applicable only to the specified requirement configuration, the results relating to general capability and limitations of the sensors can be applied in extrapolating to other U.S. based EOS collection requirements. The TRW general purpose mission simulator and analytic techniques discussed in this report can be applied to a wide range of collection and planning problems of earth orbiting imaging systems.

  15. Volume-staged radiosurgery for large arteriovenous malformations: an evolving paradigm.

    PubMed

    Seymour, Zachary A; Sneed, Penny K; Gupta, Nalin; Lawton, Michael T; Molinaro, Annette M; Young, William; Dowd, Christopher F; Halbach, Van V; Higashida, Randall T; McDermott, Michael W

    2016-01-01

    OBJECT Large arteriovenous malformations (AVMs) remain difficult to treat, and ideal treatment parameters for volume-staged stereotactic radiosurgery (VS-SRS) are still unknown. The object of this study was to compare VS-SRS treatment outcomes for AVMs larger than 10 ml during 2 eras; Era 1 was 1992-March 2004, and Era 2 was May 2004-2008. In Era 2 the authors prospectively decreased the AVM treatment volume, increased the radiation dose per stage, and shortened the interval between stages. METHODS All cases of VS-SRS treatment for AVM performed at a single institution were retrospectively reviewed. RESULTS Of 69 patients intended for VS-SRS, 63 completed all stages. The median patient age at the first stage of VS-SRS was 34 years (range 9-68 years). The median modified radiosurgery-based AVM score (mRBAS), total AVM volume, and volume per stage in Era 1 versus Era 2 were 3.6 versus 2.7, 27.3 ml versus 18.9 ml, and 15.0 ml versus 6.8 ml, respectively. The median radiation dose per stage was 15.5 Gy in Era 1 and 17.0 Gy in Era 2, and the median clinical follow-up period in living patients was 8.6 years in Era 1 and 4.8 years in Era 2. All outcomes were measured from the first stage of VS-SRS. Near or complete obliteration was more common in Era 2 (log-rank test, p = 0.0003), with 3- and 5-year probabilities of 5% and 21%, respectively, in Era 1 compared with 24% and 68% in Era 2. Radiosurgical dose, AVM volume per stage, total AVM volume, era, compact nidus, Spetzler-Martin grade, and mRBAS were significantly associated with near or complete obliteration on univariate analysis. Dose was a strong predictor of response (Cox proportional hazards, p < 0.001, HR 6.99), with 3- and 5-year probabilities of near or complete obliteration of 5% and 16%, respectively, at a dose < 17 Gy versus 23% and 74% at a dose ≥ 17 Gy. Dose per stage, compact nidus, and total AVM volume remained significant predictors of near or complete obliteration on multivariate analysis. Seventeen

  16. Finite volume simulation for convective heat transfer in wavy channels

    NASA Astrophysics Data System (ADS)

    Aslan, Erman; Taymaz, Imdat; Islamoglu, Yasar

    2016-03-01

The convective heat transfer characteristics of a periodic wavy channel have been investigated experimentally and numerically. The finite volume method was used in the numerical study, and the experimental results were used to validate the numerical results. Studies were conducted for air flow conditions where the contact angle is 30°, and a uniform heat flux of 616 W/m² is applied as the thermal boundary condition. The Reynolds number (Re) is varied from 2000 to 11,000 and the Prandtl number (Pr) is taken as 0.7. The Nusselt number (Nu), Colburn factor (j), friction factor (f) and goodness factor (j/f) against Reynolds number have been studied. The effects of the wave geometry and minimum channel height have been discussed. Thus, the best performance of flow and heat transfer characterization was determined through wavy channels. Additionally, it was determined that the computed values of convective heat transfer coefficients are in good correlation with experimental results for the converging-diverging channel. Therefore, numerical results can be used for these channel geometries instead of experimental results.
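
    The dimensionless metrics named above combine as in the short sketch below, where the Colburn factor is j = Nu/(Re·Pr^(1/3)); the sample (Re, Nu, f) triples are invented placeholders, not the paper's measurements.

```python
# Hedged post-processing sketch for the abstract's performance metrics:
# Colburn factor j = Nu / (Re * Pr**(1/3)) and goodness factor j/f.
Pr = 0.7
samples = [(2000, 18.0, 0.12), (6000, 42.0, 0.09), (11000, 68.0, 0.07)]
for Re, Nu, f in samples:
    j = Nu / (Re * Pr ** (1 / 3))
    print(f"Re={Re:5d}  Nu={Nu:5.1f}  j={j:.4f}  f={f:.3f}  j/f={j/f:.3f}")
```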

  17. [Regulation of blood volume during weightlessness simulation of long duration].

    PubMed

    Custaud, Marc-Antoine; Belin de Chantemèle, Eric; Blanc, Stéphane; Gauquelin-Koch, Guillemette; Gharib, Claude

    2005-12-01

To study the effects of microgravity on the mechanisms involved in the regulation of body water status, total body water (TBW), plasma volume (PV), and its main regulating hormones (plasma renin, aldosterone, atrial natriuretic peptide (ANP), anti-diuretic hormone (ADH)) were determined, by isotopic dilution, Dill and Costill's formula, and radioimmunoassay, in 9 male subjects submitted to a 90-d head-down bed rest (HDBR). ADH was determined in 24-h urine collections, as were osmolality, sodium, and potassium. Body mass decreased (-2.8 +/- 0.8 kg) as did TBW (-7.2% +/- 0.9%, i.e., -2.6 +/- 0.7 kg) and PV (-4.7% +/- 1.8%). Renin and aldosterone were enhanced (+109.0% +/- 15.4% and +87.2% +/- 38.9%, respectively). Simultaneously, we observed a decrease in ANP (-33.2% +/- 20.4%). Other variables, including ADH, were not affected by HDBR. The decreases in body mass and TBW (and consequently in lean body mass) are associated with muscle atrophy. The renin, aldosterone, and ANP modifications are well explained by the decrease in PV, which was not large enough to induce ADH changes. This suggests that in man the main regulatory factor for ADH secretion is osmolality when PV is modestly and progressively decreased without arterial pressure modification, as was the case in the present protocol.
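
    Dill and Costill's formula, cited above for the plasma-volume estimate, has the basic hematologic form sketched below (without trapped-plasma or venous-to-whole-body hematocrit corrections); the hemoglobin and hematocrit values are invented for illustration.

```python
# Hedged sketch of the Dill & Costill (1974) plasma-volume estimate, in its
# basic form. Input numbers are illustrative, not the study's data.
def pv_change_pct(hb_pre, hct_pre, hb_post, hct_post):
    """Percent plasma-volume change from hemoglobin (g/dL) and hematocrit."""
    bv_post = 100.0 * hb_pre / hb_post     # blood volume, % of baseline
    cv_post = bv_post * hct_post           # red-cell volume component
    pv_post = bv_post - cv_post            # plasma volume after
    pv_pre = 100.0 - 100.0 * hct_pre       # plasma volume before (baseline)
    return 100.0 * (pv_post - pv_pre) / pv_pre

print(f"dPV = {pv_change_pct(14.0, 0.42, 14.9, 0.44):+.1f} %")
```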

  18. The complex aerodynamic footprint of desert locusts revealed by large-volume tomographic particle image velocimetry

    PubMed Central

    Henningsson, Per; Michaelis, Dirk; Nakata, Toshiyuki; Schanz, Daniel; Geisler, Reinhard; Schröder, Andreas; Bomphrey, Richard J.

    2015-01-01

    Particle image velocimetry has been the preferred experimental technique with which to study the aerodynamics of animal flight for over a decade. In that time, hardware has become more accessible and the software has progressed from the acquisition of planes through the flow field to the reconstruction of small volumetric measurements. Until now, it has not been possible to capture large volumes that incorporate the full wavelength of the aerodynamic track left behind during a complete wingbeat cycle. Here, we use a unique apparatus to acquire the first instantaneous wake volume of a flying animal's entire wingbeat. We confirm the presence of wake deformation behind desert locusts and quantify the effect of that deformation on estimates of aerodynamic force and the efficiency of lift generation. We present previously undescribed vortex wake phenomena, including entrainment around the wing-tip vortices of a set of secondary vortices borne of Kelvin–Helmholtz instability in the shear layer behind the flapping wings. PMID:26040598

  19. The use of digital volume tomography in imaging an unusually large composite odontoma in the mandible.

    PubMed

    Bhatavadekar, Neel B; Bouquot, Jerry E

    2009-01-01

The odontoma is the most common of all odontogenic tumors. Digital volume tomography (DVT) provides the major advantages of decreased radiation and cost-effectiveness, as compared to conventional computed tomography. There is no known published report utilizing DVT analysis for assessing and localizing an odontoma. The purpose of this case report was to document the use of digital volume tomography to assess an unusually large composite odontoma in the mandible. Tomographic sections revealed expansion of the buccal cortex and occasional thinning of both the buccal and lingual cortical plates, although there was no pronounced clinically detectable cortical expansion. The sections further demonstrated enamel and dentin in an irregular mass bearing no morphologic similarity to rudimentary teeth. This case highlights the importance of early diagnosis and intervention for treating an odontoma while demonstrating the value of tomographic imaging as an aid to diagnosis.

  20. Non-contact spectroscopic determination of large blood volume fractions in turbid media

    PubMed Central

    Bremmer, Rolf H.; Kanick, Stephen C.; Laan, Nick; Amelink, Arjen; van Leeuwen, Ton G.; Aalders, Maurice C. G.

    2011-01-01

    We report on a non-contact method to quantitatively determine blood volume fractions in turbid media by reflectance spectroscopy in the VIS/NIR spectral wavelength range. This method will be used for spectral analysis of tissue with large absorption coefficients and assist in age determination of bruises and bloodstains. First, a phantom set was constructed to determine the effective photon path length as a function of μa and μs′ on phantoms with an albedo range: 0.02-0.99. Based on these measurements, an empirical model of the path length was established for phantoms with an albedo > 0.1. Next, this model was validated on whole blood mimicking phantoms, to determine the blood volume fractions ρ = 0.12-0.84 within the phantoms (r = 0.993; error < 10%). Finally, the model was proved applicable on cotton fabric phantoms. PMID:21339884

  1. The complex aerodynamic footprint of desert locusts revealed by large-volume tomographic particle image velocimetry.

    PubMed

    Henningsson, Per; Michaelis, Dirk; Nakata, Toshiyuki; Schanz, Daniel; Geisler, Reinhard; Schröder, Andreas; Bomphrey, Richard J

    2015-07-06

    Particle image velocimetry has been the preferred experimental technique with which to study the aerodynamics of animal flight for over a decade. In that time, hardware has become more accessible and the software has progressed from the acquisition of planes through the flow field to the reconstruction of small volumetric measurements. Until now, it has not been possible to capture large volumes that incorporate the full wavelength of the aerodynamic track left behind during a complete wingbeat cycle. Here, we use a unique apparatus to acquire the first instantaneous wake volume of a flying animal's entire wingbeat. We confirm the presence of wake deformation behind desert locusts and quantify the effect of that deformation on estimates of aerodynamic force and the efficiency of lift generation. We present previously undescribed vortex wake phenomena, including entrainment around the wing-tip vortices of a set of secondary vortices borne of Kelvin-Helmholtz instability in the shear layer behind the flapping wings.

2. Mechanically Cooled Large-Volume Germanium Detector Systems for Nuclear Explosion Monitoring DOENA27323-2

    SciTech Connect

    Hull, E.L.

    2006-10-30

Compact, maintenance-free mechanical cooling systems are being developed to operate large-volume high-resolution gamma-ray detectors for field applications. To accomplish this we are utilizing a newly available generation of Stirling-cycle mechanical coolers to operate the very largest volume germanium detectors with no maintenance. The user will be able to leave these systems unplugged on the shelf until needed. The maintenance-free operating lifetime of these detector systems will exceed 5 years. Three important factors affect the operation of mechanically cooled germanium detectors: temperature, vacuum, and vibration. These factors will be studied in the laboratory at the most fundamental levels to ensure a solid understanding of the physical limitations each factor places on a practical mechanically cooled germanium detector system. Using this knowledge, mechanically cooled germanium detector prototype systems will be designed and fabricated.

  3. A volume law for specification of linear channel storage for estimation of large floods

    NASA Astrophysics Data System (ADS)

    Zhang, Shangyou; Cordery, Ian; Sharma, Ashish

    2000-02-01

    A method of estimating large floods using a linear storage-routing approach is presented. The differences between the proposed approach and those traditionally used are (1) that the flood producing properties of basins are represented by a linear system, (2) the storage parameters of the distributed model are determined using a volume law which, unlike other storage-routing models, accounts for the distribution of storage in natural basins, and (3) the basin outflow hydrograph is determined analytically and expressed in a succinct mathematical form. The single model parameter is estimated from observed data without direct fitting, unlike most traditionally used methods. The model was tested by showing it could reproduce observed large floods on a number of basins. This paper compares the proposed approach with a traditionally used storage routing approach using observed flood data from the Hacking River basin in New South Wales, Australia. Results confirm the usefulness of the proposed approach for estimation of large floods.
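
    A single element of the linear storage-routing class the abstract builds on obeys S = kQ, which for piecewise-constant inflow integrates exactly to the update used below. The cascade of three reservoirs and the k values are toy stand-ins for the paper's volume-law specification of distributed storage.

```python
# Hedged sketch of linear storage routing: each element satisfies S = k*Q
# and dS/dt = I - Q, giving Q(t+dt) = I + (Q - I)*exp(-dt/k) for constant I.
import numpy as np

def route_linear(inflow, k, dt=1.0):
    """Route a hydrograph through one linear reservoir (exact per step)."""
    q, out = 0.0, []
    for i in inflow:
        q = i + (q - i) * np.exp(-dt / k)
        out.append(q)
    return np.asarray(out)

rain = np.zeros(100)
rain[5:15] = 10.0                          # rainfall-excess pulse
q = rain
for k in (2.0, 4.0, 8.0):                  # three storage elements in series
    q = route_linear(q, k)
print(f"peak inflow {rain.max():.1f} -> peak outflow {q.max():.2f} "
      f"at t={q.argmax()}")
```

    The attenuation and delay of the peak illustrate why the spatial distribution of storage, which the paper's volume law encodes, controls the shape of the outflow hydrograph.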

  4. Very Large Area/Volume Microwave ECR Plasma and Ion Source

    NASA Technical Reports Server (NTRS)

    Foster, John E. (Inventor); Patterson, Michael J. (Inventor)

    2009-01-01

    The present invention is an apparatus and method for producing very large area and large volume plasmas. The invention utilizes electron cyclotron resonances in conjunction with permanent magnets to produce dense, uniform plasmas for long life ion thruster applications or for plasma processing applications such as etching, deposition, ion milling and ion implantation. The large area source is at least five times larger than the 12-inch wafers being processed to date. Its rectangular shape makes it easier to accommodate to materials processing than sources that are circular in shape. The source itself represents the largest ECR ion source built to date. It is electrodeless and does not utilize electromagnets to generate the ECR magnetic circuit, nor does it make use of windows.

  5. Plastic embedding immunolabeled large-volume samples for three-dimensional high-resolution imaging.

    PubMed

    Gang, Yadong; Liu, Xiuli; Wang, Xiaojun; Zhang, Qi; Zhou, Hongfu; Chen, Ruixi; Liu, Ling; Jia, Yao; Yin, Fangfang; Rao, Gong; Chen, Jiadong; Zeng, Shaoqun

    2017-08-01

    High-resolution three-dimensional biomolecule distribution information of large samples is essential to understanding their biological structure and function. Here, we proposed a method combining large sample resin embedding with iDISCO immunofluorescence staining to acquire the profile of biomolecules with high spatial resolution. We evaluated the compatibility of plastic embedding with an iDISCO staining technique and found that the fluorophores and the neuronal fine structures could be well preserved in the Lowicryl HM20 resin, and that numerous antibodies and fluorescent tracers worked well upon Lowicryl HM20 resin embedding. Further, using fluorescence Micro-Optical sectioning tomography (fMOST) technology combined with ultra-thin slicing and imaging, we were able to image the immunolabeled large-volume tissues with high resolution.

  6. Plastic embedding immunolabeled large-volume samples for three-dimensional high-resolution imaging

    PubMed Central

    Gang, Yadong; Liu, Xiuli; Wang, Xiaojun; Zhang, Qi; Zhou, Hongfu; Chen, Ruixi; Liu, Ling; Jia, Yao; Yin, Fangfang; Rao, Gong; Chen, Jiadong; Zeng, Shaoqun

    2017-01-01

    High-resolution three-dimensional biomolecule distribution information of large samples is essential to understanding their biological structure and function. Here, we proposed a method combining large sample resin embedding with iDISCO immunofluorescence staining to acquire the profile of biomolecules with high spatial resolution. We evaluated the compatibility of plastic embedding with an iDISCO staining technique and found that the fluorophores and the neuronal fine structures could be well preserved in the Lowicryl HM20 resin, and that numerous antibodies and fluorescent tracers worked well upon Lowicryl HM20 resin embedding. Further, using fluorescence Micro-Optical sectioning tomography (fMOST) technology combined with ultra-thin slicing and imaging, we were able to image the immunolabeled large-volume tissues with high resolution. PMID:28856037

  7. Improved engine wall models for Large Eddy Simulation (LES)

    NASA Astrophysics Data System (ADS)

    Plengsaard, Chalearmpol

    Improved wall models for Large Eddy Simulation (LES) are presented in this research. The classical Werner-Wengle (WW) wall shear stress model is used along with near-wall sub-grid scale viscosity. A sub-grid scale turbulent kinetic energy is employed in a model for the eddy viscosity. To gain better heat flux results, a modified classical variable-density wall heat transfer model is also used. Because no experimental wall shear stress results are available in engines, the fully turbulent developed flow in a square duct is chosen to validate the new wall models. The model constants in the new wall models are set to 0.01 and 0.8, respectively and are kept constant throughout the investigation. The resulting time- and spatially-averaged velocity and temperature wall functions from the new wall models match well with the law-of-the-wall experimental data at Re = 50,000. In order to study the effect of hot air impinging walls, jet impingement on a flat plate is also tested with the new wall models. The jet Reynolds number is equal to 21,000 and a fixed jet-to-plate spacing of H/D = 2.0. As predicted by the new wall models, the time-averaged skin friction coefficient agrees well with experimental data, while the computed Nusselt number agrees fairly well when r/D > 2.0. Additionally, the model is validated using experimental data from a Caterpillar engine operated with conventional diesel combustion. Sixteen different operating engine conditions are simulated. The majority of the predicted heat flux results from each thermocouple location follow similar trends when compared with experimental data. The magnitude of peak heat fluxes as predicted by the new wall models is in the range of typical measured values in diesel combustion, while most heat flux results from previous LES wall models are over-predicted. The new wall models generate more accurate predictions and agree better with experimental data.
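
    The classical Werner-Wengle treatment referenced above can be sketched as a pointwise inversion of the two-branch velocity profile, u+ = y+ in the viscous sublayer and u+ = A(y+)^B beyond it, for the wall shear stress. The constants below are the commonly quoted WW values, and this generic inversion is an illustration, not the modified model of this work.

```python
# Hedged sketch of a Werner-Wengle-type power-law wall treatment: recover
# tau_w from the velocity at the first off-wall cell center.
import math

A, B = 8.3, 1.0 / 7.0
YPLUS_C = 11.81                      # viscous / power-law crossover

def wall_shear_stress(u_p, y_p, nu, rho):
    """Invert the WW profile for tau_w given cell speed and wall distance."""
    # try the viscous-sublayer branch first: tau = rho * nu * u / y
    tau_visc = rho * nu * u_p / y_p
    if y_p * math.sqrt(tau_visc / rho) / nu <= YPLUS_C:
        return tau_visc
    # power-law branch: u+ = A*(y+)^B, solved analytically for u_tau
    u_tau = (u_p / A) ** (1.0 / (1.0 + B)) * (nu / y_p) ** (B / (1.0 + B))
    return rho * u_tau * u_tau

print(wall_shear_stress(u_p=2.0, y_p=1e-4, nu=1.5e-5, rho=1.2))
```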

  8. New Specimen Access Device for the Large Space Simulator

    NASA Astrophysics Data System (ADS)

    Lazzarini, P.; Ratti, F.

    2004-08-01

The Large Space Simulator (LSS) is used to simulate in-orbit environmental conditions for spacecraft (S/C) testing. The LSS is intended to be a flexible facility: it can accommodate test articles that differ significantly in shape and weight and carry various instruments. To improve accessibility to the S/C inside the LSS chamber, a new Specimen Access Device (SAD) has been procured. The SAD provides immediate and easy access to the S/C, thus reducing the amount of time necessary for the installation of set-ups in the LSS. The SAD has been designed as a bridge crane carrying a basket to move the operator into the LSS. The crane moves on parallel rails on the top floor of the LSS building. The SAD is composed of three subsystems: the main bridge, the trolley that moves along the main bridge, and the telescopic mast. A trade-off analysis has been carried out on the telescopic mast design. The choice between friction pads and rollers to couple the different sections of the mast was evaluated. The resulting design uses a four-section square mast with roller-driven deployment. This design was chosen for the higher stiffness of the mast, due to the limited number of sections, and because it radically reduces the risk of contamination relative to a solution based on sliding bushings. Analyses have been performed to assess the mechanical behaviour both in static and in dynamic conditions. In particular, the telescopic mast has been studied in detail to optimise its stiffness and to check the safety margins in the various operational conditions. To increase the safety of operations, an anticollision system has been implemented by positioning two kinds of sensors, ultrasonic and contact, on the basket. All translations are regulated by inverters with acceleration and deceleration ramps controlled by a Programmable Logic Controller (PLC). An absolute encoder is installed on each motor to provide the actual position of the

  9. Large-eddy simulation of unidirectional turbulent flow over dunes

    NASA Astrophysics Data System (ADS)

    Omidyeganeh, Mohammad

We performed large eddy simulation of the flow over a series of two- and three-dimensional dune geometries at laboratory scale using the Lagrangian dynamic eddy-viscosity subgrid-scale model. First, we studied the flow over a standard 2D transverse dune geometry; then bedform three-dimensionality was imposed. Finally, we investigated the turbulent flow over barchan dunes. The results are validated by comparison with simulations and experiments for the 2D dune case, while the results for the 3D dunes are validated qualitatively against experiments. The flow over transverse dunes separates at the dune crest, generating a shear layer that plays a crucial role in the transport of momentum and energy, as well as the generation of coherent structures. Spanwise vortices are generated in the separated shear layer; as they are advected, they undergo lateral instabilities and develop into horseshoe-like structures and finally reach the surface. The ejection that occurs between the legs of the vortex creates the upwelling and downdrafting events on the free surface known as "boils". The three-dimensional separation of flow at the crestline alters the distribution of wall pressure, which may cause secondary flow across the stream. The mean flow is characterized by a pair of counter-rotating streamwise vortices, with core radii of the order of the flow depth. Staggering the crestlines alters the secondary motion; two pairs of streamwise vortices appear (a strong one, centred about the lobe, and a weaker one, coming from the previous dune, centred around the saddle). The flow over barchan dunes presents significant differences to that over transverse dunes. The flow near the bed, upstream of the dune, diverges from the centerline plane; the flow close to the centerline plane separates at the crest and reattaches on the bed. Away from the centerline plane and along the horns, flow separation occurs intermittently. The flow in the separation bubble is routed towards the horns and leaves

  10. Characteristics of the mixing volume model with the interactions among spatially distributed particles for Lagrangian simulations of turbulent mixing

    NASA Astrophysics Data System (ADS)

    Watanabe, Tomoaki; Nagata, Koji

    2016-11-01

The mixing volume model (MVM), a mixing model for molecular diffusion in Lagrangian simulations of turbulent mixing problems, is proposed based on the interactions among spatially distributed particles in a finite volume. The mixing timescale in the MVM is derived by comparison between the model and the subgrid-scale scalar-variance equation. An a priori test of the MVM is conducted based on direct numerical simulations of planar jets. The MVM is shown to predict well the mean effects of molecular diffusion under various conditions. However, the predicted value of the molecular diffusion term is positively correlated to the exact value in the DNS only when the number of mixing particles is larger than two. Furthermore, the MVM is tested in a hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (ILES/LPS). The ILES/LPS with the present mixing model predicts well the decay of the scalar variance in planar jets. This work was supported by JSPS KAKENHI Nos. 25289030 and 16K18013. The numerical simulations presented in this manuscript were carried out on the high performance computing system (NEC SX-ACE) in the Japan Agency for Marine-Earth Science and Technology.
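
    In spirit, the MVM lets the scalar values carried by the particles sharing a finite volume relax toward their joint mean. The toy below uses an IEM-like linear relaxation with a prescribed timescale purely for illustration; the actual model derives its timescale from the subgrid scalar-variance equation and its interaction weights from the particle positions.

```python
# Hedged toy of particle-based molecular mixing: values of the particles in
# one volume relax toward their mean, decaying the local scalar variance.
import numpy as np

def mix_step(phi, tau_m, dt):
    """One mixing substep for the particles sharing a volume."""
    return phi + (dt / tau_m) * (phi.mean() - phi)

rng = np.random.default_rng(4)
phi = rng.random(4)                  # scalar values of 4 interacting particles
for _ in range(10):
    phi = mix_step(phi, tau_m=0.5, dt=0.05)
print("scalar variance decayed to:", phi.var())
```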

  11. Two-field Kaehler moduli inflation in large volume moduli stabilization

    SciTech Connect

    Yang, Huan-Xiong; Ma, Hong-Liang E-mail: hlma@mail.ustc.edu.cn

    2008-08-15

In this paper we present a two-field inflation model, which is distinctive in having a non-canonical kinetic Lagrangian and comes from the large volume approach to moduli stabilization in flux compactification of type IIB superstring theory on a Calabi-Yau orientifold with h^(1,2) > h^(1,1) ≥ 4. The Kaehler moduli are classified as the volume modulus, heavy moduli and two light moduli. The axion-dilaton, complex structure moduli and all heavy Kaehler moduli including the volume modulus are frozen by a non-perturbatively corrected flux superpotential and the α'-corrected Kaehler potential in the large volume limit. The minimum of the scalar potential at which the heavy moduli are stabilized provides the dominant potential energy for the surviving light Kaehler moduli. We consider a simplified case where the axionic components of the light Kaehler moduli are further stabilized at the potential minimum and only the geometrical components are taken as scalar fields to drive an assisted-like inflation. For a certain range of moduli stabilization parameters and inflation initial conditions, we obtain a nearly flat power spectrum of the curvature perturbation, with n_s ≈ 0.96 at Hubble exit, and an inflationary energy scale of 3 × 10^14 GeV. In our model, there is significant correlation between the curvature and isocurvature perturbations on super-Hubble scales, so at the end of inflation a great deal of the curvature power spectrum originates from this correlation.

  12. The mechanism for large-volume fluid pumping via reversible snap-through of dielectric elastomer

    NASA Astrophysics Data System (ADS)

    Li, Zhe; Wang, Yingxi; Foo, Choon Chiang; Godaba, Hareesh; Zhu, Jian; Yap, Choon Hwai

    2017-08-01

Giant deformation of dielectric elastomers (DEs) via electromechanical instability (or the "snap-through" phenomenon) is a promising mechanism for large-volume fluid pumping. Snap-through of a DE membrane coupled with compressible air has been previously investigated. However, the physics behind reversible snap-through of a DE diaphragm coupled with incompressible fluid for the purpose of fluid pumping has not been well investigated, and the conditions required for reversible snap-through in a hydraulic system are unknown. In this study, we have proposed a concept for large-volume fluid pumping by harnessing reversible snap-through of the dielectric elastomer. The occurrence of snap-through was theoretically modeled and experimentally verified. Both the theoretical and experimental pressure-volume curves of the DE membrane under different actuation voltages were used to design the work loop of the pump, and the theoretical work loop agreed with the experimental work loop. Furthermore, the feasibility of reversible snap-through was experimentally verified, and specific conditions were found necessary for this to occur, such as a minimum actuation voltage, an optimal range of hydraulic pressure exerted on the DE membrane and a suitable actuation frequency. Under optimal working conditions, we demonstrated a pumping volume of up to 110 ml per cycle, which was significantly larger than that without snap-through. Furthermore, we have achieved fluid pumping from a region of low pressure to another region of high pressure. Findings of this study would be useful for real-world applications such as blood pumps.

  13. Trace analysis of environmental matrices by large-volume injection and liquid chromatography-mass spectrometry.

    PubMed

    Busetti, Francesco; Backe, Will J; Bendixen, Nina; Maier, Urs; Place, Benjamin; Giger, Walter; Field, Jennifer A

    2012-01-01

    The time-honored convention of concentrating aqueous samples by solid-phase extraction (SPE) is being challenged by the increasingly widespread use of large-volume injection (LVI) liquid chromatography-mass spectrometry (LC-MS) for the determination of traces of polar organic contaminants in environmental samples. Although different LVI approaches have been proposed over the last 40 years, the simplest and most popular way of performing LVI is known as single-column LVI (SC-LVI), in which a large volume of an aqueous sample is directly injected into an analytical column. For the purposes of this critical review, LVI is defined as an injected sample volume that is ≥10% of the void volume of the analytical column. Compared with other techniques, SC-LVI is easier to set up, because it requires only small hardware modifications to existing autosamplers, and thus it is the main focus of this review. Although not new, SC-LVI is gaining acceptance, and the approach is emerging as a technique that will render SPE nearly obsolete for many environmental applications. In this review, we discuss: the history and development of various forms of LVI; the critical factors that must be considered when creating and optimizing SC-LVI methods; and typical applications that demonstrate the range of environmental matrices to which LVI is applicable, for example drinking water, groundwater, and surface water including seawater and wastewater. Furthermore, we indicate directions and areas that must be addressed to fully delineate the limits of SC-LVI.
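
    The ≥10% criterion is easy to make concrete. A minimal sketch, assuming a typical packed-column total porosity of ~0.65 (an assumption, not a value from the review):

      import math

      def void_volume_ml(length_mm, id_mm, porosity=0.65):
          """Approximate void volume V0 of a packed LC column."""
          r_cm = id_mm / 20.0       # mm diameter -> cm radius
          l_cm = length_mm / 10.0
          return math.pi * r_cm**2 * l_cm * porosity

      v0 = void_volume_ml(150, 4.6)  # a common 150 x 4.6 mm column
      print(f"V0 ~ {v0*1000:.0f} uL; LVI threshold (>=10% of V0) ~ {v0*100:.0f} uL")

    For this column the threshold works out to roughly 160 µL, so injections in the hundreds of microlitres already qualify as LVI.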

  14. Characterization of large volume 3.5″×8″ LaBr3:Ce detectors

    NASA Astrophysics Data System (ADS)

    Giaz, A.; Pellegri, L.; Riboldi, S.; Camera, F.; Blasi, N.; Boiano, C.; Bracco, A.; Brambilla, S.; Ceruti, S.; Coelli, S.; Crespi, F. C. L.; Csatlòs, M.; Frega, S.; Gulyàs, J.; Krasznahorkay, A.; Lodetti, S.; Million, B.; Owens, A.; Quarati, F.; Stuhl, L.; Wieland, O.

    2013-11-01

    The properties of large volume cylindrical 3.5″×8″ (89 mm×203 mm) LaBr3:Ce scintillation detectors coupled to the Hamamatsu R10233-100SEL photo-multiplier tube were investigated. These crystals are among the largest ever produced and still need to be fully characterized to determine how, and in which applications, they can best be utilized. We tested the detectors using monochromatic γ-ray sources and in-beam reactions producing γ rays up to 22.6 MeV; we acquired PMT signal pulses and calculated detector energy resolution and response linearity as a function of γ-ray energy. Two different voltage dividers were coupled to the Hamamatsu R10233-100SEL PMT: the Hamamatsu E1198-26, based on a straightforward resistive network design, and the “LABRVD”, specifically designed for our large volume LaBr3:Ce scintillation detectors, which also includes active semiconductor devices. Because of the extremely high light yield of LaBr3:Ce crystals, we observed that, depending on the choice of PMT, voltage divider and applied voltage, significant deviations from the ideally proportional response of the detector and some pulse shape deformation appear. In addition, crystal non-homogeneities and PMT gain drifts affect the measured energy resolution, especially in the case of high-energy γ rays. We also measured the time resolution of detectors of different sizes (from 1″×1″ up to 3.5″×8″), correlating the results with both the intrinsic properties of the PMTs and GEANT simulations of the scintillation light collection process. The detector absolute full-energy efficiency was measured and simulated up to γ-ray energies of 30 MeV.
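
    The energy dependence of scintillator resolution is commonly summarized by fitting a photon-statistics term plus a constant term. A sketch with synthetic points (illustrative values, not the paper's data):

      import numpy as np
      from scipy.optimize import curve_fit

      # Synthetic FWHM resolution points (%, energies in MeV), only roughly
      # in the range expected for large LaBr3:Ce crystals.
      E = np.array([0.662, 1.173, 1.332, 4.44, 15.1, 22.6])
      R = np.array([3.0, 2.3, 2.2, 1.3, 0.9, 0.8])

      # Photon-counting statistics give R ~ a/sqrt(E); b absorbs
      # energy-independent effects (crystal non-homogeneity, gain drift).
      model = lambda E, a, b: a / np.sqrt(E) + b
      (a, b), _ = curve_fit(model, E, R)
      print(f"R(E) ~ {a:.2f}/sqrt(E) + {b:.2f}  (E in MeV, R in % FWHM)")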

  15. Flow rates of large animal fluid delivery systems used for high-volume crystalloid resuscitation.

    PubMed

    Nolen-Walston, Rose D

    2012-12-01

    Large animal species in states of shock can require particularly high flow rates for volume resuscitation, and the ability to deliver adequate volumes rapidly may be a rate-limiting step. The objective of this study was to determine the maximum flow rates of common combinations of IV catheter, extension set, and fluid administration set. University veterinary teaching hospital. In vitro experimental study. Maximum flow rates were measured using combinations of 4 IV catheters (three 14-Ga and a single 10-Ga), 2 IV catheter extension sets (small bore and large bore), and 2 types of fluid administration sets (standard 2-lead large animal coiled IV set and nonpressurized 4-lead arthroscopic irrigation set). The catheters, extension sets, and administration sets were arranged in 16 configurations, and flow rates were measured in triplicate using tap water flowing into an open receptacle. Flow rates ranged from 7.4 L/h with an over-the-wire 14-Ga catheter, small-bore extension, and coil set, to 51.2 L/h using a 10-Ga catheter, no extension, and arthroscopic irrigation set. Flow rates increased by only 1.3-8.9% with large-bore versus small-bore extension sets. Crystalloid delivery in vivo to an adult horse was 21% slower (9.1 L/h versus 11.5 L/h) than the corresponding in vitro measurement. Extremely high flow rates can be achieved in vitro using large-bore catheters and delivery systems, although the clinical necessity for rates >50 L/h has not been determined. The use of large-bore extension sets resulted in only a minimal increase in flow rate. © Veterinary Emergency and Critical Care Society 2012.
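
    The dominance of catheter bore over extension-set bore is what laminar pipe theory predicts, since flow scales with the fourth power of radius. A rough Hagen-Poiseuille sketch (idealized: real lines have fittings and turbulent losses, and the radii and driving pressure below are assumed values, not taken from the study):

      import math

      def poiseuille_L_per_h(radius_mm, length_cm, dp_kpa, mu=1.0e-3):
          """Laminar flow through a straight tube of a water-like fluid."""
          r, L, dp = radius_mm * 1e-3, length_cm * 1e-2, dp_kpa * 1e3
          q = math.pi * r**4 * dp / (8 * mu * L)   # m^3/s
          return q * 3.6e6                          # -> L/h

      # Q ~ r^4: a modest bore increase changes flow far more than
      # swapping extension sets does.
      for r_mm in (0.8, 1.2):   # assumed inner radii for 14-Ga vs 10-Ga
          print(r_mm, "mm ->", round(poiseuille_L_per_h(r_mm, 13, 10), 1), "L/h")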

  16. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, P.; Madnia, C. K.; Steinberger, C. J.; Frankel, S. H.; Vidoni, T. J.

    1991-01-01

    The main objective is to extend the boundaries within which large eddy simulations (LES) and direct numerical simulations (DNS) can be applied in computational analyses of high speed reacting flows. In the efforts related to LES, we were concerned with developing reliable subgrid closures for the modeling of fluctuation correlations of scalar quantities in reacting turbulent flows. In the work on DNS, we focused our attention on further investigating the effects of exothermicity in compressible turbulent flows. In our previous work, during the first year of this research, we considered only 'simple' flows. Currently, we are in the process of extending our analyses to model more practical flows of current interest at LaRC. A summary of our accomplishments during the third six months of the research is presented.

  17. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, P.; Madnia, C. K.; Steinberger, C. J.; Frankel, S. H.

    1992-01-01

    The basic objective of this research is to extend the capabilities of Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high speed reacting flows. In the efforts related to LES, we were primarily involved with assessing the performance of various modern methods based on the Probability Density Function (PDF) approach for providing closures for treating the subgrid fluctuation correlations of scalar quantities in reacting turbulent flows. In the work on DNS, we concentrated on understanding some of the relevant physics of compressible reacting flows by means of statistical analysis of the data generated by DNS of such flows. In the research conducted in the second year of this program, our efforts focused on the modeling of homogeneous compressible turbulent flows by PDF methods, and on DNS of non-equilibrium reacting high speed mixing layers. Some preliminary work is also in progress on PDF modeling of shear flows, and on LES of such flows.

  18. Analysis of the fast scanning method for tumor ablation with the effect of the large blood vessel by numerical simulation

    NASA Astrophysics Data System (ADS)

    Qiao, Shan; Shen, Guofeng; Bai, Jingfeng; Chen, Yazhu

    2012-11-01

    In HIFU tumor ablation, the focal region of the ultrasound is small relative to the tumor, so numerous sonications are necessary to cover the whole treatment area. A large number of foci, in turn, makes optimization of the treatment parameters complex. Moreover, the presence of a large blood vessel can reduce the lesion volume. A fast scanning method for volumetric ablation is investigated by numerical simulation including the effect of a large blood vessel. The proposed method is only applicable to phased-array transducers, because fast switching at a rate of 10 Hz between several predetermined focus positions is needed. Since the duration of each single ablation was identical and the ultrasound power fixed, the scan path is the major parameter to be decided. Five scan paths were simulated with and without a large vessel (6 mm diameter) in the computational domain. The simulations, solved by the finite element method, showed little difference in the size of the formed lesions among the different scan paths. With the proposed scanning method, the impact of blood flow on the lesion volume depended on the distance between the large vessel and the focal area, consistent with previous studies of the single-focus case. Additionally, the orientation of the vessel played an important role in the formation of lesions.
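
    The thermal side of such simulations is typically built on the Pennes bioheat equation. Below is a deliberately simplified 1-D explicit finite-difference sketch of that equation (the paper solves a finite-element model with an explicit vessel; all parameter values here are generic assumptions):

      import numpy as np

      # Pennes bioheat: rho*c*dT/dt = k*d2T/dx2 - w_b*c_b*(T - T_a) + Q
      k, rho, c = 0.5, 1050.0, 3600.0   # tissue conductivity, density, heat capacity
      wb, cb, Ta = 0.5, 3800.0, 37.0    # perfusion (kg/m^3/s), blood c_p, arterial T
      dx, dt, n = 0.5e-3, 0.01, 200     # grid spacing (m), time step (s), nodes

      T = np.full(n, 37.0)
      Q = np.zeros(n); Q[95:105] = 5e6  # assumed focal heating, W/m^3

      for _ in range(500):              # 5 s of sonication
          lap = (np.roll(T, 1) - 2*T + np.roll(T, -1)) / dx**2
          T += dt * (k*lap - wb*cb*(T - Ta) + Q) / (rho*c)
          T[0] = T[-1] = 37.0           # far-field boundaries held at body temperature
      print(f"peak focal temperature ~ {T.max():.1f} C")

    A large vessel would enter such a model as a locally enhanced heat sink, which is why lesion size depends on the vessel-focus distance.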

  19. Numerical grid generation in computational field simulations. Volume 1

    SciTech Connect

    Soni, B.K.; Thompson, J.F.; Haeuser, J.; Eiseman, P.R.

    1996-12-31

    To enhance CFS technology to its next level of applicability (i.e., to create acceptance of CFS in integrated product and process development involving multidisciplinary optimization), the basic requirements are: rapid turn-around time, reliable and accurate simulation, affordability, and appropriate linkage to other engineering disciplines. In response to this demand, there has been considerable growth in grid generation related research activities involving automation, parallel processing, linkage with CAD-CAM systems, CFS with dynamic motion and moving boundaries, and strategies and algorithms associated with multi-block structured, unstructured, hybrid, hexahedral, and Cartesian grids, along with their applicability to various disciplines including biomedical, semiconductor, geophysical, ocean modeling, and multidisciplinary optimization.

  20. Hierarchical imaging: a new concept for targeted imaging of large volumes from cells to tissues.

    PubMed

    Wacker, Irene; Spomer, Waldemar; Hofmann, Andreas; Thaler, Marlene; Hillmer, Stefan; Gengenbach, Ulrich; Schröder, Rasmus R

    2016-12-12

    Imaging large volumes such as entire cells or small model organisms at nanoscale resolution has until now seemed an unrealistic, rather tedious task. Now, technical advances have led to several electron microscopy (EM) large volume imaging techniques. One is array tomography, where ribbons of ultrathin serial sections are deposited on solid substrates like silicon wafers or glass coverslips. To ensure reliable retrieval of multiple ribbons from the boat of a diamond knife, we introduce a substrate holder with 7 axes of translation or rotation specifically designed for that purpose. With this device we are able to deposit hundreds of sections in an ordered way within an area of 22 × 22 mm, the size of a coverslip. Imaging such arrays in a standard wide field fluorescence microscope produces reconstructions with 200 nm lateral resolution and 100 nm (the section thickness) resolution in z. By hierarchical imaging cascades in the scanning electron microscope (SEM), using a new software platform, we can address volumes from single cells to complete organs. In our first example, a cell population isolated from zebrafish spleen, we characterize different cell types according to their organelle inventory by segmenting 3D reconstructions of complete cells imaged at nanoscale resolution. In addition, by screening large numbers of cells at decreased resolution we can define the percentages at which different cell types are present in our preparation. With the second example, the root tip of cress, we illustrate how combining information from intermediate resolution data with high resolution data from selected regions of interest can drastically reduce the amount of data that has to be recorded. By imaging only the interesting parts of a sample, considerably less data need to be stored, handled and eventually analysed. Our custom-designed substrate holder allows reproducible generation of section libraries, which can then be imaged in a hierarchical way. We demonstrate that EM
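
    The data-reduction argument is easy to quantify. A back-of-envelope sketch with invented numbers (not the paper's figures):

      def gigabytes(area_um2, sections, px_nm, bytes_per_px=1):
          """Raw data volume for imaging `sections` sections of a given area."""
          px_per_section = area_um2 * 1e6 / px_nm**2
          return px_per_section * sections * bytes_per_px / 1e9

      full_area = 1000 * 1000   # 1 mm^2 per section, 500 sections (assumed)
      roi_area = 50 * 50        # one targeted region of interest

      uniform = gigabytes(full_area, 500, 5)   # everything at 5 nm/px
      hierarchical = gigabytes(full_area, 500, 100) + gigabytes(roi_area, 500, 5)
      print(f"uniform: {uniform:.0f} GB, hierarchical: {hierarchical:.0f} GB")

    Here imaging everything at 5 nm/px costs ~20 TB, while a 100 nm/px overview plus one high-resolution ROI costs ~100 GB, a factor of ~200 less.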

  1. Refinement of a mesoscale model for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Gasset, Nicolas

    With the advent of wind energy technology, several methods have matured and are today seen as standard for predicting and forecasting the wind. However, their results remain site dependent, and the increasing sizes of both modern wind turbines and wind farms test the limits of existing methods. Some of the processes involved extend to the junction between microscales and mesoscales. The main objectives of this thesis are thus to identify, implement and evaluate an approach allowing for microscale and mesoscale ABL flow modelling that addresses the various challenges of modern wind energy applications. A literature review of ABL flow modelling from microscales to mesoscales first provides an overview of the specificities and abilities of existing methods. Combined mesoscale/large eddy simulation (LES) modelling appears to be the most promising approach, and the Compressible Community Mesoscale Model (MC2) is selected as the basis of the method, into which the components required for LES are added and implemented. A detailed description of the mathematical model and the numerical aspects of the various components of the LES-capable MC2 is then presented, so that a complete view of the proposed approach and the specificities of its implementation is provided. This further allows us to introduce the enhancements and new components of the method (separation of volumetric and deviatoric Reynolds tensor terms, vertical staggering, subgrid scale models, 3D turbulent diffusion, 3D turbulent kinetic energy equation), as well as the adaptation of its operating mode to allow for LES (initialization, large scale geostrophic forcing, surface and lateral boundaries). Finally, fundamental aspects and new components of the proposed approach are evaluated based on theoretical 1D Ekman boundary layer and 3D unsteady shear- and buoyancy-driven homogeneous-surface full ABL cases. The model behaviour at high resolution as well as the components required for LES in MC2 are all finely
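
    Of the components listed above, the subgrid scale (SGS) model is the one most readily illustrated in isolation. A minimal sketch of the classic Smagorinsky closure (a generic textbook form, not necessarily the SGS option implemented in MC2):

      import numpy as np

      def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, cs=0.17):
          """Smagorinsky eddy viscosity, written in 2-D for brevity:
          nu_t = (Cs*Delta)^2 * |S|, with |S| = sqrt(2*Sij*Sij)."""
          s11, s22 = dudx, dvdy
          s12 = 0.5 * (dudy + dvdx)
          s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
          return (cs * delta)**2 * s_mag

      # e.g. a 10 m filter width and a 0.1 1/s strain rate:
      print(smagorinsky_nu_t(0.1, 0.0, 0.0, -0.1, delta=10.0), "m^2/s")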

  2. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2002-02-25

    Data produced by large scale scientific simulations, experiments, and observations can easily reach terabytes in size. The ability to examine data-sets of this magnitude, even in moderate detail, is problematic at best. Generally this scientific data consists of multivariate field quantities with complex inter-variable correlations and spatial-temporal structure. To provide scientists and engineers with the ability to explore and analyze such data sets, we are using a twofold approach. First, we model the data with the objective of creating a compressed yet manageable representation. Second, with that compressed representation, we provide the user with the ability to query the resulting approximation to obtain approximate yet sufficient answers; a process called ad hoc querying. This paper is concerned with a wavelet modeling technique that seeks to capture the important physical characteristics of the target scientific data. Our approach is driven by the compression, which is necessary for viable throughput, along with the end user requirements from the discovery process. Our work contrasts with existing research, which applies wavelets to range querying, change detection, and clustering problems, by working directly with a decomposition of the data. The difference in procedure is due primarily to the nature of the data and the requirements of the scientists and engineers. Our approach directly uses the wavelet coefficients of the data to compress as well as query. We provide some background on the problem, describe how the wavelet decomposition is used to facilitate data compression, and show how queries are posed on the resulting compressed model. Results of this process are shown for several problems of interest, and we end with some observations and conclusions about this research.
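
    The compress-then-query idea can be sketched in a few lines with PyWavelets (an illustration of the concept only, not the system described above):

      import numpy as np
      import pywt

      rng = np.random.default_rng(0)
      data = np.sin(np.linspace(0, 20, 4096)) + 0.1 * rng.standard_normal(4096)

      # Decompose, then keep only the largest 5% of coefficients (lossy model).
      coeffs = pywt.wavedec(data, 'db4', level=6)
      arr, slices = pywt.coeffs_to_array(coeffs)
      arr[np.abs(arr) < np.quantile(np.abs(arr), 0.95)] = 0.0

      # An "ad hoc query" (mean over a range) answered from the compressed model.
      approx = pywt.waverec(
          pywt.array_to_coeffs(arr, slices, output_format='wavedec'), 'db4')
      print(data[1000:2000].mean(), approx[1000:2000].mean())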

  3. Measurements and large eddy simulation of propagating premixed flames

    SciTech Connect

    Masri, A.R.; Cadwallader, B.J.; Ibrahim, S.S.

    2006-07-15

    This paper presents an experimental and numerical study of unsteady turbulent premixed flames igniting in an initially stagnant mixture and propagating past solid obstacles. The objective here is to study the outstanding issue of flow-flame interactions in transient premixed combustion environments. Particular emphasis is placed on the burning rate and the structure of the flame front. The experimental configuration consists of a chamber with a square cross-section filled with a combustible mixture of propane-air ignited from rest. An array of baffle plates, as well as geometrical obstructions of varying shapes and blockage ratios, is placed in the path of the flame as it propagates from the ignition source to the vented end of the enclosure. A range of flame propagation conditions is studied experimentally. Measurements are presented for pressure-time traces, high-speed images of the flame front, mean velocities obtained from particle imaging velocimetry, and laser induced fluorescence images of the hydroxyl radical OH. Three-dimensional large eddy simulations (LES) are also made for a case where a square obstacle and an array of baffle plates are placed in the chamber. The dynamic Germano model and a simple flamelet combustion model are used at the sub-grid scale. The effects of grid size and sub-grid filter width are also discussed. Calculations and measurements are found to be in good agreement with respect to flame structure and peak overpressure. Turbulence levels increase significantly at the leading edge of the flame as it propagates past the array of baffle plates and the obstacle. With reference to the regime diagrams for turbulent premixed combustion, it is noted that the flame continues to lie in the zones of thin reactions or corrugated flamelets regardless of the stage of propagation along the chamber.

  4. A Novel Technique for Endovascular Removal of Large Volume Right Atrial Tumor Thrombus

    SciTech Connect

    Nickel, Barbara; McClure, Timothy; Moriarty, John

    2015-08-15

    Venous thromboembolic disease is a significant cause of morbidity and mortality, particularly in the setting of large volume pulmonary embolism. Thrombolytic therapy has been shown to be a successful treatment modality; however, its use is somewhat limited by the risk of hemorrhage and the potential for distal embolization in the setting of large mobile thrombi. In patients where either thrombolysis is contraindicated or unsuccessful, and conventional therapies prove inadequate, surgical thrombectomy may be considered. We present a case of percutaneous endovascular extraction of a large mobile mass extending from the inferior vena cava into the right atrium using the Angiovac device, a venovenous bypass system designed for high-volume aspiration of undesired endovascular material. Standard endovascular methods for removal of cancer-associated thrombus, such as catheter-directed lysis, maceration, and exclusion, may prove inadequate in the setting of underlying tumor thrombus. Where conventional endovascular methods either fail or are unsuitable, endovascular thrombectomy with the Angiovac device may be a useful and safe minimally invasive alternative to open resection.

  5. A Novel Technique for Endovascular Removal of Large Volume Right Atrial Tumor Thrombus.

    PubMed

    Nickel, Barbara; McClure, Timothy; Moriarty, John

    2015-08-01

    Venous thromboembolic disease is a significant cause of morbidity and mortality, particularly in the setting of large volume pulmonary embolism. Thrombolytic therapy has been shown to be a successful treatment modality; however, its use is somewhat limited by the risk of hemorrhage and the potential for distal embolization in the setting of large mobile thrombi. In patients where either thrombolysis is contraindicated or unsuccessful, and conventional therapies prove inadequate, surgical thrombectomy may be considered. We present a case of percutaneous endovascular extraction of a large mobile mass extending from the inferior vena cava into the right atrium using the Angiovac device, a venovenous bypass system designed for high-volume aspiration of undesired endovascular material. Standard endovascular methods for removal of cancer-associated thrombus, such as catheter-directed lysis, maceration, and exclusion, may prove inadequate in the setting of underlying tumor thrombus. Where conventional endovascular methods either fail or are unsuitable, endovascular thrombectomy with the Angiovac device may be a useful and safe minimally invasive alternative to open resection.

  6. Large-volume liposuction: a review of 631 consecutive cases over 12 years.

    PubMed

    Commons, G W; Halperin, B; Chang, C C

    2001-11-01

    Since the advent of epinephrine-containing wetting solutions and sophisticated fluid management techniques, increasingly large volumes of liposuction aspirate have been reported. Unfortunately, with these larger volumes of liposuction being routinely performed, greater rates of complications have also been reported, the worst of these resulting in deaths. In response to the increasing concerns over the safety of large-volume liposuction, a critical review of the senior author's own series has been performed to evaluate risks and benefits and to recommend guidelines for safe and effective large-volume liposuction. A retrospective chart review was performed on 631 consecutive patients who underwent liposuction procedures of at least 3000 cc total aspirate. All procedures were performed by the same senior surgeon between January of 1986 and March of 1998. Before September of 1996, traditional liposuction techniques were used; after September of 1996, ultrasound-assisted liposuction was performed. The superwet technique of fluid management was employed for all procedures performed after 1991. The particulars of the surgical and anesthetic techniques used are reviewed in the article. Data collection included preoperative patient demographics, preoperative and postoperative weights and measurements, and preoperative and postoperative photographs. Total aspirate volumes, fluid intakes, and fluid outputs were measured, and all complications were tallied. Average follow-up was 1 year. Results showed the majority of patients to be women, aged 17 to 74 years. Of the preoperative weights, 98.7 percent were within 50 pounds of ideal chart weight. Total aspirate volumes ranged from 3 to 17 liters, with 94.5 percent under 10 liters. Fluid balance measurements showed an average positive fluid balance of 120 cc/kg at the end of the procedure, with none of these patients experiencing any significant fluid balance abnormalities. Cosmetic results

  7. A scalable messaging system for accelerating discovery from large scale scientific simulations

    SciTech Connect

    Jin, Tong; Zhang, Fan; Parashar, Manish; Klasky, Scott A; Podhorszki, Norbert; Abbasi, Hasan

    2012-01-01

    Emerging scientific and engineering simulations running at scale on leadership-class High End Computing (HEC) environments are producing large volumes of data, which have to be transported and analyzed before any insights can result from these simulations. The complexity and cost (in terms of time and energy) associated with managing and analyzing this data have become significant challenges, and are limiting the impact of these simulations. Recently, data-staging approaches along with in-situ and in-transit analytics have been proposed to address these challenges by offloading I/O and/or moving data processing closer to the data. However, scientists continue to be overwhelmed by the large data volumes and data rates. In this paper we address this latter challenge. Specifically, we propose a highly scalable and low-overhead associative messaging framework that runs on the data staging resources within the HEC platform, and builds on the staging-based online in-situ/in-transit analytics to provide publish/subscribe/notification-type messaging patterns to the scientist. Rather than having to ingest and inspect the data volumes, this messaging system allows scientists to (1) dynamically subscribe to data events of interest, e.g., a simple data value, or a complex function or simple reduction (max()/min()/avg()) of the data values in a certain region of the application domain, exceeding or falling below a threshold value, or certain spatial/temporal data features or data patterns being detected; (2) define customized in-situ/in-transit actions that are triggered based on the events, such as data visualization or transformation; and (3) get notified when these events occur. The key contribution of this paper is a design and implementation that can support such a messaging abstraction at scale on high-end computing (HEC) systems with minimal overheads. We have implemented and deployed the messaging system on the Jaguar Cray XK6 machines at Oak Ridge National Laboratory and the
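
    A minimal sketch of the publish/subscribe pattern described above (class and function names are invented for illustration; the actual framework runs on staging nodes of HEC systems):

      import numpy as np

      class EventBus:
          """Toy associative message bus: predicates over simulation data."""
          def __init__(self):
              self.subs = []                    # (predicate, action) pairs

          def subscribe(self, predicate, action):
              self.subs.append((predicate, action))

          def publish(self, field):
              for predicate, action in self.subs:
                  if predicate(field):          # evaluated where the data lives
                      action(field)

      bus = EventBus()
      # Notify when max() over a sub-region exceeds a threshold.
      bus.subscribe(lambda f: f[10:20, 10:20].max() > 0.9,
                    lambda f: print("event: triggering in-situ visualization"))

      for step in range(5):                     # stand-in for simulation output
          bus.publish(np.random.default_rng(step).random((32, 32)))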

  8. Multi-Rate Digital Control Systems with Simulation Applications. Volume II. Computer Algorithms

    DTIC Science & Technology

    1980-09-01

    [Scanned-form OCR residue removed; recoverable information: report AFWAL-TR-80-3101, Volume II: Computer Algorithms.] ...additional options. The analytical basis for the computer algorithms is discussed in Ref. 12. However, to provide a complete description of the program, some

  9. Large-eddy simulation of particle-laden atmospheric boundary layer

    NASA Astrophysics Data System (ADS)

    Ilie, Marcel; Smith, Stefan Llewellyn

    2008-11-01

    Pollen dispersion in the atmospheric boundary layer (ABL) is numerically investigated using a hybrid large-eddy simulation (LES) Lagrangian approach. Interest in predicting pollen dispersion stems from two concerns: the allergens carried by pollen grains, and the increasing genetic manipulation of plants, which raises the problem of cross-pollination. An efficient Eulerian-Lagrangian particle dispersion algorithm for the prediction of pollen dispersion in the atmospheric boundary layer is outlined. The volume fraction of the dispersed phase is assumed to be small enough that particle-particle collisions are negligible and the properties of the carrier flow are not modified. Only the effect of turbulence on particle motion has to be taken into account (one-way coupling); hence the continuous phase can be treated separately from the particulate phase. The continuous phase is determined by LES in the Eulerian frame of reference, whereas the dispersed phase is simulated in a Lagrangian frame of reference. Numerical investigations are conducted for the convective, neutral and stable boundary layer, as well as for different topographies. The results of the present study indicate that particles with small diameters follow the flow streamlines, behaving as tracers, while particles with large diameters tend to follow trajectories that are independent of the flow streamlines. Particles of ellipsoidal shape travel faster than those of spherical shape.
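
    For small heavy particles, the one-way-coupled Lagrangian step reduces to Stokes drag plus gravity. A self-contained sketch with assumed pollen-like parameters (the paper drives this with LES velocities rather than a constant flow):

      import numpy as np

      rho_p, mu, g = 1200.0, 1.8e-5, 9.81   # particle density, air viscosity, gravity

      def track(d_p, u_fluid, steps=1000, dt=1e-3):
          tau_p = rho_p * d_p**2 / (18.0 * mu)   # particle response time
          x, v = np.zeros(2), np.zeros(2)
          for _ in range(steps):
              v += dt * ((u_fluid - v) / tau_p + np.array([0.0, -g]))
              x += dt * v
          return x

      # Small particles (short tau_p) behave as tracers; large ones settle out.
      print(track(d_p=20e-6, u_fluid=np.array([1.0, 0.0])))
      print(track(d_p=100e-6, u_fluid=np.array([1.0, 0.0])))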

  10. Large-eddy simulation of bubble-driven plume in stably stratified flow.

    NASA Astrophysics Data System (ADS)

    Yang, Di; Chen, Bicheng; Socolofsky, Scott; Chamecki, Marcelo; Meneveau, Charles

    2015-11-01

    The interaction between a bubble-driven plume and a stratified water column plays a vital role in many environmental and engineering applications. As the bubbles are released from a localized source, they induce a positive buoyancy flux that generates an upward plume. As the plume rises, it entrains ambient water, and when it reaches an elevation where the stratification-induced negative buoyancy is sufficient, a considerable fraction of the entrained fluid detrains, or peels, to form a downward outer plume and a lateral intrusion layer. In the case of multiphase plumes, the intrusion layer may also trap weakly buoyant particles (e.g., oil droplets in the case of a subsea accidental blowout). In this study, the complex plume dynamics is studied using large-eddy simulation (LES), with the flow field simulated by a hybrid pseudospectral/finite-difference scheme and the bubble and dye concentration fields by a finite-volume scheme. The spatial and temporal characteristics of the buoyant plume are studied, with a focus on the effects of different bubble buoyancy levels. The LES data provide useful mean plume statistics for evaluating the accuracy of 1-D engineering models for entrainment and peeling fluxes. Based on the insights learned from the LES, a new continuous peeling model is developed and tested. Study supported by the Gulf of Mexico Research Initiative (GoMRI).

  11. GMP Cryopreservation of Large Volumes of Cells for Regenerative Medicine: Active Control of the Freezing Process

    PubMed Central

    Massie, Isobel; Selden, Clare; Hodgson, Humphrey; Gibbons, Stephanie; Morris, G. John

    2014-01-01

    Cryopreservation protocols are increasingly required in regenerative medicine applications but must deliver functional products at clinical scale and comply with Good Manufacturing Process (GMP). While GMP cryopreservation is achievable on a small scale using a Stirling cryocooler-based controlled rate freezer (CRF) (EF600), successful large-scale GMP cryopreservation is more challenging due to heat transfer issues and control of ice nucleation, both complex events that impact success. We have developed a large-scale cryocooler-based CRF (VIA Freeze) that can process larger volumes and have evaluated it using alginate-encapsulated liver cell (HepG2) spheroids (ELS). It is anticipated that ELS will comprise the cellular component of a bioartificial liver and will be required in volumes of ∼2 L for clinical use. Sample temperatures and Stirling cryocooler power consumption were recorded throughout cooling runs for both small (500 μL) and large (200 mL) volume samples. ELS recoveries were assessed using viability (FDA/PI staining with image analysis), cell number (nuclei count), and function (protein secretion), along with cryoscanning electron microscopy and freeze substitution techniques to identify possible injury mechanisms. Slow cooling profiles were successfully applied to samples in both the EF600 and the VIA Freeze, and a number of cooling and warming profiles were evaluated. An optimized cooling protocol with a nonlinear cooling profile from ice nucleation to −60°C was implemented in both the EF600 and VIA Freeze. In the VIA Freeze, the nucleation of ice is detected by the control software, allowing both noninvasive detection of the nucleation event for quality control purposes and the potential to modify the cooling profile following ice nucleation in an active manner. When processing 200 mL of ELS in the VIA Freeze, viabilities were 93.4%±7.4%, viable cell numbers 14.3±1.7 million nuclei/mL alginate, and protein secretion 10.5±1.7
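
    The shape of such a nonlinear post-nucleation profile can be sketched as an exponential approach to the terminal temperature (shape only; the validated GMP set-points are not reproduced here, and all parameter values below are illustrative assumptions):

      import numpy as np

      def cooling_setpoint(t_min, T_nuc=-8.0, T_end=-60.0, tau=25.0):
          """Exponential approach from an assumed nucleation temperature
          to -60 C; T_nuc, T_end and tau are placeholder values."""
          return T_end + (T_nuc - T_end) * np.exp(-t_min / tau)

      for t in (0, 10, 30, 60, 90):
          print(f"t = {t:3d} min -> set-point {cooling_setpoint(t):6.1f} C")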

  12. GMP cryopreservation of large volumes of cells for regenerative medicine: active control of the freezing process.

    PubMed

    Massie, Isobel; Selden, Clare; Hodgson, Humphrey; Fuller, Barry; Gibbons, Stephanie; Morris, G John

    2014-09-01

    Cryopreservation protocols are increasingly required in regenerative medicine applications but must deliver functional products at clinical scale and comply with Good Manufacturing Process (GMP). While GMP cryopreservation is achievable on a small scale using a Stirling cryocooler-based controlled rate freezer (CRF) (EF600), successful large-scale GMP cryopreservation is more challenging due to heat transfer issues and control of ice nucleation, both complex events that impact success. We have developed a large-scale cryocooler-based CRF (VIA Freeze) that can process larger volumes and have evaluated it using alginate-encapsulated liver cell (HepG2) spheroids (ELS). It is anticipated that ELS will comprise the cellular component of a bioartificial liver and will be required in volumes of ∼2 L for clinical use. Sample temperatures and Stirling cryocooler power consumption were recorded throughout cooling runs for both small (500 μL) and large (200 mL) volume samples. ELS recoveries were assessed using viability (FDA/PI staining with image analysis), cell number (nuclei count), and function (protein secretion), along with cryoscanning electron microscopy and freeze substitution techniques to identify possible injury mechanisms. Slow cooling profiles were successfully applied to samples in both the EF600 and the VIA Freeze, and a number of cooling and warming profiles were evaluated. An optimized cooling protocol with a nonlinear cooling profile from ice nucleation to -60°C was implemented in both the EF600 and VIA Freeze. In the VIA Freeze, the nucleation of ice is detected by the control software, allowing both noninvasive detection of the nucleation event for quality control purposes and the potential to modify the cooling profile following ice nucleation in an active manner. When processing 200 mL of ELS in the VIA Freeze, viabilities were 93.4% ± 7.4%, viable cell numbers 14.3 ± 1.7 million nuclei/mL alginate, and protein secretion 10.5 ± 1.7

  13. Incarceration of umbilical hernia: a rare complication of large volume paracentesis

    PubMed Central

    Khodarahmi, Iman; Shahid, Muhammad Usman; Contractor, Sohail

    2015-01-01

    We present two cases of umbilical hernia incarceration following large volume paracentesis (LVP) in patients with cirrhotic ascites. Both patients became symptomatic within 48 hours of the LVP. Although rare, this complication of LVP is potentially serious, given the significantly higher mortality rate of cirrhotic patients undergoing emergent herniorrhaphy. Therefore, it is recommended that patients be examined closely for the presence of umbilical hernias before removal of ascitic fluid, and an attempt should be made at external reduction of easily reducible hernias, if a hernia is present. PMID:26629305

  14. Fan-beam scanning laser optical computed tomography for large volume dosimetry

    NASA Astrophysics Data System (ADS)

    Dekker, K. H.; Battista, J. J.; Jordan, K. J.

    2017-05-01

    A prototype scanning-laser fan-beam optical CT scanner is reported that is capable of high-resolution, large-volume dosimetry with reasonable scan times. An acylindrical, asymmetric aquarium design is presented which serves to 1) generate parallel-beam scan geometry, 2) focus light towards a small-acceptance-angle detector, and 3) avoid interference fringe-related artifacts. Preliminary experiments with uniform solution phantoms (11 and 15 cm diameter) and finger phantoms (13.5 mm diameter FEP tubing) demonstrate that the design allows accurate optical CT imaging, with optical CT measurements agreeing within 3% of independent Beer-Lambert law calculations.
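
    The independent check mentioned above is a direct Beer-Lambert computation. A sketch with invented intensities (not the paper's measurements):

      import math

      I0, I = 1.00, 0.45   # incident and transmitted intensities (assumed)
      L = 11.0             # path length through the phantom, cm

      mu_bl = math.log(I0 / I) / L   # attenuation coefficient, cm^-1
      mu_ct = 0.0735                 # hypothetical optical-CT reconstruction value
      print(f"mu(BL) = {mu_bl:.4f} cm^-1, "
            f"disagreement = {100*abs(mu_ct - mu_bl)/mu_bl:.1f}%")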

  15. Cryogenic loading of large volume presses for high-pressure experimentation and synthesis of novel materials

    SciTech Connect

    Lipp, M J; Evans, W J; Yoo, C S

    2005-01-21

    We present an efficient, easily implemented method for loading cryogenic fluids in a large volume press. We specifically apply this method to the high-pressure synthesis of an extended solid derived from CO using a Paris-Edinburgh cell. The method employs cryogenic cooling of Bridgman-type WC anvils, well insulated from other press components, and condensation of the load gas within a brass annulus surrounding the gasket between the Bridgman anvils. We demonstrate the viability of the described approach by synthesizing macroscopic amounts (several milligrams) of polymeric CO-derived material, which were recovered to ambient conditions after compression of pure CO to 5 GPa or above.

  16. Large Volume, Optical and Opto-Mechanical Metrology Techniques for ISIM on JWST

    NASA Technical Reports Server (NTRS)

    Hadjimichael, Theo

    2015-01-01

    The final, flight build of the Integrated Science Instrument Module (ISIM) element of the James Webb Space Telescope is the culmination of years of work across many disciplines and partners. This paper covers the large volume, ambient, optical and opto-mechanical metrology techniques used to verify the mechanical integration of the flight instruments in ISIM, including optical pupil alignment. We present an overview of ISIM's integration and test program, which is in progress, with an emphasis on alignment and optical performance verification. This work is performed at NASA Goddard Space Flight Center, in close collaboration with the European Space Agency, the Canadian Space Agency, and the Mid-Infrared Instrument European Consortium.

  17. Capillary gas chromatographic analysis of nerve agents using large volume injections.

    PubMed

    Degenhardt-Langelaan, C E; Kientz, C E

    1996-02-02

    The use of large volume injections has been studied for the verification of intact organophosphorus chemical warfare agents in water samples. As the use of ethyl acetate caused severe detection problems, new potential solvents were evaluated. With the developed procedure, the nerve agents sarin, tabun, soman, DFP and VX can be determined in freshly prepared water samples at ppt levels. Except for tabun, all agents added to the water samples were still present after 8 days at 20-60% of their initial levels, provided the pH of the water sample was adjusted to ca. 5 shortly after sampling and to pH 7 for analysis.

  18. Incarceration of umbilical hernia: a rare complication of large volume paracentesis.

    PubMed

    Khodarahmi, Iman; Shahid, Muhammad Usman; Contractor, Sohail

    2015-09-01

    We present two cases of umbilical hernia incarceration following large volume paracentesis (LVP) in patients with cirrhotic ascites. Both patients became symptomatic within 48 hours of the LVP. Although rare, this complication of LVP is potentially serious, given the significantly higher mortality rate of cirrhotic patients undergoing emergent herniorrhaphy. Therefore, it is recommended that patients be examined closely for the presence of umbilical hernias before removal of ascitic fluid, and an attempt should be made at external reduction of easily reducible hernias, if a hernia is present.

  19. Alginate Hydrogel Microencapsulation Inhibits Devitrification and Enables Large-Volume Low-CPA Cell Vitrification.

    PubMed

    Huang, Haishui; Choi, Jung Kyu; Rao, Wei; Zhao, Shuting; Agarwal, Pranay; Zhao, Gang; He, Xiaoming

    2015-11-25

    Cryopreservation of stem cells is important to meet their ever-increasing demand from burgeoning cell-based medicine. The conventional slow freezing for stem cell cryopreservation suffers from inevitable cell injury associated with ice formation, and the vitrification (i.e., no visible ice formation) approach is emerging as a new strategy for cell cryopreservation. A major challenge to cell vitrification is intracellular ice formation (IIF, a lethal event to cells) induced by devitrification (i.e., formation of visible ice in a previously vitrified solution) during warming of the vitrified cells from cryogenic temperature back to super-zero temperatures. Consequently, high and toxic concentrations of penetrating cryoprotectants (i.e., high CPAs, up to ~8 M) and/or limited sample volumes (up to ~2.5 μl) have been used to minimize IIF during vitrification. We reveal that alginate hydrogel microencapsulation can effectively inhibit devitrification during warming. Our data show that if ice formation is minimized during cooling, IIF is negligible in alginate hydrogel-microencapsulated cells during the entire cooling and warming procedure of vitrification. This enables vitrification of pluripotent and multipotent stem cells with an up to ~4 times lower concentration of penetrating CPAs (up to 2 M, low CPA) in an up to ~100 times larger sample volume (up to ~250 μl, large volume).

  20. Alginate Hydrogel Microencapsulation Inhibits Devitrification and Enables Large-Volume Low-CPA Cell Vitrification

    PubMed Central

    Huang, Haishui; Choi, Jung Kyu; Rao, Wei; Zhao, Shuting; Agarwal, Pranay; Zhao, Gang

    2015-01-01

    Cryopreservation of stem cells is important to meet their ever-increasing demand from burgeoning cell-based medicine. The conventional slow freezing for stem cell cryopreservation suffers from inevitable cell injury associated with ice formation, and the vitrification (i.e., no visible ice formation) approach is emerging as a new strategy for cell cryopreservation. A major challenge to cell vitrification is intracellular ice formation (IIF, a lethal event to cells) induced by devitrification (i.e., formation of visible ice in a previously vitrified solution) during warming of the vitrified cells from cryogenic temperature back to super-zero temperatures. Consequently, high and toxic concentrations of penetrating cryoprotectants (i.e., high CPAs, up to ~8 M) and/or limited sample volumes (up to ~2.5 μl) have been used to minimize IIF during vitrification. We reveal that alginate hydrogel microencapsulation can effectively inhibit devitrification during warming. Our data show that if ice formation is minimized during cooling, IIF is negligible in alginate hydrogel-microencapsulated cells during the entire cooling and warming procedure of vitrification. This enables vitrification of pluripotent and multipotent stem cells with an up to ~4 times lower concentration of penetrating CPAs (up to 2 M, low CPA) in an up to ~100 times larger sample volume (up to ~250 μl, large volume). PMID:26640426