Science.gov

Sample records for advanced simulation methods

  1. Simulation methods for advanced scientific computing

    SciTech Connect

    Booth, T.E.; Carlson, J.A.; Forster, R.A.

    1998-11-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of the project was to create effective new algorithms for solving N-body problems by computer simulation. The authors concentrated on developing advanced classical and quantum Monte Carlo techniques. For simulations of phase transitions in classical systems, they produced a framework generalizing the famous Swendsen-Wang cluster algorithms for Ising and Potts models. For spin-glass-like problems, they demonstrated the effectiveness of an extension of the multicanonical method for the two-dimensional, random bond Ising model. For quantum mechanical systems, they developed a new method to compute the ground-state energy of systems of interacting electrons. They also improved methods to compute excited states when the diffusion quantum Monte Carlo method is used and to compute longer time dynamics when the stationary phase quantum Monte Carlo method is used.
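
    The Swendsen-Wang cluster algorithm mentioned above can be sketched in a few lines. The following is a minimal illustrative implementation for the 2D Ising model (our sketch, not the authors' code): bonds between aligned neighbouring spins are activated with probability 1 - exp(-2*beta*J), the resulting clusters are found with union-find, and each cluster is flipped with probability 1/2.

```python
import math
import random

def swendsen_wang_step(spins, L, beta, J=1.0, rng=random):
    """One Swendsen-Wang cluster update of an L x L periodic Ising lattice."""
    # Activate bonds between aligned neighbours with probability 1 - exp(-2*beta*J)
    p = 1.0 - math.exp(-2.0 * beta * J)
    parent = list(range(L * L))  # union-find forest over lattice sites

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for x in range(L):
        for y in range(L):
            i = x * L + y
            # Right and down neighbours (periodic boundaries)
            for j in (((x + 1) % L) * L + y, x * L + (y + 1) % L):
                if spins[i] == spins[j] and rng.random() < p:
                    parent[find(i)] = find(j)  # join the two clusters

    # Flip each cluster independently with probability 1/2
    flip = {}
    for i in range(L * L):
        root = find(i)
        if root not in flip:
            flip[root] = rng.random() < 0.5
        if flip[root]:
            spins[i] = -spins[i]
    return spins
```

    Because whole clusters flip at once, repeated calls decorrelate configurations far faster than single-spin-flip updates near the critical temperature, which is the point of the generalization discussed in the report.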

  2. Advanced Numerical Methods and Software Approaches for Semiconductor Device Simulation

    DOE PAGES

    Carey, Graham F.; Pardhanani, A. L.; Bova, S. W.

    2000-01-01

    In this article we concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of “upwind” and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG), “entropy” variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review, and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. We have included numerical examples from our recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area, and we emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, we briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as that in the Sandia National Laboratories framework SIERRA.

  3. Advanced numerical methods and software approaches for semiconductor device simulation

    SciTech Connect

    CAREY,GRAHAM F.; PARDHANANI,A.L.; BOVA,STEVEN W.

    2000-03-23

    In this article the authors concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of upwind and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG), entropy variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review, and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. They include numerical examples from their recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area, and they emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, they briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as that in the Sandia National Laboratories framework SIERRA.

  4. Advanced Method to Estimate Fuel Slosh Simulation Parameters

    NASA Technical Reports Server (NTRS)

    Schlee, Keith; Gangadharan, Sathya; Ristow, James; Sudermann, James; Walker, Charles; Hubert, Carl

    2005-01-01

    The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. The nutation of a spacecraft spinning about its minor axis typically grows exponentially and the rate of growth is characterized by the Nutation Time Constant (NTC). For launch vehicles using spin-stabilized upper stages, fuel slosh in the spacecraft propellant tanks is usually the primary source of energy dissipation. For analytical prediction of the NTC this fuel slosh is commonly modeled using simple mechanical analogies such as pendulums or rigid rotors coupled to the spacecraft. Identifying model parameter values which adequately represent the sloshing dynamics is the most important step in obtaining an accurate NTC estimate. Analytic determination of the slosh model parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices and elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the equations of motion for the mechanical analog are hand-derived, evaluated, and their results are compared with the experimental results. The proposed research is an effort to automate the process of identifying the parameters of the slosh model using a MATLAB/SimMechanics-based computer simulation of the experimental setup. Different parameter estimation and optimization approaches are evaluated and compared in order to arrive at a reliable and effective parameter identification process. To evaluate each parameter identification approach, a simple one-degree-of-freedom pendulum experiment is constructed and motion is induced using an electric motor. By applying the
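
    The parameter-identification idea described above can be illustrated with a toy version of the one-degree-of-freedom pendulum experiment. The sketch below is our illustration, not the authors' MATLAB/SimMechanics model: it simulates a linearized damped pendulum and recovers its natural frequency and damping ratio by a brute-force grid search over candidate parameter values, the simplest of the estimation approaches one might compare.

```python
import math

def simulate_pendulum(omega, zeta, theta0=0.2, dt=0.001, n=5000):
    # Linearized damped pendulum: theta'' + 2*zeta*omega*theta' + omega^2*theta = 0,
    # integrated with a semi-implicit Euler step
    theta, vel, out = theta0, 0.0, []
    for _ in range(n):
        acc = -2.0 * zeta * omega * vel - omega * omega * theta
        vel += acc * dt
        theta += vel * dt
        out.append(theta)
    return out

def identify(measured, omegas, zetas):
    # Grid search: pick the (omega, zeta) whose simulated trajectory
    # best matches the measured data in the least-squares sense
    best, best_err = None, float("inf")
    for w in omegas:
        for z in zetas:
            trial = simulate_pendulum(w, z)
            err = sum((a - b) ** 2 for a, b in zip(trial, measured))
            if err < best_err:
                best, best_err = (w, z), err
    return best
```

    A real identification would use a gradient-based or stochastic optimizer rather than an exhaustive grid, but the structure — simulate, compare to experiment, update parameters — is the same one the abstract describes automating.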

  5. Investigation of advanced fault insertion and simulator methods

    NASA Technical Reports Server (NTRS)

    Dunn, W. R.; Cottrell, D.

    1986-01-01

    The cooperative agreement partly supported research leading to the open-literature publication cited. Additional efforts under the agreement included research into fault modeling of semiconductor devices. Results of this research are presented in this report which is summarized in the following paragraphs. As a result of the cited research, it appears that semiconductor failure mechanism data is abundant but of little use in developing pin-level device models. Failure mode data on the other hand does exist but is too sparse to be of any statistical use in developing fault models. What is significant in the failure mode data is that, unlike classical logic, MSI and LSI devices do exhibit more than 'stuck-at' and open/short failure modes. Specifically they are dominated by parametric failures and functional anomalies that can include intermittent faults and multiple-pin failures. The report discusses methods of developing composite pin-level models based on extrapolation of semiconductor device failure mechanisms, failure modes, results of temperature stress testing and functional modeling. Limitations of this model particularly with regard to determination of fault detection coverage and latency time measurement are discussed. Indicated research directions are presented.

  6. Algorithmic implementations of domain decomposition methods for the diffraction simulation of advanced photomasks

    NASA Astrophysics Data System (ADS)

    Adam, Konstantinos; Neureuther, Andrew R.

    2002-07-01

    The domain decomposition method developed in [1] is examined in more detail. This method enables rapid computer simulation of advanced photomask (alt. PSM, masks with OPC) scattering and transmission properties. Compared to 3D computer simulation, speed-up factors of approximately 400, and up to approximately 200,000 when using the look-up table approach, are possible. Combined with the spatial frequency properties of projection printing systems, it facilitates accurate computer simulation of the projected image (normalized mean square error of a typical image is only a fraction of 1%). Some esoteric accuracy issues of the method are addressed and the way to handle arbitrary, Manhattan-type mask layouts is presented. The method is shown to be valid for off-axis incidence. The cross-talk model developed in [1] is used in 3D mask simulations (2D layouts).

  7. On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods.

    PubMed

    Lee, Anthony; Yau, Christopher; Giles, Michael B; Doucet, Arnaud; Holmes, Christopher C

    2010-12-01

    We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
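
    The population-based structure that makes these methods GPU-friendly is easy to see in miniature. The sketch below is illustrative only (it runs the chains sequentially on the CPU): it estimates pi with many independent Monte Carlo chains, each with its own seed. Because the chains never communicate, each one could run on its own GPU thread, which is exactly the mapping the paper exploits.

```python
import random

def mc_pi_chain(n, seed):
    # One independent chain; on a GPU each chain would map to one thread
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y < 1.0:
            hits += 1
    return 4.0 * hits / n

def mc_pi_parallel(n_chains, n_per_chain):
    # Combine the chain estimates; the chains are embarrassingly parallel,
    # so only this final reduction requires any communication
    estimates = [mc_pi_chain(n_per_chain, seed) for seed in range(n_chains)]
    return sum(estimates) / n_chains
```

    Population-based MCMC and Sequential Monte Carlo add interaction steps between the chains, but the bulk of the work per iteration retains this independent, per-particle structure.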

  8. Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics

    NASA Technical Reports Server (NTRS)

    Stowers, S. T.; Bass, J. M.; Oden, J. T.

    1993-01-01

    A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows with the ultimate goal of numerically modeling the complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies are classified as adaptive methods, which employ error estimation techniques to approximate the local numerical error and automatically refine or unrefine the mesh so as to deliver a given level of accuracy. The result is a scheme which attempts to produce the best possible results with the least number of grid points, degrees of freedom, and operations. These types of schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.

  9. Advanced methods in global gyrokinetic full f particle simulation of tokamak transport

    SciTech Connect

    Ogando, F.; Heikkinen, J. A.; Henriksson, S.; Janhunen, S. J.; Kiviniemi, T. P.; Leerink, S.

    2006-11-30

    A new full f nonlinear gyrokinetic simulation code, named ELMFIRE, has been developed for simulating transport phenomena in tokamak plasmas. The code is based on a gyrokinetic particle-in-cell algorithm, which can consider electrons and ions jointly or separately, as well as arbitrary impurities. The implicit treatment of the ion polarization drift and the use of full f methods allow for simulations of strongly perturbed plasmas including wide orbit effects, steep gradients and rapid dynamic changes. This article presents in more detail the algorithms incorporated into ELMFIRE, as well as benchmarking comparisons to both neoclassical theory and other codes. ELMFIRE calculates plasma dynamics by following the evolution of a number of sample particles. Because it uses a stochastic algorithm, its results are influenced by statistical noise; the effect of noise on relevant magnitudes is analyzed. Turbulence spectra of FT-2 plasma have been calculated with ELMFIRE, obtaining results consistent with experimental data.

  10. Advanced simulation of digital filters

    NASA Astrophysics Data System (ADS)

    Doyle, G. S.

    1980-09-01

    An Advanced Simulation of Digital Filters has been implemented on the IBM 360/67 computer utilizing Tektronix hardware and software. The program package is appropriate for use by persons beginning their study of digital signal processing or for filter analysis. The ASDF programs provide the user with an interactive method by which filter pole and zero locations can be manipulated. Graphical output on both the Tektronix graphics screen and the Versatec plotter is provided to observe the effects of pole-zero movement.

  11. Data Collection Methods for Validation of Advanced Multi-Resolution Fast Reactor Simulations

    SciTech Connect

    Tokuhiro, Akiro; Ruggles, Art; Pointer, David

    2015-01-22

    In pool-type Sodium Fast Reactors (SFR) the regions most susceptible to thermal striping are the upper instrumentation structure (UIS) and the intermediate heat exchanger (IHX). This project experimentally and computationally (CFD) investigated the thermal mixing in the region exiting the reactor core to the UIS. The thermal mixing phenomenon was simulated using two vertical jets at different velocities and temperatures as prototypic of two adjacent channels out of the core. Thermal jet mixing of anticipated flows at different temperatures and velocities was investigated. Velocity profiles were measured throughout the flow region using Ultrasonic Doppler Velocimetry (UDV), and temperatures along the geometric centerline between the jets were recorded using a thermocouple array. CFD simulations, using COMSOL, were used initially to understand the flow, then to design the experimental apparatus, and finally to compare simulation results with measurements characterizing the flows. The experimental results and CFD simulations show that the flow field is characterized by three regions with respective transitions, namely convective mixing, (flow-direction) transitional, and post-mixing. Both experiments and CFD simulations support this observation. For the anticipated SFR conditions the flow is momentum dominated and thus thermal mixing is limited, due to the short flow length from the exit of the core to the bottom of the UIS. This means that there will be thermal striping at any surface where poorly mixed streams impinge; unless lateral mixing is actively promoted out of the core, thermal striping will prevail. Furthermore, we note that CFD can be considered a 'separate effects (computational) test' and is recommended as part of any integral analysis. To this effect, poorly mixed streams have potential impact on the rest of the SFR design and scaling, especially placement of internal components, such as the IHX, that may see poorly mixed

  12. Advanced simulation methods to detect resonant frequency stack up in focal plane design

    NASA Astrophysics Data System (ADS)

    Adams, Craig; Malone, Neil R.; Torres, Raymond; Fajardo, Armando; Vampola, John; Drechsler, William; Parlato, Russell; Cobb, Christopher; Randolph, Max; Chiourn, Surath; Swinehart, Robert

    2014-09-01

    Wire used to connect focal plane electrical connections to external circuitry can be modeled using its length, diameter and loop height to determine the resonant frequency. The design of the adjacent electronic board and mounting platform can also be analyzed. The combined resonant frequency analysis can then be used to decouple the different component resonant frequencies and so eliminate the potential for metal fatigue in the wires. It is important to note that the nominal maximum stress values that cause metal fatigue can be much less than the ultimate tensile stress limit or the yield stress limit, and are degraded further at resonant frequencies. It is critical that tests be done to qualify designs that are not easily simulated due to material property variation and complex structures. Sine wave vibration testing is a critical component of qualification vibration and provides the highest accuracy in determining the resonant frequencies, which can then be reduced or decoupled by small changes in design damping or modern space material selection, improving the structural performance of the focal plane assembly. Vibration flow-down from higher levels of assembly needs to consider intermediary hardware, which may amplify or attenuate the full-up system vibration profile. A simple pass-through of vibration requirements may result in over-test or in missing amplified resonant frequencies that can cause system failure. Examples are shown of metal wire fatigue, such as discoloration and microscopic cracks, which are visible at the submicron level using a scanning electron microscope. While it is important to model and test resonant frequencies, the focal plane must also be mounted so that coefficient-of-thermal-expansion mismatches are free to move without overstressing the FPA.
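
    In its simplest single-degree-of-freedom form, the wire-resonance estimate reduces to f = (1/2π)√(k/m), and the decoupling check to a frequency-separation rule. The sketch below is our illustration with hypothetical numbers, not the authors' analysis tools; the factor-of-two separation ratio is a common rule of thumb, not a value from the paper.

```python
import math

def natural_frequency_hz(stiffness_n_per_m, mass_kg):
    # Single-degree-of-freedom analog of a bond wire span: f = (1/2*pi) * sqrt(k/m)
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2.0 * math.pi)

def resonances_decoupled(f1_hz, f2_hz, ratio=2.0):
    # Rule-of-thumb separation check: treat two resonances as decoupled
    # when the higher frequency is at least `ratio` times the lower one
    lo, hi = sorted((f1_hz, f2_hz))
    return hi / lo >= ratio
```

    In practice the effective stiffness and mass come from the wire geometry (length, diameter, loop height) and material, and the analytical estimate is then checked against sine-sweep test data, as the abstract emphasizes.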

  13. Advanced Wellbore Thermal Simulator

    1992-03-04

    GEOTEMP2, which is based on the earlier GEOTEMP program, is a wellbore thermal simulator designed for geothermal well drilling and production applications. The code treats natural and forced convection and conduction within the wellbore and heat conduction within the surrounding rock matrix. A variety of well operations can be modeled including injection, production, forward and reverse circulation with gas or liquid, gas or liquid drilling, and two-phase steam injection and production. Well completion with several different casing sizes and cement intervals can be modeled. The code allows variables, such as flow rate, to change with time, enabling a realistic treatment of well operations. Provision is made in the flow equations to allow the flow areas of the tubing to vary with depth in the wellbore. Multiple liquids can exist in GEOTEMP2 simulations. Liquid interfaces are tracked through the tubing and annulus as one liquid displaces another. GEOTEMP2, however, does not attempt to simulate displacement of liquids with a gas or two-phase steam or vice versa. This means that it is not possible to simulate an operation where the type of drilling fluid changes, e.g. mud going to air. GEOTEMP2 was designed primarily for use in predicting the behavior of geothermal wells, but it is flexible enough to handle many typical drilling, production, and injection problems in the oil industry as well. However, GEOTEMP2 does not allow the modeling of gas-filled annuli in production or injection problems. In gas or mist drilling, no radiation losses are included in the energy balance. No attempt is made to model flow in the formation. Average execution time is 50 CP seconds on a CDC CYBER170. This edition of GEOTEMP2 is designated as Version 2.0 by the contributors.

  14. Advances in Monte Carlo computer simulation

    NASA Astrophysics Data System (ADS)

    Swendsen, Robert H.

    2011-03-01

    Since the invention of the Metropolis method in 1953, Monte Carlo methods have been shown to provide an efficient, practical approach to the calculation of physical properties in a wide variety of systems. In this talk, I will discuss some of the advances in the MC simulation of thermodynamic systems, with an emphasis on optimization to obtain a maximum of useful information.
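
    The Metropolis method referred to above can be stated in a few lines. As an illustration (a textbook example, not taken from the talk), the following samples a standard normal density with a symmetric random-walk proposal and the Metropolis accept/reject rule.

```python
import math
import random

def metropolis_normal(n, step=1.0, seed=1):
    # Metropolis sampling of pi(x) proportional to exp(-x^2 / 2) using a
    # symmetric uniform random-walk proposal of half-width `step`
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n):
        prop = x + rng.uniform(-step, step)
        # Accept with probability min(1, pi(prop) / pi(x));
        # values above 1 always pass the comparison below
        if rng.random() < math.exp(0.5 * (x * x - prop * prop)):
            x = prop
        samples.append(x)
    return samples
```

    The chain's long-run histogram converges to the target density; much of the later work surveyed in the talk (cluster moves, extended ensembles) is about reducing the correlation between successive samples that this simple random walk exhibits.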

  15. CFD simulation of automotive I.C. engines with advanced moving grid and multi-domain methods

    NASA Astrophysics Data System (ADS)

    Lai, Y. G.; Przekwas, A. J.; Sun, R. L. T.

    1993-07-01

    An efficient numerical method is presented with multi-domain and moving grid capabilities to best suit internal combustion engine applications. Multi-domain capability allows a user to arbitrarily cut the solution domain into many topologically simpler domains. Consequently, simultaneous coupling among components becomes natural and the task of grid generation becomes easier. The moving grid capability allows the computational grid to move and conform to the piston motion. As a result, the grid always fits the flow boundaries and no special remapping or interpolation is needed. The method has been implemented to solve 2D and 3D flows in a body-fitted coordinate system. Air ingestion and scavenging flow problems in a generic four-stroke engine and a two-stroke engine are simulated to demonstrate the proposed approach.

  16. Towards Direct Numerical Simulation of mass and energy fluxes at the soil-atmospheric interface with advanced Lattice Boltzmann methods

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Krafczyk, Manfred; Geier, Martin; Schönherr, Martin

    2014-05-01

    The quantification of soil evaporation and of soil water content dynamics near the soil surface are critical in the physics of land-surface processes on many scales and are dominated by multi-component and multi-phase mass and energy fluxes between the ground and the atmosphere. Although it is widely recognized that both liquid and gaseous water movement are fundamental factors in the quantification of soil heat flux and surface evaporation, their computation has only started to be taken into account using simplified macroscopic models. As the flow field over the soil can be safely considered as turbulent, it would be natural to study the detailed transient flow dynamics by means of Large Eddy Simulation (LES [1]) where the three-dimensional flow field is resolved down to the laminar sub-layer. Yet this requires very finely resolved meshes, with a grid resolution of at least one order of magnitude below the typical grain diameter of the soil under consideration. In order to gain reliable turbulence statistics, up to several hundred eddy turnover times have to be simulated, which adds up to several seconds of real time. Yet, the time scale of the receding saturated water front dynamics in the soil is on the order of hours. Thus we are faced with the task of solving a transient turbulent flow problem including the advection-diffusion of water vapour over the soil-atmospheric interface represented by a realistic tomographic reconstruction of a real porous medium taken from laboratory probes. Our flow solver is based on the Lattice Boltzmann method (LBM) [2], which has been extended by a Cumulant approach similar to the one described in [3,4] to minimize the spurious coupling between the degrees of freedom in previous LBM approaches, and which can be used as an implicit LES turbulence model due to its low numerical dissipation and increased stability at high Reynolds numbers.
The kernel has been integrated into the research code Virtualfluids [5] and delivers up to 30% of the

  17. Recent advances in lattice Boltzmann methods

    SciTech Connect

    Chen, S.; Doolen, G.D.; He, X.; Nie, X.; Zhang, R.

    1998-12-31

    In this paper, the authors briefly present the basic principles of the lattice Boltzmann method and summarize recent advances of the method, including its application to fluid flows in MEMS and to the simulation of multiphase mixing and turbulence.
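
    A minimal lattice Boltzmann example helps fix ideas. The sketch below is our illustration, not from the paper: a D1Q3 BGK scheme for pure diffusion on a periodic 1D lattice. Each step is a local collision that relaxes the three populations toward their equilibrium, followed by streaming that shifts the moving populations to neighbouring sites; macroscopic density is the sum of the populations.

```python
def lbm_diffusion_1d(rho0, tau, steps):
    # D1Q3 lattice Boltzmann for pure diffusion with BGK collision and
    # periodic boundaries; diffusivity is D = (tau - 0.5) / 3 in lattice units
    n = len(rho0)
    w = [2.0 / 3.0, 1.0 / 6.0, 1.0 / 6.0]  # weights for velocities 0, +1, -1
    f = [[w[k] * rho0[i] for i in range(n)] for k in range(3)]
    for _ in range(steps):
        rho = [f[0][i] + f[1][i] + f[2][i] for i in range(n)]
        # Collision: relax each population toward equilibrium f_eq = w_k * rho
        for k in range(3):
            for i in range(n):
                f[k][i] += (w[k] * rho[i] - f[k][i]) / tau
        # Streaming: shift the +1 and -1 populations to their neighbours
        f[1] = [f[1][(i - 1) % n] for i in range(n)]
        f[2] = [f[2][(i + 1) % n] for i in range(n)]
    return [f[0][i] + f[1][i] + f[2][i] for i in range(n)]
```

    Mass is conserved exactly because collision preserves the local density and streaming merely permutes populations; the locality of both steps is what makes LBM attractive for the parallel and complex-geometry applications the paper surveys.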

  18. A hybrid-Vlasov model based on the current advance method for the simulation of collisionless magnetized plasma

    SciTech Connect

    Valentini, F. (E-mail: valentin@fis.unical.it); Travnicek, P.; Califano, F.; Hellinger, P.; Mangeney, A.

    2007-07-01

    We present a numerical scheme for the integration of the Vlasov-Maxwell system of equations for a non-relativistic plasma, in the hybrid approximation, where the Vlasov equation is solved for the ion distribution function and the electrons are treated as a fluid. In the Ohm equation for the electric field, effects of electron inertia have been retained, in order to include the small scale dynamics up to characteristic lengths of the order of the electron skin depth. The low frequency approximation is used by neglecting the time derivative of the electric field, i.e. the displacement current in the Ampere equation. The numerical algorithm consists of coupling the splitting method proposed by Cheng and Knorr in 1976 [C.Z. Cheng, G. Knorr, J. Comput. Phys. 22 (1976) 330-351] and the current advance method (CAM) introduced by Matthews in 1994 [A.P. Matthews, J. Comput. Phys. 112 (1994) 102-116]. In its present version, the code solves the Vlasov-Maxwell equations in a five-dimensional phase space (2-D in the physical space and 3-D in the velocity space) and it is implemented in a parallel version to exploit the computational power of modern massively parallel supercomputers. The structure of the algorithm and the coupling between the splitting method and the CAM method (extended to the hybrid case) is discussed in detail. Furthermore, in order to test the hybrid-Vlasov code, the numerical results on propagation and damping of linear ion-acoustic modes and time evolution of linear elliptically polarized Alfven waves (including the so-called whistler regime) are compared to the analytical solutions. Finally, the numerical results of the hybrid-Vlasov code on the parametric instability of Alfven waves are compared with those obtained using a two-fluid approach.
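
    The Cheng-Knorr splitting solves the Vlasov equation as a sequence of one-dimensional advections, each performed semi-Lagrangianly: trace each grid point's characteristic backward, then interpolate the distribution function there. The building block can be sketched as follows; this is our illustration, using linear interpolation on a periodic grid, whereas production codes typically use higher-order interpolation.

```python
def advect(f, shift):
    # Semi-Lagrangian advection of a 1D periodic profile: the new value at
    # cell i is f sampled at the foot of the characteristic, x = i - shift,
    # reconstructed by linear interpolation between the two bracketing cells
    n = len(f)
    out = []
    for i in range(n):
        x = (i - shift) % n
        j = int(x) % n
        frac = x - int(x)
        out.append((1.0 - frac) * f[j] + frac * f[(j + 1) % n])
    return out
```

    In the full scheme, one such advection is applied in space (shift proportional to velocity) and one in velocity (shift proportional to the force), alternated so the combined update is second-order accurate in time.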

  19. Advanced numerical methods for the simulation of flows in heterogeneous porous media and their application to parallel computing

    SciTech Connect

    Rame, M.

    1990-01-01

    Flows in highly heterogeneous porous media arise in a variety of processes including enhanced oil recovery, in situ bioremediation of underground contaminants, transport in underground aquifers and transport through biological membranes. The common denominator of these processes is the transport (and possibly reaction) of a multi-component fluid in several phases. A new numerical methodology for the analysis of flows in heterogeneous porous media is presented. Cases of miscible and immiscible displacement are simulated to investigate the influence of the local heterogeneities on the flow paths. This numerical scheme allows for a fine description of the flowing medium, and the concentration and saturation distributions thus generated show low numerical dispersion. If the area of interest is a square a thousand feet on a side, geological information on the porous medium can be incorporated to a length scale of about one to two feet. The technique introduced here, Operator Splitting on Multiple Grids, solves the elliptic operators by a higher-order finite-element technique on a coarse grid that proves efficient and accurate in incorporating different scales of heterogeneities. This coarse solution is interpolated to a fine grid by a splines-under-tension technique. The equations for the conservation of species are solved on this fine grid (of approximately half a million cells) by a finite-difference technique yielding numerical dispersions of less than ten feet. Cases presented herein involve single-phase miscible flow and liquid-phase immiscible displacements. Cases are presented for model distributions of physical properties and for porosity and permeability data taken from a real reservoir. Techniques for the extension of the methods to compressible flow situations and compositional simulations are discussed.
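
    The coarse-to-fine transfer at the heart of the multiple-grid scheme can be illustrated simply. The sketch below uses plain linear interpolation as a stand-in for the splines-under-tension interpolation used in the paper: an elliptic field solved on the coarse grid is refined by an integer factor so the species-conservation equations can be advanced on the fine grid.

```python
def interpolate_to_fine(coarse, refine):
    # Linear interpolation of a coarse-grid field onto a grid refined by an
    # integer factor (a stand-in for splines under tension); the coarse
    # values are reproduced exactly at the coinciding fine-grid points
    fine = []
    for i in range(len(coarse) - 1):
        for k in range(refine):
            t = k / refine
            fine.append((1 - t) * coarse[i] + t * coarse[i + 1])
    fine.append(coarse[-1])
    return fine
```

    The payoff of the splitting is that the expensive elliptic solve runs on the small coarse grid, while the cheap per-cell transport update runs on the half-million-cell fine grid.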

  20. Advancing Material Models for Automotive Forming Simulations

    NASA Astrophysics Data System (ADS)

    Vegter, H.; An, Y.; ten Horn, C. H. L. J.; Atzema, E. H.; Roelofsen, M. E.

    2005-08-01

    Simulations in the automotive industry need more advanced material models to achieve highly reliable forming and springback predictions. Conventional material models implemented in FEM simulation models are not capable of describing the plastic material behaviour during monotonic strain paths with sufficient accuracy. Recently, ESI and Corus have co-operated on the implementation of an advanced material model in the FEM code PAMSTAMP 2G. This applies to the strain hardening model, the influence of strain rate, and the description of the yield locus in these models. A subsequent challenge is the description of the material after a change of strain path. The use of advanced high strength steels in the automotive industry requires a description of the plastic material behaviour of multiphase steels. The simplest variant is dual phase steel, consisting of a ferritic and a martensitic phase. Multiphase materials also contain a bainitic phase in addition to the ferritic and martensitic phases. More physical descriptions of strain hardening than simple fitted Ludwik/Nadai curves are necessary. Methods to predict the plastic behaviour of single-phase materials use a simple dislocation interaction model based on the formed cell structures only. A new method proposed at Corus to predict the plastic behaviour of multiphase materials has to take into account the hard phases, which deform less easily. The resulting deformation gradients create geometrically necessary dislocations. Additional microstructural information, such as the morphology and size of hard-phase particles or grains, is necessary to derive strain hardening models for this type of material. Measurements available from the Numisheet benchmarks allow these models to be validated. At Corus, additional measured values are available from cross-die tests. This laboratory test can attain critical deformations by large variations in blank size and processing conditions. The tests are a powerful tool in optimising forming simulations.
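
    The Ludwik/Nadai hardening curve mentioned above, σ = K·εⁿ, is linear in log-log coordinates, so its two parameters can be fitted by ordinary least squares. A small sketch (our illustration, not the Corus model):

```python
import math

def fit_ludwik(strains, stresses):
    # Ludwik/Nadai law sigma = K * eps^n, fitted by least squares on the
    # linearized form log(sigma) = log(K) + n * log(eps)
    xs = [math.log(e) for e in strains]
    ys = [math.log(s) for s in stresses]
    m = len(xs)
    xbar = sum(xs) / m
    ybar = sum(ys) / m
    n = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    K = math.exp(ybar - n * xbar)
    return K, n
```

    The abstract's point is precisely that such a two-parameter fit is too simple for multiphase steels, whose hardening depends on dislocation structure and hard-phase morphology; the fit above is the baseline the more physical models improve on.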

  1. Computational Aerodynamic Simulations of an 840 ft/sec Tip Speed Advanced Ducted Propulsor Fan System Model for Acoustic Methods Assessment and Development

    NASA Technical Reports Server (NTRS)

    Tweedt, Daniel L.

    2014-01-01

    Computational aerodynamic simulations of an 840 ft/sec tip speed, Advanced Ducted Propulsor fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone extensive experimental testing in the 9- by 15-foot Low Speed Wind Tunnel at the NASA Glenn Research Center, resulting in high-quality, detailed aerodynamic and acoustic measurement data. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating conditions simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, excluding a long core duct section downstream of the core inlet guide vane. As a result, only fan rotational speed and system bypass ratio, set by specifying static pressure downstream of the core inlet guide vane row, were adjusted in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measurement values. The computed blade row flow fields for all five fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the computed flow fields reveals no excessive boundary layer separations or related secondary-flow problems. A few spanwise comparisons between

  2. Advanced diagnostic methods in avionics

    NASA Astrophysics Data System (ADS)

    Popyack, Leonard Joseph, Jr.

    Advanced diagnostic systems facilitate further enhancement of the reliability and safety of modern aircraft. Unlike classical reliability analyses, which address specific classes of systems or devices, this research is aimed at the development of methods for assessing the individual reliability characteristics of particular system components subjected to their unique histories of operational conditions and exposure to adverse environmental factors. Individual reliability characteristics are crucial for the implementation of the most efficient maintenance practice for flight-critical system components, known as "condition-based maintenance." The dissertation presents hardware and software aspects of a computer-based system, the Time-Stress Monitoring Device, developed to record, store, and analyze raw data characterizing the operational and environmental conditions and performance of electro-mechanical flight control system components and aircraft electronics (avionics). Availability of this data facilitates formulation and solution of such diagnostic problems as estimation of the probability of failure and life expectancy of particular components, failure detection, identification, and prediction. Statistical aspects of system diagnostics are considered. Particular diagnostic procedures utilizing cluster analysis, Bayes' technique, and regression analysis are formulated. Laboratory and simulation experiments that verify the obtained results are presented.
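    The statistical procedures mentioned (Bayes' technique in particular) can be illustrated with a minimal sketch: a conjugate Beta-Bernoulli update of an individual component's failure probability from its own monitored history. The prior counts and event data below are invented for illustration and are not from the dissertation.

```python
def beta_posterior_mean(alpha, beta, failures, survivals):
    """Posterior mean failure probability for a Beta(alpha, beta) prior
    updated with Bernoulli failure/survival counts (conjugate update)."""
    return (alpha + failures) / (alpha + beta + failures + survivals)

# Fleet-wide prior (illustrative): roughly 2% failures per inspection interval.
# One monitored unit logged 1 fault event in 40 inspection intervals.
p_unit = beta_posterior_mean(2.0, 98.0, failures=1, survivals=39)
```

    The same update applied to another unit's history yields a different individual estimate, which is the point of condition-based maintenance: components are assessed individually rather than by fleet-wide averages.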

  3. Advanced Space Shuttle simulation model

    NASA Technical Reports Server (NTRS)

    Tatom, F. B.; Smith, S. R.

    1982-01-01

    A non-recursive model (based on von Karman spectra) for atmospheric turbulence along the flight path of the shuttle orbiter was developed. It provides for simulation of instantaneous vertical and horizontal gusts at the vehicle center of gravity, and also of instantaneous gust gradients. Based on this model, the time series for both gusts and gust gradients were generated and stored on a series of magnetic tapes, entitled the Shuttle Simulation Turbulence Tapes (SSTT). The time series are designed to represent atmospheric turbulence from ground level to an altitude of 120,000 meters. A description of the turbulence generation procedure is provided. The results of validating the simulated turbulence are described. Conclusions and recommendations are presented. One-dimensional von Karman spectra are tabulated, and the minimum frequency simulated is discussed. The results of spectral and statistical analyses of the SSTT are presented.
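    A non-recursive spectral synthesis of this kind can be sketched as a sum of cosines with random phases whose amplitudes follow the target spectrum. The PSD form and all parameters below are illustrative stand-ins, not the actual SSTT generation procedure.

```python
import math, random

def von_karman_psd(f, sigma2=1.0, L=300.0, U=100.0):
    """Simplified one-sided longitudinal von Karman PSD; constants here
    (variance sigma2, length scale L, airspeed U) are illustrative."""
    x = 1.339 * 2.0 * math.pi * f * L / U
    return sigma2 * (2.0 * L / (math.pi * U)) / (1.0 + x * x) ** (5.0 / 6.0)

def synthesize_gusts(n, dt, seed=0):
    """Non-recursive synthesis: sum of cosines at harmonics of 1/(n*dt)
    with random phases, amplitudes set to match the target PSD."""
    rng = random.Random(seed)
    df = 1.0 / (n * dt)
    freqs = [df * k for k in range(1, n // 2)]
    amps = [math.sqrt(2.0 * von_karman_psd(f) * df) for f in freqs]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in freqs]
    return [sum(a * math.cos(2.0 * math.pi * f * i * dt + p)
                for a, f, p in zip(amps, freqs, phases))
            for i in range(n)]

gusts = synthesize_gusts(256, 0.05)
```

    Because the component frequencies are exact harmonics of the record length, each cosine averages to zero over the record, so the synthesized gust series has (numerically) zero mean.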

  4. Implicit methods in particle simulation

    SciTech Connect

    Cohen, B.I.

    1982-03-16

    This paper surveys recent advances in the application of implicit integration schemes to particle simulation of plasmas. The use of implicit integration schemes is motivated by the goal of efficiently studying low-frequency plasma phenomena using a large timestep, while retaining accuracy and kinetics. Implicit schemes achieve numerical stability and provide selective damping of unwanted high-frequency waves. This paper reviews the implicit moment and direct implicit methods. Lastly, the merging of implicit methods with orbit averaging can result in additional computational savings.
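    The selective damping of unwanted high-frequency waves that implicit schemes provide can be seen in a minimal example: a backward Euler update for a harmonic oscillator, which stays stable and strongly damps the motion even when the oscillation is badly under-resolved (omega*dt >> 1). This is a generic sketch of implicit integration, not the implicit-moment or direct implicit method itself.

```python
def backward_euler_oscillator(x0, v0, omega, dt, steps):
    """Backward Euler for x'' = -omega**2 * x, with the implicit 2x2
    system solved exactly each step:
        x1 = x0 + dt*v1,   v1 = v0 - dt*omega**2*x1."""
    x, v = x0, v0
    denom = 1.0 + (omega * dt) ** 2
    for _ in range(steps):
        # Both updates use the old (x, v) on the right-hand side.
        x, v = (x + dt * v) / denom, (v - dt * omega ** 2 * x) / denom
    return x, v

# Under-resolved mode (omega*dt = 10): explicit Euler would diverge here,
# while backward Euler damps the mode toward zero.
x, v = backward_euler_oscillator(1.0, 0.0, omega=10.0, dt=1.0, steps=20)
```

    The damping is the feature, not a bug: low-frequency motion of interest (omega*dt << 1) is advanced accurately, while unresolved high-frequency oscillations are suppressed rather than allowed to go numerically unstable.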

  5. Recent advances in computer image generation simulation.

    PubMed

    Geltmacher, H E

    1988-11-01

    An explosion in flight simulator technology over the past 10 years is revolutionizing U.S. Air Force (USAF) operational training. The single, most important development has been in computer image generation. However, other significant advances are being made in simulator handling qualities, real-time computation systems, and electro-optical displays. These developments hold great promise for achieving high fidelity combat mission simulation. This article reviews the progress to date and predicts its impact, along with that of new computer science advances such as very high speed integrated circuits (VHSIC), on future USAF aircrew simulator training. Some exciting possibilities are multiship, full-mission simulators at replacement training units, miniaturized unit level mission rehearsal training simulators, onboard embedded training capability, and national scale simulator networking.

  6. Advanced Potential Energy Surfaces for Molecular Simulation.

    PubMed

    Albaugh, Alex; Boateng, Henry A; Bradshaw, Richard T; Demerdash, Omar N; Dziedzic, Jacek; Mao, Yuezhi; Margul, Daniel T; Swails, Jason; Zeng, Qiao; Case, David A; Eastman, Peter; Wang, Lee-Ping; Essex, Jonathan W; Head-Gordon, Martin; Pande, Vijay S; Ponder, Jay W; Shao, Yihan; Skylaris, Chris-Kriton; Todorov, Ilian T; Tuckerman, Mark E; Head-Gordon, Teresa

    2016-09-22

    Advanced potential energy surfaces are defined as theoretical models that explicitly include many-body effects that transcend the standard fixed-charge, pairwise-additive paradigm typically used in molecular simulation. However, several factors relating to their software implementation have precluded their widespread use in condensed-phase simulations: the computational cost of the theoretical models, a paucity of approximate models and algorithmic improvements that can ameliorate their cost, underdeveloped interfaces and limited dissemination in computational code bases that are widely used in the computational chemistry community, and software implementations that have not kept pace with modern high-performance computing (HPC) architectures, such as multicore CPUs and modern graphics processing units (GPUs). In this Feature Article we review recent progress made in these areas, including well-defined polarization approximations and new multipole electrostatic formulations, novel methods for solving the mutual polarization equations and increasing the MD time step, combining linear-scaling electronic structure methods with new QM/MM methods that account for mutual polarization between the two regions, and the greatly improved software deployment of these models and methods onto GPU and CPU hardware platforms. We have now approached an era where multipole-based polarizable force fields can be routinely used to obtain computational results comparable to state-of-the-art density functional theory while reaching sampling statistics that are acceptable compared to those obtained from simpler fixed partial charge force fields.

  8. Advancing the LSST Operations Simulator

    NASA Astrophysics Data System (ADS)

    Saha, Abhijit; Ridgway, S. T.; Cook, K. H.; Delgado, F.; Chandrasekharan, S.; Petry, C. E.; Operations Simulator Group

    2013-01-01

    The Operations Simulator for the Large Synoptic Survey Telescope (LSST; http://lsst.org) allows the planning of LSST observations that obey explicit science-driven observing specifications, patterns, schema, and priorities, while optimizing against the constraints placed by the design-specific opto-mechanical system performance of the telescope facility, site-specific conditions (including weather and seeing), as well as additional scheduled and unscheduled downtime. A simulation run records the characteristics of all observations (e.g., epoch, sky position, seeing, sky brightness) in a MySQL database, which can be queried for any desired purpose. Derivative information digests of the observing history database are made with an analysis package called Simulation Survey Tools for Analysis and Reporting (SSTAR). Merit functions and metrics have been designed to examine how suitable a specific simulation run is for several different science applications. This poster reports recent work which has focused on an architectural restructuring of the code that will allow us to a) use "look-ahead" strategies that avoid cadence sequences that cannot be completed due to observing constraints; and b) examine alternate optimization strategies, so that the most efficient scheduling algorithm(s) can be identified and used: even few-percent efficiency gains will create substantive scientific opportunity. The enhanced simulator will be used to assess the feasibility of desired observing cadences, study the impact of changing science program priorities, and assist with performance margin investigations of the LSST system.
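    The observation-history database "can be queried for any desired purpose"; a minimal sketch of such a digest query is shown below, using Python's sqlite3 as a stand-in for MySQL. The table and column names are hypothetical, not the actual OpSim schema.

```python
import sqlite3

# Hypothetical miniature of an observation-history table; the real OpSim
# schema differs and lives in MySQL, but the query pattern is the same.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE obs_history (
    obs_id INTEGER PRIMARY KEY, mjd REAL, ra REAL, dec REAL,
    filter TEXT, seeing REAL, sky_brightness REAL)""")
con.executemany(
    "INSERT INTO obs_history VALUES (?,?,?,?,?,?,?)",
    [(1, 59853.1, 10.2, -30.1, "r", 0.7, 21.2),
     (2, 59853.2, 10.4, -30.0, "g", 0.9, 21.9),
     (3, 59854.1, 55.0, -12.5, "r", 1.3, 20.8)])

# Example digest: per-filter visit counts and mean seeing, the kind of
# summary an analysis package like SSTAR derives from a simulation run.
stats = con.execute(
    "SELECT filter, COUNT(*), AVG(seeing) FROM obs_history "
    "GROUP BY filter ORDER BY filter").fetchall()
```

    Merit functions for a science case would be built the same way: a query selecting the relevant visits, followed by a metric computed over the returned rows.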

  9. Simulation Of Advanced Train Control Systems

    NASA Astrophysics Data System (ADS)

    Craven, Paul; Oman, Paul

    This paper describes an Advanced Train Control System (ATCS) simulation environment created using the Network Simulator 2 (ns-2) discrete event network simulation system. The ATCS model is verified using ATCS monitoring software, laboratory results and a comparison with a mathematical model of ATCS communications. The simulation results are useful in understanding ATCS communication characteristics and identifying protocol strengths, weaknesses, vulnerabilities and mitigation techniques. By setting up a suite of ns-2 scripts, an engineer can simulate hundreds of possible scenarios in the space of a few seconds to investigate failure modes and consequences.
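    The discrete-event style of an ns-2 study can be sketched in a few lines: an event queue ordered by time, with messages contending for a shared channel. This toy model is purely illustrative and is unrelated to the actual ATCS protocol model; the transmit-time parameter and serialization rule are assumptions.

```python
import heapq

def simulate(arrivals, tx_time):
    """Tiny discrete-event sketch of a shared half-duplex channel:
    each message occupies the channel for tx_time; a message arriving
    during a transmission queues behind it. Returns completion times."""
    events = [(t, i) for i, t in enumerate(arrivals)]  # (arrival, msg id)
    heapq.heapify(events)
    channel_free, delivered = 0.0, {}
    while events:
        arrival, mid = heapq.heappop(events)
        start = max(arrival, channel_free)  # wait for the channel if busy
        channel_free = start + tx_time
        delivered[mid] = channel_free
    return delivered

done = simulate([0.0, 0.1, 5.0], tx_time=1.0)
```

    Sweeping parameters of such a model over many runs is the batching idea the abstract describes: a suite of scripted scenarios executed in seconds to probe failure modes and consequences.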

  10. Advanced Vadose Zone Simulations Using TOUGH

    SciTech Connect

    Finsterle, S.; Doughty, C.; Kowalsky, M.B.; Moridis, G.J.; Pan, L.; Xu, T.; Zhang, Y.; Pruess, K.

    2007-02-01

    The vadose zone can be characterized as a complex subsurface system in which intricate physical and biogeochemical processes occur in response to a variety of natural forcings and human activities. This makes it difficult to describe, understand, and predict the behavior of this specific subsurface system. The TOUGH nonisothermal multiphase flow simulators are well-suited to perform advanced vadose zone studies. The conceptual models underlying the TOUGH simulators are capable of representing features specific to the vadose zone, and of addressing a variety of coupled phenomena. Moreover, the simulators are integrated into software tools that enable advanced data analysis, optimization, and system-level modeling. We discuss fundamental and computational challenges in simulating vadose zone processes, review recent advances in modeling such systems, and demonstrate some capabilities of the TOUGH suite of codes using illustrative examples.

  11. Advances in atomic oxygen simulation

    NASA Technical Reports Server (NTRS)

    Froechtenigt, Joseph F.; Bareiss, Lyle E.

    1990-01-01

    Atomic oxygen (AO) present in the atmosphere at orbital altitudes of 200 to 700 km has been shown to degrade various exposed materials on Shuttle flights. The relative velocity of the AO with respect to the spacecraft, together with the AO density, combine to yield an environment consisting of a 5 eV beam energy with a flux of 10^14 to 10^15 oxygen atoms/cm^2/s. An AO ion beam apparatus that produces flux levels and energies similar to those encountered by spacecraft in low Earth orbit (LEO) has been in existence since 1987. Test data were obtained from the interaction of the AO ion beam with materials used in space applications (carbon, silver, Kapton) and with several special coatings of interest deposited on various surfaces. The ultimate design goal of the AO beam simulation device is to produce neutral AO at sufficient flux levels to replicate on-orbit conditions. A newly acquired mass spectrometer with energy discrimination has allowed 5 eV neutral oxygen atoms to be separated and detected from the background of thermal oxygen atoms of approximately 0.2 eV. Neutralization of the AO ion beam at 5 eV was demonstrated at the Martin Marietta AO facility.

  12. Advanced epidemiologic and analytical methods.

    PubMed

    Albanese, E

    2016-01-01

    Observational studies are indispensable for etiologic research, and are key to test life-course hypotheses and improve our understanding of neurologic diseases that have long induction and latency periods. In recent years a plethora of advanced design and analytic techniques have been developed to strengthen the robustness and ultimately the validity of the results of observational studies, and to address their inherent proneness to bias. It is the responsibility of clinicians and researchers to critically appraise and appropriately contextualize the findings of the exponentially expanding scientific literature. This critical appraisal should be rooted in a thorough understanding of advanced epidemiologic methods and techniques commonly used to formulate and test relevant hypotheses and to keep bias at bay. PMID:27637951

  13. Center for Advanced Modeling and Simulation Intern

    ScienceCinema

    Gertman, Vanessa

    2016-07-12

    Some interns just copy papers and seal envelopes. Not at INL! Check out how Vanessa Gertman, an INL intern working at the Center for Advanced Modeling and Simulation, spent her summer working with some intense visualization software. Lots more content like this is available at INL's facebook page http://www.facebook.com/idahonationallaboratory.

  14. Center for Advanced Modeling and Simulation Intern

    SciTech Connect

    Gertman, Vanessa

    2010-01-01

    Some interns just copy papers and seal envelopes. Not at INL! Check out how Vanessa Gertman, an INL intern working at the Center for Advanced Modeling and Simulation, spent her summer working with some intense visualization software. Lots more content like this is available at INL's facebook page http://www.facebook.com/idahonationallaboratory.

  15. Methods of channeling simulation

    SciTech Connect

    Barrett, J.H.

    1989-06-01

    Many computer simulation programs have been used to interpret experiments almost since the first channeling measurements were made. Certain aspects of these programs are important in how accurately they simulate ions in crystals; among these are the manner in which the structure of the crystal is incorporated, how any quantity of interest is computed, what ion-atom potential is used, how deflections are computed from the potential, incorporation of thermal vibrations of the lattice atoms, correlations of thermal vibrations, and form of stopping power. Other aspects of the programs are included to improve the speed; among these are table lookup, importance sampling, and the multiparameter method. It is desirable for programs to facilitate incorporation of special features of interest in special situations; examples are relaxations and enhanced vibrations of surface atoms, easy substitution of an alternate potential for comparison, change of row directions from layer to layer in strained-layer lattices, and different vibration amplitudes for substitutional solute or impurity atoms. Ways of implementing all of these aspects and features and the consequences of them will be discussed. 30 refs., 3 figs.
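    Of the speed-improving aspects listed, importance sampling is the easiest to illustrate in isolation: sample from a distribution concentrated where the contribution is, then reweight by the density ratio. The Gaussian tail-probability example below is a generic sketch of the technique, not code taken from any channeling program.

```python
import math, random

def tail_prob_importance(threshold, n=20000, seed=1):
    """Importance-sampling estimate of P(Z > threshold) for Z ~ N(0,1):
    sample from the shifted proposal N(threshold, 1) and weight each
    accepted sample by the density ratio phi(z) / phi(z - threshold),
    which simplifies to exp(-threshold*z + threshold**2/2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(threshold, 1.0)
        if z > threshold:
            total += math.exp(-threshold * z + 0.5 * threshold ** 2)
    return total / n

# P(Z > 4) is about 3.2e-5; naive sampling would see almost no hits,
# while the shifted proposal lands half its samples in the tail.
est = tail_prob_importance(4.0)
```

    In a channeling code the same idea applies to rare trajectories (e.g. large-angle deflections): bias the sampling toward them and carry the weight, so rare events are resolved without a proportionally larger run.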

  16. Dynamic Simulations of Advanced Fuel Cycles

    SciTech Connect

    Steven J. Piet; Brent W. Dixon; Jacob J. Jacobson; Gretchen E. Matthern; David E. Shropshire

    2011-03-01

    Years of performing dynamic simulations of advanced nuclear fuel cycle options provide insights into how they could work and how one might transition from the current once-through fuel cycle. This paper summarizes those insights from the context of the 2005 objectives and goals of the U.S. Advanced Fuel Cycle Initiative (AFCI). Our intent is not to compare options, assess options versus those objectives and goals, nor recommend changes to those objectives and goals. Rather, we organize what we have learned from dynamic simulations in the context of the AFCI objectives for waste management, proliferation resistance, uranium utilization, and economics. Thus, we do not merely describe “lessons learned” from dynamic simulations but attempt to answer the “so what” question by using this context. The analyses have been performed using the Verifiable Fuel Cycle Simulation of Nuclear Fuel Cycle Dynamics (VISION). We observe that the 2005 objectives and goals do not address many of the inherently dynamic discriminators among advanced fuel cycle options and transitions thereof.

  17. METC Gasifier Advanced Simulation (MGAS) model

    SciTech Connect

    Syamlal, M.; Bissett, L.A.

    1992-01-01

    Morgantown Energy Technology Center is developing an advanced moving-bed gasifier, which is the centerpiece of the Integrated Gasifier Combined-Cycle (IGCC) system, with the features of good efficiency, low cost, and minimal environmental impact. A mathematical model of the gasifier, the METC-Gasifier Advanced Simulation (MGAS) model, has been developed for the analysis and design of advanced gasifiers and other moving-bed gasifiers. This report contains the technical and the user manuals of the MGAS model. The MGAS model can describe the transient operation of coflow, counterflow, or fixed-bed gasifiers. It is a one-dimensional model and can simulate the addition and withdrawal of gas and solids at multiple locations in the bed, a feature essential for simulating beds with recycle. The model describes the reactor in terms of a gas phase and a solids (coal or char) phase. These phases may exist at different temperatures. The model considers several combustion, gasification, and initial stage reactions. The model consists of a set of mass balances for 14 gas species and three coal (pseudo-) species and energy balances for the gas and the solids phases. The resulting partial differential equations are solved using a finite difference technique.

  18. Onyx-Advanced Aeropropulsion Simulation Framework Created

    NASA Technical Reports Server (NTRS)

    Reed, John A.

    2001-01-01

    The Numerical Propulsion System Simulation (NPSS) project at the NASA Glenn Research Center is developing a new software environment for analyzing and designing aircraft engines and, eventually, space transportation systems. Its purpose is to dramatically reduce the time, effort, and expense necessary to design and test jet engines by creating sophisticated computer simulations of an aerospace object or system (refs. 1 and 2). Through a university grant as part of that effort, researchers at the University of Toledo have developed Onyx, an extensible Java-based (Sun Microsystems, Inc.), object-oriented simulation framework, to investigate how advanced software design techniques can be successfully applied to aeropropulsion system simulation (refs. 3 and 4). The design of Onyx's architecture enables users to customize and extend the framework to add new functionality or adapt simulation behavior as required. It exploits object-oriented technologies, such as design patterns, domain frameworks, and software components, to develop a modular system in which users can dynamically replace components with others having different functionality.

  19. Software Framework for Advanced Power Plant Simulations

    SciTech Connect

    John Widmann; Sorin Munteanu; Aseem Jain; Pankaj Gupta; Mark Moales; Erik Ferguson; Lewis Collins; David Sloan; Woodrow Fiveland; Yi-dong Lang; Larry Biegler; Michael Locke; Simon Lingard; Jay Yun

    2010-08-01

    This report summarizes the work accomplished during the Phase II development effort of the Advanced Process Engineering Co-Simulator (APECS). The objective of the project is to develop the tools to efficiently combine high-fidelity computational fluid dynamics (CFD) models with process modeling software. During the course of the project, a robust integration controller was developed that can be used in any CAPE-OPEN compliant process modeling environment. The controller mediates the exchange of information between the process modeling software and the CFD software. Several approaches to reducing the time disparity between CFD simulations and process modeling have been investigated and implemented. These include enabling the CFD models to be run on a remote cluster and enabling multiple CFD models to be run simultaneously. Furthermore, computationally fast reduced-order models (ROMs) have been developed that can be 'trained' using the results from CFD simulations and then used directly within flowsheets. Unit operation models (both CFD and ROMs) can be uploaded to a model database and shared between multiple users.
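    The idea of "training" a fast reduced-order model on CFD results can be sketched with the simplest possible surrogate: a least-squares polynomial fit through sampled responses. The quadratic form, the sample data, and the single-input setting below are illustrative assumptions, not the APECS ROM machinery.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit y ~ a + b*x + c*x**2 via the normal equations,
    solved with Gaussian elimination (partial pivoting). A stand-in for
    training a reduced-order model on CFD samples."""
    # Accumulate A^T A and A^T y for the design matrix rows [1, x, x^2].
    m = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, y in zip(xs, ys):
        row = [1.0, x, x * x]
        for i in range(3):
            rhs[i] += row[i] * y
            for j in range(3):
                m[i][j] += row[i] * row[j]
    # Forward elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            rhs[r] -= f * rhs[col]
            for j in range(col, 3):
                m[r][j] -= f * m[col][j]
    # Back substitution.
    coef = [0.0] * 3
    for i in (2, 1, 0):
        coef[i] = (rhs[i] - sum(m[i][j] * coef[j]
                                for j in range(i + 1, 3))) / m[i][i]
    return coef

# "Train" on exact samples of y = 2 + 0.5*x + 3*x^2, as if each sample
# were one converged CFD run at a different operating condition.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
coef = fit_quadratic(xs, [2.0 + 0.5 * x + 3.0 * x * x for x in xs])
```

    Once fitted, evaluating the surrogate costs microseconds instead of a CFD run, which is what lets a ROM sit directly inside a process flowsheet.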

  20. Recent advances of strong-strong beam-beam simulation

    SciTech Connect

    Qiang, Ji; Furman, Miguel A.; Ryne, Robert D.; Fischer, Wolfram; Ohmi, Kazuhito

    2004-09-15

    In this paper, we report on recent advances in strong-strong beam-beam simulation. Numerical methods used in the calculation of the beam-beam forces are reviewed. A new computational method to solve the Poisson equation on a nonuniform grid is presented. This method reduces the computational cost by half compared with the standard FFT-based method on a uniform grid. It is also more accurate than the standard method for a colliding beam with a low transverse aspect ratio. In applications, we present the study of coherent modes with multi-bunch, multi-collision beam-beam interactions at RHIC. We also present the strong-strong simulation of the luminosity evolution at KEKB with and without a finite crossing angle.
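    As a point of reference for grid-based Poisson solvers, the one-dimensional finite-difference case can be solved directly in O(n) with the Thomas (tridiagonal) algorithm. The sketch below is a generic illustration of that baseline, not the nonuniform-grid or FFT-based 2-D methods the paper compares.

```python
import math

def solve_poisson_1d(rho, h):
    """Solve -u'' = rho at n interior points of a uniform grid with
    u = 0 at both boundaries, using the Thomas algorithm on the
    standard (-1, 2, -1)/h^2 tridiagonal system."""
    n = len(rho)
    a = -1.0 / (h * h)          # off-diagonal entry
    b = 2.0 / (h * h)           # diagonal entry
    cp = [0.0] * n              # modified upper diagonal
    dp = [0.0] * n              # modified right-hand side
    cp[0] = a / b
    dp[0] = rho[0] / b
    for i in range(1, n):       # forward sweep
        denom = b - a * cp[i - 1]
        cp[i] = a / denom
        dp[i] = (rho[i] - a * dp[i - 1]) / denom
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

# Manufactured solution: -u'' = pi^2 sin(pi x) gives u = sin(pi x) on (0, 1).
n = 99
h = 1.0 / (n + 1)
u = solve_poisson_1d([math.pi ** 2 * math.sin(math.pi * (i + 1) * h)
                      for i in range(n)], h)
```

    The discretization error scales as h^2, so halving h quarters the error; the cost per solve stays linear in the number of grid points.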

  1. CASL: The Consortium for Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Kothe, Douglas B.

    2010-11-01

    Like the fusion community, the nuclear engineering community is embarking on a new computational effort to create integrated, multiphysics simulations. The Consortium for Advanced Simulation of Light Water Reactors (CASL), one of three newly funded DOE Energy Innovation Hubs, brings together an exceptionally capable team that will apply existing modeling and simulation capabilities and develop advanced capabilities to create a usable environment for predictive simulation of light water reactors (LWRs). This environment, designated the Virtual Reactor (VR), will: 1) Enable the use of leadership-class computing for engineering design and analysis to improve reactor capabilities, 2) Promote an enhanced scientific basis and understanding by replacing empirically based design and analysis tools with predictive capabilities, 3) Develop a highly integrated multiphysics environment for engineering analysis through increased-fidelity methods, and 4) Incorporate UQ as a basis for developing priorities and supporting application of the VR tools for predictive simulation. In this presentation, we present the plans for CASL and comment on the similarities and differences with the proposed Fusion Simulation Project (FSP).

  2. Interactive visualization to advance earthquake simulation

    USGS Publications Warehouse

    Kellogg, L.H.; Bawden, G.W.; Bernardin, T.; Billen, M.; Cowgill, E.; Hamann, B.; Jadamec, M.; Kreylos, O.; Staadt, O.; Sumner, D.

    2008-01-01

    The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, to evaluate the underlying models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth's surface and interior. Virtual mapping tools allow virtual "field studies" in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method's strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists, who are trained to interpret the often limited geological and geophysical data available from field observations. © Birkhäuser 2008.

  3. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... airplane simulators. The requirements in this appendix are in addition to the simulator approval requirements in § 121.407. Each simulator used under this appendix must be approved as a Level B, C, or D simulator, as appropriate. Advanced Simulation Training Program For an operator to conduct Level C or...

  4. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... airplane simulators. The requirements in this appendix are in addition to the simulator approval requirements in § 121.407. Each simulator used under this appendix must be approved as a Level B, C, or D simulator, as appropriate. Advanced Simulation Training Program For an operator to conduct Level C or...

  5. Advanced studies on Simulation Methodologies for very Complicated Fracture Phenomena

    NASA Astrophysics Data System (ADS)

    Nishioka, Toshihisa

    2010-06-01

    Although computational techniques are now well developed, extremely complicated fracture phenomena remain very difficult for general engineers and researchers to simulate. To overcome the many difficulties in such simulations, we have developed not only simulation methodologies but also their theoretical basis and concepts. Extremely complicated fracture patterns are often observed, especially in dynamic fracture phenomena such as dynamic crack branching, kinking, and curving. For example, although humankind, from primitive men to modern scientists such as Albert Einstein, had observed the post-mortem patterns of dynamic crack branching, the governing condition for the onset of the phenomenon remained unsolved until our experimental study. From these studies, we found the governing condition of dynamic crack bifurcation: when the total energy flux per unit time into a propagating crack tip reaches the material crack resistance, the crack branches into two cracks [the total energy flux criterion]. The crack branches repeatedly whenever the criterion is satisfied. Further complexities also arise from the time dependence and/or deformation dependence of these phenomena. To make the simulation of such extremely complicated fracture phenomena possible, we developed many original advanced computational methods and technologies: (i) a moving finite element method based on Delaunay automatic triangulation (MFEMBOAT); (ii) a path-independent, equivalent domain integral expression of the dynamic J integral associated with a continuous auxiliary function; (iii) mixed-phase path-prediction mode simulation; and (iv) an implicit path prediction criterion. In this paper, these advanced computational methods are thoroughly explained together with successful comparisons with the experimental results.
Since multiple dynamic crack branching may be the most complicated fracture phenomenon, owing to its complicated fracture paths and its time dependence (transient behavior), this

  6. Advanced Modeling, Simulation and Analysis (AMSA) Capability Roadmap Progress Review

    NASA Technical Reports Server (NTRS)

    Antonsson, Erik; Gombosi, Tamas

    2005-01-01

    Contents include the following: NASA capability roadmap activity. Advanced modeling, simulation, and analysis overview. Scientific modeling and simulation. Operations modeling. Multi-spectral sensing (UV-gamma). System integration. M&S environments and infrastructure.

  7. Advanced Fuzzy Potential Field Method for Mobile Robot Obstacle Avoidance

    PubMed Central

    Park, Jong-Wook; Kwak, Hwan-Joo; Kang, Young-Chang; Kim, Dong W.

    2016-01-01

    An advanced fuzzy potential field method for mobile robot obstacle avoidance is proposed. The potential field method primarily deals with the repulsive forces surrounding obstacles, while fuzzy control logic focuses on fuzzy rules that handle linguistic variables and describe the knowledge of experts. The design of a fuzzy controller—advanced fuzzy potential field method (AFPFM)—that models and enhances the conventional potential field method is proposed and discussed. This study also examines the rule-explosion problem of conventional fuzzy logic and assesses the performance of our proposed AFPFM through simulations carried out using a mobile robot. PMID:27123001
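    The conventional potential field method that AFPFM builds on can be sketched directly: an attractive force toward the goal plus repulsive forces from obstacles within an influence radius. The gains and geometry below are illustrative assumptions; adapting such gains with fuzzy rules is precisely the enhancement the AFPFM adds.

```python
import math

def potential_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Classical potential field force on a point robot in 2-D:
    linear attraction toward the goal, plus the standard repulsive term
    k_rep * (1/d - 1/d0) / d^2 from each obstacle closer than d0."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < d0:  # obstacle inside its influence radius
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy

# Far from the obstacle, only the attractive pull toward the goal acts.
f = potential_force((0.0, 0.0), (10.0, 0.0), obstacles=[(5.0, 0.5)])
```

    The robot then takes a small step along the force direction each cycle; fixed gains are what produce the well-known local-minimum and oscillation problems that fuzzy tuning targets.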

  9. Advanced reliability methods - A review

    NASA Astrophysics Data System (ADS)

    Forsyth, David S.

    2016-02-01

    There are a number of challenges to the current practices for Probability of Detection (POD) assessment. Some Nondestructive Testing (NDT) methods, especially those that are image-based, may not provide a simple relationship between a scalar NDT response and a damage size. Some damage types are not easily characterized by a single scalar metric. Other sensing paradigms, such as structural health monitoring, could theoretically replace NDT but require a POD estimate. And the cost of performing large empirical studies to estimate POD can be prohibitive. The response of the research community has been to develop new methods that can be used to generate the same information, POD, in a form that can be used by engineering designers. This paper will highlight approaches to image-based data and complex defects, Model Assisted POD estimation, and Bayesian methods for combining information. This paper will also review the relationship of the POD estimate, confidence bounds, tolerance bounds, and risk assessment.
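    For hit/miss NDT data at a single flaw size, a POD point estimate with a one-sided lower confidence bound can be sketched as below. The Wald-style bound is a deliberate simplification, not the model-assisted or Bayesian machinery the paper reviews, and the counts are invented.

```python
import math

def pod_with_lower_bound(hits, trials, z=1.645):
    """Point estimate and one-sided ~95% lower confidence bound (Wald
    normal approximation) for probability of detection from hit/miss
    data at one flaw size; illustrative only."""
    p = hits / trials
    se = math.sqrt(p * (1.0 - p) / trials)
    return p, max(0.0, p - z * se)

# 27 detections in 30 inspections of same-size flaws (invented counts).
pod, lcb = pod_with_lower_bound(27, 30)
```

    The gap between the point estimate and its lower bound is exactly why large empirical studies get expensive: tightening the bound at a fixed flaw size requires many more inspections, which motivates the model-assisted and information-combining approaches the paper highlights.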

  10. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... and a means for achieving flightcrew training in advanced airplane simulators. The requirements in this appendix are in addition to the simulator approval requirements in § 121.407. Each simulator used under this appendix must be approved as a Level B, C, or D simulator, as appropriate....

  11. The Consortium for Advanced Simulation of Light Water Reactors

    SciTech Connect

    Ronaldo Szilard; Hongbin Zhang; Doug Kothe; Paul Turinsky

    2011-10-01

    The Consortium for Advanced Simulation of Light Water Reactors (CASL) is a DOE Energy Innovation Hub for modeling and simulation of nuclear reactors. It brings together an exceptionally capable team from national labs, industry and academia that will apply existing modeling and simulation capabilities and develop advanced capabilities to create a usable environment for predictive simulation of light water reactors (LWRs). This environment, designated as the Virtual Environment for Reactor Applications (VERA), will incorporate science-based models, state-of-the-art numerical methods, modern computational science and engineering practices, and uncertainty quantification (UQ) and validation against data from operating pressurized water reactors (PWRs). It will couple state-of-the-art fuel performance, neutronics, thermal-hydraulics (T-H), and structural models with existing tools for systems and safety analysis and will be designed for implementation on both today's leadership-class computers and the advanced architecture platforms now under development by the DOE. CASL focuses on a set of challenge problems, such as CRUD-induced power shift and localized corrosion, grid-to-rod fretting fuel failures, pellet-clad interaction, and fuel assembly distortion, that encompass the key phenomena limiting the performance of PWRs. It is expected that much of the capability developed will be applicable to other types of reactors. CASL's mission is to develop and apply modeling and simulation capabilities to address three critical areas of performance for nuclear power plants: (1) reduce capital and operating costs per unit energy by enabling power uprates and plant lifetime extension, (2) reduce nuclear waste volume generated by enabling higher fuel burnup, and (3) enhance nuclear safety by enabling high-fidelity predictive capability for component performance.

  12. Advanced Fine Particulate Characterization Methods

    SciTech Connect

    Steven Benson; Lingbu Kong; Alexander Azenkeng; Jason Laumb; Robert Jensen; Edwin Olson; Jill MacKenzie; A.M. Rokanuzzaman

    2007-01-31

    The characterization and control of emissions from combustion sources are of significant importance in improving local and regional air quality. Such emissions include fine particulate matter, organic carbon compounds, and NO{sub x} and SO{sub 2} gases, along with mercury and other toxic metals. This project involved four activities including Further Development of Analytical Techniques for PM{sub 10} and PM{sub 2.5} Characterization and Source Apportionment and Management, Organic Carbonaceous Particulate and Metal Speciation for Source Apportionment Studies, Quantum Modeling, and High-Potassium Carbon Production with Biomass-Coal Blending. The key accomplishments included the development of improved automated methods to characterize the inorganic and organic components of particulate matter. The methods involved the use of scanning electron microscopy and x-ray microanalysis for the inorganic fraction and a combination of extractive methods combined with near-edge x-ray absorption fine structure to characterize the organic fraction. These methods have direct application for source apportionment studies of PM because they provide detailed inorganic analysis along with total organic and elemental carbon (OC/EC) quantification. Quantum modeling using density functional theory (DFT) calculations was used to further elucidate a recently developed mechanistic model for mercury speciation in coal combustion systems and interactions on activated carbon. Reaction energies, enthalpies, free energies and binding energies of Hg species to the prototype molecules were derived from the data obtained in these calculations. Bimolecular rate constants for the various elementary steps in the mechanism have been estimated using the hard-sphere collision theory approximation, and the results seem to indicate that extremely fast kinetics could be involved in these surface reactions. Activated carbon was produced from a blend of lignite coal from the Center Mine in North Dakota and

  13. Advanced Virtual Reality Simulations in Aerospace Education and Research

    NASA Astrophysics Data System (ADS)

    Plotnikova, L.; Trivailo, P.

    2002-01-01

    Recent research developments at Aerospace Engineering, RMIT University have demonstrated great potential for using Virtual Reality simulations as a very effective tool in advanced structures and dynamics applications. They have also been extremely successful in the teaching of various undergraduate and postgraduate courses for presenting complex concepts in structural and dynamics design. Characteristic examples are related to classical orbital mechanics, spacecraft attitude and structural dynamics. Advanced simulations, reflecting current research by the authors, are mainly related to the implementation of various non-linear dynamic techniques, including using Kane's equations to study the dynamics of space tethered satellite systems and the Co-rotational Finite Element method to study reconfigurable robotic systems undergoing large rotations and large translations. The current article will describe the numerical implementation of these modern methods of dynamics and will concentrate on the post-processing stage of the dynamic simulations. Numerous examples of building Virtual Reality stand-alone animations, designed by the authors, will be discussed in detail. The striking feature of the developed technology is the use of standard mathematical packages, such as MATLAB, as a post-processing tool to generate Virtual Reality Modelling Language files with interactive graphics and audio effects. These stand-alone demonstration files can be run under Netscape or Microsoft Explorer and do not require MATLAB. Use of this technology enables scientists to easily share their results with colleagues using the Internet, contributing to flexible learning development at schools and universities.

  14. Advanced optical fiber communication simulations in electrotechnical engineering education

    NASA Astrophysics Data System (ADS)

    Vervaeke, Michael; Nguyen Thi, Cac; Thienpont, Hugo

    2004-10-01

    We present our efforts to bring advanced optical communication simulation software into our Electrical Engineering curriculum by implementing examples from theoretical courses with commercially available simulation tools. Photonic design software is an interesting tool for the education of engineers: it can simulate a huge variety of photonic components without major investments in student lab hardware. Moreover, some exotic phenomena, which would usually involve specialty hardware, can be taught. We chose to implement VPItransmissionMaker from VPIsystems in the lab exercises for graduating Electrotechnical Engineers with majors in Photonics. The guideline we developed starts with basic examples provided by VPIsystems. The simplified simulation schemes serve as an introduction to the simulation techniques. Next, we highlight examples from the theoretical courses on Optical Telecommunications. The last part is an assignment in which students design and simulate a system using real-life component datasheets. The aim is to train them to interpret datasheets, to make design choices for their optical fiber system, and to enhance their management skills. We detail our approach, highlight the educational aspects and the insight gained by the students, and illustrate our method with different examples.

  15. Precision Casting via Advanced Simulation and Manufacturing

    NASA Technical Reports Server (NTRS)

    1997-01-01

    A two-year program was conducted to develop and commercially implement selected casting manufacturing technologies to enable significant reductions in the costs of castings, increase the complexity and dimensional accuracy of castings, and reduce the development times for delivery of high quality castings. The industry-led R&D project was cost shared with NASA's Aerospace Industry Technology Program (AITP). The Rocketdyne Division of Boeing North American, Inc. served as the team lead with participation from Lockheed Martin, Ford Motor Company, Howmet Corporation, PCC Airfoils, General Electric, UES, Inc., University of Alabama, Auburn University, Robinson, Inc., Aracor, and NASA-LeRC. The technical effort was organized into four distinct tasks; the accomplishments are reported herein. Task 1.0 developed advanced simulation technology for core molding; Ford headed up this task. On this program, a specialized core machine was designed and built. Task 2.0 focused on intelligent process control for precision core molding; Howmet led this effort. The primary focus of these experimental efforts was to characterize the process parameters that have a strong impact on dimensional control issues of injection molded cores during their fabrication. Task 3.0 developed and applied rapid prototyping to produce near net shape castings; Rocketdyne was responsible for this task. CAD files were generated using reverse engineering, rapid prototype patterns were fabricated using SLS and SLA, and castings were produced and evaluated. Task 4.0 was aimed at technology transfer; Rocketdyne coordinated this task. Casting-related technology, explored and evaluated in the first three tasks of this program, was implemented into manufacturing processes.

  16. New Developments in the Simulation of Advanced Accelerator Concepts

    SciTech Connect

    Bruhwiler, David L.; Cary, John R.; Cowan, Benjamin M.; Paul, Kevin; Mullowney, Paul J.; Messmer, Peter; Geddes, Cameron G. R.; Esarey, Eric; Cormier-Michel, Estelle; Leemans, Wim; Vay, Jean-Luc

    2009-01-22

    Improved computational methods are essential to the diverse and rapidly developing field of advanced accelerator concepts. We present an overview of some computational algorithms for laser-plasma concepts and high-brightness photocathode electron sources. In particular, we discuss algorithms for reduced laser-plasma models that can be orders of magnitude faster than their higher-fidelity counterparts, as well as important on-going efforts to include relevant additional physics that has been previously neglected. As an example of the former, we present 2D laser wakefield accelerator simulations in an optimal Lorentz frame, demonstrating >10 GeV energy gain of externally injected electrons over a 2 m interaction length, showing good agreement with predictions from scaled simulations and theory, with a speedup factor of ~2,000 as compared to standard particle-in-cell.

  17. New Developments in the Simulation of Advanced Accelerator Concepts

    SciTech Connect

    Paul, K.; Cary, J.R.; Cowan, B.; Bruhwiler, D.L.; Geddes, C.G.R.; Mullowney, P.J.; Messmer, P.; Esarey, E.; Cormier-Michel, E.; Leemans, W.P.; Vay, J.-L.

    2008-09-10

    Improved computational methods are essential to the diverse and rapidly developing field of advanced accelerator concepts. We present an overview of some computational algorithms for laser-plasma concepts and high-brightness photocathode electron sources. In particular, we discuss algorithms for reduced laser-plasma models that can be orders of magnitude faster than their higher-fidelity counterparts, as well as important on-going efforts to include relevant additional physics that has been previously neglected. As an example of the former, we present 2D laser wakefield accelerator simulations in an optimal Lorentz frame, demonstrating >10 GeV energy gain of externally injected electrons over a 2 m interaction length, showing good agreement with predictions from scaled simulations and theory, with a speedup factor of ~2,000 as compared to standard particle-in-cell.

  18. ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS

    SciTech Connect

    Helmberger, Donald V.; Tromp, Jeroen; Rodgers, Arthur J.

    2008-04-15

    The recent Nevada Earthquake (M=6) produced an extraordinary set of crustal guided waves. In this study, we examine the three-component data at all the USArray stations in terms of how well existing models perform in predicting the various phases, Rayleigh waves, Love waves, and Pnl waves. To establish the source parameters, we applied the Cut and Paste Code up to a distance of 5° for an average local crustal model, which produced a normal mechanism (strike=35°, dip=41°, rake=-85°) at a depth of 9 km and Mw=5.9. Assuming this mechanism, we generated synthetics at all distances for a number of 1D and 3D models. The Pnl observations fit the synthetics for the simple models well, both in timing (VPn=7.9 km/s) and in waveform, out to a distance of about 5°. Beyond this distance a great deal of complexity can be seen to the northwest, apparently caused by shallow subducted slab material. These paths require considerable crustal thinning and higher P-velocities. Small delays and advances outline the various tectonic provinces to the south, Colorado Plateau, etc., with velocities compatible with those reported by Song et al. (1996). Five-second Rayleigh waves (Airy Phase) can be observed throughout the whole array and show a great deal of variation (up to 30 s). In general, the Love waves are better behaved than the Rayleigh waves. We are presently adding higher frequencies to the source description by including source complexity. Preliminary inversions suggest rupture to the northeast with a shallow asperity. We are also inverting the aftershocks to extend the frequencies to 2 Hz and beyond, following the calibration method outlined in Tan and Helmberger (2007). This will allow accurate directivity measurements for events with magnitude larger than 3.5. Thus, we will address the energy decay with distance as a function of frequency band for the various source types.

  19. Advanced analysis methods in particle physics

    SciTech Connect

    Bhat, Pushpalatha C.; /Fermilab

    2010-10-01

    Each generation of high energy physics experiments is grander in scale than the previous - more powerful, more complex and more demanding in terms of data handling and analysis. The spectacular performance of the Tevatron and the beginning of operations of the Large Hadron Collider, have placed us at the threshold of a new era in particle physics. The discovery of the Higgs boson or another agent of electroweak symmetry breaking and evidence of new physics may be just around the corner. The greatest challenge in these pursuits is to extract the extremely rare signals, if any, from huge backgrounds arising from known physics processes. The use of advanced analysis techniques is crucial in achieving this goal. In this review, I discuss the concepts of optimal analysis, some important advanced analysis methods and a few examples. The judicious use of these advanced methods should enable new discoveries and produce results with better precision, robustness and clarity.

  20. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High fidelity simulation of nuclear reactors entails large scale applications characterized with high dimensionality and tremendous complexity where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing efficient Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved via identifying the important/influential degrees of freedom (DoF) via the subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL

  1. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1988-01-01

    The development of parametric cost estimating methods for advanced space systems in the conceptual design phase is discussed. The process of identifying variables which drive cost and the relationship between weight and cost are discussed. A theoretical model of cost is developed and tested using a historical data base of research and development projects.
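The weight-to-cost relationship this kind of parametric estimating rests on is commonly fitted as a power-law cost estimating relationship (CER). The sketch below, using invented historical data points rather than the report's actual database, shows the standard log-log regression approach:

```python
import numpy as np

# Hypothetical historical projects: dry weight (kg) vs. development cost ($M).
weights = np.array([500.0, 1200.0, 2500.0, 4000.0, 8000.0])
costs = np.array([40.0, 85.0, 150.0, 220.0, 400.0])

# Fit cost = a * weight**b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(weights), np.log(costs), 1)
a = np.exp(log_a)

def estimate_cost(weight):
    """Parametric cost estimate for a new system of the given weight."""
    return a * weight ** b
```

An exponent `b` below 1 reflects the economies of scale such weight-based CERs typically exhibit; testing the fitted model against held-out historical projects is the analogue of the hypothesis testing described in the abstract.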

  2. Hybrid and Electric Advanced Vehicle Systems Simulation

    NASA Technical Reports Server (NTRS)

    Beach, R. F.; Hammond, R. A.; Mcgehee, R. K.

    1985-01-01

    Predefined components connected to represent wide variety of propulsion systems. Hybrid and Electric Advanced Vehicle System (HEAVY) computer program is flexible tool for evaluating performance and cost of electric and hybrid vehicle propulsion systems. Allows designer to quickly, conveniently, and economically predict performance of proposed drive train.

  3. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Advanced Simulation H Appendix H to Part... REQUIREMENTS: DOMESTIC, FLAG, AND SUPPLEMENTAL OPERATIONS Pt. 121, App. H Appendix H to Part 121—Advanced... ensure that all instructors and check airmen used in appendix H training and checking are...

  4. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Advanced Simulation H Appendix H to Part... REQUIREMENTS: DOMESTIC, FLAG, AND SUPPLEMENTAL OPERATIONS Pt. 121, App. H Appendix H to Part 121—Advanced... ensure that all instructors and check airmen used in appendix H training and checking are...

  5. Simulator design for advanced ISDN satellite design and experiments

    NASA Technical Reports Server (NTRS)

    Pepin, Gerald R.

    1992-01-01

    This simulation design task completion report documents the simulation techniques associated with the network models of both the Interim Service ISDN (integrated services digital network) Satellite (ISIS) and the Full Service ISDN Satellite (FSIS) architectures. The ISIS network model design represents satellite systems like the Advanced Communication Technology Satellite (ACTS) orbiting switch. The FSIS architecture, the ultimate aim of this element of the Satellite Communications Applications Research (SCAR) program, moves all control and switching functions on-board the next generation ISDN communication satellite. The technical and operational parameters for the advanced ISDN communications satellite design will be obtained from the simulation of ISIS and FSIS engineering software models for their major subsystems. Discrete events simulation experiments will be performed with these models using various traffic scenarios, design parameters and operational procedures. The data from these simulations will be used to determine the engineering parameters for the advanced ISDN communications satellite.

  6. Molecular dynamics simulations: advances and applications

    PubMed Central

    Hospital, Adam; Goñi, Josep Ramon; Orozco, Modesto; Gelpí, Josep L

    2015-01-01

    Molecular dynamics simulations have evolved into a mature technique that can be used effectively to understand macromolecular structure-to-function relationships. Present simulation times are close to biologically relevant ones. The information gathered about the dynamic properties of macromolecules is rich enough to shift the usual paradigm of structural bioinformatics from studying single structures to analyzing conformational ensembles. Here, we describe the foundations of molecular dynamics and the improvements made toward obtaining such ensembles. Specific application of the technique to three main issues (allosteric regulation, docking, and structure refinement) is discussed. PMID:26604800

  7. Molecular dynamics simulations: advances and applications

    PubMed Central

    Hospital, Adam; Goñi, Josep Ramon; Orozco, Modesto; Gelpí, Josep L

    2015-01-01

    Molecular dynamics simulations have evolved into a mature technique that can be used effectively to understand macromolecular structure-to-function relationships. Present simulation times are close to biologically relevant ones. The information gathered about the dynamic properties of macromolecules is rich enough to shift the usual paradigm of structural bioinformatics from studying single structures to analyzing conformational ensembles. Here, we describe the foundations of molecular dynamics and the improvements made toward obtaining such ensembles. Specific application of the technique to three main issues (allosteric regulation, docking, and structure refinement) is discussed.

  8. Recent Advances in Binary Black Hole Merger Simulations

    NASA Technical Reports Server (NTRS)

    Barker, John

    2006-01-01

    Recent advances in numerical simulation techniques have led to dramatic progress in understanding binary black hole merger radiation. I present recent results from simulations performed at Goddard, focusing on the gravitational radiation waveforms and the application of these results to gravitational wave observations.

  9. Advanced Simulation and Computing Business Plan

    SciTech Connect

    Rummel, E.

    2015-07-09

    To maintain a credible nuclear weapons program, the National Nuclear Security Administration’s (NNSA’s) Office of Defense Programs (DP) needs to make certain that the capabilities, tools, and expert staff are in place and are able to deliver validated assessments. This requires a complete and robust simulation environment backed by an experimental program to test ASC Program models. This ASC Business Plan document encapsulates a complex set of elements, each of which is essential to the success of the simulation component of the Nuclear Security Enterprise. The ASC Business Plan addresses the hiring, mentoring, and retaining of programmatic technical staff responsible for building the simulation tools of the nuclear security complex. The ASC Business Plan describes how the ASC Program engages with industry partners—partners upon whom the ASC Program relies on for today’s and tomorrow’s high performance architectures. Each piece in this chain is essential to assure policymakers, who must make decisions based on the results of simulations, that they are receiving all the actionable information they need.

  10. PROBLEM OF COMPLEX EIGENSYSTEMS IN THE SEMIANALYTICAL SOLUTION FOR ADVANCEMENT OF TIME IN SOLUTE TRANSPORT SIMULATIONS: A NEW METHOD USING REAL ARITHMETIC.

    USGS Publications Warehouse

    Umari, Amjad M. J.; Gorelick, Steven M.

    1986-01-01

    In the numerical modeling of groundwater solute transport, explicit solutions may be obtained for the concentration field at any future time without computing concentrations at intermediate times. The spatial variables are discretized and time is left continuous in the governing differential equation. These semianalytical solutions have been presented in the literature and involve the eigensystem of a coefficient matrix. This eigensystem may be complex (i.e., have imaginary components) due to the asymmetry created by the advection term in the governing advection-dispersion equation. It is shown here that the error due to ignoring the imaginary components of complex eigenvalues is large for small dispersivity values. A new algorithm that represents the complex eigensystem by converting it to a real eigensystem is presented. The method requires only real arithmetic.
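The effect described can be reproduced in a minimal numpy sketch (an illustration of the problem, not the authors' real-arithmetic algorithm): central differencing of the advection term makes the coefficient matrix asymmetric, its eigensystem complex, and discarding the imaginary parts of the eigenvalues corrupts the semianalytical advance. All grid and transport parameters below are invented for the demonstration:

```python
import numpy as np

# Central-difference discretization of the 1-D advection-dispersion
# operator dc/dt = A c (illustrative values: small dispersion D,
# unit velocity v, unit grid spacing dx).
n, D, v, dx = 6, 0.05, 1.0, 1.0
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = -2 * D / dx**2
    if i > 0:
        A[i, i - 1] = D / dx**2 + v / (2 * dx)  # advection makes A asymmetric
    if i < n - 1:
        A[i, i + 1] = D / dx**2 - v / (2 * dx)

lam, V = np.linalg.eig(A)  # eigenvalues come out complex
c0 = np.zeros(n)
c0[0] = 1.0                # initial concentration pulse
t = 1.0

# Semianalytical advance c(t) = V exp(lam*t) V^{-1} c0 with the full
# complex eigensystem; the physically correct result is real.
c_exact = (V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V) @ c0).real

# Naively discarding the imaginary components of the eigenvalues.
c_wrong = (V @ np.diag(np.exp(lam.real * t)) @ np.linalg.inv(V) @ c0).real

err = np.max(np.abs(c_exact - c_wrong))  # large when dispersivity is small
```

The paper's contribution is to obtain the exact answer while avoiding complex arithmetic altogether, which this sketch does not attempt.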

  11. Simulation Toolkit for Renewable Energy Advanced Materials Modeling

    SciTech Connect

    Sides, Scott; Kemper, Travis; Larsen, Ross; Graf, Peter

    2013-11-13

    STREAMM is a collection of python classes and scripts that enables and eases the setup of input files and configuration files for simulations of advanced energy materials. The core STREAMM python classes provide a general framework for storing, manipulating and analyzing atomic/molecular coordinates to be used in quantum chemistry and classical molecular dynamics simulations of soft materials systems. The design focuses on enabling the interoperability of materials simulation codes such as GROMACS, LAMMPS and Gaussian.

  12. New scene projector developments at the AMRDEC's advanced simulation center

    NASA Astrophysics Data System (ADS)

    Saylor, Daniel A.; Bowden, Mark; Buford, James

    2006-05-01

    The Aviation and Missile Research, Engineering, and Development Center's (AMRDEC) System Simulation and Development Directorate (SS&DD) has an extensive history of applying all types of modeling and simulation (M&S) to weapon system development and has been a particularly strong advocate of hardware-in-the-loop (HWIL) simulation and test for many years. Key to the successful application of HWIL testing at AMRDEC has been the use of state-of-the-art Scene Projector technologies. This paper describes recent advancements over the past year within the AMRDEC Advanced Simulation Center (ASC) HWIL facilities with a specific emphasis on the state of the various IRSP technologies employed. Areas discussed include application of FMS-compatible IR projectors, advancements in hybrid and multi-spectral projectors, and characterization of existing and emerging technologies.

  13. Advances in NLTE Modeling for Integrated Simulations

    SciTech Connect

    Scott, H A; Hansen, S B

    2009-07-08

    The last few years have seen significant progress in constructing the atomic models required for non-local thermodynamic equilibrium (NLTE) simulations. Along with this has come an increased understanding of the requirements for accurately modeling the ionization balance, energy content and radiative properties of different elements for a wide range of densities and temperatures. Much of this progress is the result of a series of workshops dedicated to comparing the results from different codes and computational approaches applied to a series of test problems. The results of these workshops emphasized the importance of atomic model completeness, especially in doubly excited states and autoionization transitions, to calculating ionization balance, and the importance of accurate, detailed atomic data to producing reliable spectra. We describe a simple screened-hydrogenic model that calculates NLTE ionization balance with surprising accuracy, at a low enough computational cost for routine use in radiation-hydrodynamics codes. The model incorporates term splitting, Δn = 0 transitions, and approximate UTA widths for spectral calculations, with results comparable to those of much more detailed codes. Simulations done with this model have been increasingly successful at matching experimental data for laser-driven systems and hohlraums. Accurate and efficient atomic models are just one requirement for integrated NLTE simulations. Coupling the atomic kinetics to hydrodynamics and radiation transport constrains both discretizations and algorithms to retain energy conservation, accuracy and stability. In particular, the strong coupling between radiation and populations can require either very short timesteps or significantly modified radiation transport algorithms to account for NLTE material response. Considerations such as these continue to provide challenges for NLTE simulations.

  14. Editorial: Latest methods and advances in biotechnology.

    PubMed

    Lee, Sang Yup; Jungbauer, Alois

    2014-01-01

    The latest "Biotech Methods and Advances" special issue of Biotechnology Journal continues the BTJ tradition of featuring the latest breakthroughs in biotechnology. The special issue is edited by our Editors-in-Chief, Prof. Sang Yup Lee and Prof. Alois Jungbauer, and covers a wide array of topics in biotechnology, including the perennial favorite workhorses of the biotech industry, Chinese hamster ovary (CHO) cells and Escherichia coli.

  15. Multiple time-scale methods in particle simulations of plasmas

    SciTech Connect

    Cohen, B.I.

    1985-02-14

    This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit moment-equation method, the direct implicit method, orbit averaging, and subcycling.
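The subcycling idea among the surveyed schemes can be illustrated with a toy two-time-scale system (a sketch with made-up parameters, not any of the plasma models in the paper): the fast oscillator is advanced with many small leapfrog substeps inside a single large step of the slow variable:

```python
def subcycled_step(x, y, vy, dt, n_sub, omega=50.0, k=0.1):
    """Advance one large step dt: the slow variable x takes a single
    explicit-Euler step, while the fast oscillator (y, vy) with
    frequency omega is subcycled with n_sub leapfrog substeps."""
    x = x + dt * (-k * x)             # slow dynamics: dx/dt = -k*x
    h = dt / n_sub                    # small substep resolving the fast scale
    vy -= 0.5 * h * omega**2 * y      # initial half kick
    for _ in range(n_sub):
        y += h * vy                   # drift
        vy -= h * omega**2 * y        # full kick
    vy += 0.5 * h * omega**2 * y      # trim the final kick back to a half
    return x, y, vy
```

With `omega*dt = 2.5` the fast oscillator would be unstable if advanced with the large step directly; subcycling at `omega*h = 0.25` keeps it stable while the slow variable still takes only one step per `dt`, which is the efficiency argument the survey makes.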

  16. Process simulation for advanced composites production

    SciTech Connect

    Allendorf, M.D.; Ferko, S.M.; Griffiths, S.

    1997-04-01

    The objective of this project is to improve the efficiency and lower the cost of chemical vapor deposition (CVD) processes used to manufacture advanced ceramics by providing the physical and chemical understanding necessary to optimize and control these processes. Project deliverables include: numerical process models; databases of thermodynamic and kinetic information related to the deposition process; and process sensors and software algorithms that can be used for process control. Target manufacturing techniques include CVD fiber coating technologies (used to deposit interfacial coatings on continuous fiber ceramic preforms), chemical vapor infiltration, thin-film deposition processes used in the glass industry, and coating techniques used to deposit wear-, abrasion-, and corrosion-resistant coatings for use in the pulp and paper, metals processing, and aluminum industries.

  17. Use of advanced computers for aerodynamic flow simulation

    NASA Technical Reports Server (NTRS)

    Bailey, F. R.; Ballhaus, W. F.

    1980-01-01

    The current and projected use of advanced computers for large-scale aerodynamic flow simulation applied to engineering design and research is discussed. The design use of mature codes run on conventional, serial computers is compared with the fluid research use of new codes run on parallel and vector computers. The role of flow simulations in design is illustrated by the application of a three dimensional, inviscid, transonic code to the Sabreliner 60 wing redesign. Research computations that include a more complete description of the fluid physics by use of Reynolds averaged Navier-Stokes and large-eddy simulation formulations are also presented. Results of studies for a numerical aerodynamic simulation facility are used to project the feasibility of design applications employing these more advanced three dimensional viscous flow simulations.

  18. Interoperable Technologies for Advanced Petascale Simulations

    SciTech Connect

    Li, Xiaolin

    2013-01-14

Our final report on the accomplishments of ITAPS at Stony Brook during the period covered by the research award includes component services, interface services, and applications. On the component service, we have designed and implemented robust functionality for the Lagrangian tracking of dynamic interfaces. We have migrated the hyperbolic, parabolic, and elliptic solvers from stage-wise second-order toward globally second-order schemes. We have implemented high-order coupling between interface propagation and interior PDE solvers. On the interface service, we have constructed the FronTier application programmer's interface (API) and its manual page using doxygen. We installed the FronTier functional interface to conform with the ITAPS specifications, especially the iMesh and iMeshP interfaces. On applications, we have implemented deposition and dissolution models with flow, and implemented the two-reactant model for more realistic precipitation at the pore level and its coupling with the Darcy-level model. We have continued our support of the study of fluid mixing problems in inertial confinement fusion. We have continued our support of the MHD model and its application to plasma liner implosion in fusion confinement. We have simulated a step in the reprocessing and separation of spent fuels from nuclear power plant fuel rods. We have implemented fluid-structure interaction for 3D windmill and parachute simulations. We have continued our collaboration with PNNL, BNL, LANL, ORNL, and other SciDAC institutions.

  19. Advanced Bayesian Method for Planetary Surface Navigation

    NASA Technical Reports Server (NTRS)

    Center, Julian

    2015-01-01

    Autonomous Exploration, Inc., has developed an advanced Bayesian statistical inference method that leverages current computing technology to produce a highly accurate surface navigation system. The method combines dense stereo vision and high-speed optical flow to implement visual odometry (VO) to track faster rover movements. The Bayesian VO technique improves performance by using all image information rather than corner features only. The method determines what can be learned from each image pixel and weighs the information accordingly. This capability improves performance in shadowed areas that yield only low-contrast images. The error characteristics of the visual processing are complementary to those of a low-cost inertial measurement unit (IMU), so the combination of the two capabilities provides highly accurate navigation. The method increases NASA mission productivity by enabling faster rover speed and accuracy. On Earth, the technology will permit operation of robots and autonomous vehicles in areas where the Global Positioning System (GPS) is degraded or unavailable.
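The two ideas in this record (weighting each pixel by its information content, and fusing VO with a complementary IMU) both reduce to inverse-variance weighting. The sketch below is an illustrative toy, not the actual system; the function name and all numbers are hypothetical:

```python
def fuse(x_vo, var_vo, x_imu, var_imu):
    """Inverse-variance (Bayesian) fusion of two independent estimates.

    Each source is weighted by its information (1/variance), the same
    principle by which the VO method weighs every pixel by how much it
    can actually contribute (low-contrast pixels get little weight).
    """
    w_vo, w_imu = 1.0 / var_vo, 1.0 / var_imu
    x = (w_vo * x_vo + w_imu * x_imu) / (w_vo + w_imu)
    return x, 1.0 / (w_vo + w_imu)

# An accurate VO fix vs. a drifting IMU dead-reckoning estimate
# (hypothetical numbers): the fused result leans toward the VO fix,
# and its variance is smaller than either input's.
x, var = fuse(x_vo=1.02, var_vo=0.01, x_imu=0.90, var_imu=0.09)
```

The fused variance is always smaller than either input variance, which is why the complementary error characteristics of VO and IMU yield navigation more accurate than either sensor alone.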

  20. Alignment and Initial Operation of an Advanced Solar Simulator

    NASA Technical Reports Server (NTRS)

    Jaworske, Donald A.; Jefferies, Kent S.; Mason, Lee S.

    1996-01-01

    A solar simulator utilizing nine 30-kW xenon arc lamps was built to provide radiant power for testing a solar dynamic space power system in a thermal vacuum environment. The advanced solar simulator achieved the following values specific to the solar dynamic system: (1) a subtense angle of 1 deg; (2) the ability to vary solar simulator intensity up to 1.7 kW/sq m; (3) a beam diameter of 4.8 m; and (4) uniformity of illumination on the order of +/-10%. The flexibility of the solar simulator design allows for other potential uses of the facility.

  1. ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS

    SciTech Connect

    Helmberger, Donald V.; Tromp, Jeroen; Rodgers, Arthur J.

    2008-06-17

Earthquake source parameters underpin several aspects of nuclear explosion monitoring: calibration of moment magnitudes (including coda magnitudes) and magnitude and distance amplitude corrections (MDAC); source depths; discrimination by isotropic moment tensor components; and waveform modeling for structure (including waveform tomography). This project seeks to improve methods for, and broaden the applicability of, estimating source parameters from broadband waveforms using the Cut-and-Paste (CAP) methodology. The CAP method uses a library of Green's functions for a one-dimensional (1D, depth-varying) seismic velocity model. The method separates the main arrivals of the regional waveform into five windows: Pnl (vertical and radial components), Rayleigh (vertical and radial components), and Love (transverse component). Source parameters are estimated by a grid search over strike, dip, rake, and depth; the seismic moment, or equivalently the moment magnitude MW, is adjusted to fit the amplitudes. Key to the CAP method is allowing the synthetic seismograms to shift in time relative to the data in order to account for path-propagation errors (delays) in the 1D seismic velocity model used to compute the Green's functions. The CAP method has been shown to improve estimates of source parameters, especially when delay and amplitude biases are calibrated using high signal-to-noise data from moderate earthquakes (CAP+).
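The time-shift allowance at the heart of CAP can be sketched with a toy misfit function; this is an illustrative simplification (one trace, one window, L2 misfit, circular shifts), not the actual CAP code:

```python
import numpy as np

def best_shift_misfit(data, synth, max_shift):
    """L2 misfit after letting the synthetic slide in time, a toy
    version of CAP's allowance for path-propagation delays in the
    1D velocity model used to compute the Green's functions."""
    best = np.inf
    for s in range(-max_shift, max_shift + 1):
        best = min(best, float(np.sum((data - np.roll(synth, s)) ** 2)))
    return best

# Toy data: the 'synthetic' is the observed pulse delayed by 3 samples,
# mimicking a path delay the 1D model gets wrong.
t = np.linspace(0.0, 1.0, 200)
data = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)
synth = np.roll(data, 3)
misfit_fixed = best_shift_misfit(data, synth, max_shift=0)   # large
misfit_shift = best_shift_misfit(data, synth, max_shift=5)   # collapses to ~0
```

Allowing the shift removes the misfit that is due purely to timing error, so the remaining misfit reflects the source parameters rather than the imperfect velocity model.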

  2. Recent advances in analytical methods for mycotoxins.

    PubMed

    Gilbert, J

    1993-01-01

Recent advances in analytical methods are reviewed using the examples of aflatoxins and trichothecene mycotoxins. The most dramatic advances are those based on immunological principles, utilized for aflatoxins to produce simple screening methods and rapid, specific clean-up. The possibility of automation using immunoaffinity columns is described. In contrast, for the trichothecenes, immunological methods have not had the same general impact. Post-column derivatization using bromine or iodine to enhance fluorescence for HPLC detection of aflatoxins has become widely employed, and there are similar possibilities for improved HPLC detection of trichothecenes using electrochemical or trichothecene-specific post-column reactions. There have been improvements in the use of more rapid and specific clean-up methods for trichothecenes, whilst HPLC and GC remain equally favoured for the end-determination. More sophisticated instrumental techniques such as mass spectrometry (LC/MS, MS/MS) and supercritical fluid chromatography (SFC/MS) have been demonstrated to have potential for application to mycotoxin analysis, but have not as yet made much general impact.

  3. Computational simulation methods for composite fracture mechanics

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.

    1988-01-01

Structural integrity, durability, and damage tolerance of advanced composites are assessed quantitatively and qualitatively by studying damage initiation at various scales (micro, macro, and global) and its accumulation and growth leading to global failure. In addition, the various fracture toughness parameters associated with a typical damage state and its growth must be determined. Computational structural analysis codes were developed to aid the composite design engineer in performing these tasks. CODSTRAN (COmposite Durability STRuctural ANalysis) is used to qualitatively and quantitatively assess the progressive damage occurring in composite structures due to mechanical and environmental loads. Next, methods are covered that are currently being developed and used at Lewis to predict interlaminar fracture toughness and related parameters of fiber composites for a prescribed damage state. The general-purpose finite element code MSC/NASTRAN was used to simulate interlaminar fracture and the associated individual as well as mixed-mode strain energy release rates in fiber composites.
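For the interlaminar strain energy release rates mentioned, a common finite-element estimate is the virtual crack closure technique (VCCT). The sketch below shows the standard one-step mode-I formula with made-up numbers; it is not taken from the MSC/NASTRAN studies in the record:

```python
def vcct_mode_I(F_y, du_y, da, width):
    """One-element virtual crack closure estimate of the mode-I strain
    energy release rate: G_I = F * du / (2 * da * b), where F is the
    nodal force holding the crack tip closed, du the opening
    displacement just behind the tip, da the element length along the
    crack, and b the specimen width."""
    return F_y * du_y / (2.0 * da * width)

# Hypothetical values: 50 N closing force, 2 um opening displacement,
# 0.5 mm elements, 25 mm specimen width -> G_I in J/m^2.
G_I = vcct_mode_I(50.0, 2e-6, 0.5e-3, 25e-3)
```

Comparing G_I (and its mode-II/III counterparts from sliding displacements) against measured interlaminar fracture toughness is how a prescribed delamination is judged stable or growing.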

  4. Enabling Advanced Modeling and Simulations for Fuel-Flexible Combustors

    SciTech Connect

    Pitsch, Heinz

    2010-05-31

The overall goal of the present project is to enable advanced modeling and simulations for the design and optimization of fuel-flexible turbine combustors. For this purpose we use a high-fidelity, extensively tested large-eddy simulation (LES) code and state-of-the-art models for premixed/partially-premixed turbulent combustion developed in the PI's group. In the frame of the present project, these techniques are applied, assessed, and improved for hydrogen-enriched premixed and partially premixed gas-turbine combustion. Our innovative approaches include a completely consistent description of flame propagation, a coupled progress variable/level set method to resolve the detailed flame structure, and incorporation of thermal-diffusion (non-unity Lewis number) effects. In addition, we have developed a general flamelet-type transformation holding in the limits of both non-premixed and premixed burning. As a result, a model for partially premixed combustion has been derived. The coupled progress variable/level set method and the general flamelet transformation were validated by LES of a lean-premixed low-swirl burner that has been studied experimentally at Lawrence Berkeley National Laboratory. The model is extended to include non-unity Lewis number effects, which play a critical role in fuel-flexible combustors with high-hydrogen-content fuels. More specifically, a two-scalar model for lean hydrogen and hydrogen-enriched combustion is developed and validated against experimental and direct numerical simulation (DNS) data. Results are presented to emphasize the importance of non-unity Lewis number effects in the lean-premixed low-swirl burner of interest in this project. The proposed model gives improved results, showing that the inclusion of non-unity Lewis number effects is essential for accurate prediction of the lean-premixed low-swirl flame.

  5. Enabling Advanced Modeling and Simulations for Fuel-Flexible Combustors

    SciTech Connect

    Heinz Pitsch

    2010-05-31

The overall goal of the present project is to enable advanced modeling and simulations for the design and optimization of fuel-flexible turbine combustors. For this purpose we use a high-fidelity, extensively tested large-eddy simulation (LES) code and state-of-the-art models for premixed/partially-premixed turbulent combustion developed in the PI's group. In the frame of the present project, these techniques are applied, assessed, and improved for hydrogen-enriched premixed and partially premixed gas-turbine combustion. Our innovative approaches include a completely consistent description of flame propagation, a coupled progress variable/level set method to resolve the detailed flame structure, and incorporation of thermal-diffusion (non-unity Lewis number) effects. In addition, we have developed a general flamelet-type transformation holding in the limits of both non-premixed and premixed burning. As a result, a model for partially premixed combustion has been derived. The coupled progress variable/level set method and the general flamelet transformation were validated by LES of a lean-premixed low-swirl burner that has been studied experimentally at Lawrence Berkeley National Laboratory. The model is extended to include non-unity Lewis number effects, which play a critical role in fuel-flexible combustors with high-hydrogen-content fuels. More specifically, a two-scalar model for lean hydrogen and hydrogen-enriched combustion is developed and validated against experimental and direct numerical simulation (DNS) data. Results are presented to emphasize the importance of non-unity Lewis number effects in the lean-premixed low-swirl burner of interest in this project. The proposed model gives improved results, showing that the inclusion of non-unity Lewis number effects is essential for accurate prediction of the lean-premixed low-swirl flame.

  6. Advanced electromagnetic methods for aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Balanis, Constantine A.; El-Sharawy, El-Budawy; Hashemi-Yeganeh, Shahrokh; Aberle, James T.; Birtcher, Craig R.

    1991-01-01

The Advanced Helicopter Electromagnetics program is centered on issues that advance technology related to helicopter electromagnetics. Progress was made on three major topics: composite materials, precipitation static corona discharge, and antenna technology. In composite materials, the research has focused on measurements of their electrical properties and on the modeling of material discontinuities and their effect on the radiation pattern of antennas mounted on or near material surfaces. The electrical properties were used to model antenna performance when mounted on composite materials. Since helicopter platforms include several antenna systems at VHF and UHF bands, measuring techniques are being explored that can be used to measure the properties at these bands. The effort on corona discharge and precipitation static was directed toward the development of a new two-dimensional Voltage Finite Difference Time Domain computer program. Results indicate the feasibility of using potentials for simulating electromagnetic problems in cases where potentials become primary sources. In antenna technology the focus was on Polarization Diverse Conformal Microstrip Antennas, Cavity Backed Slot Antennas, and Varactor Tuned Circular Patch Antennas. Numerical codes were developed for the analysis of two probe-fed rectangular and circular microstrip patch antennas fed by resistive and reactive power divider networks.
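The time-domain stepping idea behind an FDTD code can be sketched in one dimension. This toy field-based (Ez/Hy) leapfrog update is only an analogue of the project's 2D voltage-based formulation, which steps potentials instead of fields; grid size, step count, and source are arbitrary choices:

```python
import numpy as np

def fdtd_1d(steps=300, n=200, courant=0.5):
    """Bare-bones 1D FDTD (Yee) leapfrog update with a soft Gaussian
    source injected at the center of the grid."""
    ez = np.zeros(n)
    hy = np.zeros(n)
    for t in range(steps):
        hy[:-1] += courant * (ez[1:] - ez[:-1])          # H update from curl E
        ez[1:] += courant * (hy[1:] - hy[:-1])           # E update from curl H
        ez[n // 2] += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft Gaussian source
    return ez

fields = fdtd_1d()
```

With the Courant number at 0.5 the scheme is stable, and the pulse launched by the source propagates outward one half-cell per step.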

  7. Brush seal numerical simulation: Concepts and advances

    NASA Technical Reports Server (NTRS)

    Braun, M. J.; Kudriavtsev, V. V.

    1994-01-01

The brush seal is considered the most promising of the advanced seal types presently in use in high-speed turbomachinery. The brush is usually mounted on the stationary portions of the engine and has direct contact with the rotating element, limiting the 'unwanted' leakage flows between stages or various engine cavities. This sealing technology provides high pressure drops (in comparison with conventional seals), due mainly to the high packing density (around 100 bristles/sq mm) and the brush's compliance with rotor motions. In the design of modern aerospace turbomachinery, leakage flows between the stages must be minimal, contributing to higher engine efficiency. Use of a brush seal instead of a labyrinth seal reduces the leakage flow by one order of magnitude. Brush seals have also been found to enhance dynamic performance, cost less, and weigh less than labyrinth seals. Even though industrial brush seals have been successfully developed through extensive experimentation, there is no comprehensive numerical methodology for the design or prediction of their performance. The existing analytical/numerical approaches are based on bulk flow models and do not allow investigation of the effects of brush morphology (bristle arrangement) or brush arrangement (number of brushes, spacing between them) on the pressure drops and flow leakage. Increasing brush seal efficiency is clearly a complex problem that is closely related to the brush geometry and arrangement, and can most likely be solved only by means of a numerically distributed model.

  8. Brush seal numerical simulation: Concepts and advances

    NASA Astrophysics Data System (ADS)

    Braun, M. J.; Kudriavtsev, V. V.

    1994-07-01

The brush seal is considered the most promising of the advanced seal types presently in use in high-speed turbomachinery. The brush is usually mounted on the stationary portions of the engine and has direct contact with the rotating element, limiting the 'unwanted' leakage flows between stages or various engine cavities. This sealing technology provides high pressure drops (in comparison with conventional seals), due mainly to the high packing density (around 100 bristles/sq mm) and the brush's compliance with rotor motions. In the design of modern aerospace turbomachinery, leakage flows between the stages must be minimal, contributing to higher engine efficiency. Use of a brush seal instead of a labyrinth seal reduces the leakage flow by one order of magnitude. Brush seals have also been found to enhance dynamic performance, cost less, and weigh less than labyrinth seals. Even though industrial brush seals have been successfully developed through extensive experimentation, there is no comprehensive numerical methodology for the design or prediction of their performance. The existing analytical/numerical approaches are based on bulk flow models and do not allow investigation of the effects of brush morphology (bristle arrangement) or brush arrangement (number of brushes, spacing between them) on the pressure drops and flow leakage. Increasing brush seal efficiency is clearly a complex problem that is closely related to the brush geometry and arrangement, and can most likely be solved only by means of a numerically distributed model.

  9. Advanced Analysis Methods in High Energy Physics

    SciTech Connect

    Pushpalatha C. Bhat

    2001-10-03

    During the coming decade, high energy physics experiments at the Fermilab Tevatron and around the globe will use very sophisticated equipment to record unprecedented amounts of data in the hope of making major discoveries that may unravel some of Nature's deepest mysteries. The discovery of the Higgs boson and signals of new physics may be around the corner. The use of advanced analysis techniques will be crucial in achieving these goals. The author discusses some of the novel methods of analysis that could prove to be particularly valuable for finding evidence of any new physics, for improving precision measurements and for exploring parameter spaces of theoretical models.

  10. Interoperable Technologies for Advanced Petascale Simulations (ITAPS)

    SciTech Connect

    Shephard, Mark S

    2010-02-05

Efforts during the past year have contributed to the continued development of the ITAPS interfaces and services, as well as specific efforts to support ITAPS applications. The ITAPS interface efforts have two components. The first is working with the ITAPS team on improving the ITAPS software infrastructure and the level of compliance of our implementations of the ITAPS interfaces (iMesh, iMeshP, iRel and iGeom). The second is involvement in the discussions on the design of the iField fields interface. Efforts to move the ITAPS technologies to petascale computers have identified a number of key technical developments that are required to effectively execute the ITAPS interfaces and services. Research to address these parallel method developments has been a major emphasis of the RPI team's efforts over the past year. The development of parallel unstructured mesh methods has considered the need to scale unstructured mesh solves to massively parallel computers. These efforts, summarized in section 2.1, show that with the addition of the ITAPS procedures described in sections 2.2 and 2.3 we are able to obtain excellent strong scaling with our unstructured mesh CFD code on up to 294,912 cores of IBM Blue Gene/P, the highest core count machine available. The ITAPS developments that have contributed to the scaling and performance of PHASTA include an iterative migration algorithm to improve the combined region and vertex balance of the mesh partition, which increases scalability, and mesh data reordering, which improves computational performance. The other developments are associated with the further development of the ITAPS parallel unstructured mesh

  11. ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS

    SciTech Connect

    Helmberger, D; Tromp, J; Rodgers, A

    2007-07-16

Comprehensive test ban monitoring in terms of location and discrimination has progressed significantly in recent years. However, the characterization of sources and the estimation of low yields remains a particular challenge. As the recent Korean shot demonstrated, we can probably expect to have a small set of teleseismic, far-regional and high-frequency regional data to analyze in estimating the yield of an event. Since stacking helps to bring signals out of the noise, it becomes useful to conduct comparable analyses on neighboring events, earthquakes in this case. If these auxiliary events have accurate moments and source descriptions, we have a means of directly comparing effective source strengths. Although we will rely on modeling codes (1D, 2D, and 3D), we will also apply a broadband calibration procedure that uses longer-period (P > 5 s) waveform data to calibrate short-period (P between 0.5 and 2 Hz) and high-frequency (P between 2 and 10 Hz) path-specific station corrections from well-known regional sources. We have expanded our basic Cut-and-Paste (CAP) methodology to include not only timing shifts but also frequency-dependent amplitude corrections at recording sites. The name of this method derives from source inversions that allow timing shifts between 'waveform segments' (cutting the seismogram up and re-assembling it) to correct for crustal variation. For convenience, we refer to these frequency-dependent refinements as CAP+ for short-period and CAP++ for still higher frequencies. These methods allow the retrieval of source parameters using only P-waveforms, where radiation patterns are obvious as demonstrated in this report, and are well suited for explosion P-wave data. The method is easily extended to all distances because it uses Green's functions, although some changes in t* may be required to adjust for offsets between local and teleseismic distances. In short, we use a mixture of model-dependent and empirical corrections to tackle the path effects. Although we rely on the

  12. Advances in free-energy-based simulations of protein folding and ligand binding.

    PubMed

    Perez, Alberto; Morrone, Joseph A; Simmerling, Carlos; Dill, Ken A

    2016-02-01

Free-energy-based simulations are increasingly providing the narratives about the structures, dynamics and biological mechanisms that constitute the fabric of protein science. Here, we review two recent successes. It is becoming practical: first, to fold small proteins with free-energy methods without knowing substructures and second, to compute ligand-protein binding affinities, not just their binding poses. Over the past 40 years, the timescales that can be simulated by atomistic MD are doubling every 1.3 years, which is faster than Moore's law. Thus, these advances are not simply due to the availability of faster computers. Force fields, solvation models and simulation methodology have kept pace with computing advancements, and are now quite good. At the tip of the spear recently are GPU-based computing, improved fast-solvation methods, continued advances in force fields, and conformational sampling methods that harness external information. PMID:26773233
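The quoted doubling times imply a concrete split between hardware and algorithmic progress; the arithmetic below is only a rough illustration of the review's claim:

```python
# How much of the 40-year gain in simulable MD timescale came from
# algorithms rather than hardware? (Illustrative arithmetic only.)
md_factor = 2 ** (40 / 1.3)     # doubling every 1.3 years: ~1.8e9x overall
moore_factor = 2 ** (40 / 2)    # Moore's-law doubling every ~2 years: ~1e6x
methods_share = md_factor / moore_factor  # roughly 1000x+ from methods alone
```

On these numbers, force fields, solvation models, and sampling methods account for a factor of over a thousand beyond what faster chips alone would provide.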

  13. Advances in free-energy-based simulations of protein folding and ligand binding.

    PubMed

    Perez, Alberto; Morrone, Joseph A; Simmerling, Carlos; Dill, Ken A

    2016-02-01

Free-energy-based simulations are increasingly providing the narratives about the structures, dynamics and biological mechanisms that constitute the fabric of protein science. Here, we review two recent successes. It is becoming practical: first, to fold small proteins with free-energy methods without knowing substructures and second, to compute ligand-protein binding affinities, not just their binding poses. Over the past 40 years, the timescales that can be simulated by atomistic MD are doubling every 1.3 years, which is faster than Moore's law. Thus, these advances are not simply due to the availability of faster computers. Force fields, solvation models and simulation methodology have kept pace with computing advancements, and are now quite good. At the tip of the spear recently are GPU-based computing, improved fast-solvation methods, continued advances in force fields, and conformational sampling methods that harness external information.

  14. ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS

    SciTech Connect

    Helmberger, Donald V.; Tromp, Jeroen; Rodgers, Arthur J.

    2008-10-17

This quarter, we have focused on several tasks: (1) Building a high-quality catalog of earthquake source parameters for the Middle East and East Asia. In East Asia, we computed source parameters using the CAP method for a set of events studied by Herrman et al. (MRR, 2006) using a complete waveform technique. Results indicated excellent agreement with the moment magnitudes in the range 3.5-5.5; below magnitude 3.5 the scatter increases. For events with more than 2-3 observations at different azimuths, we found good agreement of focal mechanisms. Depths were generally consistent, although differences of up to 10 km were found. These results suggest that CAP modeling provides estimates of source parameters at least as reliable as complete waveform modeling techniques. However, East Asia and the Yellow Sea Korean Paraplatform (YSKP) region studied are relatively laterally homogeneous and may not benefit from the CAP method's flexibility to shift waveform segments to account for path-dependent model errors. A more challenging region to study is the Middle East, where strong variations in sedimentary basins, crustal thickness, and crustal and mantle seismic velocities greatly impact regional wave propagation. We applied the CAP method to a set of events in and around Iran and found good agreement between estimated focal mechanisms and those reported by the Global Centroid Moment Tensor (CMT) catalog. We found a possible bias in the moment magnitudes that may be due to the thick low-velocity crust in the Iranian Plateau. (2) Testing methods on a lifetime regional data set. In particular, the recent 2/21/08 Nevada event and aftershock sequence occurred in the middle of USArray, producing over a thousand records per event. The tectonic setting is quite similar to Central Iran and thus provides an excellent testbed for CAP+ at ranges out to 10°, including extensive observations of crustal thinning and thickening and various Pnl complexities. Broadband modeling in 1D, 2D

  15. Advances in beryllium powder consolidation simulation

    SciTech Connect

    Reardon, B.J.

    1998-12-01

    A fuzzy logic based multiobjective genetic algorithm (GA) is introduced and the algorithm is used to optimize micromechanical densification modeling parameters for warm isopressed beryllium powder, HIPed copper powder and CIPed/sintered and HIPed tantalum powder. In addition to optimizing the main model parameters using the experimental data points as objective functions, the GA provides a quantitative measure of the sensitivity of the model to each parameter, estimates the mean particle size of the powder, and determines the smoothing factors for the transition between stage 1 and stage 2 densification. While the GA does not provide a sensitivity analysis in the strictest sense, and is highly stochastic in nature, this method is reliable and reproducible in optimizing parameters given any size data set and determining the impact on the model of slight variations in each parameter.
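The record's fuzzy-logic multiobjective GA is specialized, but the underlying idea of using experimental data points as the objective of a genetic search can be sketched minimally. Everything below (operators, rates, the toy "densification" data) is illustrative, not the reported algorithm:

```python
import random

def fit_parameters(model, data, bounds, pop=40, gens=60, seed=1):
    """Minimal real-coded genetic algorithm: minimize the summed squared
    error of model(x, params) against (x, y) data points, mirroring the
    use of experimental densification data as the objective function."""
    rng = random.Random(seed)
    def err(p):
        return sum((model(x, p) - y) ** 2 for x, y in data)
    popn = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=err)                # rank by fit to the data
        parents = popn[: pop // 2]        # elitist truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2) # average crossover + mutation
            children.append([(ai + bi) / 2 + rng.gauss(0, 0.05 * (hi - lo))
                             for ai, bi, (lo, hi) in zip(a, b, bounds)])
        popn = parents + children
    return min(popn, key=err)

# Recover slope/intercept of a toy 'densification' line y = 2x + 1.
data = [(x, 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
best = fit_parameters(lambda x, p: p[0] * x + p[1], data, [(0, 5), (0, 5)])
```

Because the elite individuals survive unchanged each generation, the best-so-far fit improves monotonically; the spread of the final population around the optimum is the kind of signal the abstract uses to gauge parameter sensitivity.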

  16. Modeling and simulation challenges pursued by the Consortium for Advanced Simulation of Light Water Reactors (CASL)

    NASA Astrophysics Data System (ADS)

    Turinsky, Paul J.; Kothe, Douglas B.

    2016-05-01

The Consortium for the Advanced Simulation of Light Water Reactors (CASL), the first Energy Innovation Hub of the Department of Energy, was established in 2010 with the goal of providing modeling and simulation (M&S) capabilities that support and accelerate the improvement of nuclear energy's economic competitiveness and the reduction of spent nuclear fuel volume per unit energy, all while assuring nuclear safety. Accomplishing this requires advances in M&S capabilities in radiation transport, thermal-hydraulics, fuel performance and corrosion chemistry. To focus CASL's R&D, industry challenge problems have been defined, which equate to long-standing issues of the nuclear power industry that M&S can assist in addressing. To date CASL has developed a multi-physics "core simulator" based upon pin-resolved radiation transport and subchannel (within fuel assembly) thermal-hydraulics, capitalizing on the capabilities of high performance computing. CASL's fuel performance M&S capability can also be optionally integrated into the core simulator, yielding a coupled multi-physics capability with untapped predictive potential. Material models have been developed to enhance predictive capabilities of fuel clad creep and growth, along with deeper understanding of zirconium alloy clad oxidation and hydrogen pickup. Understanding of corrosion chemistry (e.g., CRUD formation) has evolved at all scales: micro, meso and macro. CFD R&D has focused on improvement in closure models for subcooled boiling and bubbly flow, and the formulation of robust numerical solution algorithms. For multi-physics integration, several iterative acceleration methods have been assessed, illuminating areas where further research is needed. Finally, uncertainty quantification and data assimilation techniques, based upon sampling approaches, have been made more feasible for practicing nuclear engineers via R&D on dimensional reduction and biased sampling. Industry adoption of CASL's evolving M

  17. A Lattice Boltzmann Method for Turbomachinery Simulations

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Lopez, I.

    2003-01-01

The lattice Boltzmann (LB) method is a relatively new method for flow simulations. Its starting point is statistical mechanics and the Boltzmann equation. The LB method sets up its model at the molecular scale and simulates the flow at the macroscopic scale. LBM has mostly been applied to incompressible flows and simple geometries.
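The molecular-scale model / macroscopic-flow split the abstract describes can be seen in a minimal D2Q9 BGK lattice Boltzmann step; grid size, relaxation time, and the initial shear-wave perturbation below are arbitrary choices for the sketch:

```python
import numpy as np

# D2Q9 lattice: 9 discrete molecular velocities and their weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """BGK equilibrium distribution from macroscopic density and velocity."""
    cu = np.einsum('qd,xyd->xyq', c, u)                  # c_i . u
    usq = np.einsum('xyd,xyd->xy', u, u)[..., None]      # |u|^2
    return rho[..., None] * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau=0.8):
    """One collision + streaming step on a periodic grid."""
    rho = f.sum(axis=-1)                                  # macroscopic density
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]   # macroscopic velocity
    f += (equilibrium(rho, u) - f) / tau                  # BGK collision
    for q, (cx, cy) in enumerate(c):                      # streaming
        f[..., q] = np.roll(np.roll(f[..., q], cx, axis=0), cy, axis=1)
    return f

# Start from equilibrium with a small sinusoidal shear perturbation.
nx, ny = 32, 32
rho0 = np.ones((nx, ny))
u0 = np.zeros((nx, ny, 2))
u0[..., 0] = 0.01 * np.sin(2 * np.pi * np.arange(ny) / ny)
f = equilibrium(rho0, u0)
for _ in range(50):
    f = step(f)
mass = f.sum()   # conserved exactly by both collision and streaming
```

The macroscopic quantities (density, velocity) are only moments of the distributions f; the update itself lives entirely at the kinetic level, which is why LBM handles incompressible flow on simple lattices so naturally.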

  18. Advanced computer graphic techniques for laser range finder (LRF) simulation

    NASA Astrophysics Data System (ADS)

    Bedkowski, Janusz; Jankowski, Stanislaw

    2008-11-01

This paper shows advanced computer graphics techniques for laser range finder (LRF) simulation. The LRF is a common sensor for unmanned ground vehicles, autonomous mobile robots, and security applications. The cost of the measurement system is extremely high; therefore, a simulation tool was designed. The simulation makes it possible to test algorithms such as obstacle avoidance [1], SLAM for robot localization [2], detection of vegetation and water obstacles in the surroundings of the robot chassis [3], and LRF measurement in a crowd of people [1]. The axis-aligned bounding box (AABB) technique and an alternative technique based on CUDA (NVIDIA Compute Unified Device Architecture) are presented.
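The AABB test at the core of such an LRF simulator reduces to a slab-method ray/box intersection per beam. A minimal CPU version might look like the sketch below (the CUDA variant parallelizes the same per-beam test; the function and values are illustrative, not from the paper):

```python
def ray_aabb(origin, direction, box_min, box_max):
    """Slab-method ray/AABB intersection: returns the hit distance along
    the ray, or None if the beam misses the box. Each simulated LRF
    beam is tested against the bounding boxes of scene geometry."""
    t_near, t_far = 0.0, float('inf')
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:          # parallel to slab and outside it
                return None
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            if t1 > t2:
                t1, t2 = t2, t1
            t_near, t_far = max(t_near, t1), min(t_far, t2)
            if t_near > t_far:            # slab intervals do not overlap
                return None
    return t_near

# A beam along +x from the origin hits a unit box 5 m away at t = 5.
hit = ray_aabb((0, 0, 0), (1, 0, 0), (5, -1, -1), (6, 1, 1))
```

The returned t_near is the simulated range reading for that beam; sweeping the direction vector over the scanner's angular field reproduces a full LRF scan.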

  19. Advances in plant gene silencing methods.

    PubMed

    Pandey, Prachi; Senthil-Kumar, Muthappa; Mysore, Kirankumar S

    2015-01-01

Understanding of the molecular mechanisms of transcriptional and posttranscriptional gene silencing pathways in plants over the past decades has led to the development of tools and methods for silencing a target gene in various plant species. In this review chapter, both the recent understanding of the molecular basis of gene silencing pathways and advances in various widely used gene silencing methods are compiled. We also discuss the salient features of the different methods, like RNA interference (RNAi) and virus-induced gene silencing (VIGS), and highlight their advantages and disadvantages. Gene silencing technology is constantly progressing, as reflected by rapidly emerging new methods. A succinct discussion of the recently developed methods, like microRNA-mediated virus-induced gene silencing (MIR-VIGS) and microRNA-induced gene silencing (MIGS), is also provided. One major bottleneck in gene silencing approaches has been the associated off-target silencing. The other hurdle has been the lack of a universal approach that can be applied to all plants. For example, we face hurdles like incompatibility of VIGS vectors with the host and the inability to use MIGS for plant species that are not easily transformable. However, the overwhelming research in this direction reflects the scope for overcoming the shortcomings of gene silencing technology.

  20. Advanced simulation model for IPM motor drive with considering phase voltage and stator inductance

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Myung; Park, Hyun-Jong; Lee, Ju

    2016-10-01

    This paper proposes an advanced simulation model of the driving system for Interior Permanent Magnet (IPM) BrushLess Direct Current (BLDC) motors driven by the 120-degree conduction method (two-phase conduction method, TPCM), which is widely used for sensorless control of BLDC motors. BLDC motors can be classified as SPM (Surface-mounted Permanent Magnet) and IPM motors. A simulation model of a driving system with SPM motors is simple because the stator inductance is constant regardless of rotor position, and such models have been proposed in many studies. On the other hand, simulation models for IPM driving systems built with graphic-based simulation tools such as Matlab/Simulink have not been proposed. Simulating the driving system of an IPM with TPCM is complex because the stator inductances of an IPM vary with rotor position, as the permanent magnets are embedded in the rotor. To develop a sensorless scheme or improve control performance, development of the control algorithm through simulation study is essential, and a simulation model that accurately reflects the characteristics of the IPM is required. Therefore, this paper presents an advanced simulation model of the IPM driving system that takes into account the unique characteristic of the IPM due to its position-dependent inductances. The validity of the proposed simulation model is confirmed by comparison with experimental and simulation results using an IPM with the TPCM control scheme.
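
    The position dependence that distinguishes IPM from SPM machines is commonly captured by a two-term saliency approximation of the phase self-inductance. The sketch below illustrates that standard approximation only; the inductance values are invented and are not taken from the paper.

```python
import math

def ipm_phase_inductance(theta_e, L_avg=0.8e-3, L_delta=0.2e-3):
    """Position-dependent phase self-inductance of an IPM machine,
    using the common two-term saliency approximation
        L(theta_e) = L_avg - L_delta * cos(2 * theta_e)
    where theta_e is the electrical rotor angle in radians.
    L_avg and L_delta (henries) are illustrative values only."""
    return L_avg - L_delta * math.cos(2.0 * theta_e)
```

    In a Simulink-style driving-system model, this inductance would be re-evaluated from the rotor position at every simulation step, which is exactly the complication absent from SPM models.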

  1. Gasification CFD Modeling for Advanced Power Plant Simulations

    SciTech Connect

    Zitney, S.E.; Guenther, C.P.

    2005-09-01

    In this paper we have described recent progress on developing CFD models for two commercial-scale gasifiers, including a two-stage, coal slurry-fed, oxygen-blown, pressurized, entrained-flow gasifier and a scaled-up design of the PSDF transport gasifier. Also highlighted was NETL’s Advanced Process Engineering Co-Simulator for coupling high-fidelity equipment models with process simulation for the design, analysis, and optimization of advanced power plants. Using APECS, we have coupled the entrained-flow gasifier CFD model into a coal-fired, gasification-based FutureGen power and hydrogen production plant. The results for the FutureGen co-simulation illustrate how the APECS technology can help engineers better understand and optimize gasifier fluid dynamics and related phenomena that impact overall power plant performance.

  2. Indentation Methods in Advanced Materials Research Introduction

    SciTech Connect

    Pharr, George Mathews; Cheng, Yang-Tse; Hutchings, Ian; Sakai, Mototsugu; Moody, Neville; Sundararajan, G.; Swain, Michael V.

    2009-01-01

    Since its commercialization early in the 20th century, indentation testing has played a key role in the development of new materials and understanding their mechanical behavior. Progress in the field has relied on a close marriage between research in the mechanical behavior of materials and contact mechanics. The seminal work of Hertz laid the foundations for bringing these two together, with his contributions still widely utilized today in examining elastic behavior and the physics of fracture. Later, the pioneering work of Tabor, as published in his classic text 'The Hardness of Metals', expanded this understanding to address the complexities of plasticity. Enormous progress in the field has been achieved in the last decade, made possible both by advances in instrumentation, for example, load and depth-sensing indentation and scanning electron microscopy (SEM) and transmission electron microscopy (TEM) based in situ testing, as well as improved modeling capabilities that use computationally intensive techniques such as finite element analysis and molecular dynamics simulation. The purpose of this special focus issue is to present recent state of the art developments in the field.

  3. Simulation of a synergistic six-post motion system on the flight simulator for advanced aircraft at NASA-Ames

    NASA Technical Reports Server (NTRS)

    Bose, S. C.; Parris, B. L.

    1977-01-01

    Motion system drive philosophy and corresponding real-time software have been developed for the purpose of simulating the characteristics of a typical synergistic Six-Post Motion System (SPMS) on the Flight Simulator for Advanced Aircraft (FSAA) at NASA-Ames which is a non-synergistic motion system. This paper gives a brief description of these two types of motion systems and the general methods of producing motion cues of the FSAA. An actuator extension transformation which allows the simulation of a typical SPMS by appropriate drive washout and variable position limiting is described.

  4. Lessons Learned From Dynamic Simulations of Advanced Fuel Cycles

    SciTech Connect

    Steven J. Piet; Brent W. Dixon; Jacob J. Jacobson; Gretchen E. Matthern; David E. Shropshire

    2009-04-01

    Years of performing dynamic simulations of advanced nuclear fuel cycle options provide insights into how they could work and how one might transition from the current once-through fuel cycle. This paper summarizes those insights from the context of the 2005 objectives and goals of the Advanced Fuel Cycle Initiative (AFCI). Our intent is not to compare options, assess options versus those objectives and goals, nor recommend changes to those objectives and goals. Rather, we organize what we have learned from dynamic simulations in the context of the AFCI objectives for waste management, proliferation resistance, uranium utilization, and economics. Thus, we do not merely describe “lessons learned” from dynamic simulations but attempt to answer the “so what” question by using this context. The analyses have been performed using the Verifiable Fuel Cycle Simulation of Nuclear Fuel Cycle Dynamics (VISION). We observe that the 2005 objectives and goals do not address many of the inherently dynamic discriminators among advanced fuel cycle options and transitions thereof.

  5. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    SciTech Connect

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

    Nonconventional wells, which include horizontal, deviated, multilateral and ''smart'' wells, offer great potential for the efficient management of oil and gas reservoirs. These wells are able to contact larger regions of the reservoir than conventional wells and can also be used to target isolated hydrocarbon accumulations. The use of nonconventional wells instrumented with downhole inflow control devices allows for even greater flexibility in production. Because nonconventional wells can be very expensive to drill, complete and instrument, it is important to be able to optimize their deployment, which requires the accurate prediction of their performance. However, predictions of nonconventional well performance are often inaccurate. This is likely due to inadequacies in some of the reservoir engineering and reservoir simulation tools used to model and optimize nonconventional well performance. A number of new issues arise in the modeling and optimization of nonconventional wells. For example, the optimal use of downhole inflow control devices has not been addressed for practical problems. In addition, the impact of geological and engineering uncertainty (e.g., valve reliability) has not been previously considered. In order to model and optimize nonconventional wells in different settings, it is essential that the tools be implemented into a general reservoir simulator. This simulator must be sufficiently general and robust and must in addition be linked to a sophisticated well model. Our research under this five year project addressed all of the key areas indicated above. The overall project was divided into three main categories: (1) advanced reservoir simulation techniques for modeling nonconventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and for coupling the well to the simulator (which includes the accurate calculation of well index and the modeling of multiphase flow in the wellbore

  6. Integration of Advanced Simulation and Visualization for Manufacturing Process Optimization

    NASA Astrophysics Data System (ADS)

    Zhou, Chenn; Wang, Jichao; Tang, Guangwu; Moreland, John; Fu, Dong; Wu, Bin

    2016-05-01

    The integration of simulation and visualization can provide a cost-effective tool for process optimization, design, scale-up and troubleshooting. The Center for Innovation through Visualization and Simulation (CIVS) at Purdue University Northwest has developed methodologies for such integration with applications in various manufacturing processes. The methodologies have proven to be useful for virtual design and virtual training to provide solutions addressing issues on energy, environment, productivity, safety, and quality in steel and other industries. In collaboration with its industrial partners, CIVS has provided solutions to companies, saving over US$38 million. CIVS is currently working with the steel industry to establish an industry-led Steel Manufacturing Simulation and Visualization Consortium through the support of a National Institute of Standards and Technology AMTech Planning Grant. The consortium focuses on supporting development and implementation of simulation and visualization technologies to advance steel manufacturing across the value chain.

  7. Advanced fault diagnosis methods in molecular networks.

    PubMed

    Habibi, Iman; Emamian, Effat S; Abdi, Ali

    2014-01-01

    Analysis of the failure of cell signaling networks is an important topic in systems biology and has applications in target discovery and drug development. In this paper, some advanced methods for fault diagnosis in signaling networks are developed and then applied to a caspase network and an SHP2 network. The goal is to understand how, and to what extent, the dysfunction of molecules in a network contributes to the failure of the entire network. Network dysfunction (failure) is defined as failure to produce the expected outputs in response to the input signals. Vulnerability level of a molecule is defined as the probability of the network failure, when the molecule is dysfunctional. In this study, a method to calculate the vulnerability level of single molecules for different combinations of input signals is developed. Furthermore, a more complex yet biologically meaningful method for calculating the multi-fault vulnerability levels is suggested, in which two or more molecules are simultaneously dysfunctional. Finally, a method is developed for fault diagnosis of networks based on a ternary logic model, which considers three activity levels for a molecule instead of the previously published binary logic model, and provides equations for the vulnerabilities of molecules in a ternary framework. Multi-fault analysis shows that the pairs of molecules with high vulnerability typically include a highly vulnerable molecule identified by the single fault analysis. The ternary fault analysis for the caspase network shows that predictions obtained using the more complex ternary model are about the same as the predictions of the simpler binary approach. This study suggests that by increasing the number of activity levels the complexity of the model grows; however, the predictive power of the ternary model does not appear to be increased proportionally.
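
    The single-fault vulnerability defined above can be made concrete on a toy Boolean network (the network below is invented for illustration; it is not the paper's caspase or SHP2 network): vulnerability is the fraction of input combinations for which a stuck-at-0 molecule changes the network output.

```python
from itertools import product

def network_output(x1, x2, x3, faulty=None):
    """Toy three-input Boolean signaling network (invented for
    illustration). A molecule named in `faulty` is stuck at 0,
    modeling a dysfunctional molecule."""
    a = 0 if faulty == "a" else (x1 and x2)   # a: activated by x1 AND x2
    b = 0 if faulty == "b" else (1 - x3)      # b: inhibited by x3
    return 0 if faulty == "out" else (a or b)

def vulnerability(molecule):
    """Single-fault vulnerability level: the fraction of input
    combinations for which the faulty network's output differs
    from the fault-free output."""
    inputs = list(product((0, 1), repeat=3))
    failures = sum(network_output(*x) != network_output(*x, faulty=molecule)
                   for x in inputs)
    return failures / len(inputs)
```

    Here the output molecule is, as expected, the most vulnerable node; the multi-fault analysis in the paper extends the same counting idea to pairs of simultaneously dysfunctional molecules.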

  8. PIXE simulation: Models, methods and technologies

    SciTech Connect

    Batic, M.; Pia, M. G.; Saracco, P.; Weidenspointner, G.

    2013-04-19

    The simulation of PIXE (Particle Induced X-ray Emission) is discussed in the context of general-purpose Monte Carlo systems for particle transport. Dedicated PIXE codes are mainly concerned with the application of the technique to elemental analysis, but they lack the capability of dealing with complex experimental configurations. General-purpose Monte Carlo codes provide powerful tools to model the experimental environment in great detail, but so far they have provided limited functionality for PIXE simulation. This paper reviews recent developments that have endowed the Geant4 simulation toolkit with advanced capabilities for PIXE simulation, and related efforts for quantitative validation of cross sections and other physical parameters relevant to PIXE simulation.

  9. Requirements for advanced simulation of nuclear reactor and chemical separation plants.

    SciTech Connect

    Palmiotti, G.; Cahalan, J.; Pfeiffer, P.; Sofu, T.; Taiwo, T.; Wei,T.; Yacout, A.; Yang, W.; Siegel, A.; Insepov, Z.; Anitescu, M.; Hovland,P.; Pereira, C.; Regalbuto, M.; Copple, J.; Willamson, M.

    2006-12-11

    This report presents requirements for advanced simulation of nuclear reactor and chemical processing plants that are of interest to the Global Nuclear Energy Partnership (GNEP) initiative. Justification for advanced simulation and some examples of grand challenges that will benefit from it are provided. An integrated software tool whose main components are, whenever possible, based on first principles is proposed as a possible future approach for dealing with the complex problems linked to the simulation of nuclear reactor and chemical processing plants. The main benefits associated with a better integrated simulation have been identified as: a reduction of design margins, a decrease in the number of experiments in support of the design process, a shortening of the developmental design cycle, and a better understanding of the physical phenomena and the related underlying fundamental processes. For each component of the proposed integrated software tool, background information, functional requirements, current tools and approaches, and proposed future approaches have been provided. Whenever possible, current uncertainties have been quoted and existing limitations have been presented. Desired target accuracies, with associated benefits to the different aspects of nuclear reactor and chemical processing plants, are also given. In many cases the possible gains associated with a better simulation have been identified, quantified, and translated into economic benefits.

  10. Preface to advances in numerical simulation of plasmas

    NASA Astrophysics Data System (ADS)

    Parker, Scott E.; Chacon, Luis

    2016-10-01

    This Journal of Computational Physics Special Issue, titled "Advances in Numerical Simulation of Plasmas," presents a snapshot of the international state of the art in the field of computational plasma physics. The articles herein are a subset of the topics presented as invited talks at the 24th International Conference on the Numerical Simulation of Plasmas (ICNSP), August 12-14, 2015 in Golden, Colorado. The choice of papers was highly selective. The ICNSP is held every other year and is the premier scientific meeting in the field of computational plasma physics.

  11. Hybrid and electric advanced vehicle systems (heavy) simulation

    NASA Technical Reports Server (NTRS)

    Hammond, R. A.; Mcgehee, R. K.

    1981-01-01

    A computer program to simulate hybrid and electric advanced vehicle systems (HEAVY) is described. It is intended for use early in the design process: concept evaluation, alternative comparison, preliminary design, control and management strategy development, component sizing, and sensitivity studies. It allows the designer to quickly, conveniently, and economically predict the performance of a proposed drive train. The user defines the system to be simulated using a library of predefined component models that may be connected to represent a wide variety of propulsion systems. The development of three example models is discussed.

  12. Advanced 3D Photocathode Modeling and Simulations Final Report

    SciTech Connect

    Dimitre A Dimitrov; David L Bruhwiler

    2005-06-06

    High brightness electron beams required by the proposed Next Linear Collider demand strong advances in photocathode electron gun performance. Significant improvement in the production of such beams with rf photocathode electron guns is hampered by the lack of high-fidelity simulations. The critical missing piece in existing gun codes is a physics-based, detailed treatment of the very complex and highly nonlinear photoemission process.

  13. Advanced continuous cultivation methods for systems microbiology.

    PubMed

    Adamberg, Kaarel; Valgepea, Kaspar; Vilu, Raivo

    2015-09-01

    Increasing the throughput of systems biology-based experimental characterization of in silico-designed strains has great potential for accelerating the development of cell factories. For this, analysis of metabolism in the steady state is essential as only this enables the unequivocal definition of the physiological state of cells, which is needed for the complete description and in silico reconstruction of their phenotypes. In this review, we show that for a systems microbiology approach, high-resolution characterization of metabolism in the steady state--growth space analysis (GSA)--can be achieved by using advanced continuous cultivation methods termed changestats. In changestats, an environmental parameter is continuously changed at a constant rate within one experiment whilst maintaining cells in the physiological steady state similar to chemostats. This increases the resolution and throughput of GSA compared with chemostats, and, moreover, enables following of the dynamics of metabolism and detection of metabolic switch-points and optimal growth conditions. We also describe the concept, challenge and necessary criteria of the systematic analysis of steady-state metabolism. Finally, we propose that such systematic characterization of the steady-state growth space of cells using changestats has value not only for fundamental studies of metabolism, but also for systems biology-based metabolic engineering of cell factories.
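
    The defining feature of a changestat, one environmental parameter changed at a constant rate within a single experiment while the culture tracks a quasi steady state, can be sketched with a Monod-type chemostat model whose dilution rate is ramped linearly. The model and all parameter values below are invented for illustration and are not from the review.

```python
def changestat(D0=0.05, rate=0.01, s_in=10.0, hours=40.0, dt=0.01,
               mu_max=0.5, Ks=0.1, Y=0.5):
    """Explicit-Euler simulation of a changestat-style cultivation:
    the dilution rate D (1/h) is ramped at a constant rate from D0,
    in a Monod chemostat model with feed substrate s_in (g/L),
    max growth rate mu_max (1/h), half-saturation Ks (g/L), and
    yield Y (g biomass / g substrate). Illustrative values only.
    Returns a list of (time, D, biomass, substrate) tuples."""
    x, s = 1.0, 1.0                       # initial biomass and substrate
    traj = []
    for i in range(int(hours / dt)):
        t = i * dt
        D = D0 + rate * t                 # constant-rate environmental ramp
        mu = mu_max * s / (Ks + s)        # Monod growth kinetics
        dx = (mu - D) * x                 # biomass balance
        ds = D * (s_in - s) - mu * x / Y  # substrate balance
        x = max(x + dx * dt, 0.0)
        s = max(s + ds * dt, 0.0)
        traj.append((t, D, x, s))
    return traj
```

    A slow enough ramp keeps the culture close to the chemostat steady state at each instantaneous D, which is what lets one experiment sweep out a whole growth space instead of running one chemostat per condition.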

  14. Advanced electromagnetic methods for aerospace vehicles

    NASA Astrophysics Data System (ADS)

    Balanis, Constantine A.; Sun, Weimin; El-Sharawy, El-Budawy; Aberle, James T.; Birtcher, Craig R.; Peng, Jian; Tirkas, Panayiotis A.; Kokotoff, David; Zavosh, Frank

    1993-06-01

    The Advanced Helicopter Electromagnetics (AHE) Industrial Associates Program has continuously progressed with its research effort focused on subjects identified and recommended by the Advisory Task Force of the program. The research activities in this reporting period have been steered toward practical helicopter electromagnetic problems, such as HF antenna problems and antenna efficiencies, recommended by the AHE members at the annual conference held at Arizona State University on 28-29 Oct. 1992 and the last biannual meeting held at the Boeing Helicopter on 19-20 May 1993. The main topics addressed include the following: Composite Materials and Antenna Technology. The research work on each topic is closely tied with the AHE Consortium members' interests. Significant progress in each subject is reported. Special attention in the area of Composite Materials has been given to the following: modeling of material discontinuity and their effects on towel-bar antenna patterns; guidelines for composite material modeling by using the Green's function approach in the NEC code; measurements of towel-bar antennas grounded with a partially material-coated plate; development of 3-D volume mesh generator for modeling thick and volumetric dielectrics by using FD-TD method; FDTD modeling of horn antennas with composite E-plane walls; and antenna efficiency analysis for a horn antenna loaded with composite dielectric materials.

  15. Advanced electromagnetic methods for aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Balanis, Constantine A.; Sun, Weimin; El-Sharawy, El-Budawy; Aberle, James T.; Birtcher, Craig R.; Peng, Jian; Tirkas, Panayiotis A.; Kokotoff, David; Zavosh, Frank

    1993-01-01

    The Advanced Helicopter Electromagnetics (AHE) Industrial Associates Program has continuously progressed with its research effort focused on subjects identified and recommended by the Advisory Task Force of the program. The research activities in this reporting period have been steered toward practical helicopter electromagnetic problems, such as HF antenna problems and antenna efficiencies, recommended by the AHE members at the annual conference held at Arizona State University on 28-29 Oct. 1992 and the last biannual meeting held at the Boeing Helicopter on 19-20 May 1993. The main topics addressed include the following: Composite Materials and Antenna Technology. The research work on each topic is closely tied with the AHE Consortium members' interests. Significant progress in each subject is reported. Special attention in the area of Composite Materials has been given to the following: modeling of material discontinuity and their effects on towel-bar antenna patterns; guidelines for composite material modeling by using the Green's function approach in the NEC code; measurements of towel-bar antennas grounded with a partially material-coated plate; development of 3-D volume mesh generator for modeling thick and volumetric dielectrics by using FD-TD method; FDTD modeling of horn antennas with composite E-plane walls; and antenna efficiency analysis for a horn antenna loaded with composite dielectric materials.

  16. Advanced electromagnetic methods for aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Balanis, Constantine A.; Sun, Weimin; El-Sharawy, El-Budawy; Aberle, James T.; Birtcher, Craig R.; Peng, Jian; Tirkas, Panayiotis A.; Andrew, William V.; Kokotoff, David; Zavosh, Frank

    1993-01-01

    The Advanced Helicopter Electromagnetics (AHE) Industrial Associates Program has fruitfully completed its fourth year. Under the support of the AHE members and the joint effort of the research team, new and significant progress has been achieved in the year. Following the recommendations by the Advisory Task Force, the research effort is placed on more practical helicopter electromagnetic problems, such as HF antennas, composite materials, and antenna efficiencies. In this annual report, the main topics to be addressed include composite materials and antenna technology. The research work on each topic has been driven by the AHE consortium members' interests and needs. The remarkable achievements and progress in each subject are reported in individual sections of the report. The work in the area of composite materials includes: modeling of low conductivity composite materials by using Green's function approach; guidelines for composite material modeling by using the Green's function approach in the NEC code; development of 3-D volume mesh generator for modeling thick and volumetric dielectrics by using FD-TD method; modeling antenna elements mounted on a composite Comanche tail stabilizer; and antenna pattern control and efficiency estimate for a horn antenna loaded with composite dielectric materials.

  17. Advanced Electromagnetic Methods for Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Balanis, Constantine A.; Polycarpou, Anastasis; Birtcher, Craig R.; Georgakopoulos, Stavros; Han, Dong-Ho; Ballas, Gerasimos

    1999-01-01

    The destructive threat of lightning to helicopters and other airborne systems has always been a topic of great interest to this research grant. Previously, the lightning induced currents on the surface of the fuselage and its interior were predicted using the finite-difference time-domain (FDTD) method as well as the NEC code. The limitations of both methods, as applied to lightning, were identified and extensively discussed in the last meeting. After a thorough investigation of the capabilities of the FDTD, it was decided to incorporate into the numerical method a subcell model to accurately represent current diffusion through conducting materials of high conductivity and finite thickness. Because of the complexity of the model, its validity will be first tested for a one-dimensional FDTD problem. Although results are not available yet, the theory and formulation of the subcell model are presented and discussed here to a certain degree. Besides lightning induced currents in the interior of an aircraft, penetration of electromagnetic fields through apertures (e.g., windows and cracks) could also be devastating for the navigation equipment, electronics, and communications systems in general. The main focus of this study is understanding and quantifying field penetration through apertures. The simulation is done using the FDTD method and the predictions are compared with measurements and moment method solutions obtained from the NASA Langley Research Center. Cavity-backed slot (CBS) antennas or slot antennas in general have many applications in aircraft-satellite type of communications. These can be flush-mounted on the surface of the fuselage and, therefore, they retain the aerodynamic shape of the aircraft. In the past, input impedance and radiation patterns of CBS antennas were computed using a hybrid FEM/MoM code. The analysis is now extended to coupling between two identical slot antennas mounted on the same structure. The predictions are performed

  18. Methods of sound simulation and applications in flight simulators

    NASA Technical Reports Server (NTRS)

    Gaertner, K. P.

    1980-01-01

    An overview of methods for electronically synthesizing sounds is presented. A given amount of hardware and computer capacity places an upper limit on the degree and fidelity of realism of sound simulation which is attainable. Good sound realism for aircraft simulators can be especially expensive because of the complexity of flight sounds and their changing patterns through time. Nevertheless, the flight simulator developed at the Research Institute for Human Engineering, West Germany, shows that it is possible to design an inexpensive sound simulator with the required acoustic properties using analog computer elements. The characteristics of the sub-sound elements produced by this sound simulator for take-off, cruise and approach are discussed.
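
    The kind of additive, analog-style synthesis such a simulator performs can be sketched as a sum of engine-order harmonics plus broadband noise. The function below is an invented illustration of the idea, not the Research Institute's simulator; the harmonic amplitudes and noise level are arbitrary assumptions.

```python
import math
import random

def engine_tone(rpm, duration=1.0, rate=8000, harmonics=(1.0, 0.5, 0.25)):
    """Crude additive synthesis of an engine-like sound: a fundamental
    proportional to engine speed, decaying harmonics, and a small
    broadband-noise component. Returns raw float samples in [-1.8, 1.8].
    A real flight-sound simulator would shape all of these components
    continuously from the flight state."""
    f0 = rpm / 60.0                           # fundamental frequency in Hz
    samples = []
    for i in range(int(duration * rate)):
        t = i / rate
        s = sum(a * math.sin(2 * math.pi * f0 * (k + 1) * t)
                for k, a in enumerate(harmonics))
        s += 0.05 * (2.0 * random.random() - 1.0)   # broadband noise
        samples.append(s)
    return samples
```

    Take-off, cruise, and approach sounds then differ mainly in the rpm trajectory and in how the harmonic and noise amplitudes are modulated over time.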

  19. Large-Eddy Simulation and Multigrid Methods

    SciTech Connect

    Falgout,R D; Naegle,S; Wittum,G

    2001-06-18

    A method to simulate turbulent flows with Large-Eddy Simulation on unstructured grids is presented. Two kinds of dynamic models are used to model the unresolved scales of motion and are compared with each other on different grids. The behavior of the models is shown, and adaptive grid refinement is investigated in addition. Furthermore, the parallelization aspect is addressed.

  20. Simulated herbivory advances autumn phenology in Acer rubrum

    NASA Astrophysics Data System (ADS)

    Forkner, Rebecca E.

    2014-05-01

    To determine the degree to which herbivory contributes to phenotypic variation in autumn phenology for deciduous trees, red maple (Acer rubrum) branches were subjected to low and high levels of simulated herbivory and surveyed at the end of the season to assess abscission and degree of autumn coloration. Overall, branches with simulated herbivory abscised ~7 % more leaves at each autumn survey date than did control branches within trees. While branches subjected to high levels of damage showed advanced phenology, abscission rates did not differ from those of undamaged branches within trees because heavy damage induced earlier leaf loss on adjacent branch nodes in this treatment. Damaged branches had greater proportions of leaf area colored than undamaged branches within trees, having twice the amount of leaf area colored at the onset of autumn and having ~16 % greater leaf area colored in late October when nearly all leaves were colored. When senescence was scored as the percent of all leaves abscised and/or colored, branches in both treatments reached peak senescence earlier than did control branches within trees: dates of 50 % senescence occurred 2.5 days earlier for low herbivory branches and 9.7 days earlier for branches with high levels of simulated damage. These advanced rates are of the same time length as reported delays in autumn senescence and advances in spring onset due to climate warming. Thus, results suggest that should insect damage increase as a consequence of climate change, it may offset a lengthening of leaf life spans in some tree species.

  1. Simulated herbivory advances autumn phenology in Acer rubrum.

    PubMed

    Forkner, Rebecca E

    2014-05-01

    To determine the degree to which herbivory contributes to phenotypic variation in autumn phenology for deciduous trees, red maple (Acer rubrum) branches were subjected to low and high levels of simulated herbivory and surveyed at the end of the season to assess abscission and degree of autumn coloration. Overall, branches with simulated herbivory abscised ~7 % more leaves at each autumn survey date than did control branches within trees. While branches subjected to high levels of damage showed advanced phenology, abscission rates did not differ from those of undamaged branches within trees because heavy damage induced earlier leaf loss on adjacent branch nodes in this treatment. Damaged branches had greater proportions of leaf area colored than undamaged branches within trees, having twice the amount of leaf area colored at the onset of autumn and having ~16 % greater leaf area colored in late October when nearly all leaves were colored. When senescence was scored as the percent of all leaves abscised and/or colored, branches in both treatments reached peak senescence earlier than did control branches within trees: dates of 50 % senescence occurred 2.5 days earlier for low herbivory branches and 9.7 days earlier for branches with high levels of simulated damage. These advanced rates are of the same time length as reported delays in autumn senescence and advances in spring onset due to climate warming. Thus, results suggest that should insect damage increase as a consequence of climate change, it may offset a lengthening of leaf life spans in some tree species.

  2. The Advanced Gamma-ray Imaging System (AGIS) - Simulation Studies

    SciTech Connect

    Maier, G.; Buckley, J.; Bugaev, V.; Fegan, S.; Vassiliev, V. V.; Funk, S.; Konopelko, A.

    2008-12-24

    The Advanced Gamma-ray Imaging System (AGIS) is a US-led concept for a next-generation instrument in ground-based very-high-energy gamma-ray astronomy. The most important design requirement for AGIS is a sensitivity about 10 times greater than that of current observatories such as VERITAS, H.E.S.S., or MAGIC. We present results of simulation studies of various possible designs for AGIS. The primary characteristics of the array performance - collecting area, angular resolution, background rejection, and sensitivity - are discussed.

  3. The Advanced Gamma-ray Imaging System (AGIS): Simulation studies

    SciTech Connect

    Maier, G.; Buckley, J.; Bugaev, V.; Fegan, S.; Funk, S.; Konopelko, A.; Vassiliev, V.V.; /UCLA

    2011-06-14

    The Advanced Gamma-ray Imaging System (AGIS) is a next-generation ground-based gamma-ray observatory being planned in the U.S. The anticipated sensitivity of AGIS is about one order of magnitude better than the sensitivity of current observatories, allowing it to measure gamma-ray emission from a large number of Galactic and extra-galactic sources. We present here results of simulation studies of various possible designs for AGIS. The primary characteristics of the array performance - collecting area, angular resolution, background rejection, and sensitivity - are discussed.

  4. EGR Distribution in Engine Cylinders Using Advanced Virtual Simulation

    SciTech Connect

    Fan, Xuetong

    2000-08-20

    Exhaust Gas Recirculation (EGR) is a well-known technology for reduction of NOx in diesel engines. With the demand for extremely low engine out NOx emissions, it is important to have a consistently balanced EGR flow to individual engine cylinders. Otherwise, the variation in the cylinders' NOx contribution to the overall engine emissions will produce unacceptable variability. This presentation will demonstrate the effective use of advanced virtual simulation in the development of a balanced EGR distribution in engine cylinders. An initial design is analyzed reflecting the variance in the EGR distribution, quantitatively and visually. Iterative virtual lab tests result in an optimized system.

  5. Simulation method for evaluating progressive addition lenses.

    PubMed

    Qin, Linling; Qian, Lin; Yu, Jingchi

    2013-06-20

    Since progressive addition lenses (PALs) are currently the state of the art in multifocal correction for presbyopia, it is important to study methods for evaluating them. A nonoptical simulation method used to accurately characterize PALs during the design and optimization process is proposed in this paper. It involves the direct calculation of each surface of the lens according to the lens heights of the front and rear surfaces. The validity of this simulation method for the evaluation of PALs is verified by its good agreement with the Rotlex method. In particular, the simulation with a "correction action" included in the design process is potentially a useful method with advantages of time-saving, convenience, and accuracy. Based on the eye-plus-lens model, which is established through an accurate ray tracing calculation along the gaze direction, the method can find an excellent application in evaluating actual wearer performance for optimal design of more comfortable, satisfactory, and personalized PALs. PMID:23842170

  6. Advanced simulations of optical transition and diffraction radiation

    NASA Astrophysics Data System (ADS)

    Aumeyr, T.; Billing, M. G.; Bobb, L. M.; Bolzon, B.; Bravin, E.; Karataev, P.; Kruchinin, K.; Lefevre, T.; Mazzoni, S.

    2015-04-01

    Charged particle beam diagnostics is a key task in modern and future accelerator installations. The diagnostic tools are practically the "eyes" of the operators. The precision and resolution of the diagnostic equipment are crucial to define the performance of the accelerator. Transition and diffraction radiation (TR and DR) are widely used for electron beam parameter monitoring. However, the precision and resolution of those devices are determined by how well the production, transport and detection of these radiation types are understood. This paper reports on simulations of TR and DR spatial-spectral characteristics using the physical optics propagation (POP) mode of the Zemax advanced optics simulation software. Good consistency with theory is demonstrated, and realistic optical-system alignment issues are discussed.

  7. Why Video? How Technology Advances Method

    ERIC Educational Resources Information Center

    Downing, Martin J., Jr.

    2008-01-01

    This paper reports on the use of video to enhance qualitative research. Advances in technology have improved our ability to capture lived experiences through visual means. I reflect on my previous work with individuals living with HIV/AIDS, the results of which are described in another paper, to evaluate the effectiveness of video as a medium that…

  8. Molecular dynamic simulation methods for anisotropic liquids.

    PubMed

    Aoki, Keiko M; Yoneya, Makoto; Yokoyama, Hiroshi

    2004-03-22

    Methods of molecular dynamics simulations for anisotropic molecules are presented. The new methods, with an anisotropic factor in the cell dynamics, dramatically reduce the artifacts related to cell shapes and overcome the difficulties of simulating anisotropic molecules under constant hydrostatic pressure or constant volume. The methods are especially effective for anisotropic liquids, such as smectic liquid crystals and membranes, of which the stacks of layers are compressible (elastic in direction perpendicular to the layers) while the layer itself is liquid and only elastic under uniform compressive force. The methods can also be used for crystals and isotropic liquids as well.

  9. Advanced radiometric and interferometric millimeter-wave scene simulations

    NASA Technical Reports Server (NTRS)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-01-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited by wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested scene reconstruction algorithms at TRW that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.
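The scene-reconstruction idea, recovering an image from a small number of spatial-frequency components, can be sketched with a toy example. The smooth synthetic scene and the "keep the strongest few percent of Fourier coefficients" rule below are illustrative assumptions, not the ARMSS algorithms:

```python
import numpy as np

# Toy aperture-synthesis-style reconstruction: a smooth scene recovered
# from a sparse subset of its spatial-frequency (Fourier) components.
n = 64
y, x = np.mgrid[0:n, 0:n] / n
scene = np.exp(-((x - 0.5) ** 2 + (y - 0.4) ** 2) / 0.02)   # smooth "target"

F = np.fft.fft2(scene)
mag = np.abs(F)
thresh = np.sort(mag.ravel())[-n * n // 16]       # keep the ~6 % strongest
keep = mag >= thresh                              # sparse frequency coverage
recon = np.real(np.fft.ifft2(np.where(keep, F, 0.0)))

rel_err = np.linalg.norm(recon - scene) / np.linalg.norm(scene)
# A smooth scene is recovered accurately from a few percent of its
# spatial-frequency components
```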

  10. Advanced Techniques for Simulating the Behavior of Sand

    NASA Astrophysics Data System (ADS)

    Clothier, M.; Bailey, M.

    2009-12-01

    research is to simulate the look and behavior of sand, this work will go beyond simple particle collision. In particular, we can continue to use our parallel algorithms not only on single particles but on particle “clumps” that consist of multiple combined particles. Since sand is typically not spherical in nature, these particle “clumps” help to simulate the coarse nature of sand. In a simulation environment, multiple combined particles could be used to simulate the polygonal and granular nature of sand grains. Thus, a diversity of sand particles can be generated. The interaction between these particles can then be parallelized using GPU hardware. As such, this research will investigate different graphics and physics techniques and determine the tradeoffs in performance and visual quality for sand simulation. An enhanced sand model through the use of high performance computing and GPUs has great potential to impact research for both earth and space scientists. Interaction with JPL has provided an opportunity for us to refine our simulation techniques that can ultimately be used for their vehicle simulator. As an added benefit of this work, advancements in simulating sand can also benefit scientists here on earth, especially in regard to understanding landslides and debris flows.

  11. Graphics simulation and training aids for advanced teleoperation

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Schenker, Paul S.; Bejczy, Antal K.

    1993-01-01

    Graphics displays can be of significant aid in accomplishing a teleoperation task throughout all three phases of off-line task analysis and planning, operator training, and online operation. In the first phase, graphics displays provide substantial aid to investigate work cell layout, motion planning with collision detection and with possible redundancy resolution, and planning for camera views. In the second phase, graphics displays can serve as very useful tools for introductory training of operators before training them on actual hardware. In the third phase, graphics displays can be used for previewing planned motions and monitoring actual motions in any desired viewing angle, or, when communication time delay prevails, for providing predictive graphics overlay on the actual camera view of the remote site to show the non-time-delayed consequences of commanded motions in real time. This paper addresses potential space applications of graphics displays in all three operational phases of advanced teleoperation. Possible applications are illustrated with techniques developed and demonstrated in the Advanced Teleoperation Laboratory at JPL. The examples described include task analysis and planning of a simulated Solar Maximum Satellite Repair task, a novel force-reflecting teleoperation simulator for operator training, and preview and predictive displays for on-line operations.

  12. Advanced electromagnetic methods for aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Balanis, Constantine A.; Sun, Weimin; El-Sharawy, El-Budawy; Aberle, James T.; Birtcher, Craig R.; Peng, Jian; Tirkas, Panayiotis A.

    1992-01-01

    The Advanced Helicopter Electromagnetics (AHE) Industrial Associates Program continues its research on a variety of main topics identified and recommended by the Advisory Task Force of the program. The research activities center on issues that advance technology related to helicopter electromagnetics. While most of the topics are a continuation of previous work, special effort has been focused on some areas following recommendations from the last annual conference. The main topics addressed in this report are composite materials and antenna technology. The area of composite materials continued to receive special attention in this period. The research has focused on: (1) measurements of the electrical properties of low-conductivity materials; (2) modeling of material discontinuities and their effects on scattering patterns; (3) preliminary analysis of the interaction of electromagnetic fields with multi-layered graphite fiberglass plates; and (4) finite difference time domain (FDTD) modeling of field penetration through composite panels of a helicopter.

  13. Method and apparatus for advancing tethers

    DOEpatents

    Zollinger, W.T.

    1998-06-02

    A tether puller for advancing a tether through a channel may include a bellows assembly having a leading end fixedly attached to the tether at a first position and a trailing end fixedly attached to the tether at a second position so that the leading and trailing ends of the bellows assembly are located a substantially fixed distance apart. The bellows assembly includes a plurality of independently inflatable elements each of which may be separately inflated to an extended position and deflated to a retracted position. Each of the independently inflatable elements expands radially and axially upon inflation. An inflation system connected to the independently inflatable elements inflates and deflates selected ones of the independently inflatable elements to cause the bellows assembly to apply a tractive force to the tether and advance it in the channel. 9 figs.

  14. Method and apparatus for advancing tethers

    DOEpatents

    Zollinger, W. Thor

    1998-01-01

    A tether puller for advancing a tether through a channel may include a bellows assembly having a leading end fixedly attached to the tether at a first position and a trailing end fixedly attached to the tether at a second position so that the leading and trailing ends of the bellows assembly are located a substantially fixed distance apart. The bellows assembly includes a plurality of independently inflatable elements each of which may be separately inflated to an extended position and deflated to a retracted position. Each of the independently inflatable elements expands radially and axially upon inflation. An inflation system connected to the independently inflatable elements inflates and deflates selected ones of the independently inflatable elements to cause the bellows assembly to apply a tractive force to the tether and advance it in the channel.

  15. A Virtual Engineering Framework for Simulating Advanced Power System

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison; Stanislav Borodai

    2008-06-18

    This report describes the work performed to provide NETL with VE-Suite based Virtual Engineering software and enhanced equipment models to support NETL's Advanced Process Engineering Co-simulation (APECS) framework for advanced power generation systems. Enhancements to the software framework facilitated an important link between APECS and the virtual engineering capabilities provided by VE-Suite (e.g., equipment and process visualization, information assimilation). Model enhancements focused on improving predictions for the performance of entrained flow coal gasifiers and important auxiliary equipment (e.g., Air Separation Units) used in coal gasification systems. In addition, a Reduced Order Model generation tool and software to provide a coupling between APECS/AspenPlus and the GE GateCycle simulation system were developed. CAPE-Open model interfaces were employed where needed. The improved simulation capability is demonstrated on selected test problems. As part of the project an Advisory Panel was formed to provide guidance on the issues on which to focus the work effort. The Advisory Panel included experts from industry and academia in gasification, CO2 capture issues, and process simulation, as well as representatives from technology developers and the electric utility industry. To optimize the benefit to NETL, REI coordinated its efforts with NETL and NETL-funded projects at Iowa State University, Carnegie Mellon University and ANSYS/Fluent, Inc. The improved simulation capabilities incorporated into APECS will enable researchers and engineers to better understand the interactions of different equipment components, identify weaknesses and processes needing improvement, and thereby allow more efficient, less expensive plants to be developed and brought on-line faster and in a more cost-effective manner. These enhancements to APECS represent an important step toward having a fully integrated environment for performing plant simulation and engineering

  16. Exploring biomolecular dynamics and interactions using advanced sampling methods

    NASA Astrophysics Data System (ADS)

    Luitz, Manuel; Bomblies, Rainer; Ostermeir, Katja; Zacharias, Martin

    2015-08-01

    Molecular dynamics (MD) and Monte Carlo (MC) simulations have emerged as a valuable tool to investigate the statistical mechanics and kinetics of biomolecules and synthetic soft matter materials. However, the major limitations for routine applications are the accuracy of the molecular mechanics force field and the maximum simulation time that can be achieved in current simulation studies. To improve sampling, a number of advanced approaches have been designed in recent years. In particular, variants of the parallel tempering replica-exchange methodology are widely used in many simulation studies. Recent methodological advancements are presented, with a discussion of their specific aims and advantages. This includes improved free energy simulation approaches and conformational search applications.

  17. Advanced methods of structural and trajectory analysis for transport aircraft

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.

    1995-01-01

    This report summarizes the efforts in two areas: (1) development of advanced methods of structural weight estimation, and (2) development of advanced methods of trajectory optimization. The majority of the effort was spent in the structural weight area. A draft of 'Analytical Fuselage and Wing Weight Estimation of Transport Aircraft', resulting from this research, is included as an appendix.

  18. Bootstrapping Methods Applied for Simulating Laboratory Works

    ERIC Educational Resources Information Center

    Prodan, Augustin; Campean, Remus

    2005-01-01

    Purpose: The aim of this work is to implement bootstrapping methods into software tools, based on Java. Design/methodology/approach: This paper presents a category of software e-tools aimed at simulating laboratory works and experiments. Findings: Both students and teaching staff use traditional statistical methods to infer the truth from sample…

  19. Investigations and advanced concepts on gyrotron interaction modeling and simulations

    NASA Astrophysics Data System (ADS)

    Avramidis, K. A.

    2015-12-01

    In gyrotron theory, the interaction between the electron beam and the high frequency electromagnetic field is commonly modeled using the slow variables approach. The slow variables are quantities that vary slowly in time in comparison to the electron cyclotron frequency. They represent the electron momentum and the high frequency field of the resonant TE modes in the gyrotron cavity. For their definition, some reference frequencies need to be introduced. These include the so-called averaging frequency, used to define the slow variable corresponding to the electron momentum, and the carrier frequencies, used to define the slow variables corresponding to the field envelopes of the modes. From the mathematical point of view, the choice of the reference frequencies is, to some extent, arbitrary. However, from the numerical point of view, there are arguments that point toward specific choices, in the sense that these choices are advantageous in terms of simulation speed and accuracy. In this paper, the typical monochromatic gyrotron operation is considered, and the numerical integration of the interaction equations is performed by the trajectory approach, since it is the fastest, and therefore it is the one that is most commonly used. The influence of the choice of the reference frequencies on the interaction simulations is studied using theoretical arguments, as well as numerical simulations. From these investigations, appropriate choices for the values of the reference frequencies are identified. In addition, novel, advanced concepts for the definitions of these frequencies are addressed, and their benefits are demonstrated numerically.

  20. Investigations and advanced concepts on gyrotron interaction modeling and simulations

    SciTech Connect

    Avramidis, K. A.

    2015-12-15

    In gyrotron theory, the interaction between the electron beam and the high frequency electromagnetic field is commonly modeled using the slow variables approach. The slow variables are quantities that vary slowly in time in comparison to the electron cyclotron frequency. They represent the electron momentum and the high frequency field of the resonant TE modes in the gyrotron cavity. For their definition, some reference frequencies need to be introduced. These include the so-called averaging frequency, used to define the slow variable corresponding to the electron momentum, and the carrier frequencies, used to define the slow variables corresponding to the field envelopes of the modes. From the mathematical point of view, the choice of the reference frequencies is, to some extent, arbitrary. However, from the numerical point of view, there are arguments that point toward specific choices, in the sense that these choices are advantageous in terms of simulation speed and accuracy. In this paper, the typical monochromatic gyrotron operation is considered, and the numerical integration of the interaction equations is performed by the trajectory approach, since it is the fastest, and therefore it is the one that is most commonly used. The influence of the choice of the reference frequencies on the interaction simulations is studied using theoretical arguments, as well as numerical simulations. From these investigations, appropriate choices for the values of the reference frequencies are identified. In addition, novel, advanced concepts for the definitions of these frequencies are addressed, and their benefits are demonstrated numerically.
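The role of the reference (carrier) frequency in the slow-variables formulation can be illustrated with a toy demodulation. The signal and frequencies below are arbitrary choices for illustration, not gyrotron parameters:

```python
import numpy as np

# Sketch of the slow-variables idea: a fast oscillation is written as
# s(t) = A(t) * exp(i * w_ref * t). If the reference frequency w_ref matches
# the actual oscillation frequency, the envelope A(t) varies slowly and can
# be integrated with large time steps; a detuned reference leaves a residual
# oscillation at the beat frequency w - w_ref.
t = np.linspace(0.0, 1.0, 20000)
w = 2.0 * np.pi * 500.0                        # "fast" oscillation frequency
signal = (1.0 + 0.1 * t) * np.exp(1j * w * t)  # slowly growing complex field

good = signal * np.exp(-1j * w * t)            # well-chosen reference
bad = signal * np.exp(-1j * 0.9 * w * t)       # detuned reference

# The matched reference yields an essentially constant phase; the detuned
# one still oscillates at 0.1 * w.
spread_good = np.ptp(np.angle(good))
spread_bad = np.ptp(np.angle(bad))
```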

  1. Advance particle and Doppler measurement methods

    NASA Technical Reports Server (NTRS)

    Busch, C.

    1985-01-01

    Particle environments, i.e., rain, ice, and snow particles, are discussed. Two types of particles are addressed: (1) the natural environment in which airplanes fly and conduct test flights; and (2) simulation environments that are encountered in ground-test facilities such as wind tunnels, ranges, etc. There are characteristics of the natural environment that one wishes to measure. The liquid water content (LWC) is the one that seems to be of most importance; size distribution may be of importance in some applications. As with snow, the shape of the particle may be an important parameter to measure. Moving on to environments in simulated tests, additional parameters may be required, such as the velocity distribution, the velocity lag of the particle relative to the aerodynamic flow, and the trajectory of the particle as it passes through the aerodynamic flow and impacts the test object.

  2. A simple method for simulating gasdynamic systems

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.

    1991-01-01

    A simple method for performing digital simulation of gasdynamic systems is presented. The approach is somewhat intuitive and requires some knowledge of the physics of the problem as well as an understanding of finite difference theory. The method is shown explicitly in appendix A, which is taken from the book by P.J. Roache, 'Computational Fluid Dynamics,' Hermosa Publishers, 1982. The resulting method is relatively fast, though it sacrifices some accuracy.
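The report's actual scheme is in its appendix A; as an illustrative stand-in in the same explicit finite-difference spirit, here is first-order upwind differencing for linear advection, which shares the fast-but-dissipative character described:

```python
import numpy as np

def upwind_step(u, a, dx, dt):
    """One explicit time step of first-order upwind differencing (a > 0)."""
    un = u.copy()
    un[1:] = u[1:] - a * dt / dx * (u[1:] - u[:-1])
    return un                                  # un[0] keeps the inflow value

nx, a = 100, 1.0
dx = 1.0 / nx
dt = 0.5 * dx / a                    # CFL number 0.5, stable for C <= 1
x = np.linspace(0.0, 1.0, nx + 1)
u = np.where(x < 0.2, 1.0, 0.0)      # initial step profile
for _ in range(50):
    u = upwind_step(u, a, dx, dt)
# The step has advected a distance a * 50 * dt = 0.25, with some smearing:
# the accuracy sacrificed for speed that the abstract mentions
```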

  3. Spectral Methods in General Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Garrison, David

    2012-03-01

    In this talk I discuss the use of spectral methods in improving the accuracy of a General Relativistic Magnetohydrodynamic (GRMHD) computer code. I introduce SpecCosmo, a GRMHD code developed as a Cactus arrangement at UHCL, and show simulation results using both Fourier spectral methods and finite differencing. This work demonstrates the use of spectral methods with the FFTW 3.3 Fast Fourier Transform package integrated with the Cactus Framework to perform spectral differencing using MPI.
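The core operation of such a code, spectral differentiation of a periodic field via the FFT, can be sketched with plain NumPy transforms standing in for FFTW:

```python
import numpy as np

def spectral_derivative(f, L):
    """d/dx of a periodic sample f on a uniform grid covering [0, L)."""
    n = f.size
    ik = 2.0j * np.pi * np.fft.fftfreq(n, d=L / n)   # i * wavenumber
    return np.real(np.fft.ifft(ik * np.fft.fft(f)))

n, L = 64, 2.0 * np.pi
x = np.arange(n) * L / n
err = np.max(np.abs(spectral_derivative(np.sin(3.0 * x), L)
                    - 3.0 * np.cos(3.0 * x)))
# err sits near machine precision: "spectral accuracy" for smooth periodic
# data, versus the power-law convergence of finite differencing
```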

  4. Advanced modeling and simulation to design and manufacture high performance and reliable advanced microelectronics and microsystems.

    SciTech Connect

    Nettleship, Ian (University of Pittsburgh, Pittsburgh, PA); Hinklin, Thomas; Holcomb, David Joseph; Tandon, Rajan; Arguello, Jose Guadalupe, Jr.; Dempsey, James Franklin; Ewsuk, Kevin Gregory; Neilsen, Michael K.; Lanagan, Michael (Pennsylvania State University, University Park, PA)

    2007-07-01

    An interdisciplinary team of scientists and engineers having broad expertise in materials processing and properties, materials characterization, and computational mechanics was assembled to develop science-based modeling/simulation technology to design and reproducibly manufacture high performance and reliable, complex microelectronics and microsystems. The team's efforts focused on defining and developing a science-based infrastructure to enable predictive compaction, sintering, stress, and thermomechanical modeling in "real systems", including: (1) developing techniques to determine materials properties and constitutive behavior required for modeling; (2) developing new and improved/updated models and modeling capabilities; (3) ensuring that models are representative of the physical phenomena being simulated; and (4) assessing existing modeling capabilities to identify advances necessary to facilitate the practical application of Sandia's predictive modeling technology.

  5. Rainfall Simulation: methods, research questions and challenges

    NASA Astrophysics Data System (ADS)

    Ries, J. B.; Iserloh, T.

    2012-04-01

    In erosion research, rainfall simulations are used for the improvement of process knowledge as well as in the field for the assessment of overland flow generation, infiltration, and erosion rates. In all these fields of research, rainfall experiments have become an indispensable part of the research methods. In this context, small portable rainfall simulators with small test-plot sizes of one square-meter or even less, and devices of low weight and water consumption are in demand. Accordingly, devices with manageable technical effort like nozzle-type simulators seem to prevail against larger simulators. The reasons are obvious: lower costs and less time consumption needed for mounting enable a higher repetition rate. Regarding the high number of research questions, of different fields of application, and not least also due to the great technical creativity of our research staff, a large number of different experimental setups is available. Each of the devices produces a different rainfall, leading to different kinetic energy amounts influencing the soil surface and accordingly, producing different erosion results. Hence, important questions contain the definition, the comparability, the measurement and the simulation of natural rainfall and the problem of comparability in general. Another important discussion topic will be the finding of an agreement on an appropriate calibration method for the simulated rainfalls, in order to enable a comparison of the results of different rainfall simulator set-ups. In most of the publications, only the following "nice" sentence can be read: "Our rainfall simulator generates a rainfall spectrum that is similar to natural rainfall!". The most substantial and critical properties of a simulated rainfall are the drop-size distribution, the fall velocities of the drops, and the spatial distribution of the rainfall on the plot-area. In a comparison of the most important methods, the Laser Distrometer turned out to be the most up
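To make the calibration question concrete: given a drop-size distribution (e.g. from a laser disdrometer), the kinetic energy delivered to the plot follows from drop mass and fall velocity. The drop counts below are hypothetical; the velocity relation is the empirical Atlas et al. (1973) form often used in erosion work:

```python
import numpy as np

# Kinetic energy of a rainfall drop population from its size distribution.
# Drop counts are hypothetical; v(D) = 9.65 - 10.3*exp(-0.6*D) (D in mm,
# v in m/s) is the empirical Atlas et al. (1973) terminal-velocity relation.
diam_mm = np.array([0.5, 1.0, 2.0, 3.0, 4.0])   # drop-diameter classes
counts = np.array([500, 300, 120, 30, 5])       # drops per class (made up)

v = 9.65 - 10.3 * np.exp(-0.6 * diam_mm)        # terminal fall velocity, m/s
mass_kg = 1000.0 * (np.pi / 6.0) * (diam_mm * 1e-3) ** 3  # sphere of water
ke_joule = np.sum(counts * 0.5 * mass_kg * v ** 2)
# Comparing ke_joule between a simulator and natural rain is one concrete
# way to test the claim "similar to natural rainfall"
```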

  6. Matrix method for acoustic levitation simulation.

    PubMed

    Andrade, Marco A B; Perez, Nicolas; Buiochi, Flavio; Adamowski, Julio C

    2011-08-01

    A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort. PMID:21859587
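The Rayleigh integral underlying the matrix method can be sketched for the simplest geometry, the on-axis field of a baffled circular piston, where a closed-form result is available for checking. Parameters are illustrative, not taken from the paper:

```python
import numpy as np

def rayleigh_on_axis(z, radius, k, n_r=400):
    """Rayleigh integral (up to a constant factor) on the piston axis."""
    r_edges = np.linspace(0.0, radius, n_r + 1)
    r_mid = 0.5 * (r_edges[:-1] + r_edges[1:])
    dS = 2.0 * np.pi * r_mid * np.diff(r_edges)   # annular ring areas
    dist = np.sqrt(z ** 2 + r_mid ** 2)           # source-to-field distance
    return np.sum(np.exp(1j * k * dist) / dist * dS)

k = 2.0 * np.pi / 0.009     # wavenumber for ~37.9 kHz in air (c = 343 m/s)
a, z = 0.01, 0.05           # piston radius and axial distance, metres
numeric = abs(rayleigh_on_axis(z, a, k))

# Closed form of the same integral: (4*pi/k)*|sin(k*(R - z)/2)|, R = sqrt(z^2 + a^2)
R = np.sqrt(z ** 2 + a ** 2)
analytic = (4.0 * np.pi / k) * abs(np.sin(0.5 * k * (R - z)))
```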

  7. Reduced Basis Method for Nanodevices Simulation

    SciTech Connect

    Pau, George Shu Heng

    2008-05-23

    Ballistic transport simulation in nanodevices, which involves self-consistently solving a coupled Schrodinger-Poisson system of equations, is usually computationally intensive. Here, we propose coupling the reduced basis method with the subband decomposition method to improve the overall efficiency of the simulation. By exploiting an a posteriori error estimation procedure and a greedy sampling algorithm, we are able to design an algorithm whose computational cost is reduced significantly. In addition, the computational cost grows only marginally with the number of grid points in the confined direction.
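The greedy sampling idea can be sketched on a toy snapshot family (decaying exponentials standing in for parameter-dependent subband solutions); this illustrates the general reduced-basis recipe, not the paper's algorithm:

```python
import numpy as np

def greedy_basis(snapshots, n_basis):
    """Greedily pick the snapshots worst-approximated by the current basis,
    orthonormalizing the residual of each pick as a new basis vector."""
    Q = np.empty((snapshots.shape[0], 0))
    picks = []
    for _ in range(n_basis):
        resid = snapshots - Q @ (Q.T @ snapshots)   # error vs current basis
        j = int(np.argmax(np.linalg.norm(resid, axis=0)))
        picks.append(j)
        Q = np.column_stack([Q, resid[:, j] / np.linalg.norm(resid[:, j])])
    return Q, picks

x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(1.0, 5.0, 40)                      # parameter samples
S = np.column_stack([np.exp(-m * x) for m in mus])   # toy snapshot family
Q, picks = greedy_basis(S, 6)
worst = np.max(np.linalg.norm(S - Q @ (Q.T @ S), axis=0)
               / np.linalg.norm(S, axis=0))
# A 6-vector reduced basis already approximates every snapshot closely
```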

  8. A simulation method for the fruitage body

    NASA Astrophysics Data System (ADS)

    Lu, Ling; Song, Weng-lin; Wang, Lei

    2009-07-01

    An effective visual model for creating a fruitage body is presented. Based on the geometric shape of the fruit, a face model is built using ellipsoid deformation. The face model is parameterized by radius: faces at different radii, generated by the same method, represent the interior of the fruit, and the body model is formed by combining face models along the radial direction. The method can simulate both the interior and exterior structure of a fruitage body, substantially reduces the amount of data, and increases display speed. In addition, the texture model of the fruit is defined as a sum of basis functions, which is simple and fast. The feasibility of the method is shown by creating a winter jujube and an apricot, each including exocarp, mesocarp, and endocarp. The approach is useful for developing virtual plants.
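A minimal sketch of a radius-parameterized, ellipsoid-based "face", with an assumed deformation function (the paper's exact deformation is not given here):

```python
import numpy as np

def fruit_face(radius, n_u=32, n_v=16, flatten=0.85, dimple=0.1):
    """Points of one deformed-ellipsoid face at a given radius.
    flatten/dimple are illustrative deformation parameters."""
    u = np.linspace(0.0, 2.0 * np.pi, n_u)           # longitude
    v = np.linspace(0.0, np.pi, n_v)                 # colatitude
    uu, vv = np.meshgrid(u, v)
    r = radius * (1.0 - dimple * np.cos(vv) ** 4)    # dimple near the poles
    xs = r * np.sin(vv) * np.cos(uu)
    ys = r * np.sin(vv) * np.sin(uu)
    zs = flatten * r * np.cos(vv)                    # flattened vertical axis
    return np.stack([xs, ys, zs], axis=-1)

# Nested faces at decreasing radii approximate skin, flesh, and stone
# (exocarp, mesocarp, endocarp)
layers = [fruit_face(r) for r in (1.0, 0.8, 0.2)]
```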

  9. Advanced Simulation Capability for Environmental Management (ASCEM): Early Site Demonstration

    SciTech Connect

    Meza, Juan; Hubbard, Susan; Freshley, Mark D.; Gorton, Ian; Moulton, David; Denham, Miles E.

    2011-03-07

    The U.S. Department of Energy Office of Environmental Management, Technology Innovation and Development (EM-32), is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The modular and open source high performance computing tool will facilitate integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. As part of the initial development process, a series of demonstrations were defined to test ASCEM components and provide feedback to developers, engage end users in applications, and lead to an outcome that would benefit the sites. The demonstration was implemented for a sub-region of the Savannah River Site General Separations Area that includes the F-Area Seepage Basins. The physical domain included the unsaturated and saturated zones in the vicinity of the seepage basins and Fourmile Branch, using an unstructured mesh fit to the hydrostratigraphy and topography of the site. The calculations modeled variably saturated flow and the resulting flow field was used in simulations of the advection of non-reactive species and the reactive-transport of uranium. As part of the demonstrations, a new set of data management, visualization, and uncertainty quantification tools were developed to analyze simulation results and existing site data. These new tools can be used to provide summary statistics, including information on which simulation parameters were most important in the prediction of uncertainty and to visualize the relationships between model input and output.

  10. TID Simulation of Advanced CMOS Devices for Space Applications

    NASA Astrophysics Data System (ADS)

    Sajid, Muhammad

    2016-07-01

    This paper focuses on Total Ionizing Dose (TID) effects caused by charge accumulation in the silicon dioxide, at the substrate/silicon dioxide interface, and in the Shallow Trench Isolation (STI) of scaled bulk CMOS devices, as well as in the Buried Oxide (BOX) layer of devices based on Silicon-On-Insulator (SOI) technology, for operation in the space radiation environment. The radiation-induced leakage current and the corresponding electron concentration in the leakage path are presented for 180 nm, 130 nm and 65 nm NMOS and PMOS transistors in both bulk CMOS and SOI process technologies on board LEO and GEO satellites. On the basis of the simulation results, a TID robustness analysis for advanced deep sub-micron technologies was carried out up to 500 krad. The correlation between technology scaling and the magnitude of the leakage current at a given total dose was established using the Visual TCAD Genius program.

  11. Advanced particulate matter control apparatus and methods

    DOEpatents

    Miller, Stanley J.; Zhuang, Ye; Almlie, Jay C.

    2012-01-10

    Apparatus and methods for collection and removal of particulate matter, including fine particulate matter, from a gas stream, comprising a unique combination of high collection efficiency and ultralow pressure drop across the filter. The apparatus and method utilize simultaneous electrostatic precipitation and membrane filtration of a particular pore size, wherein electrostatic collection and filtration occur on the same surface.
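For orientation on the electrostatic half of the apparatus, the classical Deutsch-Anderson relation estimates how collection efficiency scales with collecting area and gas flow. This is a textbook ESP model with illustrative parameter values, not the patented combined ESP/membrane design.

```python
import math

def deutsch_anderson_efficiency(w, A, Q):
    """Classical Deutsch-Anderson estimate of ESP collection efficiency.

    w: effective particle migration velocity (m/s)
    A: collecting surface area (m^2)
    Q: volumetric gas flow rate (m^3/s)
    """
    return 1.0 - math.exp(-w * A / Q)

# Illustrative numbers only: 0.1 m/s migration velocity, 500 m^2 of plate
# area, 10 m^3/s of gas gives ~99.3% collection.
eta = deutsch_anderson_efficiency(w=0.1, A=500.0, Q=10.0)
```

The exponential form explains why the last fraction of a percent of efficiency is expensive in area, which motivates hybrid designs like the one claimed here.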

  12. Simulated Interactive Research Experiments as Educational Tools for Advanced Science

    NASA Astrophysics Data System (ADS)

    Tomandl, Mathias; Mieling, Thomas; Losert-Valiente Kroon, Christiane M.; Hopf, Martin; Arndt, Markus

    2015-09-01

    Experimental research has become complex and thus a challenge to science education. Only very few students can typically be trained on advanced scientific equipment. It is therefore important to find new tools that allow all students to acquire laboratory skills individually and independent of where they are located. In a design-based research process we have investigated the feasibility of using a virtual laboratory as a photo-realistic and scientifically valid representation of advanced scientific infrastructure to teach modern experimental science, here, molecular quantum optics. We found a concept based on three educational principles that allows undergraduate students to become acquainted with procedures and concepts of a modern research field. We find a significant increase in student understanding using our Simulated Interactive Research Experiment (SiReX), by evaluating the learning outcomes with semi-structured interviews in a pre/post design. This suggests that this concept of an educational tool can be generalized to disseminate findings in other fields.

  13. Simulated Interactive Research Experiments as Educational Tools for Advanced Science.

    PubMed

    Tomandl, Mathias; Mieling, Thomas; Losert-Valiente Kroon, Christiane M; Hopf, Martin; Arndt, Markus

    2015-09-15

    Experimental research has become complex and thus a challenge to science education. Only very few students can typically be trained on advanced scientific equipment. It is therefore important to find new tools that allow all students to acquire laboratory skills individually and independent of where they are located. In a design-based research process we have investigated the feasibility of using a virtual laboratory as a photo-realistic and scientifically valid representation of advanced scientific infrastructure to teach modern experimental science, here, molecular quantum optics. We found a concept based on three educational principles that allows undergraduate students to become acquainted with procedures and concepts of a modern research field. We find a significant increase in student understanding using our Simulated Interactive Research Experiment (SiReX), by evaluating the learning outcomes with semi-structured interviews in a pre/post design. This suggests that this concept of an educational tool can be generalized to disseminate findings in other fields.

  14. Simulated Interactive Research Experiments as Educational Tools for Advanced Science

    PubMed Central

    Tomandl, Mathias; Mieling, Thomas; Losert-Valiente Kroon, Christiane M.; Hopf, Martin; Arndt, Markus

    2015-01-01

    Experimental research has become complex and thus a challenge to science education. Only very few students can typically be trained on advanced scientific equipment. It is therefore important to find new tools that allow all students to acquire laboratory skills individually and independent of where they are located. In a design-based research process we have investigated the feasibility of using a virtual laboratory as a photo-realistic and scientifically valid representation of advanced scientific infrastructure to teach modern experimental science, here, molecular quantum optics. We found a concept based on three educational principles that allows undergraduate students to become acquainted with procedures and concepts of a modern research field. We find a significant increase in student understanding using our Simulated Interactive Research Experiment (SiReX), by evaluating the learning outcomes with semi-structured interviews in a pre/post design. This suggests that this concept of an educational tool can be generalized to disseminate findings in other fields. PMID:26370627

  15. Advancing MODFLOW Applying the Derived Vector Space Method

    NASA Astrophysics Data System (ADS)

    Herrera, G. S.; Herrera, I.; Lemus-García, M.; Hernandez-Garcia, G. D.

    2015-12-01

    The most effective domain decomposition methods (DDM) are non-overlapping DDMs. Recently a new approach, the DVS framework, based on an innovative discretization method that uses a non-overlapping system of nodes (the derived nodes), was introduced and developed by I. Herrera et al. [1, 2]. Using the DVS approach, a group of four algorithms, referred to as the 'DVS algorithms', which fulfill the DDM paradigm (i.e., the solution of global problems is obtained by the resolution of local problems exclusively), has been derived. These procedures are applicable to any boundary-value problem, or system of such equations, for which a standard discretization method is available, and software with a high degree of parallelization can then be constructed. In a parallel talk at this AGU Fall Meeting, Ismael Herrera will introduce the general DVS methodology. The application of the DVS algorithms has been demonstrated in the solution of several boundary-value problems of interest in geophysics. Numerical examples for a single equation, covering symmetric, non-symmetric, and indefinite problems, were demonstrated previously [1, 2]. For these problems the DVS algorithms exhibited significantly improved numerical performance with respect to standard versions of DDM algorithms. In view of these results, our research group is applying the DVS method to a widely used simulator for the first time; here we present the progress of this application to the parallelization of MODFLOW. Efficiency results for a group of tests will be presented. References [1] I. Herrera, L.M. de la Cruz and A. Rosas-Medina. Non-overlapping discretization methods for partial differential equations, Numer Meth Part D E, (2013). [2] Herrera, I., & Contreras, Iván. "An Innovative Tool for Effectively Applying Highly Parallelized Software to Problems of Elasticity". Geofísica Internacional, 2015 (in press).
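The DDM paradigm invoked above (a global solution assembled exclusively from local subdomain solves plus a small interface system) can be sketched on a 1D Poisson problem. This is a generic non-overlapping Schur-complement example, not the DVS algorithms; the problem, mesh, and splitting are all illustrative.

```python
import numpy as np

# 1D Poisson: -u'' = f on (0,1), u(0)=u(1)=0, second-order finite differences.
n = 9                      # interior nodes; the middle node is the interface
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.ones(n)             # f = 1 has the exact solution u(x) = x(1-x)/2

A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

m = n // 2                 # interface index splits left/right subdomains
L, R = slice(0, m), slice(m + 1, n)
ALL, ARR = A[L, L], A[R, R]
AIL, AIR = A[m:m + 1, L], A[m:m + 1, R]
ALI, ARI = A[L, m:m + 1], A[R, m:m + 1]

# Interface (Schur complement) system: only local solves appear in it.
S = A[m, m] - AIL @ np.linalg.solve(ALL, ALI) - AIR @ np.linalg.solve(ARR, ARI)
g = f[m] - AIL @ np.linalg.solve(ALL, f[L]) - AIR @ np.linalg.solve(ARR, f[R])
u_I = np.linalg.solve(S, g)

# Back-substitute with independent (parallelizable) subdomain solves.
u = np.empty(n)
u[m] = u_I[0]
u[L] = np.linalg.solve(ALL, f[L] - ALI @ u_I)
u[R] = np.linalg.solve(ARR, f[R] - ARI @ u_I)
```

Since f is constant, the discrete solution matches u(x) = x(1-x)/2 to machine precision, which makes the decomposition easy to verify.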

  16. Advanced spectral methods for climatic time series

    USGS Publications Warehouse

    Ghil, M.; Allen, M.R.; Dettinger, M.D.; Ide, K.; Kondrashov, D.; Mann, M.E.; Robertson, A.W.; Saunders, A.; Tian, Y.; Varadi, F.; Yiou, P.

    2002-01-01

    The analysis of univariate or multivariate time series provides crucial information to describe, understand, and predict climatic variability. The discovery and implementation of a number of novel methods for extracting useful information from time series have recently revitalized this classical field of study. Considerable progress has also been made in interpreting the information so obtained in terms of dynamical systems theory. In this review we describe the connections between time series analysis and nonlinear dynamics, discuss signal-to-noise enhancement, and present some of the novel methods for spectral analysis. The various steps, as well as the advantages and disadvantages of these methods, are illustrated by their application to an important climatic time series, the Southern Oscillation Index. This index captures major features of interannual climate variability and is used extensively in its prediction. Regional and global sea surface temperature data sets are used to illustrate multivariate spectral methods. Open questions and further prospects conclude the review.
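As a minimal illustration of spectral estimation for a climatic index, the sketch below applies a single-taper periodogram to a synthetic oscillation-plus-noise series. The methods surveyed in the review (MTM, SSA, and others) refine this baseline estimator; the series here is synthetic, not the actual SOI.

```python
import numpy as np

# Synthetic series with a dominant 50-month (roughly 4-year) oscillation
# plus noise: a stand-in for an interannual climate index.
rng = np.random.default_rng(42)
n = 600                                   # 50 years of monthly values
t = np.arange(n)
x = np.sin(2 * np.pi * t / 50.0) + 0.5 * rng.standard_normal(n)

# Single-taper (Hann-windowed) periodogram.
x = x - x.mean()
power = np.abs(np.fft.rfft(x * np.hanning(n))) ** 2
freqs = np.fft.rfftfreq(n, d=1.0)         # cycles per month
dominant_period = 1.0 / freqs[np.argmax(power[1:]) + 1]   # skip the DC bin
```

The tapering reduces leakage from the finite record; with this seed and signal-to-noise ratio the spectral peak recovers the 50-month period.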

  17. Method for Constructing Standardized Simulated Root Canals.

    ERIC Educational Resources Information Center

    Schulz-Bongert, Udo; Weine, Franklin S.

    1990-01-01

    The construction of visual and manipulative aids, clear resin blocks with root-canal-like spaces, for simulation of root canals is explained. Time, materials, and techniques are discussed. The method allows for comparison of canals, creation of any configuration of canals, and easy presentation during instruction. (MSE)

  18. A Simulation Method Measuring Psychomotor Nursing Skills.

    ERIC Educational Resources Information Center

    McBride, Helena; And Others

    1981-01-01

    The development of a simulation technique to evaluate performance of psychomotor skills in an undergraduate nursing program is described. This method is used as one admission requirement to an alternate route nursing program. With modifications, any health profession could use this technique where psychomotor skills performance is important.…

  19. A Method for Simulating Bank Reconciliation

    ERIC Educational Resources Information Center

    Klemin, Vernon W.

    1974-01-01

    A method of simulation to tie check writing, making deposits, finding outstanding checks, receiving bank statements, and bank reconciliation into a process is presented as a way to convey to students a feeling of a procedure completed. A step-by-step teaching procedure and examples of bank statements are included. (AG)

  20. Advances in methods for deepwater TLP installations

    SciTech Connect

    Wybro, P.G.

    1995-10-01

    This paper describes a method suitable for installing deepwater TLP structures in water depths beyond 3,000 ft. An overview is presented of previous TLP installations, wherein an evaluation is made of the various methods and their suitability to deepwater applications. A novel method for installation of deepwater TLPs is described. This method of installation is most suitable for deepwater and/or large TLP structures, but can also be used in moderate water depths. The tendon installation method utilizes the so-called Platform Arrestor Concept (PAC), wherein tendon sections are transported by barges to the site and assembled vertically using a dynamically positioned crane vessel. The tendons are transferred to the platform, where they are hung off until a full complement of tendons is in place. The hull lock-off operation is performed on all tendons simultaneously, avoiding dangerous platform resonant behavior. The installation calls for relatively simple installation equipment, and also enables the use of simple tendon tie-off equipment, such as a single-piece nut.

  1. Advancements in Afterbody Radiative Heating Simulations for Earth Entry

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.; Panesi, Marco; Brandis, Aaron M.

    2016-01-01

    Four advancements to the simulation of backshell radiative heating for Earth entry are presented. The first of these is the development of a flow field model that treats electronic levels of the dominant backshell radiator, N, as individual species. This is shown to allow improvements in the modeling of electron-ion recombination and two-temperature modeling, which are shown to increase backshell radiative heating by 10 to 40%. By computing the electronic state populations of N within the flow field solver, instead of through the quasi-steady state approximation in the radiation code, the coupling of radiative transition rates to the species continuity equations for the levels of N, including the impact of non-local absorption, becomes feasible. Implementation of this additional level of coupling between the flow field and radiation codes represents the second advancement presented in this work, which is shown to increase the backshell radiation by another 10 to 50%. The impact of radiative transition rates due to non-local absorption indicates the importance of accurate radiation transport in the relatively complex flow geometry of the backshell. This motivates the third advancement, which is the development of a ray-tracing radiation transport approach to compute the radiative transition rates and divergence of the radiative flux at every point for coupling to the flow field, therefore allowing the accuracy of the commonly applied tangent-slab approximation to be assessed for radiative source terms. For the sphere considered at lunar-return conditions, the tangent-slab approximation is shown to provide a sufficient level of accuracy for the radiative source terms, even for backshell cases. This is in contrast to the radiative flux to the surface, for which the two approaches differ by up to 40%. The final advancement presented is the development of a nonequilibrium model for NO radiation, which provides significant backshell

  2. The advanced computational testing and simulation toolkit (ACTS)

    SciTech Connect

    Drummond, L.A.; Marques, O.

    2002-05-21

    During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Distinctively, a number of these are important scientific problems ranging in scale from the atomic to the cosmic. For example, ionization is a phenomenon as ubiquitous in modern society as the glow of fluorescent lights and the etching on silicon computer chips; but it was not until 1999 that researchers finally achieved a complete numerical solution to the simplest example of ionization, the collision of a hydrogen atom with an electron. On the opposite scale, cosmologists have long wondered whether the expansion of the Universe, which began with the Big Bang, would ever reverse itself, ending the Universe in a Big Crunch. In 2000, analysis of new measurements of the cosmic microwave background radiation showed that the geometry of the Universe is flat, and thus the Universe will continue expanding forever. Both of these discoveries depended on high performance computer simulations that utilized computational tools included in the Advanced Computational Testing and Simulation (ACTS) Toolkit. The ACTS Toolkit is an umbrella project that brought together a number of general purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools, which have been developed independently, mainly at DOE laboratories, make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. 
The ACTS Toolkit Project enables the use of these tools by a much wider community of computational scientists, and promotes code portability, reusability, and the reduction of duplicate effort.

  3. Advanced statistical methods for the definition of new staging models.

    PubMed

    Kates, Ronald; Schmitt, Manfred; Harbeck, Nadia

    2003-01-01

    Adequate staging procedures are the prerequisite for individualized therapy concepts in cancer, particularly in the adjuvant setting. Molecular staging markers tend to characterize specific, fundamental disease processes to a greater extent than conventional staging markers. At the biological level, the course of the disease will almost certainly involve interactions between multiple underlying processes. Since new therapeutic strategies tend to target specific processes as well, their impact will also involve interactions. Hence, assessment of the prognostic impact of new markers and their utilization for prediction of response to therapy will require increasingly sophisticated statistical tools that are capable of detecting and modeling complicated interactions. Because they are designed to model arbitrary interactions, neural networks offer a promising approach to improved staging. However, the typical clinical data environment poses severe challenges to high-performance survival modeling using neural nets, particularly the key problem of maintaining good generalization. Nonetheless, it turns out that by using newly developed methods to minimize unnecessary complexity in the neural network representation of disease course, it is possible to obtain models with high predictive performance. This performance has been validated on both simulated and real patient data sets. There are important applications for design of studies involving targeted therapy concepts and for identification of the improvement in decision support resulting from new staging markers. In this article, advantages of advanced statistical methods such as neural networks for definition of new staging models will be illustrated using breast cancer as an example.
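The point that networks can model arbitrary interactions between markers can be made concrete with a toy example. The architecture, data, and training loop below are generic illustrations (an XOR-style pure interaction that no single input predicts on its own), not the authors' survival-modeling methods.

```python
import numpy as np

# Outcome depends only on the *interaction* of the two binary "markers":
# neither column is individually predictive, so a linear model fails.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # one hidden layer, 8 units
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output

_, p0 = forward(X)
loss0 = float(np.mean((p0 - y) ** 2))               # loss at initialization

for _ in range(3000):                               # plain gradient descent
    h, p = forward(X)
    grad_out = (p - y) * p * (1 - p)                # MSE through the sigmoid
    gW2, gb2 = h.T @ grad_out, grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)       # backprop through tanh
    gW1, gb1 = X.T @ grad_h, grad_h.sum(axis=0)
    W2 -= 0.5 * gW2; b2 -= 0.5 * gb2
    W1 -= 0.5 * gW1; b1 -= 0.5 * gb1

_, p = forward(X)
loss = float(np.mean((p - y) ** 2))                 # should fall well below loss0
```

The hidden layer is what lets the model represent the interaction; the generalization problem the article emphasizes arises when such flexible models meet small, noisy clinical data sets.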

  4. Fast Particle Methods for Multiscale Phenomena Simulations

    NASA Technical Reports Server (NTRS)

    Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew

    2000-01-01

    We are developing particle methods aimed at improving computational modeling capabilities for multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle-laden and interfacial flows, and (iii) molecular dynamics studies of nanoscale droplets and of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented on parallel computer architectures. The inherent adaptivity, robustness, and efficiency of particle methods makes them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are: (i) the proper formulation of particle methods at the molecular and continuum levels for the discretization of the governing equations; (ii) the resolution of the wide range of time and length scales governing the phenomena under investigation; (iii) the minimization of numerical artifacts that may interfere with the physics of the systems under consideration; and (iv) the parallelization of processes such as tree traversal and grid-particle interpolations. We are conducting simulations using vortex methods, molecular dynamics, and smoothed particle hydrodynamics, exploiting their unifying concepts, such as the solution of the N-body problem on parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs, and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to bridge seemingly unrelated areas of research.
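One of the unifying building blocks mentioned, particle-grid interpolation, can be sketched with a 1D cloud-in-cell deposition. The function and parameter values are illustrative; production codes use higher-order kernels and multiple dimensions.

```python
import numpy as np

def deposit_cic(positions, strengths, n_cells, length):
    """Share each particle's strength between its two nearest grid nodes
    with linear (cloud-in-cell) weights."""
    h = length / n_cells
    grid = np.zeros(n_cells + 1)
    for xp, s in zip(positions, strengths):
        i = int(xp // h)                  # index of the node to the left
        w = (xp - i * h) / h              # fractional distance across the cell
        grid[i] += (1.0 - w) * s
        grid[i + 1] += w * s
    return grid

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 1.0, 100)          # particle positions on [0, 1)
circ = rng.uniform(0.0, 1.0, 100)         # e.g. vortex-particle circulations
grid = deposit_cic(pos, circ, n_cells=16, length=1.0)
```

Because the two linear weights sum to one for every particle, the total deposited strength is conserved exactly, a property these methods rely on when coupling particles to grid-based solvers such as parallel FFTs.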

  5. Advanced method for making vitreous waste forms

    SciTech Connect

    Pope, J.M.; Harrison, D.E.

    1980-01-01

    A process is described for making waste glass that circumvents the problems of dissolving nuclear waste in molten glass at high temperatures. Because the reactive mixing process is independent of the inherent viscosity of the melt, any glass composition can be prepared with equal facility. Separation of the mixing and melting operations permits novel glass fabrication methods to be employed.

  6. Simulation Neurotechnologies for Advancing Brain Research: Parallelizing Large Networks in NEURON.

    PubMed

    Lytton, William W; Seidenstein, Alexandra H; Dura-Bernal, Salvador; McDougal, Robert A; Schürmann, Felix; Hines, Michael L

    2016-10-01

    Large multiscale neuronal network simulations are of increasing value as more big data are gathered about brain wiring and organization under the auspices of a current major research initiative, such as Brain Research through Advancing Innovative Neurotechnologies. The development of these models requires new simulation technologies. We describe here the current use of the NEURON simulator with message passing interface (MPI) for simulation in the domain of moderately large networks on commonly available high-performance computers (HPCs). We discuss the basic layout of such simulations, including the methods of simulation setup, the run-time spike-passing paradigm, and postsimulation data storage and data management approaches. Using the Neuroscience Gateway, a portal for computational neuroscience that provides access to large HPCs, we benchmark simulations of neuronal networks of different sizes (500-100,000 cells), and using different numbers of nodes (1-256). We compare three types of networks, composed of either Izhikevich integrate-and-fire neurons (I&F), single-compartment Hodgkin-Huxley (HH) cells, or a hybrid network with half of each. Results show simulation run time increased approximately linearly with network size and decreased almost linearly with the number of nodes. Networks with I&F neurons were faster than HH networks, although differences were small since all tested cells were point neurons with a single compartment. PMID:27557104

  7. Simulation Neurotechnologies for Advancing Brain Research: Parallelizing Large Networks in NEURON.

    PubMed

    Lytton, William W; Seidenstein, Alexandra H; Dura-Bernal, Salvador; McDougal, Robert A; Schürmann, Felix; Hines, Michael L

    2016-10-01

    Large multiscale neuronal network simulations are of increasing value as more big data are gathered about brain wiring and organization under the auspices of a current major research initiative, such as Brain Research through Advancing Innovative Neurotechnologies. The development of these models requires new simulation technologies. We describe here the current use of the NEURON simulator with message passing interface (MPI) for simulation in the domain of moderately large networks on commonly available high-performance computers (HPCs). We discuss the basic layout of such simulations, including the methods of simulation setup, the run-time spike-passing paradigm, and postsimulation data storage and data management approaches. Using the Neuroscience Gateway, a portal for computational neuroscience that provides access to large HPCs, we benchmark simulations of neuronal networks of different sizes (500-100,000 cells), and using different numbers of nodes (1-256). We compare three types of networks, composed of either Izhikevich integrate-and-fire neurons (I&F), single-compartment Hodgkin-Huxley (HH) cells, or a hybrid network with half of each. Results show simulation run time increased approximately linearly with network size and decreased almost linearly with the number of nodes. Networks with I&F neurons were faster than HH networks, although differences were small since all tested cells were point neurons with a single compartment.

  8. Mesoscopic Simulation Methods for Polymer Dynamics

    NASA Astrophysics Data System (ADS)

    Larson, Ronald

    2015-03-01

    We assess the accuracy and efficiency of mesoscopic simulation methods, namely Brownian Dynamics (BD), Stochastic Rotation Dynamics (SRD) and Dissipative Particle Dynamics (DPD), for polymers in solution at equilibrium and in flows in microfluidic geometries. Both SRD and DPD use solvent "particles" to carry momentum, and so account automatically for hydrodynamic interactions both within isolated polymer coils, and with other polymer molecules and with nearby solid boundaries. We assess quantitatively the effects of artificial particle inertia and fluid compressibility and show that they can be made small with appropriate choice of simulation parameters. We then use these methods to study flow-induced migration of polymer chains produced by: 1) hydrodynamic interactions, 2) streamline curvature or stress-gradients, and 3) convection of wall depletion zones. We show that huge concentration gradients can be produced by these mechanisms in microfluidic geometries that can be exploited for separation of polymers by size in periodic contraction-expansion geometries. We also assess the range of conditions for which BD, SRD or DPD is preferable for mesoscopic simulations. Finally, we show how such methods can be used to simulate quantitatively the swimming of micro-organisms such as E. coli. In collaboration with Lei Jiang and Tongyang Zhao, University of Michigan, Ann Arbor, MI.
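A minimal Brownian Dynamics integrator, assuming a single bead in a harmonic trap rather than the bead-spring chains used in polymer work, shows the basic Euler-Maruyama update and the standard equilibrium check that the mean-square displacement settles near kT/k.

```python
import numpy as np

# Overdamped Langevin (BD) dynamics for one bead in a harmonic trap:
#   dx = -(k/zeta) x dt + sqrt(2 kT dt / zeta) * N(0, 1)
# where k is the spring constant, zeta the drag, kT the thermal energy.
rng = np.random.default_rng(7)
k, zeta, kT, dt = 1.0, 1.0, 1.0, 0.01
x = 0.0
samples = []
for step in range(200_000):
    x += -(k / zeta) * x * dt + np.sqrt(2 * kT * dt / zeta) * rng.standard_normal()
    if step > 20_000:                     # discard the equilibration transient
        samples.append(x * x)

msq = float(np.mean(samples))             # equipartition: should hover near kT/k = 1
```

The small systematic offset from kT/k at finite dt is the kind of discretization artifact (alongside the inertia and compressibility effects of SRD/DPD) that the talk quantifies.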

  9. Discontinuous Galerkin Methods for Turbulence Simulation

    NASA Technical Reports Server (NTRS)

    Collis, S. Scott

    2002-01-01

    A discontinuous Galerkin (DG) method is formulated, implemented, and tested for simulation of compressible turbulent flows. The method is applied to turbulent channel flow at low Reynolds number, where it is found to successfully predict low-order statistics with fewer degrees of freedom than traditional numerical methods. This reduction is achieved by utilizing local hp-refinement such that the computational grid is refined simultaneously in all three spatial coordinates with decreasing distance from the wall. Another advantage of DG is that Dirichlet boundary conditions can be enforced weakly through integrals of the numerical fluxes. Both for a model advection-diffusion problem and for turbulent channel flow, weak enforcement of wall boundaries is found to improve results at low resolution. Such weak boundary conditions may play a pivotal role in wall modeling for large-eddy simulation.
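The flux-based coupling that underlies weak boundary enforcement is visible already in the lowest-order (piecewise-constant) DG scheme, which reduces to first-order upwind finite volume: cells communicate only through the numerical flux. The sketch below advects a pulse on a periodic 1D grid and is illustrative, not the paper's hp-refined compressible solver.

```python
import numpy as np

# u_t + a u_x = 0 on a periodic unit interval, piecewise-constant DG
# elements (equivalently first-order upwind finite volume).
n, a = 200, 1.0
h = 1.0 / n
x = (np.arange(n) + 0.5) * h
u = np.exp(-100.0 * (x - 0.3) ** 2)       # smooth initial pulse
dt = 0.4 * h / a                           # CFL number 0.4
mass0 = u.sum() * h

for _ in range(500):
    # Flux difference F_{i+1/2} - F_{i-1/2} with the upwind flux F = a * u_left;
    # this is the only mechanism by which neighbouring elements interact.
    u = u - (dt / h) * a * (u - np.roll(u, 1))

mass = u.sum() * h                         # conserved exactly (telescoping fluxes)
```

Because the scheme is conservative and monotone, total mass is preserved to round-off and no new extrema appear; higher-order DG adds in-element polynomials while keeping this same flux coupling, which is also how Dirichlet data enters weakly at walls.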

  10. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1988-01-01

    Parametric cost estimating methods for space systems in the conceptual design phase are developed. The approach is to identify variables that drive cost such as weight, quantity, development culture, design inheritance, and time. The relationship between weight and cost is examined in detail. A theoretical model of cost is developed and tested statistically against a historical data base of major research and development programs. It is concluded that the technique presented is sound, but that it must be refined in order to produce acceptable cost estimates.
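A weight-driven cost-estimating relationship of the kind examined here is often fit as cost = a * weight^b by least squares in log-log space. The data points below are synthetic and the power-law form is an assumption for illustration, not the paper's fitted model.

```python
import math

# Synthetic (weight, cost) pairs: kg and $M, invented for illustration.
weights = [100.0, 250.0, 500.0, 1200.0, 3000.0]
costs = [12.0, 24.0, 41.0, 80.0, 165.0]

# Ordinary least squares on (log w, log c) recovers b (slope) and a (intercept).
lx = [math.log(w) for w in weights]
ly = [math.log(c) for c in costs]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(lx, ly)) / \
    sum((xi - mx) ** 2 for xi in lx)
a = math.exp(my - b * mx)

def estimate(weight):
    # Parametric CER: predicted cost for a conceptual design of given weight.
    return a * weight ** b
```

An exponent b below 1 (here roughly 0.77 for this synthetic set) reflects the economies of scale such historical regressions typically show; the paper's point is that weight alone must be supplemented by drivers like quantity, culture, and inheritance.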

  11. Analytical and numerical methods; advanced computer concepts

    SciTech Connect

    Lax, P D

    1991-03-01

    This past year, two projects were completed and a new one is under way. First, in joint work with R. Kohn, we developed a numerical algorithm to study the blowup of solutions to equations with certain similarity transformations. In the second project, the adaptive mesh refinement code of Berger and Colella for shock hydrodynamics calculations was parallelized, and numerical studies using two different shared-memory machines were carried out. My current effort is directed toward the development of Cartesian mesh methods to solve PDEs with complicated geometries. Most of the coming year will be spent on this project, which is joint work with Prof. Randy LeVeque at the University of Washington in Seattle.

  12. Advances in organometallic synthesis with mechanochemical methods.

    PubMed

    Rightmire, Nicholas R; Hanusa, Timothy P

    2016-02-14

    Solvent-based syntheses have long been normative in all areas of chemistry, although mechanochemical methods (specifically grinding and milling) have been used to good effect for decades in organic, and to a lesser but growing extent, inorganic coordination chemistry. Organometallic synthesis, in contrast, represents a relatively underdeveloped area for mechanochemical research, and the potential benefits are considerable. From access to new classes of unsolvated complexes, to control over stoichiometries that have not been observed in solution routes, mechanochemical (or 'M-chem') approaches have much to offer the synthetic chemist. It has already become clear that removing the solvent from an organometallic reaction can change reaction pathways considerably, so that prediction of the outcome is not always straightforward. This Perspective reviews recent developments in the field, and describes equipment that can be used in organometallic synthesis. Synthetic chemists are encouraged to add mechanochemical methods to their repertoire in the search for new and highly reactive metal complexes and novel types of organometallic transformations.

  13. Advancements in Research Synthesis Methods: From a Methodologically Inclusive Perspective

    ERIC Educational Resources Information Center

    Suri, Harsh; Clarke, David

    2009-01-01

    The dominant literature on research synthesis methods has positivist and neo-positivist origins. In recent years, the landscape of research synthesis methods has changed rapidly to become inclusive. This article highlights methodologically inclusive advancements in research synthesis methods. Attention is drawn to insights from interpretive,…

  14. Current methods and advances in bone densitometry

    NASA Technical Reports Server (NTRS)

    Guglielmi, G.; Gluer, C. C.; Majumdar, S.; Blunt, B. A.; Genant, H. K.

    1995-01-01

    Bone mass is the primary, although not the only, determinant of fracture. Over the past few years a number of noninvasive techniques have been developed to more sensitively quantitate bone mass. These include single and dual photon absorptiometry (SPA and DPA), single and dual X-ray absorptiometry (SXA and DXA) and quantitative computed tomography (QCT). While differing in anatomic sites measured and in their estimates of precision, accuracy, and fracture discrimination, all of these methods provide clinically useful measurements of skeletal status. It is the intent of this review to discuss the pros and cons of these techniques and to present the new applications of ultrasound (US) and magnetic resonance (MRI) in the detection and management of osteoporosis.

  15. Parallel node placement method by bubble simulation

    NASA Astrophysics Data System (ADS)

    Nie, Yufeng; Zhang, Weiwei; Qi, Nan; Li, Yiqiang

    2014-03-01

    An efficient Parallel Node Placement method by Bubble Simulation (PNPBS), employing METIS-based domain decomposition (DD) for an arbitrary number of processors, is introduced. In accordance with the desired nodal density and Newton's second law of motion, automatic generation of node sets by bubble simulation has been demonstrated in previous work. Since the interaction force between nodes is short-range, the positions and velocities of two distant nodes can be updated simultaneously and independently during the dynamic simulation; this inherent parallelism makes the method well suited to parallel computing. In the PNPBS method, the METIS-based DD scheme has been investigated for uniform and non-uniform node sets, and dynamic load balancing is obtained by evenly distributing work among the processors. Nodes near the common interface of two neighboring subdomains need no special treatment after the dynamic simulation: they have good geometrical properties and a smooth density distribution, which is desirable in the numerical solution of partial differential equations (PDEs). The results of numerical examples show that quasi-linear speedup in the number of processors and high efficiency are achieved.
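A 1D analogue of the bubble-simulation idea can be sketched with a few simplifying assumptions: uniform target density, fixed endpoints, and a damped explicit relaxation in place of the paper's full dynamics. Interior nodes move under short-range forces from their two neighbours until the spacing matches the target.

```python
import random

n = 11                        # nodes on [0, 1]; endpoints stay fixed
target = 1.0 / (n - 1)        # desired spacing for a uniform nodal density
rnd = random.Random(3)
pos = [0.0] + sorted(rnd.uniform(0.05, 0.95) for _ in range(n - 2)) + [1.0]

for _ in range(2000):
    new = pos[:]
    for i in range(1, n - 1):
        # Spring-like force from the two neighbours only (short range), so
        # distant nodes could be updated independently on different processors.
        left = pos[i] - pos[i - 1]
        right = pos[i + 1] - pos[i]
        force = (right - target) - (left - target)
        new[i] = pos[i] + 0.2 * force     # damped explicit step
    pos = new

spacings = [pos[i + 1] - pos[i] for i in range(n - 1)]
```

The update is a convex combination of each node and its neighbours, so node ordering is preserved while the spacing relaxes to the target, which is the "good geometrical properties" outcome the abstract describes; a non-uniform density would simply make the target spacing position-dependent.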

  16. Method and ethics in advancing jury research.

    PubMed

    Robertshaw, P

    1998-10-01

    In this article the contemporary problems of the jury and jury research are considered. This is timely, in view of the current Home Office Consultation Paper on the future of, and alternatives to, the jury in serious fraud trials, to which the author has submitted representations on its jury aspects. The research position is dominated by the prohibitions in the Contempt of Court Act 1981. The types of indirect research on jury deliberation which have been achieved within this stricture are outlined. In the USA, direct research of the jury is possible but, for historical reasons, it has been in television documentaries that direct observation of the deliberation process has been achieved. The first issue is discussed and the problems of inauthenticity, 'the observer effect', and of existential invalidity in 'mock' or 'shadow' juries are noted. Finally, the kinds of issues that could be addressed if licensed jury deliberation research was legalized, are proposed. It is also suggested that there are methods available to transcend the problems associated with American direct research. PMID:9808945

  17. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1994-01-01

    NASA is responsible for developing much of the nation's future space technology. Cost estimates for new programs are required early in the planning process so that decisions can be made accurately. Because of the long lead times required to develop space hardware, the cost estimates are frequently required 10 to 15 years before the program delivers hardware. In the conceptual phases of a program, the system design is usually only vaguely defined, and the technology involved is often state-of-the-art or beyond. These factors combine to make cost estimating for conceptual programs very challenging. This paper describes an effort to develop parametric cost estimating methods for space systems in the conceptual design phase. The approach is to identify variables that drive cost, such as weight, quantity, development culture, design inheritance, and time. The nature of the relationships between the driver variables and cost will be discussed. In particular, the relationship between weight and cost will be examined in detail. A theoretical model of cost will be developed and tested statistically against a historical database of major research and development projects.
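    The weight-cost relationship discussed above is commonly modeled as a log-linear cost-estimating relationship (CER). The sketch below, with entirely made-up project data, fits cost = a * weight^b by least squares in log space; it illustrates the general parametric approach only, not NASA's actual model or data.

```python
import numpy as np

def fit_weight_cer(weights, costs):
    # Fit cost = a * weight**b by ordinary least squares in log space.
    b, log_a = np.polyfit(np.log(weights), np.log(costs), 1)
    return np.exp(log_a), b

# Hypothetical historical projects: (dry weight [kg], development cost [$M]).
w = np.array([500.0, 1200.0, 2500.0, 5000.0])
c = np.array([80.0, 150.0, 260.0, 430.0])
a, b = fit_weight_cer(w, c)
estimate = a * 3000.0 ** b  # predicted cost of a notional 3000 kg system
```

    Additional drivers (quantity, design inheritance, development culture, time) would enter a real CER as further regression variables.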

  18. Advanced electromagnetic methods for aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Balanis, Constantine A.; Choi, Jachoon; El-Sharawy, El-Budawy; Hashemi-Yeganeh, Shahrokh; Birtcher, Craig R.

    1990-01-01

    High- and low-frequency methods were developed to analyze various radiation elements located on aerospace vehicles with combinations of conducting, nonconducting, and energy-absorbing surfaces and interfaces. The focus was on developing fundamental concepts, techniques, and algorithms which would remove some of the present limitations in predicting radiation characteristics of antennas on complex aerospace vehicles. In order to accomplish this, the following subjects were examined: (1) the development of techniques for rigorous analysis of surface discontinuities of metallic and nonmetallic surfaces using the equivalent surface impedance concept and Green's function; (2) the effects of anisotropic material on antenna radiation patterns through the use of an equivalent surface impedance concept which is incorporated into the existing numerical electromagnetics computer codes; and (3) the fundamental concepts of precipitation static (P-Static), such as formulations and analytical models. A computer code was used to model the P-Static process on a simple structure. Measurement techniques were also developed to characterize the electrical properties at microwave frequencies. Samples of typical materials used in airframes were tested and the results are included.

  19. Multigrid methods with applications to reservoir simulation

    SciTech Connect

    Xiao, Shengyou

    1994-05-01

    Multigrid methods are studied for solving elliptic partial differential equations. Focus is on parallel multigrid methods and their use for reservoir simulation. Multicolor Fourier analysis is used to analyze the behavior of standard multigrid methods for problems in one and two dimensions. The relation between multicolor and standard Fourier analysis is established. Multiple coarse grid methods for solving model problems in one and two dimensions are considered; at each coarse grid level, more than one coarse grid is used to improve convergence. For a given Dirichlet problem, a related extended problem is first constructed; a purification procedure can be used to obtain Moore-Penrose solutions of the singular systems encountered. For solving anisotropic equations, semicoarsening and line smoothing techniques are used with multiple coarse grid methods to improve convergence. Two-level convergence factors are estimated using multicolor Fourier analysis. In the case where each operator has the same stencil at each grid point on a level, exact multilevel convergence factors can be obtained. For solving partial differential equations with discontinuous coefficients, interpolation and restriction operators should include information about the equation coefficients. Matrix-dependent interpolation and restriction operators based on the Schur complement can be used in nonsymmetric cases. A semicoarsening multigrid solver with these operators is used in UTCOMP, a 3-D, multiphase, multicomponent, compositional reservoir simulator. The numerical experiments are carried out on different computing systems. Results indicate that the multigrid methods are promising.

  20. Twitter's tweet method modelling and simulation

    NASA Astrophysics Data System (ADS)

    Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.

    2015-02-01

    This paper proposes a modelling approach for Twitter marketing methods. The tools that Twitter provides are modelled and simulated using iThink in the context of a Twitter media-marketing agency. The paper leverages the system dynamics paradigm to model Twitter marketing tools and methods, implementing them in the iThink™ system, and uses the design science research methodology for the proof of concept of the models and modelling processes. The models were developed for a Twitter marketing agent/company, tested in real circumstances with real numbers, and finalized through a number of revisions and iterations of design, development, simulation, testing, and evaluation. The paper also addresses the methods best suited to organized, targeted promotion on the Twitter social media service. The validity and usefulness of these Twitter marketing method models for day-to-day decision making are authenticated by the management of the company. The approach implements system dynamics concepts of Twitter marketing methods modelling and produces models of various Twitter marketing situations. The Tweet method that Twitter provides can be adjusted, depending on the situation, in order to maximize the profit of the company/agent.

  1. Overview of the Consortium for the Advanced Simulation of Light Water Reactors (CASL)

    NASA Astrophysics Data System (ADS)

    Kulesza, Joel A.; Franceschini, Fausto; Evans, Thomas M.; Gehin, Jess C.

    2016-02-01

    The Consortium for Advanced Simulation of Light Water Reactors (CASL) was established in July 2010 for the purpose of providing advanced modeling and simulation solutions for commercial nuclear reactors. The primary goal is to provide coupled, higher-fidelity, usable modeling and simulation capabilities beyond those currently available. These are needed to address light water reactor (LWR) operational and safety performance-defining phenomena that cannot yet be fully modeled using a first-principles approach. In order to pursue these goals, CASL has participation from laboratory, academic, and industry partners. These partners are pursuing the solution of ten major "Challenge Problems" in order to advance the state-of-the-art in reactor design and analysis to permit power uprates, higher burnup, life extension, and increased safety. At present, the problems being addressed by CASL are primarily reactor physics-oriented; however, this paper is intended to introduce CASL to the reactor dosimetry community because of the importance of reactor physics modelling and nuclear data to define the source term for that community, and because of the applicability and extensibility of the transport methods being developed.

  2. Recent advances in the simulation of particle-laden flows

    NASA Astrophysics Data System (ADS)

    Harting, J.; Frijters, S.; Ramaioli, M.; Robinson, M.; Wolf, D. E.; Luding, S.

    2014-10-01

    A substantial number of algorithms exist for the simulation of moving particles suspended in fluids. However, finding the best method for a particular physical problem is often highly non-trivial and depends on the properties of both the particles and the fluid(s) involved. In this report, we provide a short overview of a number of existing simulation methods and present two state-of-the-art examples in more detail. In both cases, the particles are described using a Discrete Element Method (DEM). The DEM solver is usually coupled to a fluid solver, which can be classified as grid-based or mesh-free (one example of each is given). Fluid solvers feature different resolutions relative to the particle size and separation. First, a multicomponent lattice Boltzmann algorithm (mesh-based and with rather fine resolution) is presented to study the behavior of particle-stabilized fluid interfaces, and second, a Smoothed Particle Hydrodynamics implementation (mesh-free, meso-scale resolution, similar to the particle size) is introduced to highlight a new player in the field, which is expected to be particularly suited for flows including free surfaces.
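    As a concrete example of the DEM side of such couplings, the sketch below implements the standard linear spring-dashpot normal-contact model. This is a generic textbook scheme, not the report's solvers: the fluid coupling, tangential friction, and rotation are omitted, and the stiffness and damping values are placeholders.

```python
import numpy as np

def dem_step(x, v, r, m, dt, k=1e4, gamma=5.0):
    # One explicit DEM step: pairwise normal overlap generates a
    # repulsive spring force plus viscous damping (spring-dashpot).
    f = np.zeros_like(x)
    n = len(m)
    for i in range(n):
        for j in range(i + 1, n):
            d = x[i] - x[j]
            dist = np.linalg.norm(d)
            overlap = r[i] + r[j] - dist
            if overlap > 0:                       # particles in contact
                normal = d / dist
                vrel = np.dot(v[i] - v[j], normal)
                fn = (k * overlap - gamma * vrel) * normal
                f[i] += fn
                f[j] -= fn
    v = v + dt * f / m[:, None]                   # explicit Euler update
    return x + dt * v, v
```

    A coupled fluid solver would simply add drag or boundary forces to f before the velocity update.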

  3. An Advanced Leakage Scheme for Neutrino Treatment in Astrophysical Simulations

    NASA Astrophysics Data System (ADS)

    Perego, A.; Cabezón, R. M.; Käppeli, R.

    2016-04-01

    We present an Advanced Spectral Leakage (ASL) scheme to model neutrinos in the context of core-collapse supernovae (CCSNe) and compact binary mergers. Based on previous gray leakage schemes, the ASL scheme computes the neutrino cooling rates by interpolating local production and diffusion rates (relevant in optically thin and thick regimes, respectively) separately for discretized values of the neutrino energy. Neutrino trapped components are also modeled, based on equilibrium and timescale arguments. The better accuracy achieved by the spectral treatment allows a more reliable computation of neutrino heating rates in optically thin conditions. The scheme has been calibrated and tested against Boltzmann transport in the context of Newtonian spherically symmetric models of CCSNe. ASL shows very good qualitative and partial quantitative agreement for key quantities from collapse to a few hundred milliseconds after core bounce. We have proved the adaptability and flexibility of our ASL scheme by coupling it to an axisymmetric Eulerian code and to a three-dimensional smoothed particle hydrodynamics code to simulate core collapse. Therefore, the neutrino treatment presented here is ideal for large parameter-space explorations, parametric studies, high-resolution tests, code developments, and long-term modeling of asymmetric configurations, where more detailed neutrino treatments are not available or are currently computationally too expensive.

  4. Advanced flight deck/crew station simulator functional requirements

    NASA Technical Reports Server (NTRS)

    Wall, R. L.; Tate, J. L.; Moss, M. J.

    1980-01-01

    This report documents a study of flight deck/crew system research facility requirements for investigating issues involved in developing systems and procedures for interfacing transport aircraft with the air traffic control systems planned for 1985 to 2000. The crew system needs of NASA, the U.S. Air Force, and industry were investigated and reported. A matrix of these needs is included, as are recommended functional requirements and design criteria for simulation facilities in which to conduct this research. Methods of exploiting the commonality and similarity among facilities are identified, and plans for doing so in order to reduce implementation costs and allow efficient transfer of experiments from one facility to another are presented.

  5. [Objective surgery -- advanced robotic devices and simulators used for surgical skill assessment].

    PubMed

    Suhánszki, Norbert; Haidegger, Tamás

    2014-12-01

    Robotic assistance has become a leading trend in minimally invasive surgery, building on the global success of laparoscopic surgery. Manual laparoscopy requires advanced skills and capabilities acquired through a tedious learning process, while da Vinci-type surgical systems offer intuitive control and advanced ergonomics. Nevertheless, in either case, the key issue is to be able to assess the surgeons' skills and capabilities objectively. Robotic devices offer a radically new way to collect data during surgical procedures, opening the space for new forms of skill parameterization. This may be revolutionary in MIS training, enabling new and objective surgical curricula and examination methods. The article reviews currently developed skill assessment techniques for robotic surgery and simulators, thoroughly inspecting their validation procedures and utility. In the coming years, these methods will become the mainstream of Western surgical education.

  6. Large eddy simulation of unsteady wind farm behavior using advanced actuator disk models

    NASA Astrophysics Data System (ADS)

    Moens, Maud; Duponcheel, Matthieu; Winckelmans, Gregoire; Chatelain, Philippe

    2014-11-01

    The present project aims at improving the fidelity of unsteady wind-farm-scale simulations through an effort on the representation and modeling of the rotors. The chosen tool for the simulations is a fourth-order finite difference code, developed at the Université catholique de Louvain, which implements Large Eddy Simulation (LES) approaches. The wind turbines are modeled as advanced actuator disks: these disks are coupled with the Blade Element Momentum (BEM) method and also account for the turbine dynamics and controller. A special effort is made here to reproduce the specific wake behaviors. Wake decay and expansion are indeed initially governed by vortex instabilities, information that cannot be obtained from BEM calculations. We thus aim at achieving this by matching the large scales of the actuator disk flow to high-fidelity wake simulations produced using a Vortex Particle-Mesh method; this is obtained by adding a controlled excitation at the disk. We apply this tool to the investigation of atmospheric turbulence effects on the power production and on the wake behavior at the wind farm level. A turbulent velocity field is then used as the inflow boundary condition for the simulations. We gratefully acknowledge the support of GDF Suez for the fellowship of Mrs Maud Moens.

  7. Simulation of an advanced techniques of ion propulsion Rocket system

    NASA Astrophysics Data System (ADS)

    Bakkiyaraj, R.

    2016-07-01

    The ion propulsion rocket (IPR) system is expected to become popular with the development of deuterium and argon gas propellants and hexagonal-shaped magnetohydrodynamic (MHD) techniques, because power is generated indirectly from the ionization chamber; the design thrust is 1.2 N at 40 kW of electric power with high efficiency. The proposed work studies MHD power generation through the ionization of deuterium gas and the combination of two gaseous ion species (deuterium ions plus argon ions) at the acceleration stage. The IPR consists of three parts: (1) a hexagonal-shaped MHD-based power generator fed by the ionization chamber, (2) an ion accelerator, and (3) an exhaust nozzle. Initially, the required energy of around 1312 kJ/mol brings the deuterium gas to the ionization level. The ionized deuterium gas passes from the RF ionization chamber to the nozzle through the MHD generator with enhanced velocity; a voltage is then generated across the two pairs of electrodes in the MHD section, and thrust is produced by mixing the deuterium and argon ions at the acceleration stage. The simulation of the IPR system has been carried out in MATLAB. Comparison of the simulation results with theoretical and previous results shows that the proposed method achieves the design thrust value at 40 kW power for the simulated IPR system.

  8. Advanced Ablative Insulators and Methods of Making Them

    NASA Technical Reports Server (NTRS)

    Congdon, William M.

    2005-01-01

    Advanced ablative (more specifically, charring) materials that provide temporary protection against high temperatures, and advanced methods of designing and manufacturing insulators based on these materials, are undergoing development. These materials and methods were conceived in an effort to replace the traditional thermal-protection systems (TPSs) of re-entry spacecraft with robust, lightweight, better-performing TPSs that can be designed and manufactured more rapidly and at lower cost. These materials and methods could also be used to make improved TPSs for general aerospace, military, and industrial applications.

  9. Electromagnetic metamaterial simulations using a GPU-accelerated FDTD method

    NASA Astrophysics Data System (ADS)

    Seok, Myung-Su; Lee, Min-Gon; Yoo, SeokJae; Park, Q.-Han

    2015-12-01

    Metamaterials composed of artificial subwavelength structures exhibit extraordinary properties that cannot be found in nature. Designing artificial structures having exceptional properties plays a pivotal role in current metamaterial research. We present a new numerical simulation scheme for metamaterial research. The scheme is based on a graphic processing unit (GPU)-accelerated finite-difference time-domain (FDTD) method. The FDTD computation can be significantly accelerated when GPUs are used instead of only central processing units (CPUs). We explain how the fast FDTD simulation of large-scale metamaterials can be achieved through communication optimization in a heterogeneous CPU/GPU-based computer cluster. Our method also includes various advanced FDTD techniques: the non-uniform grid technique, the total-field/scattered-field (TFSF) technique, the auxiliary field technique for dispersive materials, the running discrete Fourier transform, and the complex structure setting. We demonstrate the power of our new FDTD simulation scheme by simulating the negative refraction of light in a coaxial waveguide metamaterial.
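    The leapfrog core that GPU-accelerated FDTD codes speed up is compact; below is a bare-bones 1-D CPU sketch in normalized units. It is illustrative only: the absorbing boundaries, dispersive-material auxiliary fields, TF/SF source, and non-uniform grid of the paper's scheme are all omitted, but the same array updates are what map onto GPU kernels.

```python
import numpy as np

def fdtd_1d(steps=300, n=400, src=50):
    # Minimal 1-D Yee leapfrog: E and H staggered in space and time.
    # Courant number 0.5 (the 0.5 factors) keeps the update stable.
    ez = np.zeros(n)
    hy = np.zeros(n - 1)
    for t in range(steps):
        hy += 0.5 * (ez[1:] - ez[:-1])               # update H from curl E
        ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])         # update E from curl H
        ez[src] += np.exp(-((t - 30) / 10) ** 2)     # soft Gaussian source
    return ez
```

    On a GPU, each of the two update lines becomes one kernel launch over the grid; in a multi-GPU cluster, only the one-cell halos at subdomain boundaries need communication per step, which is what the paper's communication optimization targets.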

  10. Strategy to Promote Active Learning of an Advanced Research Method

    ERIC Educational Resources Information Center

    McDermott, Hilary J.; Dovey, Terence M.

    2013-01-01

    Research methods courses aim to equip students with the knowledge and skills required for research yet seldom include practical aspects of assessment. This reflective practitioner report describes and evaluates an innovative approach to teaching and assessing advanced qualitative research methods to final-year psychology undergraduate students. An…

  11. Angioplasty simulation using ChainMail method

    NASA Astrophysics Data System (ADS)

    Le Fol, Tanguy; Acosta-Tamayo, Oscar; Lucas, Antoine; Haigron, Pascal

    2007-03-01

    Tackling transluminal angioplasty planning, the aim of our work is to bring patient-specific solutions to clinical problems. This work focuses on the realization of simple simulation scenarios that take into account the macroscopic behavior of stenoses. It means simulating geometrical and physical data from the inflation of a balloon while integrating data from tissue analysis and parameters from virtual tool-tissue interactions. In this context, three main behaviors have been identified: soft tissues crush completely under the effect of the balloon; calcified plaques do not admit any deformation but can move within deformable structures; and the blood vessel wall undergoes compression and tries to recover its original form. We investigated the use of ChainMail, which is based on elements linked to their neighbors by geometric constraints. Compared with time-consuming methods or low-realism ones, ChainMail methods provide a good compromise between physical and geometrical approaches. In this study, constraints are defined from pixel density in angio-CT images. The 2D method proposed in this paper first initializes the balloon in the blood vessel lumen. Then the balloon inflates, and the propagation of the movement gives an approximate reaction of the tissues. Finally, a minimal energy level is calculated to locally adjust element positions throughout an elastic relaxation stage. Preliminary experimental results obtained on 2D computed tomography (CT) images (100x100 pixels) show that the method is fast enough to handle a great number of linked elements. The simulation is able to provide real-time and realistic interactions, particularly for hard and soft plaques.
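    The ChainMail constraint step can be illustrated in one dimension: a moved element drags or pushes its neighbors only as far as needed to restore the allowed link lengths, and propagation stops as soon as a link is satisfied. This is a toy version with assumed bounds d_min/d_max; real 2D/3D ChainMail enforces separate horizontal, vertical, and shear limits per link, and the relaxation stage described above follows afterwards.

```python
def chainmail_1d(pos, moved, new_x, d_min=0.5, d_max=1.5):
    # Displace element `moved` to new_x, then propagate: each neighbor
    # shifts just enough to keep its link length inside [d_min, d_max].
    pos = list(pos)
    pos[moved] = new_x
    for i in range(moved + 1, len(pos)):      # propagate rightward
        gap = pos[i] - pos[i - 1]
        if gap > d_max:
            pos[i] = pos[i - 1] + d_max
        elif gap < d_min:
            pos[i] = pos[i - 1] + d_min
        else:
            break                             # link satisfied: stop early
    for i in range(moved - 1, -1, -1):        # propagate leftward
        gap = pos[i + 1] - pos[i]
        if gap > d_max:
            pos[i] = pos[i + 1] - d_max
        elif gap < d_min:
            pos[i] = pos[i + 1] - d_min
        else:
            break
    return pos
```

    The early stop is what makes ChainMail fast: a local deformation touches only the elements whose constraints are actually violated.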

  12. Advanced Simulation Capability for Environmental Management (ASCEM) Phase II Demonstration

    SciTech Connect

    Freshley, M.; Hubbard, S.; Flach, G.; Freedman, V.; Agarwal, D.; Andre, B.; Bott, Y.; Chen, X.; Davis, J.; Faybishenko, B.; Gorton, I.; Murray, C.; Moulton, D.; Meyer, J.; Rockhold, M.; Shoshani, A.; Steefel, C.; Wainwright, H.; Waichler, S.

    2012-09-28

    In 2009, the National Academies of Science (NAS) reviewed and validated the U.S. Department of Energy Office of Environmental Management (EM) Technology Program in its publication, Advice on the Department of Energy’s Cleanup Technology Roadmap: Gaps and Bridges. The NAS report outlined prioritization needs for the Groundwater and Soil Remediation Roadmap, concluded that contaminant behavior in the subsurface is poorly understood, and recommended further research in this area as a high priority. To address this NAS concern, the EM Office of Site Restoration began supporting the development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific approach that uses an integration of toolsets for understanding and predicting contaminant fate and transport in natural and engineered systems. The ASCEM modeling toolset is modular and open source. It is divided into three thrust areas: Multi-Process High Performance Computing (HPC), Platform and Integrated Toolsets, and Site Applications. The ASCEM toolsets will facilitate integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. During fiscal year 2012, the ASCEM project continued to make significant progress in capabilities development. Capability development occurred in both the Platform and Integrated Toolsets and Multi-Process HPC Simulator areas. The new Platform and Integrated Toolsets capabilities provide the user an interface and the tools necessary for end-to-end model development that includes conceptual model definition, data management for model input, model calibration and uncertainty analysis, and model output processing including visualization. The new HPC Simulator capabilities target increased functionality of process model representations, toolsets for interaction with the Platform, and model confidence testing and verification for

  13. A Primer In Advanced Fatigue Life Prediction Methods

    NASA Technical Reports Server (NTRS)

    Halford, Gary R.

    2000-01-01

    Metal fatigue has plagued structural components for centuries, and it remains a critical durability issue in today's aerospace hardware. This is true despite vastly improved and advanced materials, increased mechanistic understanding, and development of accurate structural analysis and advanced fatigue life prediction tools. Each advance is quickly taken advantage of to produce safer, more reliable, more cost-effective, and better performing products. In other words, as the envelope is expanded, components are then designed to operate just as close to the newly expanded envelope as they were to the initial one. The problem is perennial. The economic importance of addressing structural durability issues early in the design process is emphasized. Tradeoffs with performance, cost, and legislated restrictions are pointed out. Several aspects of structural durability of advanced systems, advanced materials, and advanced fatigue life prediction methods are presented. Specific items include the basic elements of durability analysis, conventional designs, barriers to be overcome for advanced systems, high-temperature life prediction for both creep-fatigue and thermomechanical fatigue, mean stress effects, multiaxial stress-strain states, and cumulative fatigue damage accumulation assessment.

  14. A discrete event method for wave simulation

    SciTech Connect

    Nutaro, James J

    2006-01-01

    This article describes a discrete event interpretation of the finite difference time domain (FDTD) and digital wave guide network (DWN) wave simulation schemes. The discrete event method is formalized using the discrete event system specification (DEVS). The scheme is shown to have errors that are proportional to the resolution of the spatial grid. A numerical example demonstrates the relative efficiency of the scheme with respect to FDTD and DWN schemes. The potential for the discrete event scheme to reduce numerical dispersion and attenuation errors is discussed.
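    For context, the DWN half of the pair is easy to sketch: a 1-D digital waveguide represents the wave as two counter-propagating rails of samples that shift one cell per step and reflect (with inversion) at fixed ends. This generic illustration omits the scattering junctions of a full DWN and the DEVS event scheduling that the article formalizes.

```python
import numpy as np

def waveguide_1d(steps, n, excite=10):
    # 1-D digital waveguide: the physical wave is the sum of a
    # right-going and a left-going rail, each a pure delay line.
    right = np.zeros(n)
    left = np.zeros(n)
    right[excite] = 1.0              # initial right-traveling pulse
    for _ in range(steps):
        r_end, l_end = right[-1], left[0]
        right[1:] = right[:-1]       # shift rails one cell per step
        left[:-1] = left[1:]
        right[0] = -l_end            # inverting reflections at fixed ends
        left[-1] = -r_end
    return right + left              # physical displacement
```

    Because each step is an exact shift, the scheme is dispersion-free along the rails, which is the property the discrete event formulation seeks to retain.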

  15. Pantograph catenary dynamic optimisation based on advanced multibody and finite element co-simulation tools

    NASA Astrophysics Data System (ADS)

    Massat, Jean-Pierre; Laurent, Christophe; Bianchi, Jean-Philippe; Balmès, Etienne

    2014-05-01

    This paper presents recent developments undertaken by the SNCF Innovation & Research Department on numerical modelling of pantograph-catenary interaction. It aims at describing an efficient co-simulation process between finite element (FE) and multibody (MB) modelling methods. FE catenary models are coupled with a fully flexible MB representation of the pantograph with pneumatic actuation. These advanced functionalities allow new kinds of numerical analyses, such as dynamic improvements based on innovative pneumatic suspensions or assessment of crash risks in crossing areas, demonstrating the powerful capabilities of this computing approach.

  16. Advances in the simulation and automated measurement of well-sorted granular material: 1. Simulation

    USGS Publications Warehouse

    Daniel Buscombe,; Rubin, David M.

    2012-01-01

    In this, the first of a pair of papers which address the simulation and automated measurement of well-sorted natural granular material, a method is presented for simulation of two-phase (solid, void) assemblages of discrete non-cohesive particles. The purpose is to have a flexible, yet computationally and theoretically simple, suite of tools with well constrained and well known statistical properties, in order to simulate realistic granular material as a discrete element model with realistic size and shape distributions, for a variety of purposes. The stochastic modeling framework is based on three-dimensional tessellations with variable degrees of order in particle-packing arrangement. Examples of sediments with a variety of particle size distributions and spatial variability in grain size are presented. The relationship between particle shape and porosity conforms to published data. The immediate application is testing new algorithms for automated measurements of particle properties (mean and standard deviation of particle sizes, and apparent porosity) from images of natural sediment, as detailed in the second of this pair of papers. The model could also prove useful for simulating specific depositional structures found in natural sediments, the result of physical alterations to packing and grain fabric, using discrete particle flow models. While the principal focus here is on naturally occurring sediment and sedimentary rock, the methods presented might also be useful for simulations of similar granular or cellular material encountered in engineering, industrial and life sciences.
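    One simple instance of the packing idea is a jittered lattice: particles sit on a regular tessellation whose nodes are perturbed by a disorder parameter, with radii drawn from a lognormal size distribution. This is a simplified 2-D stand-in for the paper's three-dimensional tessellation framework; the parameter names (disorder, gsd) are invented for illustration.

```python
import numpy as np

def jittered_packing(nx, ny, disorder=0.3, gsd=0.2, seed=0):
    # Particle centers on an nx-by-ny unit lattice, perturbed by
    # `disorder` (0 = crystalline, larger = irregular packing);
    # radii follow a lognormal distribution with spread `gsd`
    # (small gsd = well sorted, large gsd = poorly sorted).
    rng = np.random.default_rng(seed)
    ix, iy = np.meshgrid(np.arange(nx), np.arange(ny))
    centers = np.stack([ix.ravel(), iy.ravel()], axis=1).astype(float)
    centers += disorder * rng.uniform(-0.5, 0.5, centers.shape)
    radii = 0.5 * rng.lognormal(mean=0.0, sigma=gsd, size=len(centers))
    return centers, radii
```

    Sweeping the two knobs generates synthetic "sediments" spanning ordered to disordered packings and well-sorted to poorly sorted size distributions, which is the kind of controlled test material the companion measurement paper needs.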

  17. Advanced stress analysis methods applicable to turbine engine structures

    NASA Technical Reports Server (NTRS)

    Pian, T. H. H.

    1985-01-01

    Advanced stress analysis methods applicable to turbine engine structures are investigated. Constructions of special elements which contain traction-free circular boundaries are examined. New versions of the mixed variational principle and of hybrid stress elements are formulated. A method is established for the suppression of kinematic deformation modes. SemiLoof plate and shell elements are constructed by the assumed-stress hybrid method. An elastic-plastic analysis is conducted using viscoplasticity theory with the mechanical subelement model.

  18. Advanced surface paneling method for subsonic and supersonic flow

    NASA Technical Reports Server (NTRS)

    Erickson, L. L.; Johnson, F. T.; Ehlers, F. E.

    1976-01-01

    Numerical results illustrating the capabilities of an advanced aerodynamic surface paneling method are presented. The method is applicable to both subsonic and supersonic flow, as represented by linearized potential flow theory. The method is based on linearly varying sources and quadratically varying doublets which are distributed over flat or curved panels. These panels are applied to the true surface geometry of arbitrarily shaped three dimensional aerodynamic configurations.

  19. Apparatus for and method of simulating turbulence

    DOEpatents

    Dimas, Athanassios; Lottati, Isaac; Bernard, Peter; Collins, James; Geiger, James C.

    2003-01-01

    In accordance with a preferred embodiment of the invention, a novel apparatus for and method of simulating physical processes such as fluid flow is provided. Fluid flow near a boundary or wall of an object is represented by a collection of vortex sheet layers. The layers are composed of a grid or mesh of one or more geometrically shaped space filling elements. In the preferred embodiment, the space filling elements take on a triangular shape. An Eulerian approach is employed for the vortex sheets, where a finite-volume scheme is used on the prismatic grid formed by the vortex sheet layers. A Lagrangian approach is employed for the vortical elements (e.g., vortex tubes or filaments) found in the remainder of the flow domain. To reduce the computational time, a hairpin removal scheme is employed to reduce the number of vortex filaments, and a Fast Multipole Method (FMM), preferably implemented using parallel processing techniques, reduces the computation of the velocity field.

  20. Advanced digital methods for solid propellant burning rate determination

    NASA Astrophysics Data System (ADS)

    Jones, Daniel A.

    The work presented here is a study of a digital method for determining the combustion bomb burning rate of a fuel-rich gas generator propellant sample using the ultrasonic pulse-echo technique. The advanced digital method, which places user defined limits on the search for the ultrasonic echo from the burning surface, is computationally faster than the previous cross correlation method, and is able to analyze data for this class of propellant that the previous cross correlation data reduction method could not. For the conditions investigated, the best fit burning rate law at 800 psi from the ultrasonic technique and advanced cross correlation method is within 3 percent of an independent analysis of the same data, and is within 5 percent of the best fit burning rate law found from parallel research of the same propellant in a motor configuration.
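    The windowed echo search can be sketched directly: cross-correlate a reference echo template against the received trace, but only within user-defined sample limits, then convert successive echo delays to web thickness and thence to burning rate. The function names, the linear thickness conversion, and the data below are illustrative assumptions, not the actual analysis code of this work.

```python
import numpy as np

def echo_delay(signal, template, lo, hi):
    # Cross-correlation search restricted to lags in [lo, hi): faster
    # than a full-trace search and rejects spurious out-of-window echoes.
    best, best_score = lo, -np.inf
    for lag in range(lo, hi):
        score = np.dot(signal[lag:lag + len(template)], template)
        if score > best_score:
            best, best_score = lag, score
    return best

def burning_rate(delays, c, fs, dt_between_shots):
    # Two-way travel time -> web thickness; the thinning rate between
    # successive pulse-echo shots is the instantaneous burning rate.
    thickness = c * np.asarray(delays, dtype=float) / (2.0 * fs)
    return -np.diff(thickness) / dt_between_shots
```

    Narrowing [lo, hi) around the expected surface position is the key practical gain: it is what lets the windowed method analyze fuel-rich propellant data whose weak or cluttered echoes defeat an unconstrained correlation search.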

  1. ADVANCED SEISMIC BASE ISOLATION METHODS FOR MODULAR REACTORS

    SciTech Connect

    E. Blanford; E. Keldrauk; M. Laufer; M. Mieler; J. Wei; B. Stojadinovic; P.F. Peterson

    2010-09-20

    Advanced technologies for structural design and construction have the potential for major impact not only on nuclear power plant construction time and cost, but also on the design process and on the safety, security and reliability of next generation of nuclear power plants. In future Generation IV (Gen IV) reactors, structural and seismic design should be much more closely integrated with the design of nuclear and industrial safety systems, physical security systems, and international safeguards systems. Overall reliability will be increased, through the use of replaceable and modular equipment, and through design to facilitate on-line monitoring, in-service inspection, maintenance, replacement, and decommissioning. Economics will also receive high design priority, through integrated engineering efforts to optimize building arrangements to minimize building heights and footprints. Finally, the licensing approach will be transformed by becoming increasingly performance based and technology neutral, using best-estimate simulation methods with uncertainty and margin quantification. In this context, two structural engineering technologies, seismic base isolation and modular steel-plate/concrete composite structural walls, are investigated. These technologies have major potential to (1) enable standardized reactor designs to be deployed across a wider range of sites, (2) reduce the impact of uncertainties related to site-specific seismic conditions, and (3) alleviate reactor equipment qualification requirements. For Gen IV reactors the potential for deliberate crashes of large aircraft must also be considered in design. This report concludes that base-isolated structures should be decoupled from the reactor external event exclusion system. As an example, a scoping analysis is performed for a rectangular, decoupled external event shell designed as a grillage. 
This report also reviews modular construction technology, particularly steel-plate/concrete construction using

  2. Recent advances in optical measurement methods in physics and chemistry

    SciTech Connect

    Gerardo, J.B.

    1985-01-01

Progress being made in the development of new scientific measurement tools based on optics, and the scientific advances made possible by these new tools, is impressive. In some instances, new optical-based measurement methods have made new scientific studies possible, while in other instances they have offered an improved method for performing these studies, e.g., better signal-to-noise ratio, increased data acquisition rate, remote analysis, reduced perturbation to the physical or chemical system being studied, etc. Many of these advances were made possible by advances in laser technology - spectral purity, spectral brightness, tunability, ultrashort pulse width, amplitude stability, etc. - while others were made possible by improved optical components - single-mode fibers, modulators, detectors, wavelength multiplexers, etc. Attention is limited to just a few of many such accomplishments made recently at Sandia. 17 references, 16 figures.

  3. Characterization and Simulation of the Thermoacoustic Instability Behavior of an Advanced, Low Emissions Combustor Prototype

    NASA Technical Reports Server (NTRS)

    DeLaat, John C.; Paxson, Daniel E.

    2008-01-01

    Extensive research is being done toward the development of ultra-low-emissions combustors for aircraft gas turbine engines. However, these combustors have an increased susceptibility to thermoacoustic instabilities. This type of instability was recently observed in an advanced, low emissions combustor prototype installed in a NASA Glenn Research Center test stand. The instability produces pressure oscillations that grow with increasing fuel/air ratio, preventing full power operation. The instability behavior makes the combustor a potentially useful test bed for research into active control methods for combustion instability suppression. The instability behavior was characterized by operating the combustor at various pressures, temperatures, and fuel and air flows representative of operation within an aircraft gas turbine engine. Trends in instability behavior versus operating condition have been identified and documented, and possible explanations for the trends provided. A simulation developed at NASA Glenn captures the observed instability behavior. The physics-based simulation includes the relevant physical features of the combustor and test rig, employs a Sectored 1-D approach, includes simplified reaction equations, and provides time-accurate results. A computationally efficient method is used for area transitions, which decreases run times and allows the simulation to be used for parametric studies, including control method investigations. Simulation results show that the simulation exhibits a self-starting, self-sustained combustion instability and also replicates the experimentally observed instability trends versus operating condition. Future plans are to use the simulation to investigate active control strategies to suppress combustion instabilities and then to experimentally demonstrate active instability suppression with the low emissions combustor prototype, enabling full power, stable operation.
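The self-starting, self-sustained oscillation described in this record can be illustrated in a highly reduced form with a Van der Pol-type oscillator: a tiny perturbation grows exponentially and then saturates into a limit cycle, qualitatively mirroring the growth and saturation of combustor pressure oscillations. This is an illustrative sketch only, not the Sectored 1-D simulation; all parameters are arbitrary.

```python
import numpy as np

def van_der_pol(mu=0.5, dt=1e-3, steps=200_000, x0=1e-3):
    """Integrate x'' - mu*(1 - x^2)*x' + x = 0 from a tiny perturbation
    with semi-implicit Euler: linear growth near x = 0, nonlinear saturation."""
    x, v = x0, 0.0
    xs = np.empty(steps)
    for i in range(steps):
        a = mu * (1.0 - x * x) * v - x
        v += dt * a
        x += dt * v
        xs[i] = x
    return xs

xs = van_der_pol()
early = np.abs(xs[:5_000]).max()    # still tiny: instability just starting
late = np.abs(xs[-20_000:]).max()   # saturated limit-cycle amplitude (~2)
```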

  4. Advancement of DOE's EnergyPlus Building Energy Simulation Program

    SciTech Connect

    Gu, Lixing; Shirey, Don; Raustad, Richard; Nigusse, Bereket; Sharma, Chandan; Lawrie, Linda; Strand, Rick; Pedersen, Curt; Fisher, Dan; Lee, Edwin; Witte, Mike; Glazer, Jason; Barnaby, Chip

    2011-09-30

    EnergyPlus™ is a new-generation computer software analysis tool that has been developed, tested, and commercialized to support DOE's Building Technologies (BT) Program in terms of whole-building, component, and systems R&D (http://www.energyplus.gov). It is also being used to support evaluation and decision making of zero energy building (ZEB) energy efficiency and supply technologies during new building design and existing building retrofits. The 5-year project was managed by the National Energy Technology Laboratory and was divided into 5 budget periods between 2006 and 2011. During the project period, 11 versions of EnergyPlus were released. This report summarizes work performed by an EnergyPlus development team led by the University of Central Florida's Florida Solar Energy Center (UCF/FSEC). The team members consist of DHL Consulting, C. O. Pedersen Associates, University of Illinois at Urbana-Champaign, Oklahoma State University, GARD Analytics, Inc., and WrightSoft Corporation. The project tasks involved new feature development, testing and validation, user support and training, and general EnergyPlus support. The team developed 146 new features during the 5-year period to advance the EnergyPlus capabilities. Annual contributions of new features are 7 in budget period 1, 19 in period 2, 36 in period 3, 41 in period 4, and 43 in period 5, respectively. The testing and validation task focused on running test suites and publishing reports, developing new IEA test suite cases, testing and validating new source code, addressing change requests, and creating and testing installation packages. The user support and training task provided support for users and interface developers, and organized and taught workshops. The general support task involved upgrading StarTeam (team sharing) software and updating existing utility software. The project met the DOE objectives and completed all tasks successfully. 
Although the EnergyPlus software was enhanced significantly

  5. Simulation for Supporting Scale-Up of a Fluidized Bed Reactor for Advanced Water Oxidation

    PubMed Central

    Abdul Raman, Abdul Aziz; Daud, Wan Mohd Ashri Wan

    2014-01-01

    Simulation of a fluidized bed reactor (FBR) was accomplished for treating wastewater using the Fenton reaction, which is an advanced oxidation process (AOP). The simulation was performed to determine characteristics of FBR performance, the concentration profile of the contaminants, and various prominent hydrodynamic properties (e.g., Reynolds number, velocity, and pressure) in the reactor. Simulation was implemented for a 2.8 L working volume using hydrodynamic correlations, the continuity equation, and simplified kinetic information for phenol degradation as a model. The simulation shows that, by using Fe3+ and Fe2+ mixtures as catalyst, TOC degradation up to 45% was achieved for a contaminant range of 40–90 mg/L within 60 min. The concentration profiles and hydrodynamic characteristics were also generated. A subsequent scale-up study was also conducted using the similitude method. The analysis shows that up to 10 L working volume, the models developed are applicable. The study proves that, using appropriate modeling and simulation, data can be predicted for designing and operating FBR for wastewater treatment. PMID:25309949
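The simplified degradation kinetics referred to in this record can be sketched with an assumed pseudo-first-order rate law; the rate constant below is back-calculated from the reported ~45% TOC removal at 60 min and is illustrative only, not taken from the paper.

```python
import math

def toc_remaining(c0_mg_l, k_per_min, t_min):
    """Pseudo-first-order decay: C(t) = C0 * exp(-k * t)."""
    return c0_mg_l * math.exp(-k_per_min * t_min)

# rate constant chosen so that ~45% of TOC is degraded within 60 min
k = -math.log(0.55) / 60.0          # ~0.01 per minute
c60 = toc_remaining(90.0, k, 60.0)  # start at the upper contaminant bound
removal = 1.0 - c60 / 90.0          # fractional TOC removal at 60 min
```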

  6. Advanced Simulation in Undergraduate Pilot Training: Systems Integration. Final Report (February 1972-March 1975).

    ERIC Educational Resources Information Center

    Larson, D. F.; Terry, C.

    The Advanced Simulator for Undergraduate Pilot Training (ASUPT) was designed to investigate the role of simulation in the future Undergraduate Pilot Training (UPT) program. The problem addressed in this report was one of integrating two unlike components into one synchronized system. These two components were the Basic T-37 Simulators and their…

  7. 7 CFR 27.92 - Method of payment; advance deposit.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Method of payment; advance deposit. 27.92 Section 27.92 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY STANDARDS AND STANDARD...

  8. METHODS ADVANCEMENT FOR MILK ANALYSIS: THE MAMA STUDY

    EPA Science Inventory

    The Methods Advancement for Milk Analysis (MAMA) study was designed by US EPA and CDC investigators to provide data to support the technological and study design needs of the proposed National Children's Study (NCS). The NCS is a multi-Agency-sponsored study, authorized under the...

  9. Advanced Simulation of Coupled Earthquake and Tsunami Events (ASCETE) - Simulation Techniques for Realistic Tsunami Process Studies

    NASA Astrophysics Data System (ADS)

    Behrens, Joern; Bader, Michael; Breuer, Alexander N.; van Dinther, Ylona; Gabriel, Alice-A.; Galvez Barron, Percy E.; Rahnema, Kaveh; Vater, Stefan; Wollherr, Stephanie

    2015-04-01

    At the end of phase 1 of the ASCETE project, a simulation framework for coupled physics-based rupture generation with tsunami propagation and inundation is available. Adaptive mesh tsunami propagation and inundation by discontinuous Galerkin Runge-Kutta methods allows for accurate and conservative inundation schemes. Combined with a tree-based refinement strategy to highly optimize the code for high-performance computing architectures, a modeling tool for high fidelity tsunami simulations has been constructed. Validation results demonstrate the capacity of the software. Rupture simulation is performed by an unstructured tetrahedral discontinuous Galerkin ADER discretization, which allows for accurate representation of complex geometries. The implemented code was nominated for and selected as a finalist for the Gordon Bell award in high-performance computing. Highly realistic rupture events can be simulated with this modeling tool. The coupling of rupture-induced wave activity and displacement with hydrodynamic equations still poses a major problem due to diverging time and spatial scales. Some insight from the ASCETE set-up could be gained, and the presentation will focus on the coupled behavior of the simulation system. Finally, an outlook to phase 2 of the ASCETE project will be given, in which further development of detailed physical processes as well as near-realistic scenario computations are planned. ASCETE is funded by the Volkswagen Foundation.
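The conservation property emphasized for the inundation scheme can be illustrated with a scheme much simpler than discontinuous Galerkin: a first-order finite-volume upwind update, in which each cell total changes only through fluxes shared with its neighbors, so the total "mass" is conserved to round-off. This is a generic sketch with arbitrary parameters, not the ASCETE code.

```python
import numpy as np

def advect_conservative(q, u=1.0, dx=0.01, dt=0.005, steps=100):
    """First-order upwind finite-volume update for q_t + (u q)_x = 0 on a
    periodic domain. Each interface flux is added to one cell and subtracted
    from its neighbor, so sum(q)*dx is conserved exactly (up to round-off)."""
    for _ in range(steps):
        flux = u * q                          # upwind flux for u > 0
        q = q - dt / dx * (flux - np.roll(flux, 1))
    return q

x = np.arange(0.0, 1.0, 0.01)
q0 = np.exp(-200.0 * (x - 0.3) ** 2)          # Gaussian pulse at x = 0.3
q1 = advect_conservative(q0.copy())           # advected by u*t = 0.5
mass_err = abs(q1.sum() - q0.sum())           # conservation check
```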

  10. Development of an advanced actuator disk model for Large-Eddy Simulation of wind farms

    NASA Astrophysics Data System (ADS)

    Moens, Maud; Duponcheel, Matthieu; Winckelmans, Gregoire; Chatelain, Philippe

    2015-11-01

    This work aims at improving the fidelity of the wind turbine modelling for Large-Eddy Simulation (LES) of wind farms, in order to accurately predict the loads, the production, and the wake dynamics. In those simulations, the wind turbines are accounted for through actuator disks, i.e., a body-force term acting over the regularised disk swept by the rotor. These forces are computed using the Blade Element theory to estimate the normal and tangential components (based on the local simulated flow and the blade characteristics). The local velocities are modified using the Glauert tip-loss factor in order to account for the finite number of blades; the computation of this correction is here improved thanks to a local estimation of the effective upstream velocity at every point of the disk. These advanced actuator disks are implemented in a 4th order finite difference LES solver and are compared to a classical Blade Element Momentum method and to high fidelity wake simulations performed using a Vortex Particle-Mesh method in uniform and turbulent flows.
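The tip-loss correction mentioned in this record has a standard closed form (Prandtl's factor, as used in Glauert's blade-element momentum theory). The sketch below shows only that baseline formula with illustrative rotor dimensions; the paper's improved local estimation of the effective upstream velocity is not reproduced here.

```python
import math

def tip_loss_factor(r, R, B, phi):
    """Prandtl tip-loss factor F = (2/pi) * acos(exp(-f)), with
    f = (B/2) * (R - r) / (r * sin(phi)); r = local radius, R = tip radius,
    B = number of blades, phi = local inflow angle (rad)."""
    f = 0.5 * B * (R - r) / (r * math.sin(phi))
    return (2.0 / math.pi) * math.acos(math.exp(-f))

# F -> 1 inboard (no correction), F -> 0 approaching the blade tip
F_mid = tip_loss_factor(r=25.0, R=50.0, B=3, phi=math.radians(8.0))
F_tip = tip_loss_factor(r=49.5, R=50.0, B=3, phi=math.radians(8.0))
```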

  11. Advanced propulsion for LEO-Moon transport. 1: A method for evaluating advanced propulsion performance

    NASA Technical Reports Server (NTRS)

    Stern, Martin O.

    1992-01-01

    This report describes a study to evaluate the benefits of advanced propulsion technologies for transporting materials between low Earth orbit and the Moon. A relatively conventional reference transportation system, and several other systems, each of which includes one advanced technology component, are compared in terms of how well they perform a chosen mission objective. The evaluation method is based on a pairwise life-cycle cost comparison of each of the advanced systems with the reference system. Somewhat novel and economically important features of the procedure are the inclusion not only of mass payback ratios based on Earth launch costs, but also of repair and capital acquisition costs, and of adjustments in the latter to reflect the technological maturity of the advanced technologies. The required input information is developed by panels of experts. The overall scope and approach of the study are presented in the introduction. The bulk of the paper describes the evaluation method; the reference system and an advanced transportation system, including a spinning tether in an eccentric Earth orbit, are used to illustrate it.

  12. Analytic Methods for Simulated Light Transport

    NASA Astrophysics Data System (ADS)

    Arvo, James Richard

    1995-01-01

    This thesis presents new mathematical and computational tools for the simulation of light transport in realistic image synthesis. New algorithms are presented for exact computation of direct illumination effects related to light emission, shadowing, and first-order scattering from surfaces. New theoretical results are presented for the analysis of global illumination algorithms, which account for all interreflections of light among surfaces of an environment. First, a closed-form expression is derived for the irradiance Jacobian, which is the derivative of a vector field representing radiant energy flux. The expression holds for diffuse polygonal scenes and correctly accounts for shadowing, or partial occlusion. Three applications of the irradiance Jacobian are demonstrated: locating local irradiance extrema, direct computation of isolux contours, and surface mesh generation. Next, the concept of irradiance is generalized to tensors of arbitrary order. A recurrence relation for irradiance tensors is derived that extends a widely used formula published by Lambert in 1760. Several formulas with applications in computer graphics are derived from this recurrence relation and are independently verified using a new Monte Carlo method for sampling spherical triangles. The formulas extend the range of non-diffuse effects that can be computed in closed form to include illumination from directional area light sources and reflections from and transmissions through glossy surfaces. Finally, new analysis for global illumination is presented, which includes both direct illumination and indirect illumination due to multiple interreflections of light. A novel operator equation is proposed that clarifies existing deterministic algorithms for simulating global illumination and facilitates error analysis. Basic properties of the operators and solutions are identified which are not evident from previous formulations. 
A taxonomy of errors that arise in simulating global illumination is
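The thesis verifies its closed-form results against Monte Carlo estimates. The sketch below is a generic check in that spirit: uniform sampling of a triangular diffuse emitter to estimate irradiance at a receiver point. It is not Arvo's spherical-triangle sampler; the estimator and all scene values are illustrative assumptions.

```python
import math
import random

def sample_triangle(a, b, c):
    """Uniform point on triangle (a, b, c) via the square-root warp."""
    r1, r2 = random.random(), random.random()
    s = math.sqrt(r1)
    wa, wb, wc = 1.0 - s, s * (1.0 - r2), s * r2
    return tuple(wa * a[i] + wb * b[i] + wc * c[i] for i in range(3))

def irradiance_mc(p, n, tri, radiance, samples=20_000):
    """Estimate E = A * mean(L * cos(theta) * cos(theta') / r^2) by
    uniform area sampling of a diffuse triangular emitter."""
    a, b, c = tri
    e1 = [b[i] - a[i] for i in range(3)]
    e2 = [c[i] - a[i] for i in range(3)]
    cx = (e1[1] * e2[2] - e1[2] * e2[1],
          e1[2] * e2[0] - e1[0] * e2[2],
          e1[0] * e2[1] - e1[1] * e2[0])
    area = 0.5 * math.sqrt(sum(x * x for x in cx))
    ne = [x / (2.0 * area) for x in cx]             # emitter unit normal
    total = 0.0
    for _ in range(samples):
        q = sample_triangle(a, b, c)
        d = [q[i] - p[i] for i in range(3)]
        r2 = sum(x * x for x in d)
        r = math.sqrt(r2)
        cos_r = max(0.0, sum(n[i] * d[i] for i in range(3)) / r)  # receiver
        cos_e = abs(sum(ne[i] * d[i] for i in range(3)) / r)      # emitter
        total += radiance * cos_r * cos_e / r2
    return area * total / samples

# sanity check: a small, distant triangle acts like a point source,
# E ~ L * A / r^2 = 1 * 0.005 / 100^2 = 5e-7
E = irradiance_mc(p=(0.0, 0.0, 0.0), n=(0.0, 0.0, 1.0),
                  tri=((0.0, 0.0, 100.0), (0.1, 0.0, 100.0), (0.0, 0.1, 100.0)),
                  radiance=1.0)
```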

  13. Comparison of advanced distillation control methods. First annual report

    SciTech Connect

    1996-11-01

    A detailed dynamic simulator of a propylene/propane (C3) splitter, which was bench-marked against industrial data, has been used to compare dual composition control performance for a diagonal PI controller and several advanced controllers. The advanced controllers considered are DMC, nonlinear process model based control, and artificial neural networks. Each controller was tuned based upon setpoint changes in the overhead product composition using 50% changes in the impurity levels. Overall, there was not a great deal of difference in controller performance based upon the setpoint and disturbance tests. Periodic step changes in feed composition were also used to compare controller performance. In this case, oscillatory variations of the product composition were observed and the variabilities of the DMC and nonlinear process model based controllers were substantially smaller than that of the PI controller. The sensitivity of each controller to the frequency of the periodic step changes in feed composition was also investigated.

  14. Comparison of advanced distillation control methods. First annual report

    SciTech Connect

    Riggs, J.B.

    1996-11-01

    A detailed dynamic simulator of a propylene/propane (C3) splitter, which was bench-marked against industrial data, has been used to compare dual composition control performance for a diagonal PI controller and several advanced controllers. The advanced controllers considered are dynamic matrix control (DMC), nonlinear process model based control, and artificial neural networks. Each controller was tuned based upon setpoint changes in the overhead product composition using 50% changes in the impurity levels. Overall, there was not a great deal of difference in controller performance based upon the setpoint and disturbance tests. Periodic step changes in feed composition were also used to compare controller performance. In this case, oscillatory variations of the product composition were observed and the variabilities of the DMC and nonlinear process model based controllers were substantially smaller than that of the PI controller. The sensitivity of each controller to the frequency of the periodic step changes in feed composition was also investigated.
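The diagonal PI baseline used in these comparisons can be sketched as a single position-form PI loop on a generic first-order process: integral action drives the setpoint offset to zero. This is an arbitrary toy loop, not the benchmarked C3 splitter simulator, and all gains and time constants are illustrative.

```python
def simulate_pi(kc=2.0, ti=5.0, dt=0.1, t_end=100.0, sp=0.5):
    """Position-form PI, u = kc*(e + integ/ti), driving a first-order
    process y' = (ku*u - y)/tau toward a setpoint step sp."""
    tau, ku = 10.0, 1.0          # process time constant and gain
    y, integ, t = 0.0, 0.0, 0.0
    while t < t_end:
        e = sp - y
        integ += e * dt          # integral of the error
        u = kc * (e + integ / ti)
        y += dt * (ku * u - y) / tau
        t += dt
    return y

y_final = simulate_pi()          # settles at the setpoint: zero offset
```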

  15. Etch Profile Simulation Using Level Set Methods

    NASA Technical Reports Server (NTRS)

    Hwang, Helen H.; Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1997-01-01

    Etching and deposition of materials are critical steps in semiconductor processing for device manufacturing. Both etching and deposition may have isotropic and anisotropic components, due to directional sputtering and redeposition of materials, for example. Previous attempts at modeling profile evolution have used so-called "string theory" to simulate the moving solid-gas interface between the semiconductor and the plasma. One complication of this method is that extensive de-looping schemes are required at the profile corners. We will present a 2D profile evolution simulation using level set theory to model the surface. By embedding the location of the interface in a field variable, the need for de-looping schemes is eliminated and profile corners are more accurately modeled. This level set profile evolution model will calculate both isotropic and anisotropic etch and deposition rates of a substrate in low-pressure (tens of mTorr) plasmas, considering the incident ion energy angular distribution functions and neutral fluxes. We will present etching profiles of Si substrates in Ar/Cl2 discharges for various incident ion energies and trench geometries.
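The core level set idea, embedding the interface as the zero contour of a field variable and advancing it with a speed law, can be sketched with the standard first-order upwind (Godunov) scheme of Osher and Sethian for phi_t + V |grad phi| = 0. The example below grows a circular front at unit normal speed; it is a minimal illustration, not the etch simulator described in the record, and all grid parameters are arbitrary.

```python
import numpy as np

def level_set_front(n=101, rate=1.0, dt=0.002, steps=50):
    """Advance a circular interface (phi < 0 inside) at constant normal
    speed using first-order Godunov upwinding for |grad phi| (V > 0)."""
    h = 2.0 / (n - 1)
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    phi = np.sqrt(X**2 + Y**2) - 0.8          # signed distance to r = 0.8
    for _ in range(steps):
        dmx = (phi - np.roll(phi, 1, 0)) / h  # backward differences
        dpx = (np.roll(phi, -1, 0) - phi) / h # forward differences
        dmy = (phi - np.roll(phi, 1, 1)) / h
        dpy = (np.roll(phi, -1, 1) - phi) / h
        grad = np.sqrt(np.maximum(dmx, 0.0)**2 + np.minimum(dpx, 0.0)**2
                       + np.maximum(dmy, 0.0)**2 + np.minimum(dpy, 0.0)**2)
        phi = phi - dt * rate * grad          # phi_t + V |grad phi| = 0
    return phi

phi = level_set_front()
# after V*t = 0.1 the zero contour has moved from r = 0.8 to r ~ 0.9
front_phi = phi[95, 50]   # grid point (x, y) = (0.9, 0): near the interface
center_phi = phi[50, 50]  # deep inside: stays strongly negative
```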

  16. Advances and future directions of research on spectral methods

    NASA Technical Reports Server (NTRS)

    Patera, A. T.

    1986-01-01

    Recent advances in spectral methods are briefly reviewed and characterized with respect to their convergence and computational complexity. Classical finite element and spectral approaches are then compared, and spectral element (or p-type finite element) approximations are introduced. The method is applied to the full Navier-Stokes equations, and examples are given of the application of the technique to several transitional flows. Future directions of research in the field are outlined.
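The convergence behavior that distinguishes spectral methods can be seen in the simplest setting: differentiating a periodic function by FFT is exact (to round-off) for any fully resolved mode, whereas finite differences carry an O(h^p) error. A minimal sketch, independent of the review's Navier-Stokes applications:

```python
import numpy as np

def spectral_derivative(f_vals, L=2.0 * np.pi):
    """Differentiate periodic samples via FFT: multiply each Fourier
    coefficient by i*k, then transform back."""
    n = f_vals.size
    ik = 2.0j * np.pi * np.fft.fftfreq(n, d=L / n)   # i * angular wavenumber
    return np.real(np.fft.ifft(ik * np.fft.fft(f_vals)))

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
df = spectral_derivative(np.sin(3.0 * x))            # exact answer: 3 cos(3x)
err = np.max(np.abs(df - 3.0 * np.cos(3.0 * x)))     # near machine precision
```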

  17. Current Advances in the Computational Simulation of the Formation of Low-Mass Stars

    SciTech Connect

    Klein, R I; Inutsuka, S; Padoan, P; Tomisaka, K

    2005-10-24

    Developing a theory of low-mass star formation (~0.1 to 3 solar masses) remains one of the most elusive and important goals of theoretical astrophysics. The star-formation process is the outcome of the complex dynamics of interstellar gas involving non-linear interactions of turbulence, gravity, magnetic field and radiation. The evolution of protostellar condensations, from the moment they are assembled by turbulent flows to the time they reach stellar densities, spans an enormous range of scales, resulting in a major computational challenge for simulations. Since the previous Protostars and Planets conference, dramatic advances in the development of new numerical algorithmic techniques have been successfully implemented on large scale parallel supercomputers. Among such techniques, Adaptive Mesh Refinement and Smoothed Particle Hydrodynamics have provided frameworks to simulate the process of low-mass star formation with a very large dynamic range. It is now feasible to explore the turbulent fragmentation of molecular clouds and the gravitational collapse of cores into stars self-consistently within the same calculation. The increased sophistication of these powerful methods comes with substantial caveats associated with the use of the techniques and the interpretation of the numerical results. In this review, we examine what has been accomplished in the field and present a critique of both numerical methods and scientific results. We stress that computational simulations should obey the available observational constraints and demonstrate numerical convergence. Failing this, results of large scale simulations do not advance our understanding of low-mass star formation.

  18. Using Simulated Debates to Teach History of Engineering Advances

    ERIC Educational Resources Information Center

    Reynolds, Terry S.

    1976-01-01

    Described is a technique for utilizing debates of past engineering controversies in the classroom as a means of teaching the history of engineering advances. Included is a bibliography for three debate topics relating to important controversies. (SL)

  19. Computational Simulations and the Scientific Method

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.
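A component test of the kind the paper advocates, an independently repeatable check that a model innovator could publish alongside a new model, might look like the sketch below. Sutherland's viscosity law is used only as a stand-in model; the function names are hypothetical, not from the paper.

```python
def sutherland_viscosity(T, mu_ref=1.716e-5, T_ref=273.15, S=110.4):
    """Sutherland's law for the dynamic viscosity of air (Pa*s),
    mu = mu_ref * (T/T_ref)^1.5 * (T_ref + S) / (T + S)."""
    return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

def test_reference_point():
    # the fit must reproduce its own calibration point exactly
    assert sutherland_viscosity(273.15) == 1.716e-5

def test_monotone_in_temperature():
    # gas viscosity increases with temperature
    assert sutherland_viscosity(400.0) > sutherland_viscosity(300.0)

test_reference_point()
test_monotone_in_temperature()
```

Tests pinned to known values and qualitative physics like these let downstream implementors judge a new model's correctness without rerunning the innovator's full simulations.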

  20. Numerical simulation of the boat growth method

    NASA Astrophysics Data System (ADS)

    Oda, K.; Saito, T.; Nishihama, J.; Ishihara, T.

    1989-09-01

    This paper presents a three-dimensional mathematical model for thermal convection in molten metals, which is applicable to heat transfer phenomena in boat-shaped crucibles. The governing equations are solved using an extended version, developed by Saito et al. (1986), of the Amsden and Harlow (1968) simplified marker and cell method. It is shown that the following parameters must be incorporated for an accurate simulation of melt growth: (1) the radiative heat transfer in the furnace, (2) the complex crucible configuration, (3) the melt flow, and (4) the solid-liquid interface shape. The velocity and temperature distributions calculated from this model are compared with the results of previous studies.
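A vastly reduced building block of such thermal models, conduction only, no melt flow, radiation, or interface tracking, is the steady heat equation on a rectangular melt cross-section, solvable by Jacobi iteration. This sketch is purely illustrative and is not the paper's marker-and-cell model; all dimensions and temperatures are arbitrary.

```python
import numpy as np

def steady_temperature(nx=40, ny=20, t_hot=1000.0, t_cold=900.0, iters=5000):
    """Jacobi iteration for Laplace's equation: hot left wall, cold right
    wall, insulated (zero-flux) top and bottom boundaries."""
    T = np.full((ny, nx), t_cold)
    T[:, 0] = t_hot
    for _ in range(iters):
        Tn = T.copy()
        Tn[1:-1, 1:-1] = 0.25 * (T[1:-1, :-2] + T[1:-1, 2:]
                                 + T[:-2, 1:-1] + T[2:, 1:-1])
        Tn[0, 1:-1] = Tn[1, 1:-1]      # insulated top
        Tn[-1, 1:-1] = Tn[-2, 1:-1]    # insulated bottom
        Tn[:, 0], Tn[:, -1] = t_hot, t_cold
        T = Tn
    return T

T = steady_temperature()
# converged solution is linear in x: T(j) = t_hot - 100 * j / (nx - 1)
t_mid = T[10, 20]
```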

  1. Advances of vibrational spectroscopic methods in phytomics and bioanalysis.

    PubMed

    Huck, Christian W

    2014-01-01

    During the last couple of years, great advances in vibrational spectroscopy, including near-infrared (NIR), mid-infrared (MIR), attenuated total reflection (ATR), and imaging and mapping techniques, have been achieved. At the same time, spectral treatment features have improved dramatically, allowing relevant information to be filtered out of spectral data much more efficiently and providing new insights into biochemical composition. These advances offer new quality control strategies in phytomics and enable deeper insight into the biochemical background of medicinally relevant questions. The aim of the present article is to point out the technical and methodological advancements in the NIR and MIR field and to demonstrate each method's efficiency by discussing selected applications.

  2. An advanced probabilistic structural analysis method for implicit performance functions

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.

    1989-01-01

    In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.
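The AMV idea can be sketched schematically: linearize the response at the means (the mean-value, MV, step), locate the most probable point (MPP) implied by that linearization for a target reliability index beta, then re-evaluate the *exact* response function at that point. The code below is a simplified reading of the method for independent normal inputs, assuming a smooth response g; it is not the authors' implementation, and the example response is hypothetical.

```python
import math

def mv_amv(g, mu, sigma, beta=2.0, h=1e-6):
    """Mean-value (MV) first-order percentile estimate of g at reliability
    index beta, plus the AMV correction: exact g at the MV most-probable
    point. Inputs are independent normals with means mu, std devs sigma."""
    g0 = g(mu)
    grads = []
    for i in range(len(mu)):                  # forward-difference gradient
        xp = list(mu)
        xp[i] += h
        grads.append((g(xp) - g0) / h)
    sz = math.sqrt(sum((gi * si) ** 2 for gi, si in zip(grads, sigma)))
    alpha = [gi * si / sz for gi, si in zip(grads, sigma)]  # MPP direction
    z_mv = g0 + beta * sz                     # linearized percentile
    mpp = [m + beta * a * s for m, a, s in zip(mu, alpha, sigma)]
    z_amv = g(mpp)                            # AMV: exact g at the MV MPP
    return z_mv, z_amv

def g(x):                                     # a mildly nonlinear response
    return x[0] ** 2 + x[1]

z_mv, z_amv = mv_amv(g, mu=[1.0, 1.0], sigma=[0.1, 0.1], beta=2.0)
```

For this convex g the AMV value exceeds the linearized MV value, illustrating the correction the re-evaluation provides.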

  3. Advanced simulation and analysis of a geopotential research mission

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.

    1988-01-01

    Computer simulations have been performed for an orbital gradiometer mission to assist in the study of high degree and order gravity field recovery. The simulations were conducted for a satellite in near-circular, frozen orbit at a 160-km altitude using a gravitational field complete to degree and order 360. The mission duration is taken to be 32 days. The simulation provides a set of measurements to assist in the evaluation of techniques developed for the determination of the gravity field. Also, the simulation provides an ephemeris to study available tracking systems to satisfy the orbit determination requirements of the mission.

  4. Simulations of Failure via Three-Dimensional Cracking in Fuel Cladding for Advanced Nuclear Fuels

    SciTech Connect

    Lu, Hongbing; Bukkapatnam, Satish; Harimkar, Sandip; Singh, Raman; Bardenhagen, Scott

    2014-01-09

    Enhancing performance of fuel cladding and duct alloys is a key means of increasing fuel burnup. This project will address the failure of fuel cladding via three-dimensional cracking models. Researchers will develop a simulation code for the failure of the fuel cladding and validate the code through experiments. The objective is to develop an algorithm to determine the failure of fuel cladding in the form of three-dimensional cracking due to prolonged exposure under varying conditions of pressure, temperature, chemical environment, and irradiation. This project encompasses the following tasks: 1. Simulate 3D crack initiation and growth under instantaneous and/or fatigue loads using a new variant of the material point method (MPM); 2. Simulate debonding of the materials in the crack path using cohesive elements, considering normal and shear traction separation laws; 3. Determine the crack propagation path, considering damage of the materials incorporated in the cohesive elements to allow the energy release rate to be minimized; 4. Simulate the three-dimensional fatigue crack growth as a function of loading histories; 5. Verify the simulation code by comparing results to theoretical and numerical studies available in the literature; 6. Conduct experiments to observe the crack path and surface profile in unused fuel cladding and validate against simulation results; and 7. Expand the adaptive mesh refinement infrastructure parallel processing environment to allow adaptive mesh refinement at the 3D crack fronts and adaptive mesh merging in the wake of cracks. Fuel cladding is made of materials such as stainless steels and ferritic steels with added alloying elements, which increase stability and durability under irradiation. As fuel cladding is subjected to water, chemicals, fission gas, pressure, high temperatures, and irradiation while in service, understanding performance is essential. In the fast fuel used in advanced burner reactors, simulations of the nuclear
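The normal and shear traction-separation laws mentioned in task 2 are commonly taken as bilinear: traction ramps linearly to a peak, then softens linearly to zero at full debonding, with the enclosed area equal to the fracture energy. A sketch with arbitrary parameters (not values from the project):

```python
def bilinear_traction(delta, delta0=0.01, delta_f=0.1, t_max=100.0):
    """Bilinear cohesive law: linear ramp to (delta0, t_max), then linear
    softening to zero traction at delta_f (fully debonded)."""
    if delta <= 0.0:
        return 0.0
    if delta < delta0:
        return t_max * delta / delta0                      # elastic ramp
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta0)  # softening
    return 0.0                                             # debonded

# fracture energy = area under the triangle = 0.5 * t_max * delta_f
G_c = 0.5 * 100.0 * 0.1
```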

  5. Advances in nucleic acid-based detection methods.

    PubMed Central

    Wolcott, M J

    1992-01-01

    Laboratory techniques based on nucleic acid methods have increased in popularity over the last decade with clinical microbiologists and other laboratory scientists who are concerned with the diagnosis of infectious agents. This increase in popularity is a result primarily of advances made in nucleic acid amplification and detection techniques. Polymerase chain reaction, the original nucleic acid amplification technique, changed the way many people viewed and used nucleic acid techniques in clinical settings. After the potential of polymerase chain reaction became apparent, other methods of nucleic acid amplification and detection were developed. These alternative nucleic acid amplification methods may become serious contenders for application to routine laboratory analyses. This review presents some background information on nucleic acid analyses that might be used in clinical and anatomical laboratories and describes some recent advances in the amplification and detection of nucleic acids. PMID:1423216

  6. Fracture Toughness in Advanced Monolithic Ceramics - SEPB Versus SEVNB Methods

    NASA Technical Reports Server (NTRS)

    Choi, S. R.; Gyekenyesi, J. P.

    2005-01-01

    Fracture toughness of a total of 13 advanced monolithic ceramics including silicon nitrides, silicon carbide, aluminas, and glass ceramic was determined at ambient temperature by using both single edge precracked beam (SEPB) and single edge v-notched beam (SEVNB) methods. Relatively good agreement in fracture toughness between the two methods was observed for advanced ceramics with flat R-curves; whereas, poor agreement in fracture toughness was seen for materials with rising R-curves. The discrepancy in fracture toughness between the two methods was due to stable crack growth with crack closure forces acting in the wake region of cracks even in SEVNB test specimens. The effect of discrepancy in fracture toughness was analyzed in terms of microstructural feature (grain size and shape), toughening exponent, and stable crack growth determined using back-face strain gaging.
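Both SEPB and SEVNB evaluate toughness from the fracture load of a precracked or notched beam. A widely used stress-intensity expression for a single-edge-notched beam in three-point bending (Srawley's wide-range fit, valid for span-to-width ratio S/W = 4) is sketched below; the specimen dimensions in the example are illustrative, not taken from the paper.

```python
import math

def k_three_point_bend(P, S, B, W, a):
    """K_I = P*S / (B*W^1.5) * f(a/W) for a single-edge-notched beam in
    three-point bending (Srawley's fit, S/W = 4). P = load, S = span,
    B = thickness, W = width (depth), a = crack/notch length; SI units."""
    al = a / W
    f = (3.0 * math.sqrt(al)
         * (1.99 - al * (1.0 - al) * (2.15 - 3.93 * al + 2.7 * al * al))
         / (2.0 * (1.0 + 2.0 * al) * (1.0 - al) ** 1.5))
    return P * S / (B * W ** 1.5) * f

# geometry factor alone (unit load and dimensions): f(0.5) ~ 2.66
geom = k_three_point_bend(P=1.0, S=1.0, B=1.0, W=1.0, a=0.5)

# illustrative silicon-nitride-like case: 3 x 4 mm bar, 16 mm span,
# a/W = 0.5, ~89 N fracture load -> K_Ic in the ~5 MPa*sqrt(m) range
K = k_three_point_bend(P=89.0, S=0.016, B=0.003, W=0.004, a=0.002)
```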

  7. Multi-physics nuclear reactor simulator for advanced nuclear engineering education

    SciTech Connect

    Yamamoto, A.

    2012-07-01

    A multi-physics nuclear reactor simulator, which is intended for use in advanced nuclear engineering education, is being introduced at Nagoya University. The simulator consists of a 'macroscopic' physics simulator and a 'microscopic' physics simulator. The former performs real-time simulation of a whole nuclear power plant. The latter is responsible for more detailed numerical simulations based on sophisticated and precise numerical models, while taking into account the plant conditions obtained in the macroscopic physics simulator. Steady-state and kinetics core analyses, fuel mechanical analysis, fluid dynamics analysis, and sub-channel analysis can be carried out in the microscopic physics simulator. Simulation calculations are carried out through a dedicated graphical user interface, and the simulation results, i.e., spatial and temporal behaviors of major plant parameters, are graphically shown. The simulator will provide a bridge between the 'theories' studied with textbooks and the 'physical behaviors' of actual nuclear power plants. (authors)

  8. Handbook of Scaling Methods in Aquatic Ecology: Measurement, Analysis, Simulation

    NASA Astrophysics Data System (ADS)

    Marrasé, Celia

    2004-03-01

    Researchers in aquatic sciences have long been interested in describing temporal and biological heterogeneities at different observation scales. During the 1970s, scaling studies received a boost from the application of spectral analysis to ecological sciences. Since then, new insights have evolved in parallel with advances in observation technologies and computing power. In particular, during the last two decades, novel theoretical achievements were facilitated by the use of microstructure profilers, the application of mathematical tools derived from fractal and wavelet analyses, and the increase in computing power that allowed more complex simulations. The idea of publishing the Handbook of Scaling Methods in Aquatic Ecology arose out of a special session of the 2001 Aquatic Science Meeting of the American Society of Limnology and Oceanography. The publication of the book is timely, because it compiles a good amount of the work done in these last two decades. The book comprises three sections: measurements, analysis, and simulation. Each contains some review chapters and a number of more specialized contributions. The contents are multidisciplinary and focus on biological and physical processes and their interactions over a broad range of scales, from micro-layers to ocean basins. The handbook topics include high-resolution observation methodologies, as well as applications of different mathematical tools for analysis and simulation of spatial structures, time variability of physical and biological processes, and individual organism behavior. The scientific background of the authors is highly diverse, ensuring broad interest for the scientific community.

  9. Daylighting simulation: methods, algorithms, and resources

    SciTech Connect

    Carroll, William L.

    1999-12-01

    This document presents work conducted as part of Subtask C, ''Daylighting Design Tools'', Subgroup C2, ''New Daylight Algorithms'', of the IEA SHC Task 21 and the ECBCS Program Annex 29 ''Daylight in Buildings''. The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: Geometry; Light modeling; Characterization of the natural illumination resource; Materials and components properties, representations; and Usability issues (interfaces, interoperability, representation of analysis results, etc.). Second, very advantageously, there have been rapid advances in many basic methods in these areas, due to other forces. They are in part driven by: The commercial computer graphics community (commerce, entertainment); The lighting industry; Architectural rendering and visualization for projects; and Academia: Course materials, research. This has led to a very rich set of information resources that have direct applicability to the small daylighting analysis community. Furthermore, much of this information is in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, and are used to produce a printed form only secondarily. The electronic forms include both online WWW pages and a downloadable .PDF file with the same appearance and content. Both electronic forms include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations that are provided. This in turn allows the report to be very concise. The links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links, but with potentially thousands of indirect links.
For purposes of

  10. Advanced stress analysis methods applicable to turbine engine structures

    NASA Technical Reports Server (NTRS)

    Pian, Theodore H. H.

    1991-01-01

    The following tasks on the study of advanced stress analysis methods applicable to turbine engine structures are described: (1) constructions of special elements which contain traction-free circular boundaries; (2) formulation of new version of mixed variational principles and new version of hybrid stress elements; (3) establishment of methods for suppression of kinematic deformation modes; (4) construction of semiLoof plate and shell elements by assumed stress hybrid method; and (5) elastic-plastic analysis by viscoplasticity theory using the mechanical subelement model.

  11. Improved method of HIPOT testing of advanced ignition system product

    SciTech Connect

    Baker, P.C.

    1992-04-01

    A new method of high potential (HIPOT) testing of advanced ignition system (AIS) product was developed. The new method incorporated the use of a silver-filled RTV silicone as the electrode material of the HIPOT tester instead of the preformed, semi-rigid aluminum electrodes of the current tester. Initial results indicate that the developed method was more sensitive to the testing requirements of the HIPOT test. A patent covering the combination of the material used and the testing method developed was pursued but was withdrawn following a patent search by the US Patent Office.

  12. Development of Kinetic Mechanisms for Next-Generation Fuels and CFD Simulation of Advanced Combustion Engines

    SciTech Connect

    Pitz, William J.; McNenly, Matt J.; Whitesides, Russell; Mehl, Marco; Killingsworth, Nick J.; Westbrook, Charles K.

    2015-12-17

    Predictive chemical kinetic models are needed to represent next-generation fuel components and their mixtures with conventional gasoline and diesel fuels. These kinetic models will allow the prediction of the effect of alternative fuel blends in CFD simulations of advanced spark-ignition and compression-ignition engines. Enabled by kinetic models, CFD simulations can be used to optimize fuel formulations for advanced combustion engines so that maximum engine efficiency, fossil fuel displacement goals, and low pollutant emission goals can be achieved.

  13. Advanced beam-dynamics simulation tools for RIA.

    SciTech Connect

    Garnett, R. W.; Wangler, T. P.; Billen, J. H.; Qiang, J.; Ryne, R.; Crandall, K. R.; Ostroumov, P.; York, R.; Zhao, Q.; Physics; LANL; LBNL; Tech Source; Michigan State Univ.

    2005-01-01

    We are developing multi-particle beam-dynamics simulation codes for RIA driver-linac simulations extending from the low-energy beam transport (LEBT) line to the end of the linac. These codes run on the NERSC parallel supercomputing platforms at LBNL, which allow us to run simulations with large numbers of macroparticles. The codes have the physics capabilities needed for RIA, including transport and acceleration of multiple-charge-state beams, beam-line elements such as high-voltage platforms within the linac, interdigital accelerating structures, charge-stripper foils, and capabilities for handling the effects of machine errors and other off-normal conditions. This year will mark the end of our project. In this paper we present the status of the work, describe some recent additions to the codes, and show some preliminary simulation results.

  14. Recent advances in computational methodology for simulation of mechanical circulatory assist devices

    PubMed Central

    Marsden, Alison L.; Bazilevs, Yuri; Long, Christopher C.; Behr, Marek

    2014-01-01

    Ventricular assist devices (VADs) provide mechanical circulatory support to offload the work of one or both ventricles during heart failure. They are used in the clinical setting as destination therapy, as bridge to transplant, or more recently as bridge to recovery to allow for myocardial remodeling. Recent developments in computational simulation allow for detailed assessment of VAD hemodynamics for device design and optimization for both children and adults. Here, we provide a focused review of the recent literature on finite element methods and optimization for VAD simulations. As VAD designs typically fall into two categories, pulsatile and continuous flow devices, we separately address computational challenges of both types of designs, and the interaction with the circulatory system with three representative case studies. In particular, we focus on recent advancements in finite element methodology that has increased the fidelity of VAD simulations. We outline key challenges, which extend to the incorporation of biological response such as thrombosis and hemolysis, as well as shape optimization methods and challenges in computational methodology. PMID:24449607

  15. Strategic Plan for Nuclear Energy -- Knowledge Base for Advanced Modeling and Simulation (NE-KAMS)

    SciTech Connect

    Kimberlyn C. Mousseau

    2011-10-01

    The Nuclear Energy Computational Fluid Dynamics Advanced Modeling and Simulation (NE-CAMS) system is being developed at the Idaho National Laboratory (INL) in collaboration with Bettis Laboratory, Sandia National Laboratory (SNL), Argonne National Laboratory (ANL), Utah State University (USU), and other interested parties with the objective of developing and implementing a comprehensive and readily accessible data and information management system for computational fluid dynamics (CFD) verification and validation (V&V) in support of nuclear energy systems design and safety analysis. The two key objectives of the NE-CAMS effort are to identify, collect, assess, store and maintain high resolution and high quality experimental data and related expert knowledge (metadata) for use in CFD V&V assessments specific to the nuclear energy field, and to establish a working relationship with the U.S. Nuclear Regulatory Commission (NRC) to develop a CFD V&V database, including benchmark cases, that addresses and supports the associated NRC regulations and policies on the use of CFD analysis. In particular, the NE-CAMS system will support the Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Program, which aims to develop and deploy advanced modeling and simulation methods and computational tools for reliable numerical simulation of nuclear reactor systems for design and safety analysis. There are four primary elements of the NE-CAMS knowledge base designed to support computer modeling and simulation in the nuclear energy arena, as listed below.
    Element 1. The database will contain experimental data that can be used for CFD validation that is relevant to nuclear reactor and plant processes, particularly those important to the nuclear industry and the NRC.
    Element 2. Qualification standards for data evaluation and classification will be incorporated and applied such that validation data sets will result in well

  16. Modeling emergency department operations using advanced computer simulation systems.

    PubMed

    Saunders, C E; Makens, P K; Leblanc, L J

    1989-02-01

    We developed a computer simulation model of emergency department operations using simulation software. This model uses multiple levels of preemptive patient priority; assigns each patient to an individual nurse and physician; incorporates all standard tests, procedures, and consultations; and allows patient service processes to proceed simultaneously, sequentially, repetitively, or a combination of these. Selected input data, including the number of physicians, nurses, and treatment beds, and the blood test turnaround time, then were varied systematically to determine their simulated effect on patient throughput time, selected queue sizes, and rates of resource utilization. Patient throughput time varied directly with laboratory service times and inversely with the number of physician or nurse servers. Resource utilization rates varied inversely with resource availability, and patient waiting time and patient throughput time varied indirectly with the level of patient acuity. The simulation can be animated on a computer monitor, showing simulated patients, specimens, and staff members moving throughout the ED. Computer simulation is a potentially useful tool that can help predict the results of changes in the ED system without actually altering it and may have implications for planning, optimizing resources, and improving the efficiency and quality of care.
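The queueing behavior described above, throughput time falling as physician or nurse servers are added, can be sketched with a toy discrete-event model. This is a plain multi-server queue with assumed arrival and service rates, not the authors' ED simulation:

```python
import heapq
import random

def simulate_ed(n_servers, n_patients=2000, arrival_rate=1.0,
                service_rate=0.4, seed=42):
    """Minimal M/M/c queue: patients arrive, wait for a free server
    (physician/nurse/bed), are served, and depart. Returns the mean
    throughput time (arrival to departure)."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate)     # Poisson arrivals
        arrivals.append(t)
    free_at = [0.0] * n_servers                # next time each server is free
    heapq.heapify(free_at)
    total_time = 0.0
    for arrive in arrivals:
        start = max(arrive, heapq.heappop(free_at))  # wait if all servers busy
        finish = start + rng.expovariate(service_rate)
        heapq.heappush(free_at, finish)
        total_time += finish - arrive
    return total_time / n_patients

# As in the study, throughput time drops as servers are added.
print(simulate_ed(3), simulate_ed(5))
```

With the same random seed, both runs see identical arrival and service draws, so the difference reflects queueing delay alone.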

  17. Advanced Thermal Simulator Testing: Thermal Analysis and Test Results

    NASA Technical Reports Server (NTRS)

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David; Reid, Robert; Adams, Mike; Davis, Joe

    2008-01-01

    Work at the NASA Marshall Space Flight Center seeks to develop high fidelity, electrically heated thermal simulators that represent fuel elements in a nuclear reactor design to support non-nuclear testing applicable to the development of a space nuclear power or propulsion system. Comparison between the fuel pins and thermal simulators is made at the outer fuel clad surface, which corresponds to the outer sheath surface in the thermal simulator. The thermal simulators that are currently being tested correspond to a SNAP derivative reactor design that could be applied for Lunar surface power. These simulators are designed to meet the geometric and power requirements of a proposed surface power reactor design, accommodate testing of various axial power profiles, and incorporate imbedded instrumentation. This paper reports the results of thermal simulator analysis and testing in a bare element configuration, which does not incorporate active heat removal, and testing in a water-cooled calorimeter designed to mimic the heat removal that would be experienced in a reactor core.

  18. Advanced Thermal Simulator Testing: Thermal Analysis and Test Results

    SciTech Connect

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David; Reid, Robert; Adams, Mike; Davis, Joe

    2008-01-21

    Work at the NASA Marshall Space Flight Center seeks to develop high fidelity, electrically heated thermal simulators that represent fuel elements in a nuclear reactor design to support non-nuclear testing applicable to the potential development of a space nuclear power or propulsion system. Comparison between the fuel pins and thermal simulators is made at the outer fuel clad surface, which corresponds to the outer sheath surface in the thermal simulator. The thermal simulators that are currently being tested correspond to a liquid metal cooled reactor design that could be applied for Lunar surface power. These simulators are designed to meet the geometric and power requirements of a proposed surface power reactor design, accommodate testing of various axial power profiles, and incorporate imbedded instrumentation. This paper reports the results of thermal simulator analysis and testing in a bare element configuration, which does not incorporate active heat removal, and testing in a water-cooled calorimeter designed to mimic the heat removal that would be experienced in a reactor core.

  19. Comparison of Advanced Distillation Control Methods, Final Technical Report

    SciTech Connect

    Dr. James B. Riggs

    2000-11-30

    Detailed dynamic simulations of three industrial distillation columns (a propylene/propane splitter, a xylene/toluene column, and a depropanizer) have been used to evaluate configuration selections for single-ended and dual-composition control, as well as to compare conventional and advanced control approaches. In addition, a simulator of a main fractionator was used to compare the control performance of conventional and advanced control. For each case considered, the controllers were tuned by using setpoint changes and tested using feed composition upsets. Proportional-Integral (PI) control performance was used to evaluate the configuration selection problem. For single-ended control, the energy balance configuration was found to yield the best performance. For dual-composition control, nine configurations were considered. It was determined that the use of dynamic simulations is required in order to identify the optimum configuration from among the nine possible choices. The optimum configurations were used to evaluate the relative control performance of conventional PI controllers, MPC (Model Predictive Control), PMBC (Process Model-Based Control), and ANN (Artificial Neural Network) control. It was determined that MPC works best when one product is much more important than the other, while PI was superior when both products were equally important. PMBC and ANN were not found to offer significant advantages over PI and MPC. MPC was found to outperform conventional PI control for the main fractionator. MPC was applied to three industrial columns: one at Phillips Petroleum and two at Union Carbide. In each case, MPC was found to significantly outperform PI controls. The major advantage of the MPC controller is its ability to effectively handle a complex set of constraints and control objectives.
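The PI control law underlying these comparisons can be sketched on a generic first-order process. The gains, time constant, and setpoint below are arbitrary assumptions for illustration, not parameters of any of the report's column models:

```python
def simulate_pi(setpoint=0.95, kp=2.0, ki=0.5, dt=0.1, steps=600):
    """Discrete PI controller driving a first-order process
    x' = (-x + u) / tau toward a composition setpoint."""
    tau = 5.0                 # process time constant (assumed)
    x, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - x
        integral += error * dt
        u = kp * error + ki * integral   # PI control law
        x += dt * (-x + u) / tau         # forward-Euler process update
    return x

# The integral term removes steady-state offset, so x settles at the setpoint.
print(round(simulate_pi(), 3))
```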

  20. Computational Advances in the Arctic Terrestrial Simulator: Modeling Permafrost Degradation in a Warming Arctic

    NASA Astrophysics Data System (ADS)

    Coon, E.; Berndt, M.; Garimella, R.; Moulton, J. D.; Manzini, G.; Painter, S. L.

    2013-12-01

    The terrestrial Arctic has been a net sink of carbon for thousands of years, but warming trends suggest this may change. As the terrestrial Arctic warms, degradation of the permafrost results in significant melting of the ice wedges that support low-centered polygonal ground. This leads to subsidence of the topography, inversion of the polygonal ground, and restructuring of drainage networks. The changes in hydrology and vegetation that result from these processes are poorly understood. Predictive simulation of the fate of this carbon is critical for understanding feedback effects between the terrestrial Arctic and climate change. Simulation of this system at fine scales presents many challenges. Flow and energy equations are solved on both the surface and subsurface domains, and deformation of the soil subsurface must couple with both. Additional processes such as snow, evapo-transpiration, and biogeochemistry supplement this THMC model. While globally implicit coupling methods enable conservation of mass and energy on the combined domain, care must be taken to ensure conservation as the soil subsides and the mesh deforms. Uncertainty in both the critical physics of each process model and in the coupling needed to maintain accuracy between processes suggests the need for a versatile many-physics framework. This framework should allow swapping of both processes and constitutive relations, and enable easy numerical experimentation with coupling strategies. Deformation dictates the need for advanced discretizations that maintain accuracy, and a mesh framework capable of calculating smooth deformation with remapped fields. Latent heat also introduces strong nonlinearities, requiring robust solvers and an efficient globalization strategy. Here we discuss these advances as implemented in the Arctic Terrestrial Simulator (ATS), a many-physics framework and collection of physics kernels based upon Amanzi.
We demonstrate the deformation capability, conserving mass and energy while simulating soil

  1. Advanced SAR simulator with multi-beam interferometric capabilities

    NASA Astrophysics Data System (ADS)

    Reppucci, Antonio; Márquez, José; Cazcarra, Victor; Ruffini, Giulio

    2014-10-01

    State-of-the-art simulations are of great interest when designing a new instrument, studying the imaging mechanisms of a given scenario, or designing inversion algorithms, as they allow one to analyze and understand the effects of different instrument configurations and target compositions. In the framework of studies of a new instrument devoted to the estimation of ocean surface movements using Synthetic Aperture Radar along-track interferometry (SAR-ATI), an end-to-end simulator has been developed. The simulator, built in a highly modular way to allow easy integration of different processing features, handles all the basic operations involved in an end-to-end scenario. These include the computation of the position and velocity of the platform (airborne/spaceborne) and the geometric parameters defining the SAR scene, the surface definition, the backscattering computation, the atmospheric attenuation, the instrument configuration, and the simulation of the transmission/reception chains and the raw data. In addition, the simulator provides an InSAR processing suite and a sea-surface movement retrieval module. Up to four beams (each composed of a monostatic and a bistatic channel) can be activated. Each channel provides raw data and SLC images, with the possibility of choosing between strip-map and ScanSAR modes. Moreover, the software offers the possibility of radiometric sensitivity analysis and of error analysis due to atmospheric disturbances, instrument noise, interferogram phase noise, and platform velocity and attitude variations. In this paper, the architecture and capabilities of this simulator are presented, and meaningful simulation examples are shown.

  2. Advances in Constitutive and Failure Models for Sheet Forming Simulation

    NASA Astrophysics Data System (ADS)

    Yoon, Jeong Whan; Stoughton, Thomas B.

    2016-08-01

    The Non-Associated Flow Rule (Non-AFR) can be used as a convenient way to account for anisotropic material response in metal deformation processes, making it possible, for example, to eliminate the problem of anomalous yielding in equibiaxial tension that is mistakenly attributed to limitations of the quadratic yield function but may instead be attributed to the Associated Flow Rule (AFR). Since Non-AFR based models adopt two separate functions for yield and plastic potential, there is no constraint on which models are used to describe each of them. In this work, the flexible combination of two different yield criteria as yield function and plastic potential under Non-AFR is proposed and evaluated. FE simulations were carried out to verify the accuracy of the material directionalities predicted using these constitutive material models. The stability conditions for non-associated flow connected with the prediction of yield point elongation are also reviewed. Anisotropic distortion hardening is further incorporated under non-associated flow; it has been found that anisotropic hardening yields noticeable improvements for both earing and spring-back predictions. This presentation is followed by a discussion of forming limits and necking, the evidence in favor of stress analysis, and the motivation for the development of a new type of forming limit diagram based on the polar effective plastic strain (PEPS) diagram. To connect necking to fracture in metals, the stress-based necking limit is combined with a stress-based fracture criterion in principal stress space, which provides an efficient method for the analysis of necking and fracture limits. The concept of the PEPS diagram is further developed to cover path-independent PEPS fracture, which is compatible with the stress-based fracture approach; this fracture criterion can thus be utilized to describe post-necking behavior and to cover nonlinear strain paths.
Fracture

  3. Advances in simulation study on organic small molecular solar cells

    NASA Astrophysics Data System (ADS)

    Zhang, Xuan; Guo, Wenge; Li, Ming; Ma, Wentao; Meng, Sen

    2015-02-01

    Recently, more attention has been focused on organic semiconductors because of their advantages, such as flexibility, ease of fabrication, and potential low cost. Small-molecule photovoltaic materials are highlighted because of their ease of purification, the ease of adjusting and determining their structures, the ease of assembling a range of units, and their high carrier mobility. Simulation studies of organic small-molecule solar cells performed before experiments can help researchers find relationships between efficiency and structural parameters or material properties, estimate the performance of the device, and provide guidance for optimization. Also, the applicability of the model used in a simulation can be assessed by comparison with experimental data. This paper summarizes the principles, structures, and progress of numerical simulation of organic small-molecule solar cells.

  4. Method for simulating discontinuous physical systems

    DOEpatents

    Baty, Roy S.; Vaughn, Mark R.

    2001-01-01

    The mathematical foundations of conventional numerical simulation of physical systems provide no consistent description of the behavior of such systems when subjected to discontinuous physical influences. As a result, the numerical simulation of such problems requires ad hoc encoding of specific experimental results in order to address the behavior of such discontinuous physical systems. In the present invention, these foundations are replaced by a new combination of generalized function theory and nonstandard analysis. The result is a class of new approaches to the numerical simulation of physical systems which allows the accurate and well-behaved simulation of discontinuous and other difficult physical systems, as well as simpler physical systems. Applications of this new class of numerical simulation techniques to process control, robotics, and apparatus design are outlined.

  5. Coarse-grained computer simulation of dynamics in thylakoid membranes: methods and opportunities

    PubMed Central

    Schneider, Anna R.; Geissler, Phillip L.

    2013-01-01

    Coarse-grained simulation is a powerful and well-established suite of computational methods for studying structure and dynamics in nanoscale biophysical systems. As our understanding of the plant photosynthetic apparatus has become increasingly nuanced, opportunities have arisen for coarse-grained simulation to complement experiment by testing hypotheses and making predictions. Here, we give an overview of best practices in coarse-grained simulation, with a focus on techniques and results that are applicable to the plant thylakoid membrane–protein system. We also discuss current research topics for which coarse-grained simulation has the potential to play a key role in advancing the field. PMID:24478781

  6. Advancing botnet modeling techniques for military and security simulations

    NASA Astrophysics Data System (ADS)

    Banks, Sheila B.; Stytz, Martin R.

    2011-06-01

    Simulation environments serve many purposes, but they are only as good as their content. One of the most challenging and pressing areas that call for improved content is the simulation of bot armies (botnets) and their effects upon networks and computer systems. Botnets are a new type of malware, a type that is more powerful and potentially dangerous than any other type of malware. A botnet's power derives from several capabilities, including the following: 1) the botnet's capability to be controlled and directed throughout all phases of its activity, 2) a command and control structure that grows increasingly sophisticated, and 3) the ability of a bot's software to be updated at any time by the owner of the bot (a person commonly called a bot master or bot herder). Not only is a bot army powerful and agile in its technical capabilities, a bot army can be extremely large, comprising tens of thousands, if not millions, of compromised computers, or it can be as small as a few thousand targeted systems. In all botnets, their members can surreptitiously communicate with each other and their command and control centers. In sum, these capabilities allow a bot army to execute attacks that are technically sophisticated, difficult to trace, tactically agile, massive, and coordinated. To improve our understanding of their operation and potential, we believe that it is necessary to develop computer security simulations that accurately portray bot army activities, with the goal of including bot army simulations within military simulation environments. In this paper, we investigate issues that arise when simulating bot armies and propose a combination of the biologically inspired MSEIR infection spread model coupled with the jump-diffusion infection spread model to portray botnet propagation.
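The compartmental core of such infection-spread models can be sketched with a plain SEIR step (MSEIR adds an immune "M" compartment, and the jump-diffusion coupling is omitted here; the rates below are arbitrary assumptions, not values from the paper):

```python
def seir_step(s, e, i, r, beta=0.3, sigma=0.2, gamma=0.1, dt=1.0):
    """One forward-Euler step of the SEIR compartment model:
    Susceptible hosts become Exposed on contact with Infectious bots,
    Exposed become Infectious (active bots), Infectious are Removed
    (cleaned or patched). All quantities are population fractions."""
    new_exposed = beta * s * i * dt
    new_infectious = sigma * e * dt
    new_removed = gamma * i * dt
    return (s - new_exposed,
            e + new_exposed - new_infectious,
            i + new_infectious - new_removed,
            r + new_removed)

state = (0.999, 0.0, 0.001, 0.0)   # tiny initial infection
peak = 0.0
for _ in range(400):
    state = seir_step(*state)
    peak = max(peak, state[2])      # track the infection peak
print(round(peak, 3))
```

Because each flow is subtracted from one compartment and added to another, the total population fraction is conserved exactly at every step.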

  7. Advances of Simulation and Expertise Capabilities in CIVA Platform

    NASA Astrophysics Data System (ADS)

    Le Ber, L.; Calmon, P.; Sollier, Th.; Mahaut, S.; Benoist, Ph.

    2006-03-01

    Simulation is more and more widely used by the different actors of industrial NDT. The French Atomic Energy Commission (CEA) launched the development of expertise software for NDT named CIVA which, at its beginning, only contained ultrasonic models from CEA laboratories. CIVA now includes Eddy current simulation tools while present work aims at facilitating integration of algorithms and models from different laboratories and to include X-ray modeling. This communication gives an overview of existing CIVA capabilities and its evolution towards an integration platform.

  8. Advances in Statistical Methods for Substance Abuse Prevention Research

    PubMed Central

    MacKinnon, David P.; Lockwood, Chondra M.

    2010-01-01

    The paper describes advances in statistical methods for prevention research with a particular focus on substance abuse prevention. Standard analysis methods are extended to the typical research designs and characteristics of the data collected in prevention research. Prevention research often includes longitudinal measurement, clustering of data in units such as schools or clinics, missing data, and categorical as well as continuous outcome variables. Statistical methods to handle these features of prevention data are outlined. Developments in mediation, moderation, and implementation analysis allow for the extraction of more detailed information from a prevention study. Advancements in the interpretation of prevention research results include more widespread calculation of effect size and statistical power, the use of confidence intervals as well as hypothesis testing, detailed causal analysis of research findings, and meta-analysis. The increased availability of statistical software has contributed greatly to the use of new methods in prevention research. It is likely that the Internet will continue to stimulate the development and application of new methods. PMID:12940467

  9. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    SciTech Connect

    Durlofsky, Louis J.

    2000-08-28

    This project targets the development of (1) advanced reservoir simulation techniques for modeling non-conventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and well index (for use in simulation models), including the effects of wellbore flow; and (3) accurate approaches to account for heterogeneity in the near-well region.

  10. Psychometric and Evidentiary Advances, Opportunities, and Challenges for Simulation-Based Assessment

    ERIC Educational Resources Information Center

    Levy, Roy

    2013-01-01

    This article characterizes the advances, opportunities, and challenges for psychometrics of simulation-based assessments through a lens that views assessment as evidentiary reasoning. Simulation-based tasks offer the prospect for student experiences that differ from traditional assessment. Such tasks may be used to support evidentiary arguments…

  11. Correlation of Simulation Examination to Written Test Scores for Advanced Cardiac Life Support Testing: Prospective Cohort Study

    PubMed Central

    Strom, Suzanne L.; Anderson, Craig L.; Yang, Luanna; Canales, Cecilia; Amin, Alpesh; Lotfipour, Shahram; McCoy, C. Eric; Langdorf, Mark I.

    2015-01-01

    Introduction: Traditional Advanced Cardiac Life Support (ACLS) courses are evaluated using written multiple-choice tests. High-fidelity simulation is a widely used adjunct to didactic content, and has been used in many specialties as a training resource as well as an evaluative tool. There are no data to our knowledge that compare simulation examination scores with written test scores for ACLS courses.
    Objective: To compare and correlate a novel high-fidelity simulation-based evaluation with traditional written testing for senior medical students in an ACLS course.
    Methods: We performed a prospective cohort study to determine the correlation between simulation-based evaluation and traditional written testing in a medical school simulation center. Students were tested on a standard acute coronary syndrome/ventricular fibrillation cardiac arrest scenario. Our primary outcome measure was correlation of exam results for 19 volunteer fourth-year medical students after a 32-hour ACLS-based Resuscitation Boot Camp course. Our secondary outcome was comparison of simulation-based vs. written outcome scores.
    Results: The composite average score on the written evaluation was substantially higher (93.6%) than the simulation performance score (81.3%, absolute difference 12.3%, 95% CI [10.6–14.0%], p<0.00005). We found a statistically significant moderate correlation between simulation scenario test performance and traditional written testing (Pearson r=0.48, p=0.04), validating the new evaluation method.
    Conclusion: Simulation-based ACLS evaluation methods correlate with traditional written testing and demonstrate resuscitation knowledge and skills. Simulation may be a more discriminating and challenging testing method, as students scored higher on written evaluation methods compared to simulation. PMID:26594288
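The moderate correlation reported (Pearson r=0.48) comes from the standard product-moment formula; a minimal sketch follows, using made-up paired scores for illustration rather than the study's data:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between paired scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative paired written/simulation scores (not the study's data):
written    = [95, 90, 92, 97, 88, 94, 91, 96]
simulation = [84, 78, 80, 88, 74, 83, 79, 85]
print(round(pearson_r(written, simulation), 2))
```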

  12. Advanced Simulation and Computing Co-Design Strategy

    SciTech Connect

    Ang, James A.; Hoang, Thuc T.; Kelly, Suzanne M.; McPherson, Allen; Neely, Rob

    2015-11-01

    This ASC Co-design Strategy lays out the full continuum and components of the co-design process, based on what we have experienced thus far and what we wish to do more in the future to meet the program’s mission of providing high performance computing (HPC) and simulation capabilities for NNSA to carry out its stockpile stewardship responsibility.

  13. Software Partitioning Schemes for Advanced Simulation Computer Systems. Final Report.

    ERIC Educational Resources Information Center

    Clymer, S. J.

    Conducted to design software partitioning techniques for use by the Air Force to partition a large flight simulator program for optimal execution on alternative configurations, this study resulted in a mathematical model which defines characteristics for an optimal partition, and a manually demonstrated partitioning algorithm design which…

  14. Recent Advances in Underwater Acoustic Modelling and Simulation

    NASA Astrophysics Data System (ADS)

    ETTER, P. C.

    2001-02-01

    A comprehensive review of international developments in underwater acoustic modelling is used to construct an updated technology baseline containing 107 propagation models, 16 noise models, 17 reverberation models and 25 sonar performance models. This updated technology baseline represents a 30% increase over a previous baseline published in 1996. When executed in higher-level simulations, these models can generate predictive and diagnostic outputs that are useful to acoustical oceanographers or sonar technologists in the analysis of complex systems operating in the undersea environment. Recent modelling developments described in the technical literature suggest two principal areas of application: low-frequency, inverse acoustics in deep water; and high-frequency, bottom-interacting acoustics in coastal regions. Rapid changes in global geopolitics have opened new avenues for collaboration, thereby facilitating the transfer of modelling and simulation technologies among members of the international community. This accelerated technology transfer has created new imperatives for international standards in modelling and simulation architectures. National and international activities to promote interoperability among modelling and simulation efforts in government, industry and academia are reviewed and discussed.

  15. Physics-based simulation models for EBSD: advances and challenges

    NASA Astrophysics Data System (ADS)

    Winkelmann, A.; Nolze, G.; Vos, M.; Salvat-Pujol, F.; Werner, W. S. M.

    2016-02-01

    EBSD has evolved into an effective tool for microstructure investigations in the scanning electron microscope. The purpose of this contribution is to give an overview of various simulation approaches for EBSD Kikuchi patterns and to discuss some of the underlying physical mechanisms.

  16. A Distributed Simulation Facility to Support Human Factors Research in Advanced Air Transportation Technology

    NASA Technical Reports Server (NTRS)

    Amonlirdviman, Keith; Farley, Todd C.; Hansman, R. John, Jr.; Ladik, John F.; Sherer, Dana Z.

    1998-01-01

A distributed real-time simulation of the civil air traffic environment developed to support human factors research in advanced air transportation technology is presented. The distributed environment is based on a custom simulation architecture designed for simplicity and flexibility in human experiments. Standard Internet protocols are used to create the distributed environment, linking an advanced cockpit simulator, an Air Traffic Control simulator, and a pseudo-aircraft control and simulation management station. The pseudo-aircraft control station also functions as a scenario design tool for coordinating human factors experiments. This station incorporates a pseudo-pilot interface designed to reduce workload for human operators piloting multiple aircraft simultaneously in real time. The application of this distributed simulation facility to support a study of the effect of shared information (via air-ground datalink) on pilot/controller shared situation awareness and re-route negotiation is also presented.
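Distributed simulators like this one typically exchange fixed-layout state messages over standard Internet protocols. The layout below is purely hypothetical (the facility's actual wire format is not given in the abstract); it only illustrates the idea of a compact, portable aircraft-state packet:

```python
import struct

# Hypothetical aircraft-state message (not the facility's real format):
# aircraft id, latitude, longitude, altitude (ft), heading (deg), timestamp.
# "!" selects network byte order with no padding.
STATE_FMT = "!I d d f f d"

def pack_state(ac_id, lat, lon, alt_ft, hdg_deg, t):
    """Serialize one aircraft state update for transmission (e.g. via UDP)."""
    return struct.pack(STATE_FMT, ac_id, lat, lon, alt_ft, hdg_deg, t)

def unpack_state(payload):
    """Deserialize a received state update back into a tuple of fields."""
    return struct.unpack(STATE_FMT, payload)
```

A sender would pass `pack_state(...)` to a UDP socket's `sendto`, and each receiving simulator would apply `unpack_state` to the datagram body.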

  17. Design tradeoffs in the development of the advanced multispectral simulation test acceptance resource (AMSTAR) HWIL facilities

    NASA Astrophysics Data System (ADS)

    LeSueur, Kenneth G.; Almendinger, Frank J.

    2007-04-01

    The Army's Advanced Multispectral Simulation Test Acceptance Resource (AMSTAR) is a suite of missile Hardware-In-the-Loop (HWIL) simulation / test capabilities designed to support testing from concept through production. This paper presents the design tradeoffs that were conducted in the development of the AMSTAR sensor stimulators and the flight motion simulators. The AMSTAR facility design includes systems to stimulate each of the Millimeter Wave (MMW), Infrared (IR), and Semi-Active Laser (SAL) sensors. The flight motion simulator (FMS) performance was key to the success of the simulation but required many concessions to accommodate the design considerations for the tri-mode stimulation systems.

  18. Simulation studies of the impact of advanced observing systems on numerical weather prediction

    NASA Technical Reports Server (NTRS)

    Atlas, R.; Kalnay, E.; Susskind, J.; Reuter, D.; Baker, W. E.; Halem, M.

    1984-01-01

To study the potential impact of advanced passive sounders and lidar temperature, pressure, humidity, and wind observing systems on large-scale numerical weather prediction, a series of realistic simulation studies involving the European Centre for Medium-Range Weather Forecasts, the National Meteorological Center, and the Goddard Laboratory for Atmospheric Sciences is conducted. The project attempts to avoid the unrealistic character of earlier simulation studies. The previous simulation studies and real-data impact tests are reviewed and the design of the current simulation system is described. Consideration is given to the simulation of observations of space-based sounding systems.

  19. Interim Service ISDN Satellite (ISIS) simulator development for advanced satellite designs and experiments

    NASA Technical Reports Server (NTRS)

    Pepin, Gerard R.

    1992-01-01

    The simulation development associated with the network models of both the Interim Service Integrated Services Digital Network (ISDN) Satellite (ISIS) and the Full Service ISDN Satellite (FSIS) architectures is documented. The ISIS Network Model design represents satellite systems like the Advanced Communications Technology Satellite (ACTS) orbiting switch. The FSIS architecture, the ultimate aim of this element of the Satellite Communications Applications Research (SCAR) Program, moves all control and switching functions on-board the next generation ISDN communications satellite. The technical and operational parameters for the advanced ISDN communications satellite design will be obtained from the simulation of ISIS and FSIS engineering software models for their major subsystems. Discrete event simulation experiments will be performed with these models using various traffic scenarios, design parameters, and operational procedures. The data from these simulations will be used to determine the engineering parameters for the advanced ISDN communications satellite.

  20. Full f gyrokinetic method for particle simulation of tokamak transport

    SciTech Connect

Heikkinen, J.A.; Janhunen, S.J.; Kiviniemi, T.P.; Ogando, F.

    2008-05-10

A gyrokinetic particle-in-cell approach with direct implicit construction of the coefficient matrix of the Poisson equation from ion polarization and electron parallel nonlinearity is described and applied in global electrostatic toroidal plasma transport simulations. The method is applicable for calculation of the evolution of particle distribution function f including as special cases strong plasma pressure profile evolution by transport and formation of neoclassical flows. This is made feasible by full f formulation and by recording the charge density changes due to the ion polarization drift and electron acceleration along the local magnetic field while particles are advanced. The code has been validated against the linear predictions of the unstable ion temperature gradient mode growth rates and frequencies. Convergence and saturation in both the turbulent and neoclassical limits of the ion heat conductivity are obtained with numerical noise well suppressed by a sufficiently large number of simulation particles. A first global full f validation of the neoclassical radial electric field in the presence of turbulence for a heated collisional tokamak plasma is obtained. At high Mach number (M_p ≈ 1) of the poloidal flow, the radial electric field is significantly enhanced over the standard neoclassical prediction. The neoclassical radial electric field together with the related GAM oscillations is found to regulate the turbulent heat and particle diffusion levels particularly strongly in a large aspect ratio tokamak at low plasma current.
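The polarization-based field solve described above can be summarized by the gyrokinetic quasineutrality equation in its generic textbook form (notation assumed here, not taken from the paper):

```latex
\nabla_\perp \cdot \left( \frac{m_i\, n_i}{e\, B^2}\, \nabla_\perp \phi \right) \;=\; n_e - \bar{n}_i
```

The left-hand side is the ion polarization density, and \(\bar{n}_i\) is the gyro-averaged ion guiding-centre density; the implicit coefficient matrix mentioned in the abstract arises from discretizing this elliptic operator.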

  1. Recent advances in semi-analytical scattering models for NDT simulation

    NASA Astrophysics Data System (ADS)

    Darmon, M.; Chatillon, S.; Mahaut, S.; Calmon, P.; Fradkin, L. J.; Zernov, V.

    2011-01-01

    For several years, CEA-LIST and partners have been developing ultrasonic simulation tools with the aim of modelling non-destructive evaluation. The existing ultrasonic modules allow us to simulate fully real ultrasonic inspection scenarios in a range of applications which requires the computation of the propagated beam, as well as its interaction with flaws. To fulfil requirements of an intensive use (for parametric studies), the choice has been made to adopt mainly analytical approximate or exact methods to model the scattering of ultrasound by flaws. The applied analytical theories (Kirchhoff and Born approximations, GTD, SOV...) were already described in previous GDR communication. Over the years, this "semi-analytical" approach has been enriched by adaptations and improvements of the existing models or by new models, in order to extend the applicability of the simulation tools. This paper is devoted to the following recent advances performed in the framework of this approach: The SOV method based on the exact analytical model for the scattering from a cylindrical cavity has been extended in 3D to account for field variations along the cylinder. This new 3D model leads to an improvement in simulation of small side-drilled holes. Concerning the geometrical theories of diffraction (GTD), subroutines for calculation of the 2D wedge diffraction coefficients (for bulk or Rayleigh incident waves) have been developed by the Waves and Fields Group and uniform corrections (UAT and UTD) are under investigation. Modelling of the contribution of the head wave and creeping wave to the echoes arising from a wedge. Numerous experimental validations of the developed models are provided. New possibilities offered by these new developments are emphasized.

  2. Simulating data processing for an Advanced Ion Mobility Mass Spectrometer

    SciTech Connect

    Chavarría-Miranda, Daniel; Clowers, Brian H.; Anderson, Gordon A.; Belov, Mikhail E.

    2007-11-03

We have designed and implemented a Cray XD-1-based simulation of data capture and signal processing for an advanced Ion Mobility mass spectrometer (Hadamard transform Ion Mobility). Our simulation is a hybrid application that uses both an FPGA component and a CPU-based software component to simulate Ion Mobility mass spectrometry data processing. The FPGA component includes data capture and accumulation, as well as a more sophisticated deconvolution algorithm based on a PNNL-developed enhancement to standard Hadamard transform Ion Mobility spectrometry. The software portion is in charge of streaming data to the FPGA and collecting results. We expect the computational and memory addressing logic of the FPGA component to be portable to an instrument-attached FPGA board that can be interfaced with a Hadamard transform Ion Mobility mass spectrometer.
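The standard Hadamard-transform multiplexing that PNNL's algorithm enhances can be sketched with generic S-matrix arithmetic (a textbook construction, not the enhanced deconvolution itself): gate the ion beam with cyclic shifts of a pseudo-random binary sequence, then invert the resulting simplex matrix to recover the spectrum.

```python
# Length-7 maximal-length (PRBS) sequence; rows of the S-matrix are its
# cyclic shifts, so each measurement admits about half of the signal bins.
PRBS = [1, 1, 1, 0, 1, 0, 0]
n = len(PRBS)
S = [[PRBS[(j - i) % n] for j in range(n)] for i in range(n)]

def multiplex(x):
    # y = S x : simulate the multiplexed measurements
    return [sum(S[i][j] * x[j] for j in range(n)) for i in range(n)]

def demultiplex(y):
    # S^-1 = 2/(n+1) * (2 S^T - J), a standard S-matrix identity
    # (J is the all-ones matrix), so recovery needs no numerical solver.
    return [sum((2 * S[j][i] - 1) * y[j] for j in range(n)) * 2.0 / (n + 1)
            for i in range(n)]
```

The multiplexing gain comes from each bin being sampled in roughly half the measurements, which improves the signal-to-noise ratio relative to scanning one bin at a time.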

  3. ADVANCES IN COMPREHENSIVE GYROKINETIC SIMULATIONS OF TRANSPORT IN TOKAMAKS

    SciTech Connect

    WALTZ RE; CANDY J; HINTON FL; ESTRADA-MILA C; KINSEY JE

    2004-10-01

A continuum global gyrokinetic code GYRO has been developed to comprehensively simulate core turbulent transport in actual experimental profiles and enable direct quantitative comparisons to the experimental transport flows. GYRO not only treats the now standard ion temperature gradient (ITG) mode turbulence, but also treats trapped and passing electrons with collisions and finite β, equilibrium E×B shear stabilization, and all in real tokamak geometry. Most importantly the code operates at finite relative gyroradius (ρ*) so as to treat the profile shear stabilization and nonlocal effects which can break gyroBohm scaling. The code operates either in a cyclic flux-tube limit (which allows only gyroBohm scaling) or globally with physical profile variation. Bohm scaling of DIII-D L-mode has been simulated, with power flows matching experiment within error bars on the ion temperature gradient. Mechanisms for broken gyroBohm scaling, neoclassical ion flows embedded in turbulence, turbulent dynamos and profile corrugations, plasma pinches and impurity flow, and simulations at fixed flow rather than fixed gradient are illustrated and discussed.

  4. Bluff Body Flow Simulation Using a Vortex Element Method

    SciTech Connect

    Anthony Leonard; Phillippe Chatelain; Michael Rebel

    2004-09-30

Heavy ground vehicles, especially those involved in long-haul freight transportation, consume a significant part of our nation's energy supply. It is therefore of utmost importance to improve their efficiency, both to reduce emissions and to decrease reliance on imported oil. At highway speeds, more than half of the power consumed by a typical semi truck goes into overcoming aerodynamic drag, a fraction which increases with speed and crosswind. Thanks to better tools and increased awareness, recent years have seen substantial aerodynamic improvements by the truck industry, such as tractor/trailer height matching, radiator area reduction, and swept fairings. However, there remains substantial room for improvement as understanding of turbulent fluid dynamics grows. The group's research effort focused on vortex particle methods, a novel approach for computational fluid dynamics (CFD). Where common CFD methods solve or model the Navier-Stokes equations on a grid which stretches from the truck surface outward, vortex particle methods solve the vorticity equation on a Lagrangian basis of smooth particles and do not require a grid. They worked to advance the state of the art in vortex particle methods, improving their ability to handle the complicated, high Reynolds number flow around heavy vehicles. Specific challenges that they have addressed include finding strategies to accurately capture vorticity generation and the resultant forces at the truck wall, handling the aerodynamics of spinning bodies such as tires, application of the method to the GTS model, computation time reduction through improved integration methods, a closest point transform for particle methods in complex geometries, and work on large eddy simulation (LES) turbulence modeling.
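The grid-free, Lagrangian character of vortex methods can be illustrated with the simplest possible case: inviscid 2D point vortices advected by the Biot-Savart velocity they induce on one another. This is a toy sketch only; the project's method uses smoothed (regularized) particles, viscosity, and wall treatment:

```python
import math

def velocities(pos, gamma):
    # Biot-Savart velocity induced at each vortex by all the others:
    # a positive-circulation vortex induces counterclockwise flow.
    vel = []
    for i, (xi, yi) in enumerate(pos):
        u = v = 0.0
        for j, (xj, yj) in enumerate(pos):
            if i == j:
                continue
            dx, dy = xi - xj, yi - yj
            r2 = dx * dx + dy * dy
            u += -gamma[j] * dy / (2.0 * math.pi * r2)
            v += gamma[j] * dx / (2.0 * math.pi * r2)
        vel.append((u, v))
    return vel

def step(pos, gamma, dt):
    # Forward-Euler advection of the vortex particles (no grid involved)
    vel = velocities(pos, gamma)
    return [(x + u * dt, y + v * dt) for (x, y), (u, v) in zip(pos, vel)]
```

Two equal vortices, for example, simply co-rotate about their fixed circulation-weighted centroid, a standard sanity check for such solvers.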

  5. Comparison of advanced distillation control methods. Fourth annual report

    SciTech Connect

    Riggs, J.B.

    1998-09-01

Detailed dynamic simulations of three industrial columns (a propylene/propane splitter, a xylene/toluene column, and a depropanizer) have been used to evaluate configuration selection for single-ended and dual-composition control as well as compare conventional and advanced control approaches. For each case considered, the controllers were tuned by using setpoint changes and tested using feed composition upsets. Proportional Integral (PI) control performance was used to evaluate the configuration selection problem. For single-ended control, the energy balance configuration was found to yield the best performance. For dual composition control, nine configurations were considered. It was determined that in order to identify the optimum configuration, detailed testing using dynamic simulation is required. The optimum configurations were used to evaluate the control performance of conventional PI controllers, DMC (Dynamic Matrix Control), PMBC (Process Model Based Control), and ANN (Artificial Neural Networks) control. It was determined that DMC works best when one product is much more important than the other while PI was superior when both products were equally important. PMBC and ANN were not found to offer significant advantages over PI and DMC.
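The PI control law evaluated throughout this study can be sketched on a toy first-order process. All the numbers below (gains, time constants) are assumptions for illustration, standing in for the paper's detailed column simulations:

```python
def simulate_pi(setpoint, kc=2.0, tau_i=5.0, dt=0.1, steps=600):
    """Drive a first-order process to a composition setpoint with PI control.

    kc, tau_i  : assumed controller gain and integral time
    tau_p, kp  : assumed process time constant and gain
    """
    y = 0.0            # controlled composition (deviation variable)
    integral = 0.0
    tau_p, kp = 10.0, 1.0
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        u = kc * (error + integral / tau_i)   # PI law: P term + I term
        y += dt * (kp * u - y) / tau_p        # first-order process response
    return y
```

After a setpoint change, the integral term removes the steady-state offset that a proportional-only controller would leave, which is why PI is the baseline against which DMC, PMBC, and ANN control are compared.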

  6. Novel glass inspection method for advanced photomask blanks

    NASA Astrophysics Data System (ADS)

    Tanabe, Masaru; Kikuchi, Toshiharu; Hashimoto, Masahiro; Ohkubo, Yasushi

    2007-05-01

Recently, extremely high-quality quartz substrates have been demanded for advancing ArF lithography. HOYA has developed a novel inspection method for interior defects as well as surface defects. The total internal reflection of the substrate is employed to produce an ideal dark-field illumination. The novel inspection method can detect a "nano-pit" of 12 nm EDS (Equivalent Diameter of a Sphere), which will meet the sensitivity required for the 32 nm node and beyond. Moreover, a unique type of defect is detected, which induces a Serious Transmittance Error for ArF LiTHography; we call it the "STEALTH" defect. It is a killer defect in wafer printing, but it cannot be detected by any conventional inspection in the mask-making process so far. In this paper, the performance of the novel inspection method for quartz substrates and the investigation of "STEALTH" defects are reported.

  7. Advanced reactor physics methods for heterogeneous reactor cores

    NASA Astrophysics Data System (ADS)

    Thompson, Steven A.

To maintain the economic viability of nuclear power, the industry has begun to emphasize maximizing the efficiency and output of existing nuclear power plants by using longer fuel cycles, stretch power uprates, shorter outage lengths, mixed-oxide (MOX) fuel and more aggressive operating strategies. In order to accommodate these changes, while still satisfying the peaking factor and power envelope requirements necessary to maintain safe operation, more complexity in commercial core designs has been introduced, such as an increase in the number of sub-batches and an increase in the use of both discrete and integral burnable poisons. A consequence of the increased complexity of core designs, as well as the use of MOX fuel, is an increase in the neutronic heterogeneity of the core. Such heterogeneous cores introduce challenges for the current methods that are used for reactor analysis. New methods must be developed to address these deficiencies while still maintaining the computational efficiency of existing reactor analysis methods. In this thesis, advanced core design methodologies are developed to be able to adequately analyze the highly heterogeneous core designs which are currently in use in commercial power reactors. These methodological improvements are being pursued with the goal of not sacrificing the computational efficiency which core designers require. More specifically, the PSU nodal code NEM is being updated to include an SP3 solution option, an advanced transverse leakage option, and a semi-analytical NEM solution option.

  8. Analog-digital simulation of transient-induced logic errors and upset susceptibility of an advanced control system

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.; Choi, G.; Iyer, R. K.

    1990-01-01

    A simulation study is described which predicts the susceptibility of an advanced control system to electrical transients resulting in logic errors, latched errors, error propagation, and digital upset. The system is based on a custom-designed microprocessor and it incorporates fault-tolerant techniques. The system under test and the method to perform the transient injection experiment are described. Results for 2100 transient injections are analyzed and classified according to charge level, type of error, and location of injection.

  9. Advanced visualization technology for terascale particle accelerator simulations

    SciTech Connect

    Ma, K-L; Schussman, G.; Wilson, B.; Ko, K.; Qiang, J.; Ryne, R.

    2002-11-16

This paper presents two new hardware-assisted rendering techniques developed for interactive visualization of the terascale data generated from numerical modeling of next generation accelerator designs. The first technique, based on a hybrid rendering approach, makes possible interactive exploration of large-scale particle data from particle beam dynamics modeling. The second technique, based on a compact texture-enhanced representation, exploits the advanced features of commodity graphics cards to achieve perceptually effective visualization of the very dense and complex electromagnetic fields produced from the modeling of reflection and transmission properties of open structures in an accelerator design. Because of the collaborative nature of the overall accelerator modeling project, the visualization technology developed is for both desktop and remote visualization settings. We have tested the techniques using both time-varying particle data sets containing up to one billion particles per time step and electromagnetic field data sets with millions of mesh elements.

  10. Left Ventricular Flow Analysis: Recent Advances in Numerical Methods and Applications in Cardiac Ultrasound

    PubMed Central

    Borazjani, Iman; Westerdale, John; McMahon, Eileen M.; Rajaraman, Prathish K.; Heys, Jeffrey J.

    2013-01-01

    The left ventricle (LV) pumps oxygenated blood from the lungs to the rest of the body through systemic circulation. The efficiency of such a pumping function is dependent on blood flow within the LV chamber. It is therefore crucial to accurately characterize LV hemodynamics. Improved understanding of LV hemodynamics is expected to provide important clinical diagnostic and prognostic information. We review the recent advances in numerical and experimental methods for characterizing LV flows and focus on analysis of intraventricular flow fields by echocardiographic particle image velocimetry (echo-PIV), due to its potential for broad and practical utility. Future research directions to advance patient-specific LV simulations include development of methods capable of resolving heart valves, higher temporal resolution, automated generation of three-dimensional (3D) geometry, and incorporating actual flow measurements into the numerical solution of the 3D cardiovascular fluid dynamics. PMID:23690874
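The core operation behind particle image velocimetry, including echo-PIV, is locating the displacement that maximizes the cross-correlation between two successive interrogation windows. The direct-correlation sketch below is a toy illustration on integer pixel shifts (real echo-PIV works on B-mode speckle with subpixel peak fitting and FFT-based correlation):

```python
def cross_correlate_shift(a, b, max_shift=3):
    """Return the (dy, dx) shift maximizing the direct cross-correlation
    of two equal-size 2D windows given as lists of lists."""
    rows, cols = len(a), len(a[0])
    best, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            s = 0.0
            for i in range(rows):
                for j in range(cols):
                    ii, jj = i + dy, j + dx
                    if 0 <= ii < rows and 0 <= jj < cols:
                        s += a[i][j] * b[ii][jj]
            if best is None or s > best:
                best, best_shift = s, (dy, dx)
    return best_shift
```

Dividing the recovered displacement by the inter-frame time gives the local velocity vector; repeating this over a grid of windows yields the intraventricular flow field.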

  11. Advanced Simulation Technology to Design Etching Process on CMOS Devices

    NASA Astrophysics Data System (ADS)

    Kuboi, Nobuyuki

    2015-09-01

Prediction and control of plasma-induced damage is needed to mass-produce high performance CMOS devices. In particular, side-wall (SW) etching with low damage is a key process for the next generation of MOSFETs and FinFETs. To predict and control the damage, we have developed a SiN etching simulation technique for CHxFy/Ar/O2 plasma processes using a three-dimensional (3D) voxel model. This model includes new concepts for the gas transportation in the pattern, detailed surface reactions on the SiN reactive layer divided into several thin slabs and C-F polymer layer dependent on the H/N ratio, and use of ``smart voxels''. We successfully predicted the etching properties such as the etch rate, polymer layer thickness, and selectivity for Si, SiO2, and SiN films along with process variations and demonstrated the time-dependent 3D damage distribution during SW etching on MOSFETs and FinFETs. We confirmed that a large amount of Si damage accumulated over time in the source/drain region during the over-etch step, in spite of the existing 15 nm SiO2 layer, and that the Si fin was directly damaged by a large amount of high-energy H during removal of the parasitic fin spacer, leading to Si fin damage to a depth of 14 to 18 nm. By analyzing the results of these simulations and our previous simulations, we found that it is important to carefully control the dose of high-energy H, the incident energy of H, the polymer layer thickness, and the over-etch time, considering the effects of the pattern structure, chamber-wall condition, and wafer open area ratio. In collaboration with Masanaga Fukasawa and Tetsuya Tatsumi, Sony Corporation. We thank Mr. T. Shigetoshi and Mr. T. Kinoshita of Sony Corporation for their assistance with the experiments.

  12. Improvements in the gyrokinetic simulation method

    SciTech Connect

    Matsuda, Y.; Cohen, B.I.; Williams, T.J.

    1991-01-01

Gyrokinetic particle-in-cell (PIC) simulations have been proven to be an important and useful tool for studying low frequency waves and instabilities below the ion cyclotron frequency. The gyrokinetic formalism eliminates the cyclotron motion by analytically averaging the equation of motion in time, while keeping finite-Larmor-radius effects, and therefore allows a time step of integration significantly longer than the cyclotron period. At the same time the thermal fluctuation level is reduced well below that of a conventional PIC simulation code. Recent simulations have been performed over a number of wave periods to study nonlinear evolution of drift waves and ion-temperature-gradient modes and the associated transport. With about a quarter million particles and a 64 {times} 128 {times} 32 grid in three dimensions, it takes about 100 hours on the Cray-2 single processor to follow the modes to a nonlinear quasi-steady state for relatively strong gradients and strong growth rates. Much more efficient simulations are needed in order to understand these low-frequency waves and the transport associated with them by the use of this tool, and to facilitate the simulation of more weakly unstable plasmas with parameters more relevant to experimental conditions. We have set a goal of achieving an efficiency gain of a factor of 100 on a present-day computer over what has been achieved on the Cray-2 for gyrokinetic simulations. To reach this goal we have begun a project with two components: one is the use of new PIC techniques such as subcycling, orbit-averaging, and semi-implicit algorithms; the other is the use of massively parallel computers such as the BBN TC2000 and the Thinking Machines CM-2. 6 refs.

  13. Microwave Processing of Simulated Advanced Nuclear Fuel Pellets

    SciTech Connect

    D.E. Clark; D.C. Folz

    2010-08-29

Throughout the three-year project funded by the Department of Energy (DOE) and led by Virginia Tech (VT), project tasks were modified by consensus to fit the changing needs of the DOE with respect to developing new inert matrix fuel processing techniques. The focus throughout the project was on the use of microwave energy to sinter fully stabilized zirconia pellets and on evaluating the effectiveness of the techniques that were developed. Additionally, the research team was to propose fundamental concepts for processing radioactive fuels based on the effectiveness of the microwave process in sintering the simulated matrix material.

  14. Advanced distributed simulation technology: Digital Voice Gateway Reference Guide

    NASA Astrophysics Data System (ADS)

    Vanhook, Dan; Stadler, Ed

    1994-01-01

The Digital Voice Gateway (referred to as the 'DVG' in this document) transmits and receives four full-duplex encoded speech channels over Ethernet. The information in this document applies only to DVGs running firmware of the version listed on the title page. This document, previously the Digital Voice Gateway Reference Guide (BBN Systems and Technologies Corporation, Cambridge, MA 02138), was revised for revision 2.00. This new revision changes the network protocol used by the DVG to comply with the SINCGARS radio simulation (for SIMNET 6.6.1). Because of the extensive changes in revision 2.00, a separate document was created rather than supplying change pages.

  15. Strategic Plan for Nuclear Energy -- Knowledge Base for Advanced Modeling and Simulation (NE-KAMS)

    SciTech Connect

    Rich Johnson; Kimberlyn C. Mousseau; Hyung Lee

    2011-09-01

The NE-KAMS knowledge base will assist computational analysts, physics model developers, experimentalists, nuclear reactor designers, and federal regulators by: (1) Establishing accepted standards, requirements and best practices for V&V and UQ of computational models and simulations, (2) Establishing accepted standards and procedures for qualifying and classifying experimental and numerical benchmark data, (3) Providing readily accessible databases for nuclear energy related experimental and numerical benchmark data that can be used in V&V assessments and computational methods development, (4) Providing a searchable knowledge base of information, documents and data on V&V and UQ, and (5) Providing web-enabled applications, tools and utilities for V&V and UQ activities, data assessment and processing, and information and data searches. From its inception, NE-KAMS will directly support nuclear energy research, development and demonstration programs within the U.S. Department of Energy (DOE), including the Consortium for Advanced Simulation of Light Water Reactors (CASL), the Nuclear Energy Advanced Modeling and Simulation (NEAMS), the Light Water Reactor Sustainability (LWRS), the Small Modular Reactors (SMR), and the Next Generation Nuclear Power Plant (NGNP) programs. These programs all involve computational modeling and simulation (M&S) of nuclear reactor systems, components and processes, and it is envisioned that NE-KAMS will help to coordinate and facilitate collaboration and sharing of resources and expertise for V&V and UQ across these programs. In addition, from the outset, NE-KAMS will support the use of computational M&S in the nuclear industry by developing guidelines and recommended practices aimed at quantifying the uncertainty and assessing the applicability of existing analysis models and methods. The NE-KAMS effort will initially focus on supporting the use of computational fluid dynamics (CFD) and thermal hydraulics (T/H) analysis for M&S of nuclear

  16. Some Specific CASL Requirements for Advanced Multiphase Flow Simulation of Light Water Reactors

    SciTech Connect

    R. A. Berry

    2010-11-01

Because of the diversity of physical phenomena occurring in boiling, flashing, and bubble collapse, and of the length and time scales of LWR systems, it is imperative that the models have the following features: • Both vapor and liquid phases (and noncondensible phases, if present) must be treated as compressible. • Models must be mathematically and numerically well-posed. • The modelling methodology must be multi-scale. A fundamental derivation of the multiphase governing equation system, that should be used as a basis for advanced multiphase modeling in LWR coolant systems, is given in the Appendix using the ensemble averaging method. The remainder of this work focuses specifically on the compressible, well-posed, and multi-scale requirements of advanced simulation methods for these LWR coolant systems, because these are the most fundamental aspects, without which widespread advancement cannot be claimed. Because of the expense of developing multiple special-purpose codes and the inherent inability to couple information from the multiple, separate length and time scales, efforts within CASL should be focused toward development of multi-scale approaches to solve those multiphase flow problems relevant to LWR design and safety analysis. Efforts should be aimed at developing well-designed, unified physical/mathematical and high-resolution numerical models for compressible, all-speed multiphase flows spanning: (1) well-posed general mixture-level (true multiphase) models for fast transient situations and safety analysis, (2) DNS (Direct Numerical Simulation)-like models to resolve interface-level phenomena like flashing and boiling flows, and critical heat flux determination (necessarily including conjugate heat transfer), and (3) multi-scale methods to resolve both (1) and (2) automatically, depending upon specified mesh resolution, and to couple different flow models (single-phase, multiphase with several velocities and pressures, multiphase with single
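A generic, textbook-style instance of the ensemble-averaged governing equations referred to above is the phasic mass balance (notation assumed here, not taken from the report's Appendix):

```latex
\frac{\partial (\alpha_k \rho_k)}{\partial t}
  + \nabla \cdot \left( \alpha_k \rho_k \mathbf{u}_k \right) = \Gamma_k,
\qquad \sum_k \alpha_k = 1, \qquad \sum_k \Gamma_k = 0,
```

where \(\alpha_k\), \(\rho_k\), and \(\mathbf{u}_k\) are the volume fraction, density, and velocity of phase \(k\), and \(\Gamma_k\) is the interphase mass transfer (e.g. from boiling or flashing). Treating every \(\rho_k\) as variable, rather than assuming an incompressible liquid, is what the compressibility requirement above demands.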

  17. Statistical Methods Handbook for Advanced Gas Reactor Fuel Materials

    SciTech Connect

    J. J. Einerson

    2005-05-01

    Fuel materials such as kernels, coated particles, and compacts are being manufactured for experiments simulating service in the next generation of high temperature gas reactors. These must meet predefined acceptance specifications. Many tests are performed for quality assurance, and many of these correspond to criteria that must be met with specified confidence, based on random samples. This report describes the statistical methods to be used. The properties of the tests are discussed, including the risk of false acceptance, the risk of false rejection, and the assumption of normality. Methods for calculating sample sizes are also described.
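    As a minimal illustration of the sample-size calculations such a handbook covers (a generic zero-acceptance-number attribute plan, not necessarily the report's actual procedure), the smallest sample size that bounds the risk of false acceptance follows directly from the binomial model:

```python
import math

def sample_size_zero_defects(p_reject: float, beta: float) -> int:
    """Smallest n such that a lot whose true defect fraction is
    p_reject passes a zero-defect inspection of n random samples
    with probability at most beta (the risk of false acceptance)."""
    # Accept iff all n sampled items pass: (1 - p_reject)^n <= beta
    return math.ceil(math.log(beta) / math.log(1.0 - p_reject))

# demonstrate with 95% confidence that the defect fraction is below 5%
n = sample_size_zero_defects(p_reject=0.05, beta=0.05)
print(n)  # 59
```

    This reproduces the familiar "rule of 59" for 95/95 attribute criteria; nonzero acceptance numbers require summing binomial terms instead.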

  18. Langley advanced real-time simulation (ARTS) system

    NASA Technical Reports Server (NTRS)

    Crawford, Daniel J.; Cleveland, Jeff I., II

    1988-01-01

    A system of high-speed digital data networks was developed and installed to support real-time flight simulation at the NASA Langley Research Center. This system, unlike its predecessor, employs intelligence at each network node and uses distributed 10-V signal conversion equipment rather than centralized 100-V equipment. A network switch, which replaces an elaborate system of patch panels, allows the researcher to construct a customized network from the 25 available simulation sites by invoking a computer control statement. The intent of this paper is to provide a coherent functional description of the system. This development required many significant innovations to enhance performance and functionality such as the real-time clock, the network switch, and improvements to the CAMAC network to increase both distances to sites and data rates. The system has been successfully tested at a usable data rate of 24 M. The fiber optic lines allow distances of approximately 1.5 miles from switch to site. Unlike other local networks, CAMAC does not buffer data in blocks. Therefore, time delays in the network are kept below 10 microsec total. This system underwent months of testing and was put into full service in July 1987.

  19. Simulation models and designs for advanced Fischer-Tropsch technology

    SciTech Connect

    Choi, G.N.; Kramer, S.J.; Tam, S.S.

    1995-12-31

    Process designs and economics were developed for three grass-roots indirect Fischer-Tropsch coal liquefaction facilities. A baseline and an alternate upgrading design were developed for a mine-mouth plant located in southern Illinois using Illinois No. 6 coal, and one for a mine-mouth plant located in Wyoming using Powder River Basin coal. The alternate design used close-coupled ZSM-5 reactors to upgrade the vapor stream leaving the Fischer-Tropsch reactor. ASPEN process simulation models were developed for all three designs. These results have been reported previously. In this study, the ASPEN process simulation model was enhanced to improve the vapor/liquid equilibrium calculations for the products leaving the slurry bed Fischer-Tropsch reactors. This significantly improved the predictions for the alternate ZSM-5 upgrading design. Another model was developed for the Wyoming coal case using ZSM-5 upgrading of the Fischer-Tropsch reactor vapors. To date, this is the best indirect coal liquefaction case. Sensitivity studies showed that additional cost reductions are possible.

  20. Advanced solid elements for sheet metal forming simulation

    NASA Astrophysics Data System (ADS)

    Mataix, Vicente; Rossi, Riccardo; Oñate, Eugenio; Flores, Fernando G.

    2016-08-01

    Solid-shell elements are an attractive choice for the simulation of forming processes, because any generic 3D constitutive law can be employed without additional hypotheses. The present work improves a triangular prism solid-shell originally developed by Flores [2, 3]. The solid-shell can be used in the analysis of thin or thick shells undergoing large deformations. The element is formulated in a total Lagrangian framework and employs the neighbour (adjacent) elements to build a local patch that enriches the displacement field. In the original formulation a modified right Cauchy-Green deformation tensor (C) is obtained; in the present work a modified deformation gradient (F) is obtained instead, which generalises the methodology and permits the use of pull-back and push-forward operations. The element is based on three modifications: (a) a classical assumed-strain approach for the transverse shear strains, (b) an assumed-strain approach for the in-plane components using information from neighbour elements, and (c) an averaging of the volumetric strain over the element. The objective is to use this type of element for the simulation of shells while avoiding transverse shear locking, improving the membrane behaviour of the in-plane triangle, and handling quasi-incompressible materials or materials with isochoric plastic flow.

  1. ADVANCES IN COMPREHENSIVE GYROKINETIC SIMULATIONS OF TRANSPORT IN TOKAMAKS

    SciTech Connect

    WALTZ,R.E; CANDY,J; HINTON,F.L; ESTRADA-MILA,C; KINSEY,J.E

    2004-10-01

    A continuum global gyrokinetic code GYRO has been developed to comprehensively simulate core turbulent transport in actual experimental profiles and enable direct quantitative comparisons to the experimental transport flows. GYRO not only treats the now standard ion temperature gradient (ITG) mode turbulence, but also treats trapped and passing electrons with collisions and finite {beta}, equilibrium ExB shear stabilization, and all in real tokamak geometry. Most importantly the code operates at finite relative gyroradius ({rho}{sub *}) so as to treat the profile shear stabilization and nonlocal effects which can break gyroBohm scaling. The code operates in either a cyclic flux-tube limit (which allows only gyroBohm scaling) or globally with physical profile variation. Bohm scaling of DIII-D L-mode has been simulated with power flows matching experiment within error bars on the ion temperature gradient. Mechanisms for broken gyroBohm scaling, neoclassical ion flows embedded in turbulence, turbulent dynamos and profile corrugations, are illustrated.

  2. LLNL Scientists Use NERSC to Advance Global Aerosol Simulations

    SciTech Connect

    Bergmann, D J; Chuang, C; Rotman, D

    2004-10-13

    While ''greenhouse gases'' have been the focus of climate change research for a number of years, DOE's ''Aerosol Initiative'' is now examining how aerosols (small particles of approximately micron size) affect the climate on both a global and regional scale. Scientists in the Atmospheric Science Division at Lawrence Livermore National Laboratory (LLNL) are using NERSC's IBM supercomputer and LLNL's IMPACT (atmospheric chemistry) model to perform simulations showing the historic effects of sulfur aerosols at a finer spatial resolution than ever before. Simulations were carried out for five decades, from the 1950s through the 1990s. The results clearly show the effects of the changing global pattern of sulfur emissions. Whereas in 1950 the United States emitted 41 percent of the world's sulfur aerosols, this figure had dropped to 15 percent by 1990, due to conservation and anti-pollution policies. By contrast, the fraction of total sulfur emissions of European origin has only dropped by a factor of 2, and the Asian emission fraction jumped sixfold during the same time, from 7 percent in 1950 to 44 percent in 1990. Under a special allocation of computing time provided by the Office of Science INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program, Dan Bergmann, working with a team of LLNL scientists including Cathy Chuang, Philip Cameron-Smith, and Bala Govindasamy, was able to carry out a large number of calculations during the past month, making the aerosol project one of the largest users of NERSC resources. The applications ran on 128 and 256 processors. The objective was to assess the effects of anthropogenic (man-made) sulfate aerosols. The IMPACT model calculates the rate at which SO{sub 2} (a gas emitted by industrial activity) is oxidized and forms particles known as sulfate aerosols. These particles have a short lifespan in the atmosphere, often washing out in about a week. This means that their effects on climate tend to be

  3. Simulation and ground testing with the Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Johnston, Albert S.; Bryan, Thomas C.; Book, Michael L.

    2005-01-01

    The Advanced Video Guidance Sensor (AVGS), an active sensor system that provides near-range 6-degree-of-freedom sensor data, has been developed as part of an automatic rendezvous and docking system for the Demonstration of Autonomous Rendezvous Technology (DART). The sensor determines the relative positions and attitudes between the active sensor and the passive target at ranges up to 300 meters. The AVGS uses laser diodes to illuminate retro-reflectors in the target, a solid-state imager to detect the light returned from the target, and image capture electronics and a digital signal processor to convert the video information into the relative positions and attitudes. The development of the sensor, through initial prototypes, final prototypes, and three flight units, has required a great deal of testing at every phase, and the different types of testing, their effectiveness, and their results, are presented in this paper, focusing on the testing of the flight units. Testing has improved the sensor's performance.

  4. Advanced wellbore thermal simulator GEOTEMP2 user manual

    SciTech Connect

    Mondy, L.A.; Duda, L.E.

    1984-11-01

    GEOTEMP2 is a wellbore thermal simulator computer code designed for geothermal drilling and production applications. The code treats natural and forced convection and conduction within the wellbore and heat conduction within the surrounding rock matrix. A variety of well operations can be modeled, including injection, production, forward and reverse circulation with gas or liquid, gas or liquid drilling, and two-phase steam injection and production. Well completion with several different casing sizes and cement intervals can be modeled. The code allows variables such as flow rate to change with time, enabling a realistic treatment of well operations. This user manual describes the input required to properly operate the code. Ten sample problems are included which illustrate all the code options. Complete listings of the code and the output of each sample problem are provided.

  5. Methods and systems for advanced spaceport information management

    NASA Technical Reports Server (NTRS)

    Fussell, Ronald M. (Inventor); Ely, Donald W. (Inventor); Meier, Gary M. (Inventor); Halpin, Paul C. (Inventor); Meade, Phillip T. (Inventor); Jacobson, Craig A. (Inventor); Blackwell-Thompson, Charlie (Inventor)

    2007-01-01

    Advanced spaceport information management methods and systems are disclosed. In one embodiment, a method includes coupling a test system to the payload and transmitting one or more test signals that emulate an anticipated condition from the test system to the payload. One or more responsive signals are received from the payload into the test system and are analyzed to determine whether one or more of the responsive signals comprises an anomalous signal. At least one of the steps of transmitting, receiving, analyzing and determining includes transmitting at least one of the test signals and the responsive signals via a communications link from a payload processing facility to a remotely located facility. In one particular embodiment, the communications link is an Internet link from a payload processing facility to a remotely located facility (e.g. a launch facility, university, etc.).

  7. Using CONFIG for Simulation of Operation of Water Recovery Subsystems for Advanced Control Software Evaluation

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Flores, Luis; Fleming, Land; Throop, Daiv

    2002-01-01

    A hybrid discrete/continuous simulation tool, CONFIG, has been developed to support evaluation of the operability of life support systems. CONFIG simulates operations scenarios in which flows and pressures change continuously while system reconfigurations occur as discrete events. In simulations, intelligent control software can interact dynamically with hardware system models. CONFIG simulations have been used to evaluate control software and intelligent agents for automating life support system operations. A CONFIG model of an advanced biological water recovery system has been developed to interact with intelligent control software that is being used in a water system test at NASA Johnson Space Center.

  8. Design and Test of Advanced Thermal Simulators for an Alkali Metal-Cooled Reactor Simulator

    NASA Technical Reports Server (NTRS)

    Garber, Anne E.; Dickens, Ricky E.

    2011-01-01

    The Early Flight Fission Test Facility (EFF-TF) at NASA Marshall Space Flight Center (MSFC) has as one of its primary missions the development and testing of fission reactor simulators for space applications. A key component in these simulated reactors is the thermal simulator, designed to closely mimic the form and function of a nuclear fuel pin using electric heating. Continuing effort has been made to design simple, robust, inexpensive thermal simulators that closely match the steady-state and transient performance of a nuclear fuel pin. A series of these simulators has been designed, developed, fabricated and tested individually and in a number of simulated reactor systems at the EFF-TF. The purpose of the thermal simulators developed under the Fission Surface Power (FSP) task is to ensure that non-nuclear testing can be performed at sufficiently high fidelity to allow a cost-effective qualification and acceptance strategy to be used. Prototype thermal simulator design is founded on the baseline Fission Surface Power reactor design. Recent efforts have been focused on the design, fabrication and test of a prototype thermal simulator appropriate for use in the Technology Demonstration Unit (TDU). While designing the thermal simulators described in this paper, efforts were made to improve the axial power profile matching of the thermal simulators. Simultaneously, a search was conducted for graphite materials with higher resistivities than had been employed in the past. The combination of these two efforts resulted in the creation of thermal simulators with power capacities of 2300-3300 W per unit. Six of these elements were installed in a simulated core and tested in the alkali metal-cooled Fission Surface Power Primary Test Circuit (FSP-PTC) at a variety of liquid metal flow rates and temperatures. This paper documents the design of the thermal simulators, the test program, and the test results.

  9. Interactive methods for exploring particle simulation data

    SciTech Connect

    Co, Christopher S.; Friedman, Alex; Grote, David P.; Vay, Jean-Luc; Bethel, E. Wes; Joy, Kenneth I.

    2004-05-01

    In this work, we visualize high-dimensional particle simulation data using a suite of scatter plot-based visualizations coupled with interactive selection tools. We use traditional 2D and 3D projection scatter plots as well as a novel oriented disk rendering style to convey various information about the data. Interactive selection tools allow physicists to manually classify ''interesting'' sets of particles that are highlighted across multiple, linked views of the data. The power of our application is the ability to correspond new visual representations of the simulation data with traditional, well understood visualizations. This approach supports the interactive exploration of the high-dimensional space while promoting discovery of new particle behavior.

  10. Review of wind simulation methods for horizontal-axis wind turbine analysis

    NASA Astrophysics Data System (ADS)

    Powell, D. C.; Connell, J. R.

    1986-06-01

    This report reviews three reports on simulation of winds for use in wind turbine fatigue analysis. The three reports are presumed to represent the state of the art. The Purdue and Sandia methods simulate correlated wind data at two points rotating as on the rotor of a horizontal-axis wind turbine. The PNL method at present simulates only one point, which rotates either as on a horizontal-axis wind turbine blade or as on a vertical-axis wind turbine blade. The spectra of simulated data are presented from the Sandia and PNL models under comparable input conditions, and the energy calculated in the rotational spikes in the spectra by the two models is compared. Although agreement between the two methods is not impressive at this time, improvement of the Sandia and PNL methods is recommended as the best way to advance the state of the art. Physical deficiencies of the models are cited in the report and technical recommendations are made for improvement. The report also reviews two general methods for simulating single-point data, called the harmonic method and the white noise method. The harmonic method, which is the basis of all three specific methods reviewed, is recommended over the white noise method in simulating winds for wind turbine analysis.
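    The harmonic method recommended above can be sketched as a sum of cosines whose amplitudes follow a target one-sided power spectrum, with independent random phases. The spectrum below is an illustrative Kaimal-like form, not one of the reviewed models:

```python
import numpy as np

rng = np.random.default_rng(0)

def harmonic_wind(psd, freqs, dt, n_steps):
    """Simulate a single-point wind-speed fluctuation time series
    by the harmonic method: a sum of cosines whose amplitudes are
    set by a target one-sided power spectral density psd(f), with
    independent uniform random phases."""
    df = freqs[1] - freqs[0]
    amps = np.sqrt(2.0 * psd(freqs) * df)          # amplitude per frequency bin
    phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
    t = np.arange(n_steps) * dt
    # u(t) = sum_k a_k cos(2 pi f_k t + phi_k)
    return np.sum(amps[:, None] * np.cos(2.0 * np.pi * freqs[:, None] * t
                                         + phases[:, None]), axis=0)

# illustrative Kaimal-like spectrum (assumed parameters, not from the report)
psd = lambda f: 4.0 / (1.0 + 6.0 * f) ** (5.0 / 3.0)
freqs = np.linspace(0.01, 5.0, 500)
u = harmonic_wind(psd, freqs, dt=0.1, n_steps=600)
print(u.shape)  # (600,)
```

    The variance of the simulated series approaches the integral of the target spectrum over the simulated band; rotational-spike effects, as discussed above, require correlated multi-point extensions.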

  11. Advances in edge-diffraction modeling for virtual-acoustic simulations

    NASA Astrophysics Data System (ADS)

    Calamia, Paul Thomas

    In recent years there has been growing interest in modeling sound propagation in complex, three-dimensional (3D) virtual environments. With diverse applications for the military, the gaming industry, psychoacoustics researchers, architectural acousticians, and others, advances in computing power and 3D audio-rendering techniques have driven research and development aimed at closing the gap between the auralization and visualization of virtual spaces. To this end, this thesis focuses on improving the physical and perceptual realism of sound-field simulations in virtual environments through advances in edge-diffraction modeling. To model sound propagation in virtual environments, acoustical simulation tools commonly rely on geometrical-acoustics (GA) techniques that assume asymptotically high frequencies, large flat surfaces, and infinitely thin ray-like propagation paths. Such techniques can be augmented with diffraction modeling to compensate for the effect of surface size on the strength and directivity of a reflection, to allow for propagation around obstacles and into shadow zones, and to maintain soundfield continuity across reflection and shadow boundaries. Using a time-domain, line-integral formulation of the Biot-Tolstoy-Medwin (BTM) diffraction expression, this thesis explores various aspects of diffraction calculations for virtual-acoustic simulations. Specifically, we first analyze the periodic singularity of the BTM integrand and describe the relationship between the singularities and higher-order reflections within wedges with open angle less than 180°. Coupled with analytical approximations for the BTM expression, this analysis allows for accurate numerical computations and a continuous sound field in the vicinity of an arbitrary wedge geometry insonified by a point source. Second, we describe an edge-subdivision strategy that allows for fast diffraction calculations with low error relative to a numerically more accurate solution. Third, to address

  12. Methodological advances: using greenhouses to simulate climate change scenarios.

    PubMed

    Morales, F; Pascual, I; Sánchez-Díaz, M; Aguirreolea, J; Irigoyen, J J; Goicoechea, N; Antolín, M C; Oyarzun, M; Urdiain, A

    2014-09-01

    Human activities are increasing atmospheric CO2 concentration and temperature. Related to this global warming, periods of low water availability are also expected to increase. Thus, CO2 concentration, temperature and water availability are three of the main factors related to climate change that may influence crops and ecosystems. In this report, we describe the use of growth chamber-greenhouses (GCG) and temperature gradient greenhouses (TGG) to simulate climate change scenarios and to investigate possible plant responses. In the GCG, CO2 concentration, temperature and water availability are set to act simultaneously, enabling comparison of a current situation with a future one. Other characteristics of the GCG are a relatively large working space, fine control of the relative humidity, plant fertirrigation and the possibility of light supplementation, within the photosynthetically active radiation (PAR) region and/or with ultraviolet-B (UV-B) light. In the TGG, the three above-mentioned factors can act independently or in interaction, enabling more mechanistic studies aimed at elucidating the limiting factor(s) responsible for a given plant response. Examples of experiments using the GCG and TGG are reported, including some aimed at studying photosynthetic acclimation, a phenomenon that leads to decreased photosynthetic capacity under long-term exposure to elevated CO2. PMID:25113448

  13. State of the Art Assessment of Simulation in Advanced Materials Development

    NASA Technical Reports Server (NTRS)

    Wise, Kristopher E.

    2008-01-01

    Advances in both the underlying theory and in the practical implementation of molecular modeling techniques have increased their value in the advanced materials development process. The objective is to accelerate the maturation of emerging materials by tightly integrating modeling with the other critical processes: synthesis, processing, and characterization. The aims of this report are to summarize the state of the art of existing modeling tools and to highlight a number of areas in which additional development is required. In an effort to maintain focus and limit length, this survey is restricted to classical simulation techniques including molecular dynamics and Monte Carlo simulations.
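    A minimal example of one of the classical techniques surveyed, Metropolis Monte Carlo, here sampling a 1D harmonic potential rather than a materials model (toy parameters throughout):

```python
import math
import random

random.seed(1)

def metropolis_harmonic(beta, n_steps, step=1.0):
    """Minimal Metropolis Monte Carlo sampling of a coordinate x
    with energy U(x) = x^2 / 2 at inverse temperature beta."""
    x, samples = 0.0, []
    for _ in range(n_steps):
        x_new = x + random.uniform(-step, step)
        dU = 0.5 * (x_new ** 2 - x ** 2)
        # accept downhill moves always, uphill moves with Boltzmann weight
        if dU <= 0.0 or random.random() < math.exp(-beta * dU):
            x = x_new
        samples.append(x)
    return samples

s = metropolis_harmonic(beta=1.0, n_steps=50000)
mean_U = sum(0.5 * x * x for x in s) / len(s)
print(round(mean_U, 2))  # close to 0.5, the equipartition value 1/(2*beta)
```

    The same accept/reject kernel, with a molecular force field in place of the quadratic energy, underlies the materials-scale Monte Carlo simulations discussed in the report.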

  14. Preliminary simulation of an advanced, hingless rotor XV-15 tilt-rotor aircraft

    NASA Technical Reports Server (NTRS)

    Mcveigh, M. A.

    1976-01-01

    The feasibility of the tilt-rotor concept was verified through investigation of the performance, stability and handling qualities of the XV-15 tilt rotor. The rotors were replaced by advanced-technology fiberglass/composite hingless rotors of larger diameter, combined with an advanced integrated fly-by-wire control system. A parametric simulation model of the HRXV-15 was developed, and the model was used to define acceptable preliminary ranges of primary and secondary control schedules as functions of the flight parameters, to evaluate performance, flying qualities and structural loads, and to support a simulated flight test evaluation of the aircraft by a Boeing-Vertol pilot.

  15. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonate vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
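    The probabilistic alternative to safety factors can be illustrated with a crude Monte Carlo estimate of failure probability for a simple resistance-minus-load limit state (illustrative distributions, not from the report):

```python
import random

random.seed(0)

def failure_probability(n_samples=200_000):
    """Crude Monte Carlo estimate of P(failure) for the limit state
    g = R - S, with resistance R ~ N(10, 1) and load S ~ N(7, 1);
    failure occurs when g < 0."""
    fails = sum(random.gauss(10.0, 1.0) - random.gauss(7.0, 1.0) < 0.0
                for _ in range(n_samples))
    return fails / n_samples

# exact answer for comparison: g ~ N(3, sqrt(2)),
# so P(g < 0) = Phi(-3/sqrt(2)) ~ 0.0169
pf = failure_probability()
print(round(pf, 3))
```

    Crude sampling like this becomes prohibitively expensive for the very small failure probabilities typical of structures, which motivates the advanced methods (e.g. most-probable-point approximations) surveyed in the report.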

  16. Advanced superposition methods for high speed turbopump vibration analysis

    NASA Technical Reports Server (NTRS)

    Nielson, C. E.; Campany, A. D.

    1981-01-01

    The small, high pressure Mark 48 liquid hydrogen turbopump was analyzed and dynamically tested to determine the cause of high speed vibration at an operating speed of 92,400 rpm. This approaches the design point operating speed of 95,000 rpm. The initial dynamic analysis in the design stage and subsequent further analysis of the rotor only dynamics failed to predict the vibration characteristics found during testing. An advanced procedure for dynamics analysis was used in this investigation. The procedure involves developing accurate dynamic models of the rotor assembly and casing assembly by finite element analysis. The dynamically instrumented assemblies are independently rap tested to verify the analytical models. The verified models are then combined by modal superposition techniques to develop a completed turbopump model where dynamic characteristics are determined. The results of the dynamic testing and analysis obtained are presented and methods of moving the high speed vibration characteristics to speeds above the operating range are recommended. Recommendations for use of these advanced dynamic analysis procedures during initial design phases are given.
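    The component models being combined ultimately reduce to eigenproblems on mass and stiffness matrices. A toy two-degree-of-freedom sketch, with illustrative values rather than the Mark 48 data, shows the basic natural-frequency calculation:

```python
import numpy as np

def natural_frequencies(M, K):
    """Undamped natural frequencies (Hz) of a structure from its
    mass matrix M and stiffness matrix K, via the generalized
    eigenproblem K x = w^2 M x."""
    w2 = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sort(np.sqrt(np.real(w2))) / (2.0 * np.pi)

# toy rotor mass coupled to a casing mass by springs (assumed values)
M = np.diag([1.0, 2.0])                     # kg
K = np.array([[3.0e4, -1.0e4],
              [-1.0e4, 2.0e4]])             # N/m
freqs = natural_frequencies(M, K)
print(freqs)
```

    In the modal superposition procedure described above, rap-test-verified modes of the rotor and casing models would replace these raw matrices before the combined system's characteristics are extracted.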

  17. PACO: PArticle COunting Method To Enforce Concentrations in Dynamic Simulations.

    PubMed

    Berti, Claudio; Furini, Simone; Gillespie, Dirk

    2016-03-01

    We present PACO, a computationally efficient method for concentration boundary conditions in nonequilibrium particle simulations. Because it requires only particle counting, its computational effort is significantly smaller than other methods. PACO enables Brownian dynamics simulations of micromolar electrolytes (3 orders of magnitude lower than previously simulated). PACO for Brownian dynamics is integrated in the BROWNIES package (www.phys.rush.edu/BROWNIES). We also introduce a molecular dynamics PACO implementation that allows for very accurate control of concentration gradients.
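    The particle-counting idea can be sketched in one dimension: count the particles in a boundary buffer region and insert or delete particles until the count matches the target concentration. This is a schematic of the general approach, not the BROWNIES implementation:

```python
import random

random.seed(2)

def enforce_concentration(positions, target_n, buffer_lo, buffer_hi):
    """One concentration-control step: count particles inside the
    boundary buffer [buffer_lo, buffer_hi) and insert or delete
    particles there until the count equals target_n, mimicking
    contact with a bath of fixed concentration."""
    in_buf = [p for p in positions if buffer_lo <= p < buffer_hi]
    out_buf = [p for p in positions if not (buffer_lo <= p < buffer_hi)]
    while len(in_buf) < target_n:                       # too dilute: insert
        in_buf.append(random.uniform(buffer_lo, buffer_hi))
    while len(in_buf) > target_n:                       # too concentrated: delete
        in_buf.pop(random.randrange(len(in_buf)))
    return out_buf + in_buf

parts = [random.uniform(0.0, 10.0) for _ in range(100)]
parts = enforce_concentration(parts, target_n=5, buffer_lo=0.0, buffer_hi=1.0)
count = sum(1 for p in parts if 0.0 <= p < 1.0)
print(count)  # 5
```

    Because each step only counts and relabels particles, the cost is linear in particle number, which is what makes micromolar concentrations tractable.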

  18. Exploration of Advanced Probabilistic and Stochastic Design Methods

    NASA Technical Reports Server (NTRS)

    Mavris, Dimitri N.

    2003-01-01

    The primary objective of the three-year research effort was to explore advanced, non-deterministic aerospace system design methods that may have relevance to designers and analysts. The research pursued emerging areas in design methodology and leveraged current fundamental research in the areas of design decision-making, probabilistic modeling, and optimization. The specific focus of the three-year investigation was oriented toward methods to identify and analyze emerging aircraft technologies in a consistent and complete manner, and to explore means to make optimal decisions based on this knowledge in a probabilistic environment. The research efforts were classified into two main areas. First, Task A of the grant had the objective of conducting research into the relative merits of possible approaches that account for both multiple criteria and uncertainty in design decision-making. In particular, in the final year of research, the focus was on comparing and contrasting the three methods researched: the Joint Probabilistic Decision-Making (JPDM) technique, Physical Programming, and Dempster-Shafer (D-S) theory. The next element of the research, as contained in Task B, focused upon exploration of the Technology Identification, Evaluation, and Selection (TIES) methodology developed at ASDL, especially with regard to identification of research needs in the baseline method through implementation exercises. The end result of Task B was the documentation of the evolution of the method with time and a technology transfer to the sponsor regarding the method, such that an initial capability for execution could be obtained by the sponsor. Specifically, the results of year 3 efforts were the creation of a detailed tutorial for implementing the TIES method. Within the tutorial package, templates and detailed examples were created for learning and understanding the details of each step. For both research tasks, sample files and

  19. Advance in methods studying the pharmacokinetics of polyphenols.

    PubMed

    Santos, Ana C; Costa, G; Veiga, F; Figueiredo, I V; Batista, M T; Ribeiro, António J

    2014-01-01

    Significant advances have been achieved during the past decade concerning the metabolism of polyphenol compounds in vitro, but scarce data has been presented about what really happens in vivo. Many studies on polyphenols to date have focused on the bioactivity of one specific molecule in aglycone form, often at supraphysiological doses, whereas foods contain complex, often poorly characterized mixtures with multiple additive or interfering activities. Whereas most studies up to the middle-late 1990s measured total aglycones in plasma and urine, after chemical or enzymatic deconjugation, or both, several recent works now report the polyphenol conjugate composition of plasma, urine, feces and/or tissues, after the administration of pure polyphenols or polyphenol-rich matrices. HPLC methods with electrochemical, mass spectrometric and fluorescence detection have adequate sensitivity. LC/UV-Vis methods have also been widely reported, but they are much less sensitive. Compared with electro-chemical and fluorescence detection, MS can quantify analytes without chromatographic separation, which leads to high throughput, presenting itself as the best choice to date. Regarding the experimental model to monitor the bioavailability of phenolic compounds, most published studies are based on human and animal models, with the majority using rodents, primates and recently the nematode Caenorhabditis elegans. This review focuses on the fundamentals of pharmacokinetic methods from the last 15 years and how the results are evaluated and validated. The types of analytical methods, animal models and biological matrices were used to better elucidate pharmacokinetics of polyphenols.

  20. Advanced numerical methods in mesh generation and mesh adaptation

    SciTech Connect

    Lipnikov, Konstantine; Danilov, A; Vassilevski, Y; Agonzal, A

    2010-01-01

    Numerical solution of partial differential equations requires appropriate meshes, efficient solvers and robust and reliable error estimates. Generation of high-quality meshes for complex engineering models is a non-trivial task. This task is made more difficult when the mesh has to be adapted to a problem solution. This article is focused on a synergistic approach to mesh generation and mesh adaptation, where the best properties of various mesh generation methods are combined to efficiently build simplicial meshes. First, the advancing front technique (AFT) is combined with the incremental Delaunay triangulation (DT) to build an initial mesh. Second, the metric-based mesh adaptation (MBA) method is employed to improve the quality of the generated mesh and/or to adapt it to a problem solution. We demonstrate with numerical experiments that a combination of all three methods is required for robust meshing of complex engineering models. The key to successful mesh generation is the high quality of the triangles in the initial front. We use a black-box technique to improve surface meshes exported from an unattainable CAD system. The initial surface mesh is refined into a shape-regular triangulation which approximates the boundary with the same accuracy as the CAD mesh. The DT method adds robustness to the AFT. The resulting mesh is topologically correct but may contain a few slivers. The MBA uses seven local operations to modify the mesh topology. It significantly improves the mesh quality. The MBA method is also used to adapt the mesh to a problem solution to minimize the computational resources required for solving the problem. The MBA has a solid theoretical background. In the first two experiments, we consider the convection-diffusion and elasticity problems. We demonstrate the optimal reduction rate of the discretization error on a sequence of adaptive strongly anisotropic meshes. The key element of the MBA method is construction of a tensor metric from hierarchical edge
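The core idea behind metric-based adaptation is to measure edge lengths in a tensor metric rather than in Euclidean space, so that anisotropic target sizes fall out of a single length criterion. A minimal sketch, where the diagonal metric and the target sizes are illustrative assumptions, not values from the paper:

```python
import numpy as np

def metric_edge_length(p, q, M):
    """Length of the edge pq measured in the SPD metric M: sqrt(e^T M e).
    An adaptation loop typically splits edges much longer than 1 in the
    metric and collapses edges much shorter than 1."""
    e = np.asarray(q, dtype=float) - np.asarray(p, dtype=float)
    return float(np.sqrt(e @ M @ e))

# Anisotropic metric requesting target edge length 0.1 along x and 1.0
# along y; the metric entries are 1/h_i**2 for target sizes h_i.
M = np.diag([1 / 0.1**2, 1 / 1.0**2])
# A unit edge along x measures 10 metric lengths (should be refined),
# while a unit edge along y measures 1 (already at the target size).
```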

  1. Review: Advances in delta-subsidence research using satellite methods

    NASA Astrophysics Data System (ADS)

    Higgins, Stephanie A.

    2016-05-01

    Most of the world's major river deltas are sinking relative to local sea level. The effects of subsidence can include aquifer salinization, infrastructure damage, increased vulnerability to flooding and storm surges, and permanent inundation of low-lying land. Consequently, determining the relative importance of natural vs. anthropogenic pressures in driving delta subsidence is a topic of ongoing research. This article presents a review of knowledge with respect to delta surface-elevation loss. The field is rapidly advancing due to applications of space-based techniques: InSAR (interferometric synthetic aperture radar), GPS (global positioning system), and satellite ocean altimetry. These techniques have shed new light on a variety of subsidence processes, including tectonics, isostatic adjustment, and the spatial and temporal variability of sediment compaction. They also confirm that subsidence associated with fluid extraction can outpace sea-level rise by up to two orders of magnitude, resulting in effective sea-level rise that is one-hundred times faster than the global average rate. In coming years, space-based and airborne instruments will be critical in providing near-real-time monitoring to facilitate management decisions in sinking deltas. However, ground-based observations continue to be necessary for generating complete measurements of surface-elevation change. Numerical modeling should seek to simulate couplings between subsidence processes for greater predictive power.

  2. Technology, Pedagogy, and Epistemology: Opportunities and Challenges of Using Computer Modeling and Simulation Tools in Elementary Science Methods

    ERIC Educational Resources Information Center

    Schwarz, Christina V.; Meyer, Jason; Sharma, Ajay

    2007-01-01

    This study infused computer modeling and simulation tools in a 1-semester undergraduate elementary science methods course to advance preservice teachers' understandings of computer software use in science teaching and to help them learn important aspects of pedagogy and epistemology. Preservice teachers used computer modeling and simulation tools…

  3. Advanced Motion Compensation Methods for Intravital Optical Microscopy.

    PubMed

    Vinegoni, Claudio; Lee, Sungon; Feruglio, Paolo Fumene; Weissleder, Ralph

    2014-03-01

    Intravital microscopy has emerged in the recent decade as an indispensable imaging modality for the study of the micro-dynamics of biological processes in live animals. Technical advancements in imaging techniques and hardware components, combined with the development of novel targeted probes and new mouse models, have enabled us to address long-standing questions in several areas of biology, such as oncology, cell biology, immunology and neuroscience. As instrument resolution has increased, physiological motion has become a major obstacle that prevents imaging live animals at resolutions analogous to those obtained in vitro. Motion compensation techniques aim at reducing this gap and can effectively increase the in vivo resolution. This paper provides a technical review of some of the latest developments in motion compensation methods, providing organ-specific solutions.

  4. Advanced Motion Compensation Methods for Intravital Optical Microscopy

    PubMed Central

    Vinegoni, Claudio; Lee, Sungon; Feruglio, Paolo Fumene; Weissleder, Ralph

    2013-01-01

    Intravital microscopy has emerged in the recent decade as an indispensable imaging modality for the study of the micro-dynamics of biological processes in live animals. Technical advancements in imaging techniques and hardware components, combined with the development of novel targeted probes and new mouse models, have enabled us to address long-standing questions in several areas of biology, such as oncology, cell biology, immunology and neuroscience. As instrument resolution has increased, physiological motion has become a major obstacle that prevents imaging live animals at resolutions analogous to those obtained in vitro. Motion compensation techniques aim at reducing this gap and can effectively increase the in vivo resolution. This paper provides a technical review of some of the latest developments in motion compensation methods, providing organ-specific solutions. PMID:24273405

  5. Interval sampling methods and measurement error: a computer simulation.

    PubMed

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
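The three interval sampling methods described above can be sketched directly: events are scattered over a timeline, and each method scores an interval by a different rule. This is a simplified illustration with fixed-duration events; the parameters are assumptions, not those of the published simulation:

```python
import random

def simulate(obs_len=600, interval=10, n_events=20, event_dur=5, seed=1):
    """Compare interval-sampling estimates of cumulative event duration
    against the true fraction of time occupied by events."""
    rng = random.Random(seed)
    # Mark occupied seconds for randomly placed fixed-duration events.
    occupied = [False] * obs_len
    for _ in range(n_events):
        start = rng.randrange(obs_len - event_dur)
        for t in range(start, start + event_dur):
            occupied[t] = True
    true_frac = sum(occupied) / obs_len
    mts = pir = wir = 0
    n_int = obs_len // interval
    for k in range(n_int):
        chunk = occupied[k * interval:(k + 1) * interval]
        mts += chunk[-1]       # momentary time sampling: look at interval end
        pir += any(chunk)      # partial-interval: any occurrence counts
        wir += all(chunk)      # whole-interval: event must span the interval
    return true_frac, mts / n_int, pir / n_int, wir / n_int

true_frac, mts, pir, wir = simulate()
```

The sketch reproduces the well-known biases the study quantifies: partial-interval recording can only overestimate the true fraction, whole-interval recording can only underestimate it.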

  6. The frontal method in hydrodynamics simulations

    USGS Publications Warehouse

    Walters, R.A.

    1980-01-01

    The frontal solution method has proven to be an effective means of solving the matrix equations resulting from the application of the finite element method to a variety of problems. In this study, several versions of the frontal method were compared in efficiency for several hydrodynamics problems. Three basic modifications were shown to be of value: (1) elimination of equations with boundary conditions beforehand; (2) modification of the pivoting procedures to allow dynamic management of the equation size; and (3) storage of the eliminated equations in a vector. These modifications are sufficiently general to be applied to other classes of problems. © 1980.

  7. Advanced electrochemical methods for characterizing the performance of organic coatings

    NASA Astrophysics Data System (ADS)

    Upadhyay, Vinod

    The focus of this thesis is the use of advanced electrochemical techniques, such as electrochemical impedance spectroscopy (EIS), the electrochemical noise method (ENM) and coulometry, as tools to study and extract information about coating systems. The thesis explored three areas of research; in all three, advanced electrochemical techniques were used to extract information about and understand the coating system. The first area used EIS and coulometry to extract information with the AC-DC-AC method. It was examined whether the total charge passing through the coating during the DC polarization step of AC-DC-AC determines coating failure. An almost constant total amount of charge transfer was required by the coating before it failed, independent of the applied DC polarization. The second area investigated whether sensors embedded in coatings are sensitive enough to monitor changes in environmental conditions and to locate defects in coatings by electrochemical means. The influence of the topcoat on embedded sensor performance was also studied. It was observed that embedded sensors can distinguish varying environmental conditions and locate defects in coatings. The topcoat could influence measurements made using embedded sensors, so the choice of topcoat can be very important to their successful use. The third area systematically examined polymer structure-coating property relationships using electrochemical impedance spectroscopy. It was observed that polymer modifications could alter the electrochemical properties of the coating films. Moreover, cyclic wet-dry capacitance measurements using an aqueous electrolyte and an ionic liquid allowed the stability of organic polymer films to be ranked.

  8. Numerical Evaluation of Fluid Mixing Phenomena in Boiling Water Reactor Using Advanced Interface Tracking Method

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Takase, Kazuyuki

    Thermal-hydraulic design of the current boiling water reactor (BWR) is performed with subchannel analysis codes that incorporate correlations based on empirical results, including actual-size tests. For the Innovative Water Reactor for Flexible Fuel Cycle (FLWR) core, an actual-size test of an embodiment of its design would likewise be required to confirm or modify such correlations. In this situation, development of a method that enables the thermal-hydraulic design of nuclear reactors without these actual-size tests is desired, because these tests take a long time and entail great cost. For this reason, we developed an advanced thermal-hydraulic design method for FLWRs using innovative two-phase flow simulation technology. In this study, a detailed two-phase flow simulation code using an advanced interface tracking method, TPFIT, was developed to calculate the detailed information of the two-phase flow. In this paper, we first verify the TPFIT code by comparing it with existing two-channel air-water mixing experimental results. Second, the TPFIT code is applied to simulation of steam-water two-phase flow in a model of two subchannels of current BWR and FLWR rod bundles. Fluid mixing was observed at the gap between the subchannels. The existing two-phase flow correlation for fluid mixing is evaluated using detailed numerical simulation data. The data indicate that the pressure difference between fluid channels is responsible for the fluid mixing, and thus the effects of the time-averaged pressure difference and its fluctuations must be incorporated in the two-phase flow correlation for fluid mixing. When the inlet quality ratio of the subchannels is relatively large, the predictive accuracy of the existing two-phase flow correlations for fluid mixing is relatively low.

  9. [Research advances in simulating land water-carbon coupling].

    PubMed

    Liu, Ning; Sun, Peng-Sen; Liu, Shi-Rong

    2012-11-01

    The increasing demand for adaptive management of land, forest, and water resources against the background of global change and the water resources crisis has promoted the comprehensive study of coupled ecosystem water and carbon cycles and their restrictive relations. Constructing water-carbon coupling models and examining the ecosystem water-carbon balance and its interactive response mechanisms under climate change at multiple spatiotemporal scales is nowadays a major concern. After reviewing the coupling relationships of water and carbon at various scales, this paper explores the implications and estimation methods of the key processes and related parameters of water-carbon coupling, the construction of large-scale evapotranspiration models based on remote sensing (RS), and the importance of such models in water-carbon coupling research. Applications of assimilated multivariate data in water-carbon coupling research under future climate change scenarios are also discussed.

  10. Iron Resources and Oceanic Nutrients: Advancement of Global Environment Simulations

    NASA Astrophysics Data System (ADS)

    Debaar, H. J.

    2002-12-01

    simulated. An existing plankton ecosystem model already predicts well the limitation by four nutrients (N, P, Si, Fe) of two algal groups (diatoms and nanoplankton), including export and CO2 air/sea exchange. This is being expanded with 3 other groups of algae and DMS(P) pathways. Next, this extended ecosystem model is being simplified while maintaining reliable output for export and CO2/DMS gas exchange. This unit will then be put into two existing OBCMs. Inputs of Fe from above and below into the oceans have been modeled. Moreover, a simple global Fe cycling model has been verified against field data and insights. Two different OBCMs with the same upper-ocean ecosystem/DMS unit and Fe cycling will be verified against pre-industrial and present conditions. Next, climate change scenarios, notably changes in Fe inputs, will be run, with special attention to climatic feedbacks (warming) on the oceanic cycles and fluxes.

  11. Development of semiclassical molecular dynamics simulation method.

    PubMed

    Nakamura, Hiroki; Nanbu, Shinkoh; Teranishi, Yoshiaki; Ohta, Ayumi

    2016-04-28

    Various quantum mechanical effects such as nonadiabatic transitions, quantum mechanical tunneling and coherence play crucial roles in a variety of chemical and biological systems. In this paper, we propose a method to incorporate tunneling effects into the molecular dynamics (MD) method, which is purely based on classical mechanics. Caustics, which define the boundary between classically allowed and forbidden regions, are detected along classical trajectories, and the optimal tunneling path with minimum action is determined by starting from each appropriate caustic. The real phase associated with tunneling can also be estimated. A numerical demonstration using the simple collinear chemical reaction O + HCl → OH + Cl is presented to help the reader comprehend the proposed method. Generalization to the on-the-fly ab initio version is rather straightforward. By treating the nonadiabatic transitions at conical intersections with the Zhu-Nakamura theory, new semiclassical MD methods can be developed. PMID:27067383

  12. Development of semiclassical molecular dynamics simulation method.

    PubMed

    Nakamura, Hiroki; Nanbu, Shinkoh; Teranishi, Yoshiaki; Ohta, Ayumi

    2016-04-28

    Various quantum mechanical effects such as nonadiabatic transitions, quantum mechanical tunneling and coherence play crucial roles in a variety of chemical and biological systems. In this paper, we propose a method to incorporate tunneling effects into the molecular dynamics (MD) method, which is purely based on classical mechanics. Caustics, which define the boundary between classically allowed and forbidden regions, are detected along classical trajectories, and the optimal tunneling path with minimum action is determined by starting from each appropriate caustic. The real phase associated with tunneling can also be estimated. A numerical demonstration using the simple collinear chemical reaction O + HCl → OH + Cl is presented to help the reader comprehend the proposed method. Generalization to the on-the-fly ab initio version is rather straightforward. By treating the nonadiabatic transitions at conical intersections with the Zhu-Nakamura theory, new semiclassical MD methods can be developed.
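The tunneling-path idea rests on the semiclassical action accumulated in the classically forbidden region between turning points (the caustics). In one dimension this reduces to the familiar WKB estimate, P ≈ exp(−(2/ħ)∫√(2m(V−E)) dx). A minimal numerical sketch in atomic units, using an illustrative parabolic barrier rather than the O + HCl surface:

```python
import math

def wkb_transmission(V, E, x_lo, x_hi, m=1.0, hbar=1.0, n=10000):
    """WKB tunneling probability through a 1-D barrier:
    P = exp(-(2/hbar) * integral of sqrt(2m(V(x)-E)) dx over the
    classically forbidden region), evaluated by the trapezoid rule."""
    dx = (x_hi - x_lo) / n
    action = 0.0
    for i in range(n + 1):
        x = x_lo + i * dx
        dV = V(x) - E
        if dV > 0:                            # forbidden region only
            w = 0.5 if i in (0, n) else 1.0   # trapezoid weights
            action += w * math.sqrt(2.0 * m * dV) * dx
    return math.exp(-2.0 * action / hbar)

# Parabolic barrier with assumed illustrative parameters (atomic units).
V0, a = 0.02, 1.0
V = lambda x: max(0.0, V0 * (1 - (x / a) ** 2))
p_low = wkb_transmission(V, E=0.005, x_lo=-a, x_hi=a)
p_high = wkb_transmission(V, E=0.015, x_lo=-a, x_hi=a)
```

As expected, the transmission probability grows as the energy approaches the barrier top, since the forbidden region and the action both shrink.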

  13. Simulation reduction using the Taguchi method

    NASA Technical Reports Server (NTRS)

    Mistree, Farrokh; Lautenschlager, Ume; Erikstad, Stein Owe; Allen, Janet K.

    1993-01-01

    A large amount of engineering effort is consumed in conducting experiments to obtain information needed for making design decisions. Efficiency in generating such information is the key to meeting market windows, keeping development and manufacturing costs low, and having high-quality products. The principal focus of this project is to develop and implement applications of Taguchi's quality engineering techniques. In particular, we show how these techniques are applied to reduce the number of experiments for trajectory simulation of the LifeSat space vehicle. Orthogonal arrays are used to study many parameters simultaneously with a minimum of time and resources. Taguchi's signal-to-noise ratio is employed to measure quality. A compromise Decision Support Problem and Robust Design are applied to demonstrate how quality is designed into a product in the early stages of design.
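The two Taguchi ingredients mentioned above, orthogonal arrays and the signal-to-noise ratio, can be sketched in a few lines. The response data here are hypothetical numbers for illustration, not LifeSat results:

```python
import math

# L4(2^3) orthogonal array: 3 two-level factors studied in only 4 runs.
L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]

def sn_larger_is_better(ys):
    """Taguchi signal-to-noise ratio for a larger-is-better response:
    S/N = -10 * log10(mean(1 / y^2))."""
    return -10.0 * math.log10(sum(1.0 / y**2 for y in ys) / len(ys))

# Hypothetical repeated responses for each of the 4 runs.
responses = [[20.1, 19.8], [22.3, 22.9], [18.7, 19.2], [24.5, 25.1]]
sn = [sn_larger_is_better(ys) for ys in responses]

# Main effect of factor A: mean S/N at level 2 minus mean at level 1.
def level_mean(factor, level):
    vals = [sn[i] for i, run in enumerate(L4) if run[factor] == level]
    return sum(vals) / len(vals)

effect_A = level_mean(0, 2) - level_mean(0, 1)
```

Because the array is orthogonal, each factor's main effect is estimated from the same four runs, which is how the full experiment count is reduced.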

  14. Numerical modeling of spray combustion with an advanced VOF method

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Shang, Huan-Min; Shih, Ming-Hsin; Liaw, Paul

    1995-01-01

    This paper summarizes the technical development and validation of a multiphase computational fluid dynamics (CFD) numerical method using the volume-of-fluid (VOF) model and a Lagrangian tracking model which can be employed to analyze general multiphase flow problems with free surface mechanism. The gas-liquid interface mass, momentum and energy conservation relationships are modeled by continuum surface mechanisms. A new solution method is developed such that the present VOF model can be applied for all-speed flow regimes. The objectives of the present study are to develop and verify the fractional volume-of-fluid cell partitioning approach into a predictor-corrector algorithm and to demonstrate the effectiveness of the present approach by simulating benchmark problems including laminar impinging jets, shear coaxial jet atomization and shear coaxial spray combustion flows.
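Stripped to one dimension, the heart of any VOF scheme is conservative transport of the volume fraction. A first-order upwind sketch follows; it is illustrative only, as the work summarized above uses a far more elaborate interface treatment and an all-speed solution method:

```python
def advect_vof(f, u, dt, dx, steps):
    """First-order upwind advection of a volume-fraction field f at
    constant positive velocity u on a periodic 1-D grid.
    Requires CFL number c = u*dt/dx in (0, 1] for stability."""
    c = u * dt / dx
    assert 0.0 < c <= 1.0, "CFL condition violated"
    f = list(f)
    for _ in range(steps):
        # New value is a convex combination of the cell and its upwind
        # neighbour, so f stays in [0, 1] and total volume is conserved.
        f = [f[i] - c * (f[i] - f[i - 1]) for i in range(len(f))]
    return f

# A slug of liquid (f = 1) surrounded by gas (f = 0).
n = 100
f0 = [1.0 if 20 <= i < 40 else 0.0 for i in range(n)]
f1 = advect_vof(f0, u=1.0, dt=0.005, dx=0.01, steps=40)
```

With c = 0.5 the slug translates half a cell per step; the total liquid volume is preserved exactly, while the sharp interface diffuses, which is why practical VOF codes add geometric interface reconstruction.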

  15. The role of experience and advanced training on performance in a motorcycle simulator.

    PubMed

    Crundall, David; Stedmon, Alex W; Crundall, Elizabeth; Saikayasit, Rossukorn

    2014-12-01

    Motorcyclists are over-represented in collision statistics. While many collisions may be the direct fault of another road user, a considerable number of fatalities and injuries are due to the actions of the rider. While increased riding experience may improve skills, advanced training courses may be required to evoke the safest riding behaviours. The current research assessed the impact of experience and advanced training on rider behaviour using a motorcycle simulator. Novice riders, experienced riders and riders with advanced training traversed a virtual world through varying speed limits and roadways of different curvature. Speed and lane position were monitored. In a comparison of 60 mph and 40 mph zones, advanced riders rode more slowly in the 40 mph zones, and had greater variation in lane position than the other two groups. In the 60 mph zones, both advanced and experienced riders had greater lane variation than novices. Across the whole ride, novices tended to position themselves closer to the kerb. In a second analysis across four classifications of curvature (straight, slight, medium, tight) advanced and experienced riders varied their lateral position more so than novices, though advanced riders had greater variation in lane position than even experienced riders in some conditions. The results suggest that experience and advanced training lead to changes in behaviour compared to novice riders which can be interpreted as having a potentially positive impact on road safety.

  16. An innovative exercise method to simulate orbital EVA work - Applications to PLSS automatic controls

    NASA Technical Reports Server (NTRS)

    Lantz, Renee; Vykukal, H.; Webbon, Bruce

    1987-01-01

    An exercise method has been proposed which may satisfy the current need for a laboratory simulation representative of muscular, cardiovascular, respiratory, and thermoregulatory responses to work during orbital extravehicular activity (EVA). The simulation incorporates arm crank ergometry with a unique body support mechanism that allows all body position stabilization forces to be reacted at the feet. By instituting this exercise method in laboratory experimentation, an advanced portable life support system (PLSS) thermoregulatory control system can be designed to more accurately reflect the specific work requirements of orbital EVA.

  17. ADVANCED UTILITY SIMULATION MODEL, DESCRIPTION OF THE NATIONAL LOOP (VERSION 3.0)

    EPA Science Inventory

    The report is one of 11 in a series describing the initial development of the Advanced Utility Simulation Model (AUSM) by the Universities Research Group on Energy (URGE) and its continued development by the Science Applications International Corporation (SAIC) research team. The...

  18. Constraint methods that accelerate free-energy simulations of biomolecules.

    PubMed

    Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions. PMID:26723628

  19. Constraint methods that accelerate free-energy simulations of biomolecules

    NASA Astrophysics Data System (ADS)

    Perez, Alberto; MacCallum, Justin L.; Coutsias, Evangelos A.; Dill, Ken A.

    2015-12-01

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.

  20. Constraint methods that accelerate free-energy simulations of biomolecules

    SciTech Connect

    Perez, Alberto; MacCallum, Justin L.; Coutsias, Evangelos A.; Dill, Ken A.

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann’s law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.
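A common concrete form of the spring-like restraints discussed above is the flat-bottom harmonic potential, which leaves a tolerance window unpenalized precisely to absorb the noise and error in the external knowledge. A minimal sketch; the units and parameter values are assumed for illustration:

```python
def flat_bottom_restraint(r, r0, tol, k):
    """Flat-bottom harmonic restraint energy: zero penalty while the
    coordinate r is within r0 +/- tol, and a harmonic (spring-like)
    penalty 0.5*k*excess^2 outside that window."""
    excess = abs(r - r0) - tol
    return 0.5 * k * excess**2 if excess > 0.0 else 0.0

# A distance restraint of 5 +/- 1 Angstrom with k = 10 kcal/mol/A^2
# (assumed units). Inside the window the force vanishes, so correct
# structures are not distorted by noisy data.
e_inside = flat_bottom_restraint(5.5, r0=5.0, tol=1.0, k=10.0)
e_outside = flat_bottom_restraint(7.0, r0=5.0, tol=1.0, k=10.0)
```

The energy (and hence the biasing force) only switches on once the restraint is genuinely violated, which is what lets uncertain data be imposed without overwhelming the physical force field.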

  1. Using Process/CFD Co-Simulation for the Design and Analysis of Advanced Energy Systems

    SciTech Connect

    Zitney, S.E.

    2007-04-01

    In this presentation we describe the major features and capabilities of NETL’s Advanced Process Engineering Co-Simulator (APECS) and highlight its application to advanced energy systems, ranging from small fuel cell systems to commercial-scale power plants including the coal-fired, gasification-based electricity and hydrogen plant in the DOE’s $1 billion, 10-year FutureGen demonstration project. APECS is an integrated software suite which allows the process and energy industries to optimize overall plant performance with respect to complex thermal and fluid flow phenomena by combining process simulation (e.g., Aspen Plus®) with high-fidelity equipment simulations based on computational fluid dynamics (CFD) models (e.g., FLUENT®).

  2. Development of a VOR/DME model for an advanced concepts simulator

    NASA Technical Reports Server (NTRS)

    Steinmetz, G. G.; Bowles, R. L.

    1984-01-01

    The report presents the definition of a VOR/DME airborne and ground systems simulation model. This description was drafted in response to a need in the creation of an advanced concepts simulation in which flight station designs for the 1980s era can be postulated and examined. The simulation model described herein provides a reasonable representation of VOR/DME stations in the continental United States, including area coverage by type and noise errors. The detail in which the model has been cast provides the interested researcher with a moderate-fidelity simulation tool for conducting research and evaluation of navigation algorithms. Assumptions made within the development are listed and place certain responsibilities (data bases, communication with other simulation modules, uniform round earth, etc.) upon the researcher.
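At its simplest, a station model of the kind described reduces to a true bearing plus a noise error, gated by area coverage. The sketch below is an illustrative assumption, not the report's model: the coverage radius, noise level, and flat-earth geometry are all made up for the example:

```python
import math
import random

def vor_bearing(ac_xy, station_xy, coverage_nm, noise_deg, rng):
    """Simulated VOR radial: the true bearing from station to aircraft
    plus Gaussian noise, or None when the aircraft is outside the
    station's area coverage (hypothetical simplified model)."""
    dx = ac_xy[0] - station_xy[0]
    dy = ac_xy[1] - station_xy[1]
    if math.hypot(dx, dy) > coverage_nm:
        return None                                  # out of coverage
    true_brg = math.degrees(math.atan2(dx, dy)) % 360.0   # 0 deg = north
    return (true_brg + rng.gauss(0.0, noise_deg)) % 360.0

rng = random.Random(42)
brg = vor_bearing((10.0, 10.0), (0.0, 0.0),
                  coverage_nm=130.0, noise_deg=1.4, rng=rng)
far = vor_bearing((200.0, 0.0), (0.0, 0.0),
                  coverage_nm=130.0, noise_deg=1.4, rng=rng)
```

A navigation-algorithm researcher would consume such noisy bearings exactly as the report intends: the simulation supplies the errors, and the filter under evaluation has to cope with them.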

  3. Methods for simulation-based analysis of fluid-structure interaction.

    SciTech Connect

    Barone, Matthew Franklin; Payne, Jeffrey L.

    2005-10-01

    Methods for analysis of fluid-structure interaction using high fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations points to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.

  4. Kinetic Method for Hydrogen-Deuterium-Tritium Mixture Distillation Simulation

    SciTech Connect

    Sazonov, A.B.; Kagramanov, Z.G.; Magomedbekov, E.P.

    2005-07-15

    Simulation of hydrogen distillation plants requires mathematical procedures suitable for multicomponent systems. In most present-day simulation methods a distillation column is assumed to be composed of theoretical stages, or plates. However, in the case of a multicomponent mixture a theoretical plate does not exist. An alternative kinetic method of simulation is presented in this work. According to this method, a system of mass-transfer differential equations is solved numerically. Mass-transfer coefficients are estimated using experimental results and empirical equations. The developed method allows calculation of the steady state of a distillation column, as well as any non-steady state when initial conditions are given. The results for steady states are compared with those obtained via the Thiele-Geddes theoretical-stage technique, and the necessity of the kinetic method is demonstrated. Examples of column startup and periodic distillation simulations are shown as well.
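The kinetic approach replaces the theoretical-plate balance with mass-transfer rate equations integrated along the column. A much-simplified binary sketch at total reflux follows; the relative volatility and rate constant are assumed illustrative values, not the paper's hydrogen-isotope data:

```python
def equilibrium_y(x, alpha):
    """Vapor composition in equilibrium with liquid composition x for a
    binary mixture with constant relative volatility alpha."""
    return alpha * x / (1.0 + (alpha - 1.0) * x)

def packed_column_profile(y_bottom, alpha, k, dz, n_steps):
    """Rate-based (kinetic) column model at total reflux: instead of
    stepping off theoretical plates, integrate dy/dz = k*(y_eq - y) up
    the packing, using the total-reflux operating line y = x."""
    y = y_bottom
    profile = [y]
    for _ in range(n_steps):
        y += dz * k * (equilibrium_y(y, alpha) - y)
        profile.append(y)
    return profile

# Light-isotope mole fraction enriching up the column (alpha assumed).
prof = packed_column_profile(y_bottom=0.1, alpha=1.5, k=2.0,
                             dz=0.05, n_steps=200)
```

Because the driving force (y_eq − y) is positive for alpha > 1, the light component enriches monotonically with height and approaches purity asymptotically, with no artificial plate boundaries anywhere in the model.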

  5. Simulations of 6-DOF Motion with a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)

    2003-01-01

    Coupled 6-DOF/CFD trajectory predictions using an automated Cartesian method are demonstrated by simulating a GBU-32/JDAM store separating from an F-18C aircraft. Numerical simulations are performed at two Mach numbers near the sonic speed, and compared with flight-test telemetry and photographic-derived data. Simulation results obtained with a sequential-static series of flow solutions are contrasted with results using a time-dependent flow solver. Both numerical methods show good agreement with the flight-test data through the first half of the simulations. The sequential-static and time-dependent methods diverge over the last half of the trajectory prediction, after the store produces peak angular rates. A cost comparison for the Cartesian method is included, in terms of absolute cost and relative to computing uncoupled 6-DOF trajectories. A detailed description of the 6-DOF method, as well as a verification of its accuracy, is provided in an appendix.
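The 6-DOF side of such a coupled simulation integrates Newton's equations for translation and Euler's rigid-body equations for rotation, using the aerodynamic force and moment returned by the flow solver at each step. A minimal explicit-Euler sketch, with inertia, mass, and applied moment chosen purely for illustration:

```python
import numpy as np

def sixdof_step(state, force, moment, mass, I_body, dt):
    """One explicit-Euler step of the rigid-body 6-DOF equations:
    translation in the inertial frame, rotation via Euler's equations
    in body axes: I*dw/dt = M - w x (I*w). In a coupled store-separation
    simulation, force and moment would come from the CFD solution."""
    x, v, w = state                      # position, velocity, body rates
    a = force / mass
    w_dot = np.linalg.solve(I_body, moment - np.cross(w, I_body @ w))
    return (x + dt * v, v + dt * a, w + dt * w_dot)

# Free fall with a constant pitching moment on a slender store-like body
# (all numbers assumed): mass 300 kg, principal inertias in kg*m^2.
I = np.diag([1.0, 50.0, 50.0])
g = np.array([0.0, 0.0, -9.81])
state = (np.zeros(3), np.zeros(3), np.zeros(3))
for _ in range(100):
    state = sixdof_step(state, force=300.0 * g,
                        moment=np.array([0.0, 5.0, 0.0]),
                        mass=300.0, I_body=I, dt=0.01)
```

After one simulated second the body has picked up 9.81 m/s of sink rate and a pitch rate of M_y/I_yy × t = 0.1 rad/s; the gyroscopic coupling term is what a sequential-static approach can mis-predict once the angular rates peak.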

  6. Advanced manned space flight simulation and training: An investigation of simulation host computer system concepts

    NASA Technical Reports Server (NTRS)

    Montag, Bruce C.; Bishop, Alfred M.; Redfield, Joe B.

    1989-01-01

    The findings of a preliminary investigation by Southwest Research Institute (SwRI) in simulation host computer concepts is presented. It is designed to aid NASA in evaluating simulation technologies for use in spaceflight training. The focus of the investigation is on the next generation of space simulation systems that will be utilized in training personnel for Space Station Freedom operations. SwRI concludes that NASA should pursue a distributed simulation host computer system architecture for the Space Station Training Facility (SSTF) rather than a centralized mainframe based arrangement. A distributed system offers many advantages and is seen by SwRI as the only architecture that will allow NASA to achieve established functional goals and operational objectives over the life of the Space Station Freedom program. Several distributed, parallel computing systems are available today that offer real-time capabilities for time critical, man-in-the-loop simulation. These systems are flexible in terms of connectivity and configurability, and are easily scaled to meet increasing demands for more computing power.

  7. Comparative Assessment of Advanced Gas Hydrate Production Methods

    SciTech Connect

    M. D. White; B. P. McGrail; S. K. Wurstner

    2009-06-30

    Displacing natural gas and petroleum with carbon dioxide is a proven technology for producing conventional geologic hydrocarbon reservoirs, and producing additional yields from abandoned or partially produced petroleum reservoirs. Extending this concept to natural gas hydrate production offers the potential to enhance gas hydrate recovery with concomitant permanent geologic sequestration. Numerical simulation was used to assess a suite of carbon dioxide injection techniques for producing gas hydrates from a variety of geologic deposit types. Secondary hydrate formation was found to inhibit contact of the injected CO2 regardless of injectate phase state, thus diminishing the exchange rate due to pore clogging and hydrate-zone bypass of the injected fluids. Additional work is needed to develop methods of artificially introducing high-permeability pathways in gas hydrate zones if injection of CO2 in gas, liquid, or micro-emulsion form is to be more effective in enhancing gas hydrate production rates.

  8. Modelling and Simulation as a Recognizing Method in Education

    ERIC Educational Resources Information Center

    Stoffa, Veronika

    2004-01-01

    Computer animation-simulation models of complex processes and events, used as a method of instruction, can be an effective didactic device. Gaining deeper knowledge about the objects modelled helps in planning simulation experiments oriented toward the processes and events under study. Animation experiments realized on multimedia computers can aid easier…

  9. The role of numerical simulation for the development of an advanced HIFU system

    NASA Astrophysics Data System (ADS)

    Okita, Kohei; Narumi, Ryuta; Azuma, Takashi; Takagi, Shu; Matumoto, Yoichiro

    2014-10-01

    High-intensity focused ultrasound (HIFU) has been used clinically and is under clinical trials to treat various diseases. An advanced HIFU system employs ultrasound techniques for guidance during HIFU treatment, in place of the magnetic resonance imaging used in current HIFU systems. HIFU beam imaging, for monitoring the HIFU beam, and localized motion imaging, for validating the treatment of tissue, are briefly introduced as real-time ultrasound monitoring techniques. Numerical simulations have a great impact on the development of real-time ultrasound monitoring, as well as on the improvement of the safety and efficacy of treatment, in advanced HIFU systems. A HIFU simulator was developed to reproduce ultrasound propagation through the body, taking into account the elasticity of tissue, and was validated by comparison with in vitro experiments in which ultrasound emitted from a phased-array transducer propagates through an acrylic plate acting as a bone phantom. As a result, the defocus and distortion of the ultrasound propagating through the acrylic plate in the simulation quantitatively agree with the experimental results. The HIFU simulator therefore accurately reproduces ultrasound propagation through a medium whose shape and physical properties are well known. In addition, it was experimentally confirmed that simulation-assisted focus control of the phased-array transducer enables efficient assignment of the focus to the target. Simulation-assisted focus control can thus contribute to transducer design and treatment planning.

  10. The Osseus platform: a prototype for advanced web-based distributed simulation

    NASA Astrophysics Data System (ADS)

    Franceschini, Derrick; Riecken, Mark

    2016-05-01

    Recent technological advances in web-based distributed computing and database technology have made possible a deeper and more transparent integration of some modeling and simulation applications. Despite these advances towards true integration of capabilities, disparate systems, architectures, and protocols will remain in the inventory for some time to come. These disparities present interoperability challenges for distributed modeling and simulation whether the application is training, experimentation, or analysis. Traditional approaches call for building gateways to bridge between disparate protocols and retaining interoperability specialists. Challenges in reconciling data models also persist. These challenges and their traditional mitigation approaches directly contribute to higher costs, schedule delays, and frustration for the end users. Osseus is a prototype software platform originally funded as a research project by the Defense Modeling & Simulation Coordination Office (DMSCO) to examine interoperability alternatives using modern, web-based technology and taking inspiration from the commercial sector. Osseus provides tools and services for nonexpert users to connect simulations, targeting the time and skillset needed to successfully connect disparate systems. The Osseus platform presents a web services interface to allow simulation applications to exchange data using modern techniques efficiently over Local or Wide Area Networks. Further, it provides Service Oriented Architecture capabilities such that finer granularity components such as individual models can contribute to simulation with minimal effort.

  11. Underwater Photosynthesis of Submerged Plants – Recent Advances and Methods

    PubMed Central

    Pedersen, Ole; Colmer, Timothy D.; Sand-Jensen, Kaj

    2013-01-01

    We describe the general background and recent advances in research on underwater photosynthesis of leaf segments, whole communities, and plant-dominated aquatic ecosystems, and present contemporary methods tailor-made to quantify photosynthesis and carbon fixation under water. The majority of studies of aquatic photosynthesis have been carried out with detached leaves or thalli, and this selectiveness influences the perception of the regulation of aquatic photosynthesis. We thus recommend assessing the influence of inorganic carbon and temperature on natural aquatic communities of variable density, in addition to studying detached leaves, in scenarios of rising CO2 and temperature. Moreover, a growing number of researchers are interested in the tolerance of terrestrial plants during flooding, as torrential rains sometimes result in overland floods that inundate terrestrial plants. We propose studies to elucidate the importance of leaf acclimation of terrestrial plants in facilitating gas exchange and light utilization under water, as these acclimations influence underwater photosynthesis as well as internal aeration of plant tissues during submergence. PMID:23734154

  12. Quantifying hydrate solidification front advancing using method of characteristics

    NASA Astrophysics Data System (ADS)

    You, Kehua; DiCarlo, David; Flemings, Peter B.

    2015-10-01

    We develop a one-dimensional analytical solution based on the method of characteristics to explore hydrate formation from gas injection into brine-saturated sediments within the hydrate stability zone. Our solution includes fully coupled multiphase and multicomponent flow and the associated advective transport in a homogeneous system. Our solution shows that hydrate saturation is controlled by the initial thermodynamic state of the system and changed by the gas fractional flow. Hydrate saturation in gas-rich systems can be estimated by 1 - c_l0/c_le when Darcy flow dominates, where c_l0 is the initial mass fraction of salt in brine, and c_le is the mass fraction of salt in brine at three-phase (gas, liquid, and hydrate) equilibrium. Hydrate saturation is constant, gas saturation and gas flux decrease, and liquid saturation and liquid flux increase with the distance from the gas inlet to the hydrate solidification front. The total gas and liquid flux is constant from the gas inlet to the hydrate solidification front and decreases abruptly at the hydrate solidification front due to gas inclusion into the hydrate phase. The advancing velocity of the hydrate solidification front decreases with hydrate saturation at a fixed gas inflow rate. This analytical solution illuminates how hydrate is formed by gas injection (methane, CO2, ethane, propane) at both the laboratory and field scales.
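    The closed-form estimate quoted above is simple enough to sketch directly. The following minimal Python function (the function name and the numerical example are illustrative assumptions, not taken from the paper) evaluates hydrate saturation from the two salt mass fractions:

```python
def hydrate_saturation(c_l0, c_le):
    """Estimate hydrate saturation in a gas-rich, Darcy-flow-dominated system.

    c_l0 : initial mass fraction of salt in brine
    c_le : mass fraction of salt in brine at three-phase equilibrium
    Returns S_h = 1 - c_l0/c_le, the relation quoted in the abstract.
    """
    if not 0.0 < c_l0 <= c_le:
        raise ValueError("expect 0 < c_l0 <= c_le (salt concentrates in the "
                         "brine as water is consumed by hydrate formation)")
    return 1.0 - c_l0 / c_le

# Illustrative numbers: ~3.5 wt% brine concentrated to 7 wt% at equilibrium
s_h = hydrate_saturation(c_l0=0.035, c_le=0.07)   # -> 0.5
```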

  13. Regenerative medicine: advances in new methods and technologies.

    PubMed

    Park, Dong-Hyuk; Eve, David J

    2009-11-01

    The articles published in the journal Cell Transplantation - The Regenerative Medicine Journal over the last two years reveal the recent and future cutting-edge research in the fields of regenerative and transplantation medicine. A total of 437 articles were published from 2007 to 2008, a 17% increase over the 373 articles in 2006-2007. Neuroscience was still the most common section in both the number of articles and the percentage of all manuscripts published. The increasing interest and rapid advances in bioengineering technology are highlighted by tissue engineering and bioartificial organs again being ranked second. For a similar reason, the methods and new technologies section increased significantly compared with the previous period. Articles focusing on the transplantation of stem cell lineages encompassed almost 20% of all articles published. By contrast, the non-stem-cell transplantation group, made up primarily of islet cells, followed by biomaterials, fetal neural tissue, and other materials, comprised less than 15%. Transplantation of cells pre-treated with medicine or gene transfection, to prolong graft survival or promote differentiation into the needed phenotype, was prevalent in the transplantation articles regardless of the kind of cells used. Meanwhile, the majority of non-transplantation-based articles were related to new devices for various purposes, characterization of unknown cells, medicines, cell preparation and/or optimization for transplantation (e.g., isolation and culture), and disease pathology.

  14. A method for simulating a flux-locked DC SQUID

    NASA Technical Reports Server (NTRS)

    Gutt, G. M.; Kasdin, N. J.; Condron, M. R., II; Muhlfelder, B.; Lockhart, J. M.; Cromar, M. W.

    1993-01-01

    The authors describe a computationally efficient and accurate method for simulating a dc SQUID's V-Phi (voltage-flux) and I-V characteristics which has proven valuable in evaluating and improving various SQUID readout methods. The simulation of the SQUID is based on fitting of previously acquired data from either a real or a modeled device using the Fourier transform of the V-Phi curve. This method does not predict SQUID behavior, but rather is a way of replicating a known behavior efficiently with portability into various simulation programs such as SPICE. The authors discuss the methods used to simulate the SQUID and the flux-locking control electronics, and present specific examples of this approach. Results include an estimate of the slew rate and linearity of a simple flux-locked loop using a characterized dc SQUID.
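    The fitting-and-replay scheme described above can be sketched in NumPy: sample a measured V-Phi curve over one flux quantum, keep a few Fourier coefficients, and evaluate the truncated series wherever a simulation needs it. The function name and the cosine test curve are illustrative assumptions, not the authors' code:

```python
import numpy as np

def fit_v_phi(v_samples, n_harmonics=5):
    """Fit a periodic V-Phi curve, sampled uniformly over one flux quantum
    (phi in units of Phi_0), with a truncated Fourier series.  Returns a
    callable v_model(phi) that replays the fitted characteristic."""
    n = len(v_samples)
    c = np.fft.rfft(v_samples) / n          # complex Fourier coefficients
    c = c[:n_harmonics + 1]
    k = np.arange(1, len(c))

    def v_model(phi):
        phi = np.atleast_1d(np.asarray(phi, dtype=float))
        # c_0 + 2*Re(sum_k c_k exp(2*pi*i*k*phi)) reconstructs the real series
        terms = c[1:, None] * np.exp(2j * np.pi * np.outer(k, phi))
        return c[0].real + 2.0 * terms.real.sum(axis=0)

    return v_model

# "Measured" characteristic: V(phi) = 10 + 4*cos(2*pi*phi) (microvolts)
phi_s = np.linspace(0.0, 1.0, 64, endpoint=False)
model = fit_v_phi(10.0 + 4.0 * np.cos(2.0 * np.pi * phi_s))
```

    Because the model is just a short list of coefficients, it ports easily into circuit-level simulators, which is the portability property the abstract emphasizes.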

  15. Deployment Simulation Methods for Ultra-Lightweight Inflatable Structures

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Johnson, Arthur R.

    2003-01-01

    Two dynamic inflation simulation methods are employed for modeling the deployment of folded thin-membrane tubes. The simulations are necessary because ground tests include gravity effects and may poorly represent deployment in space. The two simulation methods are referred to as the Control Volume (CV) method and the Arbitrary Lagrangian Eulerian (ALE) method. They are available in the LS-DYNA nonlinear dynamic finite element code. Both methods are suitable for modeling the interactions between the inflation gas and the thin-membrane tube structures. The CV method considers only the pressure induced by the inflation gas in the simulation, while the ALE method models the actual flow of the inflation gas. Thus, the transient fluid properties at any location within the tube can be predicted by the ALE method. Deployment simulations of three packaged tube models, namely coiled, Z-folded, and telescopically folded configurations, are performed. Results predicted by both methods for the telescopically folded configuration are correlated, and computational efficiency issues are discussed.

  16. Advanced diagnostic methods in oral and maxillofacial pathology. Part II: immunohistochemical and immunofluorescent methods.

    PubMed

    Jordan, Richard C K; Daniels, Troy E; Greenspan, John S; Regezi, Joseph A

    2002-01-01

    The practice of pathology is currently undergoing significant change, in large part due to advances in the analysis of DNA, RNA, and proteins in tissues. These advances have permitted improved biologic insights into many developmental, inflammatory, metabolic, infectious, and neoplastic diseases. Moreover, molecular analysis has also led to improvements in the accuracy of disease diagnosis and classification. It is likely that, in the future, these methods will increasingly enter into the day-to-day diagnosis and management of patients. The pathologist will continue to play a fundamental role in diagnosis and will likely be in a pivotal position to guide the implementation and interpretation of these tests as they move from the research laboratory into diagnostic pathology. The purpose of this 2-part series is to provide an overview of the principles and applications of current molecular biologic and immunologic tests. In Part I, the biologic fundamentals of DNA, RNA, and proteins and methods that are currently available or likely to become available to the pathologist in the next several years for their isolation and analysis in tissue biopsies were discussed. In Part II, advances in immunohistochemistry and immunofluorescence methods and their application to modern diagnostic pathology are reviewed. PMID:11805778

  17. FY05-FY06 Advanced Simulation and Computing Implementation Plan, Volume 2

    SciTech Connect

    Baron, A L

    2004-07-19

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program will require the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapon design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile life extension programs and the resolution of significant finding investigations (SFIs). This requires a balanced system of technical staff, hardware, simulation software, and computer science solutions.

  18. In-silico simulations of advanced drug delivery systems: what will the future offer?

    PubMed

    Siepmann, Juergen

    2013-09-15

    This commentary enlarges on some of the topics addressed in the Position Paper "Towards more effective advanced drug delivery systems" by Crommelin and Florence (2013). Inter alia, the role of mathematical modeling and computer-assisted device design is briefly addressed in the Position Paper. This emerging and particularly promising field is considered in more depth in this commentary. In fact, in-silico simulations have become of fundamental importance in numerous scientific and related domains, allowing for a better understanding of various phenomena and for facilitated device design. Novel prototypes of space shuttles, nuclear power plants, and automobiles are just a few examples. In-silico simulations are nowadays also well established in the field of pharmacokinetics/pharmacodynamics (PK/PD) and have become an integral part of the discovery and development process of novel drug products. Since Takeru Higuchi published his seminal equation in 1961, the use of mathematical models for the analysis and optimization of drug delivery systems in vitro has also become more and more popular. However, applying in-silico simulations for facilitated optimization of advanced drug delivery systems is not yet common practice. One of the reasons is the gap between in vitro and in vivo (PK/PD) simulations. In the future it can be expected that this gap will be closed and that computer-assisted device design will play a central role in research on, and development of, advanced drug delivery systems.
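    For readers unfamiliar with the 1961 result mentioned above, the classical Higuchi expression for drug release from a planar matrix can be sketched in a few lines of Python (variable names and the numbers are illustrative; the formula is the standard one for initial loading above solubility):

```python
import math

def higuchi_release(t, D, c0, cs):
    """Cumulative drug released per unit area at time t (Higuchi, 1961) for a
    planar matrix with initial loading c0 exceeding drug solubility cs in the
    matrix, diffusion coefficient D:

        Q(t) = sqrt(D * (2*c0 - cs) * cs * t)
    """
    if c0 < cs:
        raise ValueError("Higuchi model assumes c0 >= cs")
    return math.sqrt(D * (2.0 * c0 - cs) * cs * t)

# Release scales with sqrt(t): doubling the time multiplies Q by sqrt(2)
q1 = higuchi_release(1.0, D=1e-6, c0=100.0, cs=1.0)
q2 = higuchi_release(2.0, D=1e-6, c0=100.0, cs=1.0)
```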

  19. Sensitivity analysis of infectious disease models: methods, advances and their application.

    PubMed

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V

    2013-09-01

    Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods (scatter plots, the Morris and Sobol' methods, Latin hypercube sampling-partial rank correlation coefficient, and the sensitivity heat map method) and detail their relative merits and pitfalls when applied to a microparasite (cholera) and a macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period; it is especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
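    As an illustration of one of the surveyed techniques (Latin hypercube sampling-partial rank correlation coefficient), a minimal PRCC implementation in NumPy might look like the following. This is a generic sketch with an invented toy model, not the authors' code:

```python
import numpy as np

def _ranks(x):
    """Rank-transform each column (ties not handled; fine for continuous
    Latin hypercube samples)."""
    return np.argsort(np.argsort(x, axis=0), axis=0).astype(float)

def prcc(params, output):
    """Partial rank correlation coefficient of each parameter with the model
    output.  params: (n_samples, n_params), output: (n_samples,).
    For each parameter, the ranks of that parameter and of the output are
    both regressed on the ranks of the remaining parameters, and the
    residuals are correlated."""
    R = _ranks(params)
    y = _ranks(output[:, None])[:, 0]
    n, k = R.shape
    out = np.empty(k)
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(R, j, axis=1)])
        res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
        res_y = y - others @ np.linalg.lstsq(others, y, rcond=None)[0]
        out[j] = np.corrcoef(res_x, res_y)[0, 1]
    return out

# Toy "model" output: strongly increasing in p0, independent of p1
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))
y = 10.0 * X[:, 0] + 0.5 * rng.normal(size=200)
rho = prcc(X, y)          # rho[0] near +1, |rho[1]| small
```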

  20. Development and Applications of Advanced Electronic Structure Methods

    NASA Astrophysics Data System (ADS)

    Bell, Franziska

    This dissertation contributes to three different areas in electronic structure theory. The first part of this thesis advances the fundamentals of orbital active spaces. Orbital active spaces are not only essential in multi-reference approaches, but have also become of interest in single-reference methods as they allow otherwise intractably large systems to be studied. However, despite their great importance, the optimal choice and, more importantly, their physical significance are still not fully understood. In order to address this problem, we studied the higher-order singular value decomposition (HOSVD) in the context of electronic structure methods. We were able to gain a physical understanding of the resulting orbitals and proved a connection to unrelaxed natural orbitals in the case of Moller-Plesset perturbation theory to second order (MP2). In the quest to find the optimal choice of the active space, we proposed a HOSVD for energy-weighted integrals, which yielded the fastest convergence in MP2 correlation energy for small- to medium-sized active spaces to date, and is also potentially transferable to coupled-cluster theory. In the second part, we studied monomeric and dimeric glycerol radical cations and their photo-induced dissociation in collaboration with Prof. Leone and his group. Understanding the mechanistic details involved in these processes are essential for further studies on the combustion of glycerol and carbohydrates. To our surprise, we found that in most cases, the experimentally observed appearance energies arise from the separation of product fragments from one another rather than rearrangement to products. The final chapters of this work focus on the development, assessment, and application of the spin-flip method, which is a single-reference approach, but capable of describing multi-reference problems. Systems exhibiting multi-reference character, which arises from the (near-) degeneracy of orbital energies, are amongst the most
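    A plain NumPy sketch of the HOSVD discussed above, i.e. an SVD of each mode unfolding of a 3-way tensor plus the core tensor, is shown below. This illustrates the standard decomposition only, not the energy-weighted variant proposed in the dissertation:

```python
import numpy as np

def hosvd(T):
    """Higher-order SVD of a 3-way tensor T: returns factor matrices
    (U1, U2, U3) from the SVD of each mode unfolding, and the core tensor S
    such that T = S x1 U1 x2 U2 x3 U3."""
    U = []
    for mode in range(3):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U.append(np.linalg.svd(unfolding, full_matrices=False)[0])
    # Core: project T onto the orthonormal factor bases
    S = np.einsum('abc,ai,bj,ck->ijk', T, U[0], U[1], U[2])
    return U, S

rng = np.random.default_rng(1)
T = rng.normal(size=(4, 5, 6))
U, S = hosvd(T)
# With the full core, reconstruction is exact up to round-off
T_rec = np.einsum('ijk,ai,bj,ck->abc', S, U[0], U[1], U[2])
```

    Truncating the columns of each factor matrix (and the corresponding core slices) gives the low-rank active-space-style compression alluded to in the text.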

  1. Recent advances in large-eddy simulation of spray and coal combustion

    NASA Astrophysics Data System (ADS)

    Zhou, L. X.

    2013-07-01

    Large-eddy simulation (LES) is developing rapidly and is recognized as a possible second generation of CFD methods for engineering use. Spray and coal combustion is widely used in power, transportation, chemical and metallurgical, iron- and steel-making, and aeronautical and astronautical engineering, hence LES of spray and coal two-phase combustion is particularly important for engineering applications. LES of two-phase combustion is attracting more and more attention, since it can give detailed instantaneous flow and flame structures and more exact statistical results than those given by Reynolds-averaged (RANS) modeling. One of the key problems in LES is developing sub-grid scale (SGS) models, including SGS stress models and combustion models, and different investigators have proposed or adopted various SGS models. In this paper the present author attempts to review the advances in studies on LES of spray and coal combustion, including the studies done by the present author and his colleagues. The SGS models adopted by different investigators are described, some of their main results are summarized, and finally some research needs are discussed.
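    As a concrete example of an SGS stress closure of the kind reviewed here, the classical Smagorinsky eddy-viscosity model can be sketched on a 2-D grid as follows. This is a generic illustration of the closure, not a model from this paper, and the pure-shear test flow is invented:

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, dy, cs=0.17):
    """Smagorinsky subgrid-scale eddy viscosity on a 2-D grid:
    nu_t = (Cs*Delta)^2 * |S|, with |S| = sqrt(2 S_ij S_ij) from the
    resolved strain-rate tensor.  Arrays are indexed [i, j] = [x, y]."""
    dudx = np.gradient(u, dx, axis=0)
    dudy = np.gradient(u, dy, axis=1)
    dvdx = np.gradient(v, dx, axis=0)
    dvdy = np.gradient(v, dy, axis=1)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    delta = np.sqrt(dx * dy)          # filter width taken as the grid scale
    return (cs * delta)**2 * s_mag

# Pure shear u = gamma*y, v = 0: |S| = gamma everywhere
gamma, dx, dy = 2.0, 0.1, 0.1
X, Y = np.meshgrid(np.arange(8) * dx, np.arange(8) * dy, indexing='ij')
nu_t = smagorinsky_nu_t(gamma * Y, np.zeros_like(Y), dx, dy)
```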

  2. Advancing in-situ modeling of ICMEs: Insights from remote observations and simulations

    NASA Astrophysics Data System (ADS)

    Jensen, E. A.; Mulligan, T.; Reinard, A. A.; Lynch, B. J.

    2011-12-01

    One of the underlying problems in the investigation of CME genesis and evolution is relating remote-sensing observations of coronal mass ejections (CMEs) to in-situ observations of interplanetary CMEs (ICMEs). Typically, the global structure of a CME projected onto the plane of the sky is obtained through remote sensing, while local, yet highly quantitative, measurements of an ICME are made in situ along a spacecraft trajectory. Modeling the structure of these observations at the Sun and in situ has begun to bridge the gap between these vastly different types of observations, yet there is still a long way to go. Remote-sensing observations and MHD simulations indicate that we need to understand ICMEs in their entirety, including their various internal substructures, in order to make comparisons between line-of-sight and in-situ observations. This requires advancing ICME modeling beyond the flux rope boundaries. We have addressed this difficulty by developing a Delaunay triangulation method to combine multispacecraft in-situ observations and infer a more global structure of ICMEs in the plane of the spacecraft observations. We present a description of these techniques and a comparison with data.
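    The triangulation-based combination of multispacecraft data can be illustrated with SciPy, whose LinearNDInterpolator builds a Delaunay triangulation internally and interpolates linearly within each triangle. The spacecraft positions and field magnitudes below are invented for illustration, and this is only a sketch of the general idea, not the authors' method:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Hypothetical in-situ magnetic-field magnitudes (nT) at four spacecraft
# positions projected onto a common plane (coordinates in AU).  The
# interpolator triangulates the points and gives a crude planar map of
# the ICME structure between the spacecraft.
positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b_mag = np.array([12.0, 8.0, 10.0, 6.0])

interp = LinearNDInterpolator(positions, b_mag)
center = interp(0.5, 0.5)     # field magnitude inferred between spacecraft
```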

  3. Numerical simulation of thermal discharge based on FVM method

    NASA Astrophysics Data System (ADS)

    Yu, Yunli; Wang, Deguan; Wang, Zhigang; Lai, Xijun

    2006-01-01

    A two-dimensional numerical model is proposed to simulate the thermal discharge from a power plant in Jiangsu Province. The equations in the model consist of two-dimensional non-steady shallow water equations and thermal waste transport equations. Finite volume method (FVM) is used to discretize the shallow water equations, and flux difference splitting (FDS) scheme is applied. The calculated area with the same temperature increment shows the effect of thermal discharge on sea water. A comparison between simulated results and the experimental data shows good agreement. It indicates that this method can give high precision in the heat transfer simulation in coastal areas.
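    The finite-volume idea used above can be illustrated with a minimal 1-D first-order upwind scheme for scalar advection; this is a generic sketch, far simpler than the paper's 2-D shallow-water solver with the FDS scheme, and the function name is an invented one:

```python
import numpy as np

def fvm_advect(T, u, dx, dt, n_steps):
    """First-order upwind finite-volume update for 1-D advection of a scalar
    (e.g. excess temperature) with constant velocity u > 0 and periodic
    boundaries: the flux difference across each cell's faces updates the
    cell average, so the total is conserved."""
    c = u * dt / dx
    assert 0.0 < c <= 1.0, "CFL condition violated"
    T = T.astype(float).copy()
    for _ in range(n_steps):
        flux = u * T                       # upwind face flux (u > 0)
        # np.roll(flux, 1) is the flux entering each cell from its left face
        T -= (dt / dx) * (flux - np.roll(flux, 1))
    return T

tracer = np.zeros(10)
tracer[2] = 1.0
# With Courant number u*dt/dx = 1 the scheme shifts the field exactly
advected = fvm_advect(tracer, u=1.0, dx=1.0, dt=1.0, n_steps=3)
```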

  4. Advanced simulation technology for etching process design for CMOS device applications

    NASA Astrophysics Data System (ADS)

    Kuboi, Nobuyuki; Fukasawa, Masanaga; Tatsumi, Tetsuya

    2016-07-01

    Plasma etching is a critical process for the realization of high performance in the next generation of CMOS devices. To predict and control fluctuations in the etching properties accurately during mass production, it is essential that etching process simulation technology considers fluctuations in the plasma chamber wall conditions, the effects of by-products on the critical dimensions, the Si recess dependence on the wafer open area ratio and local pattern structure, and the time-dependent plasma-induced damage distribution associated with the three-dimensional feature scale profile at the 100 nm level. This consideration can overcome the issues with conventional simulations performed under the assumed ideal conditions, which are not accurate enough for practical process design. In this article, these advanced process simulation technologies are reviewed, and, from the results of suitable process simulations, a new etching system that automatically controls the etching properties is proposed to enable stable CMOS device fabrication with high yields.

  5. Advances in POST2 End-to-End Descent and Landing Simulation for the ALHAT Project

    NASA Technical Reports Server (NTRS)

    Davis, Jody L.; Striepe, Scott A.; Maddock, Robert W.; Hines, Glenn D.; Paschall, Stephen, II; Cohanim, Babak E.; Fill, Thomas; Johnson, Michael C.; Bishop, Robert H.; DeMars, Kyle J.; Sostaric, Ronald R.; Johnson, Andrew E.

    2008-01-01

    Program to Optimize Simulated Trajectories II (POST2) is used as a basis for an end-to-end descent and landing trajectory simulation that is essential in determining design and integration capability and system performance of the lunar descent and landing system and environment models for the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project. The POST2 simulation provides a six degree-of-freedom capability necessary to test, design and operate a descent and landing system for successful lunar landing. This paper presents advances in the development and model-implementation of the POST2 simulation, as well as preliminary system performance analysis, used for the testing and evaluation of ALHAT project system models.

  6. Integrating advanced materials simulation techniques into an automated data analysis workflow at the Spallation Neutron Source

    SciTech Connect

    Borreguero Calvo, Jose M; Campbell, Stuart I; Delaire, Olivier A; Doucet, Mathieu; Goswami, Monojoy; Hagen, Mark E; Lynch, Vickie E; Proffen, Thomas E; Ren, Shelly; Savici, Andrei T; Sumpter, Bobby G

    2014-01-01

    This presentation will review developments in the integration of advanced modeling and simulation techniques into the analysis step for experimental data obtained at the Spallation Neutron Source. A workflow framework for the purpose of refining molecular mechanics force-fields against quasi-elastic neutron scattering data is presented. The workflow combines software components to submit model simulations to remote high-performance computers, a message broker interface for communications between the optimizer engine and the simulation production step, and tools to convolve the simulated data with the experimental resolution. A test application shows the correction to a popular fixed-charge water model needed to account for polarization effects due to the presence of solvated ions. Future enhancements to the refinement workflow are discussed. This work is funded through the DOE Center for Accelerating Materials Modeling.
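    One concrete step in such a workflow, convolving a simulated spectrum with the instrument resolution before comparing with measurement, can be sketched as follows. This is a generic Gaussian-resolution example with invented numbers, not the actual workflow code:

```python
import numpy as np

def convolve_with_resolution(energy, s_qe, fwhm):
    """Convolve a simulated quasi-elastic spectrum (a single constant-Q cut
    on a uniform energy grid) with a Gaussian instrument resolution of the
    given FWHM.  The kernel is normalised so total intensity is conserved."""
    de = energy[1] - energy[0]
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    m = int(np.ceil(4.0 * sigma / de))
    kernel = np.exp(-0.5 * (de * np.arange(-m, m + 1) / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(s_qe, kernel, mode='same')

# Elastic delta line broadened by a 0.1 (energy-unit) resolution function
energy = np.linspace(-1.0, 1.0, 201)
spectrum = np.zeros(201)
spectrum[100] = 1.0
broadened = convolve_with_resolution(energy, spectrum, fwhm=0.1)
```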

  7. Evaluation of Advanced Stirling Convertor Net Heat Input Correlation Methods Using a Thermal Standard

    NASA Technical Reports Server (NTRS)

    Briggs, Maxwell H.; Schifer, Nicholas A.

    2012-01-01

    The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot end and cold end temperatures, and specified electrical power output for a given net heat input. In an effort to improve net heat input predictions, numerous tasks have been performed which provided a more accurate value for net heat input into the ASCs, including testing validation hardware, known as the Thermal Standard, to provide a direct comparison to numerical and empirical models used to predict convertor net heat input. This validation hardware provided a comparison for scrutinizing and improving empirical correlations and numerical models of ASC-E2 net heat input. This hardware simulated the characteristics of an ASC-E2 convertor in both an operating and non-operating mode. This paper describes the Thermal Standard testing and the conclusions of the validation effort applied to the empirical correlation methods used by the Radioisotope Power System (RPS) team at NASA Glenn.

  8. Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5

    SciTech Connect

    McCoy, Michel; Archer, Bill; Matzen, M. Keith

    2014-09-16

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counter-nuclear terrorism.

  9. Advanced Methods for Aircraft Engine Thrust and Noise Benefits: Nozzle-Inlet Flow Analysis

    NASA Technical Reports Server (NTRS)

    Morgan, Morris H., III; Gilinsky, Mikhail M.

    2004-01-01

    In the first stage of this project (2000-01), we continued to develop the existing joint research between the Fluid Mechanics and Acoustics Laboratory (FM&AL) at Hampton University (HU) and the Jet Noise Team (JNT) at the NASA Langley Research Center (NASA LaRC). In the second stage (2001-03), the FM&AL team concentrated its efforts on solving problems of interest to the Glenn Research Center (NASA GRC), especially in the field of propulsion system enhancement. The NASA GRC R&D Directorate and LaRC Hyper-X Program specialists in hypersonic technology, jointly with the FM&AL staff, conducted research on a wide range of problems in the propulsion field as well as in experimental testing and theoretical and numerical simulation analyses for advanced aircraft and rocket engines. Last year the Hampton University School of Engineering & Technology was awarded a NASA grant for the creation of the Aeropropulsion Center, and the FM&AL is a key team in the project, responsible for research in Aeropropulsion and Acoustics (Pillar I). This work is supported by joint research between NASA GRC/FM&AL and the Institute of Mechanics at Moscow State University (IMMSU) in Russia under a CRDF grant. The main areas of current scientific interest of the FM&AL include an investigation of the proposed and patented advanced methods for aircraft engine thrust and noise benefits. This is the main subject of our other projects, of which one is presented here. 
Last year we concentrated our efforts on analyzing three main problems: (a) new effective methods of fuel injection into the flow stream in air-breathing engines; (b) a new re-circulation method for mixing, heat transfer, and combustion enhancement in propulsion systems and domestic industry applications; (c) convexity flow. The research is focused on a wide range of problems in the propulsion field as well as in experimental testing and theoretical and numerical simulation analyses for advanced aircraft and rocket engines (see, for

  10. Bioinformatics Methods and Tools to Advance Clinical Care

    PubMed Central

    Lecroq, T.

    2015-01-01

    Summary Objectives To summarize excellent current research in the field of Bioinformatics and Translational Informatics with application in the health domain and clinical care. Method We provide a synopsis of the articles selected for the IMIA Yearbook 2015, from which we attempt to derive a synthetic overview of current and future activities in the field. As last year, a first selection step was performed by querying MEDLINE with a list of MeSH descriptors completed by a list of terms adapted to the section. Each section editor separately evaluated the set of 1,594 articles, and the evaluation results were merged to retain 15 articles for peer review. Results The selection and evaluation process of this Yearbook’s section on Bioinformatics and Translational Informatics yielded four excellent articles regarding data management and genome medicine that are mainly tool-based papers. In the first article, the authors present PPISURV, a tool for uncovering the role of specific genes in cancer survival outcome. The second article describes the classifier PredictSNP, which combines six tools for predicting disease-related mutations. In the third article, by presenting a high-coverage map of the human proteome using high-resolution mass spectrometry, the authors highlight the need for using mass spectrometry to complement genome annotation. The fourth article is also related to patient survival and decision support. The authors present data mining methods applied to large-scale datasets of past transplants, with the objective of identifying chances of survival. Conclusions The current research activities still attest to the continuing convergence of Bioinformatics and Medical Informatics, with a focus this year on dedicated tools and methods to advance clinical care. 
Indeed, there is a need for powerful tools for managing and interpreting complex, large-scale genomic and biological datasets, but also a need for user-friendly tools developed for the clinicians in their

  11. A tool for simulating parallel branch-and-bound methods

    NASA Astrophysics Data System (ADS)

    Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail

    2016-01-01

    The Branch-and-Bound method is known as one of the most powerful but also most resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution. Therefore, the design and study of load balancing algorithms is a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, search tree sizes, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The resolution of the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.
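The core idea of the simulator — replacing actual subproblem solving with a stochastic branching process — can be illustrated with a minimal sketch. This is not the authors' tool; the branching probability, depth limit, and round-robin work assignment below are illustrative assumptions standing in for a real load-balancing policy:

```python
import random

def simulate_bnb(depth_limit=8, branch_prob=0.6, num_workers=4, seed=42):
    """Model a B&B search tree as a stochastic branching process: each
    node either branches into two children (with probability branch_prob)
    or is pruned. Work units are assigned round-robin among workers to
    mimic a simple static load-balancing policy."""
    rng = random.Random(seed)
    loads = [0] * num_workers
    frontier = [0]          # stack of node depths, starting from the root
    next_worker = 0
    total_nodes = 0
    while frontier:
        depth = frontier.pop()
        total_nodes += 1
        loads[next_worker] += 1                  # charge one unit of work
        next_worker = (next_worker + 1) % num_workers
        if depth < depth_limit and rng.random() < branch_prob:
            frontier.append(depth + 1)           # branch into two subproblems
            frontier.append(depth + 1)
    return total_nodes, loads
```

Because the branching process is random, repeated runs with different seeds yield search trees of very different shapes, which is exactly what makes static policies break down and dynamic load redistribution worth studying.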

  12. CAPE-OPEN Integration for Advanced Process Engineering Co-Simulation

    SciTech Connect

    Zitney, S.E.

    2006-11-01

    This paper highlights the use of the CAPE-OPEN (CO) standard interfaces in the Advanced Process Engineering Co-Simulator (APECS) developed at the National Energy Technology Laboratory (NETL). The APECS system uses the CO unit operation, thermodynamic, and reaction interfaces to provide its plug-and-play co-simulation capabilities, including the integration of process simulation with computational fluid dynamics (CFD) simulation. APECS also relies heavily on the use of a CO COM/CORBA bridge for running process/CFD co-simulations on multiple operating systems. For process optimization in the face of multiple and sometimes conflicting objectives, APECS offers stochastic modeling and multi-objective optimization capabilities developed to comply with the CO software standard. At NETL, system analysts are applying APECS to a wide variety of advanced power generation systems, ranging from small fuel cell systems to commercial-scale power plants, including the coal-fired, gasification-based FutureGen power and hydrogen production plant.

  13. Development and integration of the Army's Advanced Multispectral Simulation Test Acceptance Resource (AMSTAR) HWIL facilities

    NASA Astrophysics Data System (ADS)

    LeSueur, Kenneth G.; Lowry, William; Morris, Joe

    2006-05-01

    The Advanced Multispectral Simulation Test Acceptance Resource (AMSTAR) is a suite of state-of-the-art hardware-in-the-loop (HWIL) simulation/test capabilities designed to meet the life-cycle testing needs of multi-spectral systems. This paper presents the major AMSTAR facility design concepts and each of the Millimeter Wave (MMW), Infrared (IR), and Semi-Active Laser (SAL) in-band scene generation and projection system designs. The emergence of multispectral sensors in missile systems necessitates capabilities such as AMSTAR to simultaneously project MMW, IR, and SAL wave bands into a common sensor aperture.

  14. Development and integration of the Army's advanced multispectral simulation test acceptance resource (AMSTAR) HWIL facilities

    NASA Astrophysics Data System (ADS)

    LeSueur, Kenneth G.; Lowry, William; Morris, Joe

    2005-05-01

    The Advanced Multispectral Simulation Test Acceptance Resource (AMSTAR) is a suite of state-of-the-art Hardware-In-the-Loop (HWIL) simulation/test capabilities designed to meet the life-cycle testing needs of multi-spectral systems. This paper presents the major AMSTAR facility design concepts and each of the Millimeter Wave (MMW), Infrared (IR), and Semi-Active Laser (SAL) in-band scene generation and projection system designs. The emergence of multispectral sensors in missile systems necessitates capabilities such as AMSTAR to simultaneously project MMW, IR, and SAL wave bands into a common sensor aperture.

  15. Advanced Models and Algorithms for Self-Similar IP Network Traffic Simulation and Performance Analysis

    NASA Astrophysics Data System (ADS)

    Radev, Dimitar; Lokshina, Izabella

    2010-11-01

    The paper examines self-similar (or fractal) properties of real communication network traffic data over a wide range of time scales. These self-similar properties are very different from the properties of traditional models based on Poisson and Markov-modulated Poisson processes. Advanced fractal models of sequential generators and fixed-length sequence generators, and efficient algorithms that are used to simulate the self-similar behavior of IP network traffic data, are developed and applied. Numerical examples are provided, and simulation results are obtained and analyzed.
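One standard way to generate self-similar traffic — superposing ON/OFF sources whose sojourn times are heavy-tailed — can be sketched as follows. This is a generic illustration, not the authors' generators; the source count, Pareto shape parameter, and discrete-slot model are assumptions:

```python
import random

def pareto(rng, alpha=1.5, xm=1.0):
    """Heavy-tailed Pareto sample via inverse transform; alpha < 2 gives
    infinite variance, the ingredient that produces long-range dependence
    in the aggregate."""
    return xm / (rng.random() ** (1.0 / alpha))

def on_off_traffic(n_slots, n_sources=20, alpha=1.5, seed=1):
    """Superpose ON/OFF sources with Pareto-distributed ON and OFF
    periods. The aggregate per-slot rate process is asymptotically
    self-similar, unlike Poisson traffic."""
    rng = random.Random(seed)
    traffic = [0] * n_slots
    for _ in range(n_sources):
        t, on = 0, rng.random() < 0.5
        while t < n_slots:
            dur = max(1, int(pareto(rng, alpha)))
            if on:
                for s in range(t, min(t + dur, n_slots)):
                    traffic[s] += 1   # source emits one unit per slot while ON
            t += dur
            on = not on
    return traffic
```

Plotting the aggregate at several aggregation levels shows the burstiness persisting across time scales, which is the qualitative signature of self-similarity that Poisson models fail to reproduce.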

  16. Advanced Simulation and Computing FY07-08 Implementation Plan Volume 2

    SciTech Connect

    Kusnezov, D; Hale, A; McCoy, M; Hopson, J

    2006-06-22

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program will require the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. 
Moreover, ASC has restructured its business model from

  17. Advanced Simulation & Computing FY09-FY10 Implementation Plan Volume 2, Rev. 0

    SciTech Connect

    Meisner, R; Perry, J; McCoy, M; Hopson, J

    2008-04-30

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. 
Moreover, ASC has restructured its business model from one

  18. Advanced Simulation and Computing FY10-11 Implementation Plan Volume 2, Rev. 0

    SciTech Connect

    Carnes, B

    2009-06-08

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. 
Moreover, ASC has restructured its business model from one that

  19. Advanced Simulation and Computing FY08-09 Implementation Plan Volume 2 Revision 0

    SciTech Connect

    McCoy, M; Kusnezov, D; Bickel, T; Hopson, J

    2007-04-25

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. 
Moreover, ASC has restructured its business model from one

  20. Advanced Simulation and Computing FY09-FY10 Implementation Plan Volume 2, Rev. 1

    SciTech Connect

    Kissel, L

    2009-04-01

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. 
Moreover, ASC has restructured its business model from one that

  1. Advanced Simulation and Computing FY09-FY10 Implementation Plan, Volume 2, Revision 0.5

    SciTech Connect

    Meisner, R; Hopson, J; Peery, J; McCoy, M

    2008-10-07

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  2. Advanced Simulation and Computing FY10-FY11 Implementation Plan Volume 2, Rev. 0.5

    SciTech Connect

    Meisner, R; Peery, J; McCoy, M; Hopson, J

    2009-09-08

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model

  3. Advanced Simulation and Computing Fiscal Year 2011-2012 Implementation Plan, Revision 0

    SciTech Connect

    McCoy, M; Phillips, J; Hopson, J; Meisner, R

    2010-04-22

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model

  4. Advanced Simulation and Computing FY08-09 Implementation Plan, Volume 2, Revision 0.5

    SciTech Connect

    Kusnezov, D; Bickel, T; McCoy, M; Hopson, J

    2007-09-13

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  5. Advanced Methodology for Simulation of Complex Flows Using Structured Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Modiano, David

    1995-01-01

    Detailed simulations of viscous flows in complicated geometries pose a significant challenge to current capabilities of Computational Fluid Dynamics (CFD). To enable routine application of CFD to this class of problems, advanced methodologies are required that employ (a) automated grid generation, (b) adaptivity, (c) accurate discretizations and efficient solvers, and (d) advanced software techniques. Each of these ingredients contributes to increased accuracy, efficiency (in terms of human effort and computer time), and/or reliability of CFD software. In the long run, methodologies employing structured grid systems will remain a viable choice for routine simulation of flows in complex geometries only if genuinely automatic grid generation techniques for structured grids can be developed and if adaptivity is employed more routinely. More research in both these areas is urgently needed.

  6. Vibratory compaction method for preparing lunar regolith drilling simulant

    NASA Astrophysics Data System (ADS)

    Chen, Chongbin; Quan, Qiquan; Deng, Zongquan; Jiang, Shengyuan

    2016-07-01

    Drilling and coring is an effective way to acquire lunar regolith samples along the depth direction. To facilitate the modeling and simulation of lunar drilling, ground verification experiments for drilling and coring should be performed using lunar regolith simulant. The simulant should mimic actual lunar regolith, and the distribution of its mechanical properties should vary along the longitudinal direction. Furthermore, an appropriate preparation method is required to ensure that the simulant has consistent mechanical properties so that the experimental results can be repeatable. Vibratory compaction actively changes the relative density of a raw material, making it suitable for building a multilayered drilling simulant. It is necessary to determine the relation between the preparation parameters and the expected mechanical properties of the drilling simulant. A vibratory compaction model based on the ideal elastoplastic theory is built to represent the dynamical properties of the simulant during compaction. Preparation experiments indicated that the preparation method can be used to obtain drilling simulant with the desired mechanical property distribution along the depth direction.
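The link between a compaction parameter and the expected mechanical properties usually runs through relative density, the standard index of compaction state. A minimal sketch of that index (not the paper's model, which couples vibration dynamics to an ideal elastoplastic law; the void-ratio bounds are illustrative):

```python
def relative_density(e, e_min, e_max):
    """Relative density Dr = (e_max - e) / (e_max - e_min), where e is
    the current void ratio and e_min/e_max are the densest and loosest
    achievable states. Dr = 0 for the loosest packing (e = e_max) and
    Dr = 1 for the densest (e = e_min)."""
    if not (e_min <= e <= e_max):
        raise ValueError("void ratio out of range")
    return (e_max - e) / (e_max - e_min)
```

Calibrating vibration amplitude, frequency, and duration against the resulting Dr for each layer is one way to build the depth-varying property profile the simulant requires.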

  7. Stress trajectory and advanced hydraulic-fracture simulations for the Eastern Gas Shales Project. Final report, April 30, 1981-July 30, 1983

    SciTech Connect

    Advani, S.H.; Lee, J.K.

    1983-01-01

    A summary review of hydraulic fracture modeling is given. Advanced hydraulic fracture model formulations and simulation, using the finite element method, are presented. The numerical examples include the determination of fracture width, height, length, and stress intensity factors with the effects of frac fluid properties, layered strata, in situ stresses, and joints. Future model extensions are also recommended. 66 references, 23 figures.

  8. LDRD Final Report: Adaptive Methods for Laser Plasma Simulation

    SciTech Connect

    Dorr, M R; Garaizar, F X; Hittinger, J A

    2003-01-29

    The goal of this project was to investigate the utility of parallel adaptive mesh refinement (AMR) in the simulation of laser plasma interaction (LPI). The scope of work included the development of new numerical methods and parallel implementation strategies. The primary deliverables were (1) parallel adaptive algorithms to solve a system of equations combining plasma fluid and light propagation models, (2) a research code implementing these algorithms, and (3) an analysis of the performance of parallel AMR on LPI problems. The project accomplished these objectives. New algorithms were developed for the solution of a system of equations describing LPI. These algorithms were implemented in a new research code named ALPS (Adaptive Laser Plasma Simulator) that was used to test the effectiveness of the AMR algorithms on the Laboratory's large-scale computer platforms. The details of the algorithm and the results of the numerical tests were documented in an article published in the Journal of Computational Physics [2]. A principal conclusion of this investigation is that AMR is most effective for LPI systems that are "hydrodynamically large", i.e., problems requiring the simulation of a large plasma volume relative to the volume occupied by the laser light. Since the plasma-only regions require less resolution than the laser light, AMR enables the use of efficient meshes for such problems. In contrast, AMR is less effective for, say, a single highly filamented beam propagating through a phase plate, since the resulting speckle pattern may be too dense to adequately separate scales with a locally refined mesh. Ultimately, the gain to be expected from the use of AMR is highly problem-dependent. One class of problems investigated in this project involved a pair of laser beams crossing in a plasma flow. Under certain conditions, energy can be transferred from one beam to the other via a resonant interaction with an ion acoustic wave in the crossing region. AMR provides an

  9. Processing of alnico permanent magnets by advanced directional solidification methods

    DOE PAGES

    Zou, Min; Johnson, Francis; Zhang, Wanming; Zhao, Qi; Rutkowski, Stephen F.; Zhou, Lin; Kramer, Matthew J.

    2016-07-05

    Advanced directional solidification methods have been used to produce large (>15 cm length) castings of Alnico permanent magnets with highly oriented columnar microstructures. In combination with subsequent thermomagnetic and draw thermal treatment, this method was used to enable the high coercivity, high-Titanium Alnico composition of 39% Co, 29.5% Fe, 14% Ni, 7.5% Ti, 7% Al, 3% Cu (wt%) to have an intrinsic coercivity (Hci) of 2.0 kOe, a remanence (Br) of 10.2 kG, and an energy product (BH)max of 10.9 MGOe. These properties compare favorably to typical properties for the commercial Alnico 9. Directional solidification of higher Ti compositions yielded anisotropic columnar grained microstructures if high heat extraction rates through the mold surface of at least 200 kW/m2 were attained. This was achieved through the use of a thin walled (5 mm thick) high thermal conductivity SiC shell mold extracted from a molten Sn bath at a withdrawal rate of at least 200 mm/h. However, higher Ti compositions did not result in further increases in magnet performance. Images of the microstructures collected by scanning electron microscopy (SEM) reveal a majority α phase with inclusions of secondary αγ phase. Transmission electron microscopy (TEM) reveals that the α phase has a spinodally decomposed microstructure of FeCo-rich needles in a NiAl-rich matrix. In the 7.5% Ti composition the diameter distribution of the FeCo needles was bimodal with the majority having diameters of approximately 50 nm with a small fraction having diameters of approximately 10 nm. The needles formed a mosaic pattern and were elongated along one <001> crystal direction (parallel to the field used during magnetic annealing). Cu precipitates were observed between the needles. Regions of abnormal spinodal morphology appeared to correlate with secondary phase precipitates. The presence of these abnormalities did not prevent the material from displaying superior magnetic properties in the 7.5% Ti

  10. Processing of alnico permanent magnets by advanced directional solidification methods

    NASA Astrophysics Data System (ADS)

    Zou, Min; Johnson, Francis; Zhang, Wanming; Zhao, Qi; Rutkowski, Stephen F.; Zhou, Lin; Kramer, Matthew J.

    2016-12-01

    Advanced directional solidification methods have been used to produce large (>15 cm length) castings of Alnico permanent magnets with highly oriented columnar microstructures. In combination with subsequent thermomagnetic and draw thermal treatment, this method was used to enable the high coercivity, high-Titanium Alnico composition of 39% Co, 29.5% Fe, 14% Ni, 7.5% Ti, 7% Al, 3% Cu (wt%) to have an intrinsic coercivity (Hci) of 2.0 kOe, a remanence (Br) of 10.2 kG, and an energy product (BH)max of 10.9 MGOe. These properties compare favorably to typical properties for the commercial Alnico 9. Directional solidification of higher Ti compositions yielded anisotropic columnar grained microstructures if high heat extraction rates through the mold surface of at least 200 kW/m2 were attained. This was achieved through the use of a thin walled (5 mm thick) high thermal conductivity SiC shell mold extracted from a molten Sn bath at a withdrawal rate of at least 200 mm/h. However, higher Ti compositions did not result in further increases in magnet performance. Images of the microstructures collected by scanning electron microscopy (SEM) reveal a majority α phase with inclusions of secondary αγ phase. Transmission electron microscopy (TEM) reveals that the α phase has a spinodally decomposed microstructure of FeCo-rich needles in a NiAl-rich matrix. In the 7.5% Ti composition the diameter distribution of the FeCo needles was bimodal with the majority having diameters of approximately 50 nm with a small fraction having diameters of approximately 10 nm. The needles formed a mosaic pattern and were elongated along one <001> crystal direction (parallel to the field used during magnetic annealing). Cu precipitates were observed between the needles. Regions of abnormal spinodal morphology appeared to correlate with secondary phase precipitates. The presence of these abnormalities did not prevent the material from displaying superior magnetic properties in the 7.5% Ti

  11. Comparison of EBSD patterns simulated by two multislice methods.

    PubMed

    Liu, Q B; Cai, C Y; Zhou, G W; Wang, Y G

    2016-10-01

    The extraction of crystallography information from electron backscatter diffraction (EBSD) patterns can be facilitated by diffraction simulations based on the dynamical electron diffraction theory. In this work, the EBSD patterns are successfully simulated by two multislice methods, that is, the real space (RS) method and the revised real space (RRS) method. The calculation results of the two multislice methods are compared and analyzed in detail with respect to different accelerating voltages, Debye-Waller factors and aperture radii. It is found that the RRS method provides a larger field of view of the EBSD patterns than the RS method under the same calculation conditions. Moreover, the Kikuchi bands of the EBSD patterns obtained by the RRS method match the experimental patterns better than those from the RS method. In particular, the lattice parameters obtained by the RRS method are more accurate than those obtained by the RS method. These results demonstrate that the RRS method is more accurate for simulating EBSD patterns than the RS method within an acceptable computation time.
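    Both the RS and RRS approaches belong to the multislice family, which advances the electron wavefunction slice by slice: transmit through the projected potential of one thin slice, then Fresnel-propagate to the next. A minimal sketch of one generic multislice step (illustrative only; not the specific RS or RRS formulations, and `v_slice` is a hypothetical projected-potential array):

```python
import numpy as np

def multislice_step(psi, v_slice, wavelength, dz, dx, sigma):
    """One generic multislice step: phase-grating transmission through
    the slice potential, then Fresnel propagation over thickness dz."""
    n = psi.shape[0]
    k = np.fft.fftfreq(n, d=dx)                      # spatial frequencies
    kx, ky = np.meshgrid(k, k, indexing="ij")
    t = np.exp(1j * sigma * v_slice)                 # transmission function
    prop = np.exp(-1j * np.pi * wavelength * dz * (kx**2 + ky**2))
    return np.fft.ifft2(prop * np.fft.fft2(t * psi))

# plane-wave probe through one random (hypothetical) potential slice
n = 64
psi = np.ones((n, n), dtype=complex) / n
v = np.random.default_rng(1).normal(size=(n, n))
psi1 = multislice_step(psi, v, wavelength=0.0025, dz=2.0, dx=0.1, sigma=0.01)
```

    Because both the transmission function and the propagator are pure phase factors, each step conserves the wavefunction norm, which is a useful sanity check on any multislice implementation.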

  12. Advancing lighting and daylighting simulation: The transition from analysis to design aid tools

    SciTech Connect

    Hitchcock, R.J.

    1995-05-01

    This paper explores three significant software development requirements for making the transition from stand-alone lighting simulation/analysis tools to simulation-based design aid tools. These requirements include specialized lighting simulation engines, facilitated methods for creating detailed simulatable building descriptions, and automated techniques for providing lighting design guidance. Initial computer implementations meant to address each of these requirements are discussed, both to further elaborate the requirements and to illustrate work in progress.

  13. An advanced synthetic eddy method for the computation of aerofoil-turbulence interaction noise

    NASA Astrophysics Data System (ADS)

    Kim, Jae Wook; Haeri, Sina

    2015-04-01

    This paper presents an advanced method to synthetically generate flow turbulence via an inflow boundary condition particularly designed for three-dimensional aeroacoustic simulations. The proposed method is virtually free of spurious noise that might arise from the synthetic turbulence, which enables a direct calculation of propagated sound waves from the source mechanism. The present work stemmed from one of the latest outcomes of synthetic eddy method (SEM) derived from a well-defined vector potential function creating a divergence-free velocity field with correct convection speeds of eddies, which in theory suppresses pressure fluctuations. In this paper, a substantial extension of the SEM is introduced and systematically optimised to create a realistic turbulence field based on von Kármán velocity spectra. The optimised SEM is then combined with a well-established sponge-layer technique to quietly inject the turbulent eddies into the domain from the upstream boundary, which results in a sufficiently clean acoustic field. Major advantages in the present approach are: a) that genuinely three-dimensional turbulence is generated; b) that various ways of parametrisation can be created to control/characterise the randomly distributed eddies; and, c) that its numerical implementation is efficient as the size of domain section through which the turbulent eddies should be passing can be adjusted and minimised. The performance and reliability of the proposed SEM are demonstrated by a three-dimensional simulation of aerofoil-turbulence interaction noise.
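    The property the SEM builds on — a velocity field defined as the curl of a vector potential is automatically divergence-free — is easy to verify numerically. A sketch using a simple Gaussian eddy potential (illustrative only; the paper optimises the eddy shape functions against von Kármán spectra):

```python
import numpy as np

n = 24
ax = np.linspace(-1.0, 1.0, n)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
r2 = X**2 + Y**2 + Z**2

# Gaussian vector potential for one synthetic eddy (assumed shape)
psi = np.stack([np.exp(-8 * r2), np.zeros_like(X), np.exp(-8 * r2)])

d = ax[1] - ax[0]
dpsi = [np.gradient(c, d, d, d) for c in psi]  # dpsi[i][j] = d psi_i / d x_j

u = dpsi[2][1] - dpsi[1][2]   # velocity = curl(psi)
v = dpsi[0][2] - dpsi[2][0]
w = dpsi[1][0] - dpsi[0][1]

div = (np.gradient(u, d, axis=0) + np.gradient(v, d, axis=1)
       + np.gradient(w, d, axis=2))
# central-difference operators along different axes commute, so the
# discrete divergence of the discrete curl vanishes to roundoff
```

    Zero divergence means the injected eddies carry no spurious monopole-like pressure sources, which is why this construction helps keep the acoustic field clean.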

  14. Advanced Simulation in Undergraduate Pilot Training: Automatic Instructional System. Final Report for the Period March 1971-January 1975.

    ERIC Educational Resources Information Center

    Faconti, Victor; Epps, Robert

    The Advanced Simulator for Undergraduate Pilot Training (ASUPT) was designed to investigate the role of simulation in the future Undergraduate Pilot Training (UPT) program. The Automated Instructional System designed for the ASUPT simulator was described in this report. The development of the Automated Instructional System for ASUPT was based upon…

  15. Profile Evolution Simulation in Etching Systems Using Level Set Methods

    NASA Technical Reports Server (NTRS)

    Hwang, Helen H.; Govindan, T. R.; Meyyappan, M.

    1998-01-01

    Semiconductor device profiles are determined by the characteristics of both etching and deposition processes. In particular, a highly anisotropic etch is required to achieve vertical sidewalls. However, etching comprises both anisotropic and isotropic components, due to ion and neutral fluxes, respectively. In Ar/Cl2 plasmas, for example, neutral chlorine reacts with the Si surfaces to form silicon chlorides. These compounds are then removed by the impinging ion fluxes. Hence the directionality of the ions (and thus the ion angular distribution function, or IAD), as well as the relative fluxes of neutrals and ions, determines the amount of undercutting. One method of modeling device profile evolution is to simulate the moving solid-gas interface between the semiconductor and the plasma as a string of nodes. The velocity of each node is calculated and then the nodes are advanced accordingly. Although this technique appears to be relatively straightforward, extensive looping schemes are required at the profile corners. An alternate method is to use level set theory, which involves embedding the location of the interface in a field variable. The normal speed is calculated at each mesh point, and the field variable is updated. The profile corners are more accurately modeled as the need for looping algorithms is eliminated. The model we have developed is a 2-D Level Set Profile Evolution Simulation (LSPES). The LSPES calculates etch rates of a substrate in low pressure plasmas due to the incident ion and neutral fluxes. For a Si substrate in an Ar/Cl2 gas mixture, for example, the predictions of the LSPES are identical to those from a string evolution model for high neutral fluxes and two different ion angular distributions [2]. In the figure shown, the relative neutral to ion flux in the bulk plasma is 100 to 1. For a moderately isotropic ion angular distribution function as shown in the cases in the left hand column, both the LSPES (top row) and rude's string
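    The level-set update described above — embed the interface as the zero contour of a field φ and move it at normal speed F — can be sketched with a standard first-order Godunov upwind scheme (a textbook Osher–Sethian formulation; the actual LSPES discretisation is not given in the abstract):

```python
import numpy as np

def level_set_step(phi, F, dx, dt):
    """One Godunov upwind update of phi_t + F |grad phi| = 0 with F > 0.
    The front is the phi = 0 contour, so corners need no node bookkeeping."""
    dmx = (phi - np.roll(phi, 1, 0)) / dx     # backward difference, x
    dpx = (np.roll(phi, -1, 0) - phi) / dx    # forward difference, x
    dmy = (phi - np.roll(phi, 1, 1)) / dx
    dpy = (np.roll(phi, -1, 1) - phi) / dx
    grad = np.sqrt(np.maximum(dmx, 0.0)**2 + np.minimum(dpx, 0.0)**2
                   + np.maximum(dmy, 0.0)**2 + np.minimum(dpy, 0.0)**2)
    return phi - dt * F * grad

# expanding circle: signed-distance phi with the front at radius 0.5
n = 101
ax = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(ax, ax, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 0.5
dx = ax[1] - ax[0]
dt = 0.1 * dx
phi1 = level_set_step(phi, F=1.0, dx=dx, dt=dt)
```

    For a signed-distance field, |∇φ| ≈ 1, so φ drops by ≈ F·dt per step everywhere and the zero contour expands at exactly the normal speed — with no explicit treatment of where the front's corners are.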

  16. An experimental method for the assessment of color simulation tools.

    PubMed

    Lillo, Julio; Alvaro, Leticia; Moreira, Humberto

    2014-07-22

    The Simulcheck method for evaluating the accuracy of color simulation tools in relation to dichromats is described and used to test three color simulation tools: Variantor, Coblis, and Vischeck. A total of 10 dichromats (five protanopes, five deuteranopes) and 10 normal trichromats participated in the current study. Simulcheck includes two psychophysical tasks: the Pseudoachromatic Stimuli Identification task and the Minimum Achromatic Contrast task. The Pseudoachromatic Stimuli Identification task allows determination of the two chromatic angles (h(uv) values) that generate a minimum response in the yellow–blue opponent mechanism and, consequently, pseudoachromatic stimuli (greens or reds). The Minimum Achromatic Contrast task requires the selection of the gray background that produces minimum contrast (near zero change in the achromatic mechanism) for each pseudoachromatic stimulus selected in the previous task (L(R) values). Results showed important differences in the colorimetric transformations performed by the three evaluated simulation tools and their accuracy levels. Vischeck simulation accurately implemented the algorithm of Brettel, Viénot, and Mollon (1997). Only Vischeck appeared accurate (similarity in h(uv) and L(R) values between real and simulated dichromats) and, consequently, could render reliable color selections. It is concluded that Simulcheck is a consistent method because it provided an equivalent pattern of results for h(uv) and L(R) values irrespective of the stimulus set used to evaluate a simulation tool. Simulcheck was also considered valid because real dichromats provided expected h(uv) and L(R) values when performing the two psychophysical tasks included in this method.

  17. Simulating rotationally inelastic collisions using a direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Schullian, O.; Loreau, J.; Vaeck, N.; van der Avoird, A.; Heazlewood, B. R.; Rennick, C. J.; Softley, T. P.

    2015-12-01

    A new approach to simulating rotational cooling using a direct simulation Monte Carlo (DSMC) method is described and applied to the rotational cooling of ammonia seeded into a helium supersonic jet. The method makes use of ab initio rotational state changing cross sections calculated as a function of collision energy. Each particle in the DSMC simulations is labelled with a vector of rotational populations that evolves with time. Transfer of energy into translation is calculated from the mean energy transfer for this population at the specified collision energy. The simulations are compared with a continuum model for the on-axis density, temperature and velocity; rotational temperature as a function of distance from the nozzle is in accord with expectations from experimental measurements. The method could be applied to other types of gas mixture dynamics under non-uniform conditions, such as buffer gas cooling of NH3 by He.
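    The population-vector idea can be sketched in a few lines: each DSMC particle carries a probability vector over rotational levels, each collision applies a transition matrix derived from the state-changing cross sections, and the change in mean rotational energy is handed to translation. The 3-level energies and the row-stochastic matrix below are invented placeholders (the paper builds these from ab initio cross sections at the actual collision energy):

```python
import numpy as np

# hypothetical 3-level rotor: level energies (arbitrary units) and a
# row-stochastic per-collision transition matrix
E = np.array([0.0, 1.0, 3.0])
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.85, 0.05],
              [0.05, 0.15, 0.80]])

def collide(pop):
    """Evolve one particle's rotational population vector through a
    collision; return the new vector and the mean energy transferred
    from rotation into translation."""
    new = pop @ P
    return new, pop @ E - new @ E

pop = np.array([0.0, 0.0, 1.0])        # start fully in the top level
released = 0.0
for _ in range(200):                   # relax over repeated collisions
    pop, dE = collide(pop)
    released += dE
```

    The population vector stays normalized at every step, and the cumulative energy released to translation equals the drop in mean rotational energy, so the scheme conserves energy particle by particle.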

  18. A numerical investigation on the efficiency of range extending systems using Advanced Vehicle Simulator

    NASA Astrophysics Data System (ADS)

    Varnhagen, Scott; Same, Adam; Remillard, Jesse; Park, Jae Wan

    2011-03-01

    Series plug-in hybrid electric vehicles of varying engine configuration and battery capacity are modeled using the Advanced Vehicle Simulator (ADVISOR). The performance of these vehicles is analyzed on the basis of energy consumption and greenhouse gas emissions on the tank-to-wheel and well-to-wheel paths. Both city and highway driving conditions are considered during the simulation. When simulated on the well-to-wheel path, it is shown that the range extender with a Wankel rotary engine consumes less energy and emits fewer greenhouse gases compared to the other systems with reciprocating engines during many driving cycles. The rotary engine has a higher power-to-weight ratio and lower noise, vibration and harshness compared to conventional reciprocating engines, although it performs less efficiently. The benefits of a Wankel engine make it an attractive option for use as a range extender in a plug-in hybrid electric vehicle.

  19. Technical Basis for Physical Fidelity of NRC Control Room Training Simulators for Advanced Reactors

    SciTech Connect

    Minsk, Brian S.; Branch, Kristi M.; Bates, Edward K.; Mitchell, Mark R.; Gore, Bryan F.; Faris, Drury K.

    2009-10-09

    The objective of this study is to determine how simulator physical fidelity influences the effectiveness of training the regulatory personnel responsible for examination and oversight of operating personnel and inspection of technical systems at nuclear power reactors. It seeks to contribute to the U.S. Nuclear Regulatory Commission’s (NRC’s) understanding of the physical fidelity requirements of training simulators. The goal of the study is to provide an analytic framework, data, and analyses that inform NRC decisions about the physical fidelity requirements of the simulators it will need to train its staff for assignment at advanced reactors. These staff are expected to come from increasingly diverse educational and experiential backgrounds.

  20. A Method for Increasing Elders' Use of Advance Directives.

    ERIC Educational Resources Information Center

    Luptak, Marilyn K.; Boult, Chad

    1994-01-01

    Studied effectiveness of intervention to help frail elders to record advance directives (ADs). In collaboration with physicians and lay volunteer, social worker provided information/counseling to elderly subjects, families, and proxies in series of visits to geriatric evaluation and management clinic. Seventy-one percent of subjects recorded ADs.…

  1. Mission simulation as an approach to develop requirements for automation in Advanced Life Support Systems

    NASA Technical Reports Server (NTRS)

    Erickson, J. D.; Eckelkamp, R. E.; Barta, D. J.; Dragg, J.; Henninger, D. L. (Principal Investigator)

    1996-01-01

    This paper examines mission simulation as an approach to develop requirements for automation and robotics for Advanced Life Support Systems (ALSS). The focus is on requirements and applications for command and control, control and monitoring, situation assessment and response, diagnosis and recovery, adaptive planning and scheduling, and other automation applications in addition to mechanized equipment and robotics applications to reduce the excessive human labor requirements to operate and maintain an ALSS. Based on principles of systems engineering, an approach is proposed to assess requirements for automation and robotics using mission simulation tools. First, the story of a simulated mission is defined in terms of processes with attendant types of resources needed, including options for use of automation and robotic systems. Next, systems dynamics models are used in simulation to reveal the implications for selected resource allocation schemes in terms of resources required to complete operational tasks. The simulations not only help establish ALSS design criteria, but also may offer guidance to ALSS research efforts by identifying gaps in knowledge about procedures and/or biophysical processes. Simulations of a planned one-year mission with 4 crewmembers in a Human Rated Test Facility are presented as an approach to evaluation of mission feasibility and definition of automation and robotics requirements.

  2. Advanced process engineering co-simulation using CFD-based reduced order models

    SciTech Connect

    Lang, Y.-D.; Biegler, L.T.; Munteanu, S.; Madsen, J.I.; Zitney, S.E.

    2007-11-04

    The process and energy industries face the challenge of designing the next generation of plants to operate with unprecedented efficiency and near-zero emissions, while performing profitably amid fluctuations in costs for raw materials, finished products, and energy. To achieve these targets, the designers of future plants are increasingly relying upon modeling and simulation to create virtual plants that allow them to evaluate design concepts without the expense of pilot-scale and demonstration facilities. Two of the more commonly used simulation tools include process simulators for describing the entire plant as a network of simplified equipment models and computational fluid dynamic (CFD) packages for modeling an isolated equipment item in great detail by accounting for complex thermal and fluid flow phenomena. The Advanced Process Engineering Co-Simulator (APECS) sponsored by the U.S. Department of Energy’s (DOE) National Energy Technology Laboratory (NETL) has been developed to combine process simulation software with CFD-based equipment simulation software so that design engineers can analyze and optimize the coupled fluid flow, heat and mass transfer, and chemical reactions that drive overall plant performance (Zitney et al., 2006). The process/CFD software integration was accomplished using the process-industry standard CAPE-OPEN interfaces.
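    At their simplest, the CFD-based reduced order models that make such co-simulation tractable are response surfaces fitted to a small design of CFD runs; the process simulator then calls the cheap surrogate instead of the CFD code inside its flowsheet loop. A hedged sketch (the function `cfd_stub` and the quadratic basis are illustrative assumptions, not the APECS ROM machinery):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(30, 2))   # sampled operating points

def cfd_stub(x):
    """Stand-in for an expensive CFD evaluation of one equipment item."""
    return 1.0 + 2.0 * x[0] - 0.5 * x[1] + 0.8 * x[0] * x[1]

y = np.array([cfd_stub(x) for x in X])    # "precomputed CFD results"

def basis(x):
    """Quadratic monomial basis for the response surface."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])

A = np.array([basis(x) for x in X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares fit

rom = lambda x: basis(x) @ coef                # cheap surrogate model
```

    Once fitted offline, evaluating `rom` costs a dot product rather than a CFD solve, which is what lets the coupled plant-level optimization iterate at process-simulator speed.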

  3. Validation of GATE Monte Carlo simulations of the GE Advance/Discovery LS PET scanners.

    PubMed

    Schmidtlein, C Ross; Kirov, Assen S; Nehmeh, Sadek A; Erdi, Yusuf E; Humm, John L; Amols, Howard I; Bidaut, Luc M; Ganin, Alex; Stearns, Charles W; McDaniel, David L; Hamacher, Klaus A

    2006-01-01

    The recently developed GATE (GEANT4 application for tomographic emission) Monte Carlo package, designed to simulate positron emission tomography (PET) and single photon emission computed tomography (SPECT) scanners, provides the ability to model and account for the effects of photon noncollinearity, off-axis detector penetration, detector size and response, positron range, photon scatter, and patient motion on the resolution and quality of PET images. The objective of this study is to validate a model within GATE of the General Electric (GE) Advance/Discovery Light Speed (LS) PET scanner. Our three-dimensional PET simulation model of the scanner consists of 12 096 detectors grouped into blocks, which are grouped into modules as per the vendor's specifications. The GATE results are compared to experimental data obtained in accordance with the National Electrical Manufacturers Association/Society of Nuclear Medicine (NEMA/SNM), NEMA NU 2-1994, and NEMA NU 2-2001 protocols. The respective phantoms are also accurately modeled, thus allowing us to simulate the sensitivity, scatter fraction, count rate performance, and spatial resolution. In-house software was developed to produce and analyze sinograms from the simulated data. With our model of the GE Advance/Discovery LS PET scanner, the ratio of the sensitivities with sources radially offset 0 and 10 cm from the scanner's main axis is reproduced to within 1% of measurements. Similarly, the simulated scatter fraction for the NEMA NU 2-2001 phantom agrees to within less than 3% of measured values (the measured scatter fractions are 44.8% and 40.9 +/- 1.4% and the simulated scatter fraction is 43.5 +/- 0.3%). The simulated count rate curves were made to match the experimental curves by using deadtimes as fit parameters. This resulted in deadtime values of 625 and 332 ns at the Block and Coincidence levels, respectively. The experimental peak true count rate of 139.0 kcps and the peak activity concentration of 21.5 k
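    The dead-time fitting step can be sketched with a non-paralyzable dead-time model (an assumed functional form chosen for illustration; GATE's digitizer supports several) and a brute-force least-squares scan for τ, seeded here with the 625 ns block-level value quoted in the abstract:

```python
import numpy as np

def observed_rate(true_rate, tau):
    """Non-paralyzable dead-time model: observed = true / (1 + true * tau)."""
    return true_rate / (1.0 + true_rate * tau)

# synthetic "measured" count-rate curve generated with tau = 625 ns,
# then recovered by a 1-D least-squares scan over candidate dead times
true = np.linspace(1e4, 2.0e6, 40)
meas = observed_rate(true, 625e-9)

taus = np.linspace(100e-9, 1200e-9, 1101)        # 1 ns grid
sse = [np.sum((observed_rate(true, t) - meas)**2) for t in taus]
tau_fit = taus[int(np.argmin(sse))]
```

    In the real validation the "measured" curve comes from the scanner and the model curve from GATE; the scan (or any 1-D optimizer) picks the τ that best reconciles the two.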

  4. Validation of GATE Monte Carlo simulations of the GE Advance/Discovery LS PET scanners

    SciTech Connect

    Schmidtlein, C. Ross; Kirov, Assen S.; Nehmeh, Sadek A.; Erdi, Yusuf E.; Humm, John L.; Amols, Howard I.; Bidaut, Luc M.; Ganin, Alex; Stearns, Charles W.; McDaniel, David L.; Hamacher, Klaus A.

    2006-01-15

    The recently developed GATE (GEANT4 application for tomographic emission) Monte Carlo package, designed to simulate positron emission tomography (PET) and single photon emission computed tomography (SPECT) scanners, provides the ability to model and account for the effects of photon noncollinearity, off-axis detector penetration, detector size and response, positron range, photon scatter, and patient motion on the resolution and quality of PET images. The objective of this study is to validate a model within GATE of the General Electric (GE) Advance/Discovery Light Speed (LS) PET scanner. Our three-dimensional PET simulation model of the scanner consists of 12 096 detectors grouped into blocks, which are grouped into modules as per the vendor's specifications. The GATE results are compared to experimental data obtained in accordance with the National Electrical Manufacturers Association/Society of Nuclear Medicine (NEMA/SNM), NEMA NU 2-1994, and NEMA NU 2-2001 protocols. The respective phantoms are also accurately modeled, thus allowing us to simulate the sensitivity, scatter fraction, count rate performance, and spatial resolution. In-house software was developed to produce and analyze sinograms from the simulated data. With our model of the GE Advance/Discovery LS PET scanner, the ratio of the sensitivities with sources radially offset 0 and 10 cm from the scanner's main axis is reproduced to within 1% of measurements. Similarly, the simulated scatter fraction for the NEMA NU 2-2001 phantom agrees to within less than 3% of measured values (the measured scatter fractions are 44.8% and 40.9 ± 1.4% and the simulated scatter fraction is 43.5 ± 0.3%). The simulated count rate curves were made to match the experimental curves by using deadtimes as fit parameters. This resulted in deadtime values of 625 and 332 ns at the Block and Coincidence levels, respectively. The experimental peak true count rate of 139.0 kcps and the peak activity concentration of 21.5 k

  5. A Method of Simulating Fluid Structure Interactions for Deformable Decelerators

    NASA Astrophysics Data System (ADS)

    Gidzak, Vladimyr Mykhalo

    A method is developed for performing simulations that contain fluid-structure interactions between deployable decelerators and a high speed compressible flow. The problem of coupling together multiple physical systems is examined, with discussion of the strength of coupling for various methods. A non-monolithic strongly coupled option is presented for fluid-structure systems based on grid deformation. A class of algebraic grid deformation methods is then presented with examples of increasing complexity. The strength of the fluid-structure coupling is validated against two analytic problems, chosen to test the time-dependent behavior of structure-on-fluid interactions and of fluid-on-structure interactions. A one-dimensional material heating model is also validated against experimental data. Results are provided for simulations of a wind tunnel scale disk-gap-band parachute with comparison to experimental data. Finally, a simulation is performed on a flight scale tension cone decelerator, with examination of time-dependent material stress and heating.
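    One simple member of the class of algebraic grid deformation methods described above blends a known displacement of the body surface into the interior mesh with a smooth decay, leaving the outer boundary fixed, so the fluid grid follows the structure each step without remeshing. The cubic Hermite blend and the layout (body surface at j = 0) are illustrative assumptions:

```python
import numpy as np

def deform_grid(y, surface_dy):
    """Algebraic grid deformation: y has shape (ni, nj) with the body
    surface at j = 0 and the outer boundary at j = nj - 1. The surface
    displacement decays into the field via a cubic Hermite blend that is
    1 at the body and 0 (with zero slope) at the outer boundary."""
    eta = (y - y[:, :1]) / (y[:, -1:] - y[:, :1])   # 0 at body, 1 outside
    blend = (1.0 - eta)**2 * (1.0 + 2.0 * eta)
    return y + blend * surface_dy[:, None]

ni, nj = 5, 21
y = np.tile(np.linspace(0.0, 1.0, nj), (ni, 1))      # initially uniform mesh
dy = 0.05 * np.sin(np.linspace(0.0, np.pi, ni))      # structural motion
y_new = deform_grid(y, dy)
```

    Being a closed-form expression per node, the deformation costs one sweep over the mesh, which is why algebraic methods are attractive inside a tightly coupled fluid-structure time loop.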

  6. Replica exchange simulation method using temperature and solvent viscosity

    NASA Astrophysics Data System (ADS)

    Nguyen, Phuong H.

    2010-04-01

    We propose an efficient and simple method for fast conformational sampling by introducing the solvent viscosity as a parameter to the conventional temperature replica exchange molecular dynamics (T-REMD) simulation method. The method, named V-REMD (V stands for viscosity), uses both low solvent viscosity and high temperature to enhance sampling for each replica; therefore it requires fewer replicas than the T-REMD method. To reduce the solvent viscosity by a factor of λ in a molecular dynamics simulation, one can simply reduce the mass of solvent molecules by a factor of λ². This makes the method as simple as the conventional method. Moreover, thermodynamic and conformational properties of structures in replicas are still useful as long as one has sufficiently sampled the Boltzmann ensemble. The advantage of the present method has been demonstrated with simulations of trialanine, deca-alanine, and a 16-residue β-hairpin peptide. These show that the method can reduce the number of replicas by a factor of 1.5 to 2 as compared with the T-REMD method.
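    Two ingredients of the method are easy to state in code: the λ² mass scaling that lowers the solvent viscosity, and the replica-exchange Metropolis test (V-REMD uses the same acceptance rule as standard T-REMD). Units and the Boltzmann constant below are illustrative choices:

```python
import math

KB = 0.0019872  # Boltzmann constant, kcal/(mol K)

def swap_probability(E_i, T_i, E_j, T_j):
    """Metropolis acceptance probability for exchanging replicas i and j:
    min(1, exp[(beta_i - beta_j) * (E_i - E_j)])."""
    beta_i, beta_j = 1.0 / (KB * T_i), 1.0 / (KB * T_j)
    return min(1.0, math.exp((beta_i - beta_j) * (E_i - E_j)))

def scaled_solvent_mass(mass, lam):
    """Divide solvent masses by lam**2 to cut the solvent viscosity by a
    factor lam; equilibrium averages are independent of the masses, so
    Boltzmann statistics are preserved."""
    return mass / lam**2
```

    For example, water at 18 g/mol with λ = 2 becomes 4.5 g/mol, halving the effective viscosity, while the exchange test above decides whether neighboring replicas trade configurations.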

  7. Implicit methods for efficient musculoskeletal simulation and optimal control

    PubMed Central

    van den Bogert, Antonie J.; Blana, Dimitra; Heinrich, Dieter

    2011-01-01

    The ordinary differential equations for musculoskeletal dynamics are often numerically stiff and highly nonlinear. Consequently, simulations require small time steps, and optimal control problems are slow to solve and have poor convergence. In this paper, we present an implicit formulation of musculoskeletal dynamics, which leads to new numerical methods for simulation and optimal control, with the expectation that we can mitigate some of these problems. A first order Rosenbrock method was developed for solving forward dynamic problems using the implicit formulation. It was used to perform real-time dynamic simulation of a complex shoulder arm system with extreme dynamic stiffness. Simulations had an RMS error of only 0.11 degrees in joint angles when running at real-time speed. For optimal control of musculoskeletal systems, a direct collocation method was developed for implicitly formulated models. The method was applied to predict gait with a prosthetic foot and ankle. Solutions were obtained in well under one hour of computation time and demonstrated how patients may adapt their gait to compensate for limitations of a specific prosthetic limb design. The optimal control method was also applied to a state estimation problem in sports biomechanics, where forces during skiing were estimated from noisy and incomplete kinematic data. Using a full musculoskeletal dynamics model for state estimation had the additional advantage that forward dynamic simulations could be done with the same implicitly formulated model to simulate injuries and perturbation responses. While these methods are powerful and allow solution of previously intractable problems, there are still considerable numerical challenges, especially related to the convergence of gradient-based solvers. PMID:22102983
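    The first-order Rosenbrock idea — one linear solve per step using the Jacobian, no Newton iteration — can be sketched for an explicit-form stiff ODE x' = f(x). (The paper applies it to the implicit form f(x, ẋ, u) = 0; this simplified version only illustrates why large steps remain stable on stiff dynamics.)

```python
import numpy as np

def rosenbrock1_step(f, jac, x, h):
    """One first-order Rosenbrock (linearly implicit Euler) step for
    x' = f(x): solve (I - h J) dx = h f(x), then x <- x + dx.
    A single linear solve per step -- no Newton iteration."""
    n = x.size
    dx = np.linalg.solve(np.eye(n) - h * jac(x), h * f(x))
    return x + dx

# stiff linear test problem x' = A x with eigenvalues -1 and -1e6
A = np.diag([-1.0, -1.0e6])
f = lambda x: A @ x
jac = lambda x: A

x = np.array([1.0, 1.0])
h = 0.1            # far beyond the explicit-Euler stability limit of 2e-6
for _ in range(100):
    x = rosenbrock1_step(f, jac, x, h)
```

    Explicit Euler would blow up at this step size; the linearly implicit update damps the fast mode immediately while tracking the slow mode, which is exactly the behavior needed for stiff muscle dynamics at real-time rates.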

  8. Implicit methods for efficient musculoskeletal simulation and optimal control.

    PubMed

    van den Bogert, Antonie J; Blana, Dimitra; Heinrich, Dieter

    2011-01-01

    The ordinary differential equations for musculoskeletal dynamics are often numerically stiff and highly nonlinear. Consequently, simulations require small time steps, and optimal control problems are slow to solve and have poor convergence. In this paper, we present an implicit formulation of musculoskeletal dynamics, which leads to new numerical methods for simulation and optimal control, with the expectation that we can mitigate some of these problems. A first order Rosenbrock method was developed for solving forward dynamic problems using the implicit formulation. It was used to perform real-time dynamic simulation of a complex shoulder arm system with extreme dynamic stiffness. Simulations had an RMS error of only 0.11 degrees in joint angles when running at real-time speed. For optimal control of musculoskeletal systems, a direct collocation method was developed for implicitly formulated models. The method was applied to predict gait with a prosthetic foot and ankle. Solutions were obtained in well under one hour of computation time and demonstrated how patients may adapt their gait to compensate for limitations of a specific prosthetic limb design. The optimal control method was also applied to a state estimation problem in sports biomechanics, where forces during skiing were estimated from noisy and incomplete kinematic data. Using a full musculoskeletal dynamics model for state estimation had the additional advantage that forward dynamic simulations could be done with the same implicitly formulated model to simulate injuries and perturbation responses. While these methods are powerful and allow solution of previously intractable problems, there are still considerable numerical challenges, especially related to the convergence of gradient-based solvers.

  9. An improved method for simulating microcalcifications in digital mammograms

    PubMed Central

    Zanca, Federica; Chakraborty, Dev Prasad; Van Ongeval, Chantal; Jacobs, Jurgen; Claus, Filip; Marchal, Guy; Bosmans, Hilde

    2008-01-01

    The assessment of the performance of a digital mammography system requires an observer study with a relatively large number of cases with known truth, which is often difficult to assemble. Several investigators have developed methods for generating hybrid abnormal images containing simulated microcalcifications. This article addresses some of the limitations of earlier methods. The new method is based on digital images of needle biopsy specimens. Since the specimens are imaged separately from the breast, the microcalcification attenuation profiles can be deduced without the effects of over- and underlying tissues. The resulting templates are normalized for image-acquisition-specific parameters and reprocessed to simulate microcalcifications appropriate to other imaging systems, with different x-ray, detector, and image processing parameters than the original acquisition system. This capability is not shared by previous simulation methods that have relied on extracting microcalcifications from breast images. The method was validated by five experienced mammographers who compared 59 pairs of simulated and real microcalcifications in a two-alternative forced choice task designed to test if they could distinguish the real from the simulated lesions. They also classified the shapes of the microcalcifications according to a standardized clinical lexicon. The observed probability of correct choice was 0.415, 95% confidence interval (0.284, 0.546), showing that the radiologists were unable to distinguish the lesions. The shape classification revealed substantial agreement with the truth (mean kappa=0.70), showing that we were able to accurately simulate the lesion morphology. While currently limited to single microcalcifications, the method is extensible to more complex clusters of microcalcifications and to three-dimensional images. It can be used to objectively assess an imaging technology, especially with respect to its ability to adequately visualize the morphology of the

  10. A General Simulation Method for Multiple Bodies in Proximate Flight

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    2003-01-01

    Methods of unsteady aerodynamic simulation for an arbitrary number of independent bodies flying in close proximity are considered. A novel method to efficiently detect collision contact points is described. A method to compute body trajectories in response to aerodynamic loads, applied loads, and inter-body collisions is also given. The physical correctness of the methods is verified by comparison to a set of analytic solutions. The methods, combined with a Navier-Stokes solver, are used to demonstrate the possibility of predicting the unsteady aerodynamics and flight trajectories of moving bodies that involve rigid-body collisions.

  11. Finite element methods for enhanced oil recovery simulation

    SciTech Connect

    Cohen, M.F.

    1985-02-01

    A general, finite element procedure for reservoir simulation is presented. This effort is directed toward improving the numerical behavior of standard upstream, or upwind, finite difference techniques, without significantly increasing the computational costs. Two methods from previous authors' work are modified and developed: upwind finite elements and the Petrov-Galerkin method. These techniques are applied in a one- and two-dimensional, surfactant/polymer simulator. The paper sets forth the mathematical formulation and several details concerning the implementation. The results indicate that the Petrov-Galerkin method does significantly reduce numerical diffusion errors, while it retains the stability of the first-order, upwind methods. It is also relatively simple to implement. Both the upwind and Petrov-Galerkin finite element methods demonstrate little sensitivity to grid orientation.

  12. Direct simulation Monte Carlo method with a focal mechanism algorithm

    NASA Astrophysics Data System (ADS)

    Rachman, Asep Nur; Chung, Tae Woong; Yoshimoto, Kazuo; Yun, Sukyoung

    2015-01-01

    To simulate the observation of the radiation pattern of an earthquake, the direct simulation Monte Carlo (DSMC) method is modified by implanting a focal mechanism algorithm. We compare the results of the modified DSMC method (DSMC-2) with those of the original DSMC method (DSMC-1). For events recorded by 12 or more stations, DSMC-2 yields results as reliable as, or more reliable than, those of DSMC-1 when data at hypocentral distances of less than 80 km are weighted twice. Not only the number of stations, but also other factors such as rough topography, the magnitude of the event, and the analysis method influence the reliability of DSMC-2. The most reliable DSMC-2 result is obtained with the best azimuthal coverage from the largest number of stations. The DSMC-2 method requires shorter time steps and a larger number of particles than DSMC-1 to capture a sufficient number of arrived particles in the small-sized receiver.

  13. Advanced virtual energy simulation training and research: IGCC with CO2 capture power plant

    SciTech Connect

    Zitney, S.; Liese, E.; Mahapatra, P.; Bhattacharyya, D.; Provost, G.

    2011-01-01

    In this presentation, we highlight the deployment of a real-time dynamic simulator of an integrated gasification combined cycle (IGCC) power plant with CO{sub 2} capture at the Department of Energy's (DOE) National Energy Technology Laboratory's (NETL) Advanced Virtual Energy Simulation Training and Research (AVESTAR™) Center. The Center was established as part of the DOE's accelerating initiative to advance new clean coal technology for power generation. IGCC systems are an attractive technology option, generating low-cost electricity by converting coal and/or other fuels into a clean synthesis gas mixture in a process that is efficient and environmentally superior to conventional power plants. The IGCC dynamic simulator builds on, and reaches beyond, conventional power plant simulators to merge, for the first time, a 'gasification with CO{sub 2} capture' process simulator with a 'combined-cycle' power simulator. Fueled with coal, petroleum coke, and/or biomass, the gasification island of the simulated IGCC plant consists of two oxygen-blown, downward-fired, entrained-flow, slagging gasifiers with radiant syngas coolers and two-stage sour shift reactors, followed by a dual-stage acid gas removal process for CO{sub 2} capture. The combined-cycle island consists of two F-class gas turbines, a steam turbine, and a heat-recovery steam generator with three pressure levels. The dynamic simulator can be used for normal base-load operation, as well as plant start-up and shutdown. The real-time dynamic simulator also responds satisfactorily to process disturbances, feedstock blending and switchovers, fluctuations in ambient conditions, and power demand load shedding. In addition, the full-scope simulator handles a wide range of abnormal situations, including equipment malfunctions and failures, together with changes initiated through actions from plant field operators. By providing a comprehensive IGCC operator training system, the AVESTAR Center is poised to develop a

  14. The Advanced Modeling, Simulation and Analysis Capability Roadmap Vision for Engineering

    NASA Technical Reports Server (NTRS)

    Zang, Thomas; Lieber, Mike; Norton, Charles; Fucik, Karen

    2006-01-01

    This paper summarizes a subset of the Advanced Modeling Simulation and Analysis (AMSA) Capability Roadmap that was developed for NASA in 2005. The AMSA Capability Roadmap Team was chartered "To identify what is needed to enhance NASA's capabilities to produce leading-edge exploration and science missions by improving engineering system development, operations, and science understanding through broad application of advanced modeling, simulation and analysis techniques." The AMSA roadmap stressed the need for integration, not just within the science, engineering and operations domains themselves, but also across these domains. Here we discuss the roadmap element pertaining to integration within the engineering domain, with a particular focus on implications for future observatory missions. The AMSA products supporting the system engineering function are mission information, bounds on information quality, and system validation guidance. The Engineering roadmap element contains 5 sub-elements: (1) Large-Scale Systems Models, (2) Anomalous Behavior Models, (3) Advanced Uncertainty Models, (4) Virtual Testing Models, and (5) Space-Based Robotics Manufacture and Servicing Models.

  15. Recent advances in 3D computed tomography techniques for simulation and navigation in hepatobiliary pancreatic surgery.

    PubMed

    Uchida, Masafumi

    2014-04-01

    A few years ago it could take several hours to complete a 3D image using a 3D workstation. Thanks to advances in computer science, obtaining results of interest now requires only a few minutes. Many recent 3D workstations or multimedia computers are equipped with onboard 3D virtual patient modeling software, which enables patient-specific preoperative assessment and virtual planning, navigation, and tool positioning. Although medical 3D imaging can now be conducted using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasonography (US) among others, the highest quality images are obtained using CT data, and CT images are now the most commonly used source of data for 3D simulation and navigation images. If the 2D source image is bad, no amount of 3D image manipulation in software will provide a quality 3D image. In this exhibition, the recent advances in CT imaging techniques and 3D visualization of hepatobiliary and pancreatic abnormalities are featured, including scan and image reconstruction techniques, contrast-enhanced techniques, new applications of advanced CT scan techniques, and new virtual reality simulation and navigation imaging.

  16. Aquatic ecosystem protection and restoration: Advances in methods for assessment and evaluation

    USGS Publications Warehouse

    Bain, M.B.; Harig, A.L.; Loucks, D.P.; Goforth, R.R.; Mills, K.E.

    2000-01-01

    Many methods and criteria are available to assess aquatic ecosystems, and this review focuses on a set that demonstrates advancements from community analyses to methods spanning large spatial and temporal scales. Basic methods have been extended by incorporating taxa sensitivity to different forms of stress, adding measures linked to system function, synthesizing multiple faunal groups, integrating biological and physical attributes, spanning large spatial scales, and enabling simulations through time. These tools can be customized to meet the needs of a particular assessment and ecosystem. Two case studies are presented to show how new methods were applied at the ecosystem scale for achieving practical management goals. One case used an assessment of biotic structure to demonstrate how enhanced river flows can improve habitat conditions and restore a diverse fish fauna reflective of a healthy riverine ecosystem. In the second case, multitaxonomic integrity indicators were successful in distinguishing lake ecosystems that were disturbed, healthy, and in the process of restoration. Most methods strive to address the concept of biological integrity, and assessment effectiveness can often be impeded by the lack of more specific ecosystem management objectives. Scientific and policy explorations are needed to define new ways for designating a healthy system so as to allow specification of precise quality criteria that will promote further development of ecosystem analysis tools.

  17. Computational methods for coupling microstructural and micromechanical materials response simulations

    SciTech Connect

    Holm, Elizabeth A.; Battaile, Corbett C.; Buchheit, Thomas E.; Fang, Huei Eliot; Rintoul, Mark Daniel; Vedula, Venkata R.; Glass, S. Jill; Knorovsky, Gerald A.; Neilsen, Michael K.; Wellman, Gerald W.; Sulsky, Deborah; Shen, Yu-Lin; Schreyer, H. Buck

    2000-04-01

    Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

  18. Simulation methods with extended stability for stiff biochemical kinetics

    PubMed Central

    2010-01-01

    Background With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, τ, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where τ can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called τ-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as τ grows. Results In this paper we extend Poisson τ-leap methods to a general class of Runge-Kutta (RK) τ-leap methods. We show that with the proper selection of the coefficients, the variance of the extended τ-leap can be well-behaved, leading to significantly larger step sizes. Conclusions The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original τ-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems. PMID:20701766
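As a generic illustration of the basic Poisson tau-leap update that the abstract builds on (not the extended Runge-Kutta variant introduced in the paper), here is a minimal sketch with a hypothetical toy reaction network; the rate constants and step size are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def tau_leap_step(x, stoich, propensities, tau):
    """One Poisson tau-leap step: each reaction channel fires k_j ~ Poisson(a_j * tau)
    times within the leap, and the stoichiometric update is applied, clipped at zero."""
    a = propensities(x)                      # propensity of each channel
    k = rng.poisson(a * tau)                 # number of firings per channel
    return np.maximum(x + stoich.T @ k, 0)   # update species counts

# Hypothetical toy system: A -> 0 (rate c1*A), 2A -> B (rate c2*A*(A-1)/2)
stoich = np.array([[-1, 0],    # A -> 0
                   [-2, 1]])   # 2A -> B
c1, c2 = 1.0, 0.002

def propensities(x):
    A, B = x
    return np.array([c1 * A, c2 * A * (A - 1) / 2])

x = np.array([1000, 0])
for _ in range(100):
    x = tau_leap_step(x, stoich, propensities, tau=0.01)
```

The paper's observation is that as tau grows, this plain update preserves the mean but can distort the steady-state variance, which the proposed RK tau-leap coefficients are designed to control.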

  19. Comparison of Different Methods of Grading a Level Turn Task on a Flight Simulator

    NASA Technical Reports Server (NTRS)

    Heath, Bruce E.; Crier, Tomyka

    2003-01-01

    With the advancements in the computing power of personal computers, PC-based flight simulators and trainers have opened new avenues in the training of airplane pilots. It may be desirable to have the flight simulator make a quantitative evaluation of the progress of a pilot's training, thereby reducing the physical requirement of the flight instructor, who must otherwise watch every flight. In an experiment, university students conducted six different flights, each consisting of two level turns. The flights were three minutes in duration. By evaluating videotapes, two certified flight instructors provided separate letter grades for each turn. These level turns were also evaluated using two other computer-based grading methods. One method determined automated grades based on prescribed tolerances in bank angle, airspeed, and altitude. The other method used deviations in altitude and bank angle to compute a performance index and performance grades.
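A tolerance-based automated grade of the kind described could be sketched as follows; the targets, tolerance bands, and letter-grade cutoffs here are hypothetical placeholders, not the study's actual values:

```python
def grade_turn(samples, target_bank=30.0, target_alt=3000.0, target_ias=100.0,
               tol_bank=5.0, tol_alt=100.0, tol_ias=10.0):
    """Hypothetical tolerance-based grading: compute the fraction of
    (bank, altitude, airspeed) samples lying inside all three tolerance
    bands, then map that fraction to a letter grade."""
    inside = sum(1 for bank, alt, ias in samples
                 if abs(bank - target_bank) <= tol_bank
                 and abs(alt - target_alt) <= tol_alt
                 and abs(ias - target_ias) <= tol_ias)
    frac = inside / len(samples)
    for cutoff, letter in [(0.9, "A"), (0.8, "B"), (0.7, "C"), (0.6, "D")]:
        if frac >= cutoff:
            return letter
    return "F"

# A perfectly flown turn: every sample inside tolerance
grade = grade_turn([(30.0, 3000.0, 100.0)] * 10)
```

The deviation-based alternative mentioned in the abstract would instead accumulate, e.g., RMS deviations in altitude and bank angle into a continuous performance index before mapping to a grade.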

  20. Conceptual frameworks and methods for advancing invasion ecology.

    PubMed

    Heger, Tina; Pahl, Anna T; Botta-Dukát, Zoltan; Gherardi, Francesca; Hoppe, Christina; Hoste, Ivan; Jax, Kurt; Lindström, Leena; Boets, Pieter; Haider, Sylvia; Kollmann, Johannes; Wittmann, Meike J; Jeschke, Jonathan M

    2013-09-01

    Invasion ecology has much advanced since its early beginnings. Nevertheless, explanation, prediction, and management of biological invasions remain difficult. We argue that progress in invasion research can be accelerated by, first, pointing out difficulties this field is currently facing and, second, looking for measures to overcome them. We see basic and applied research in invasion ecology confronted with difficulties arising from (A) societal issues, e.g., disparate perceptions of invasive species; (B) the peculiarity of the invasion process, e.g., its complexity and context dependency; and (C) the scientific methodology, e.g., imprecise hypotheses. To overcome these difficulties, we propose three key measures: (1) a checklist for definitions to encourage explicit definitions; (2) implementation of a hierarchy of hypotheses (HoH), where general hypotheses branch into specific and precisely testable hypotheses; and (3) platforms for improved communication. These measures may significantly increase conceptual clarity and enhance communication, thus advancing invasion ecology.

  1. A new lattice Monte Carlo method for simulating dielectric inhomogeneity

    NASA Astrophysics Data System (ADS)

    Duan, Xiaozheng; Wang, Zhen-Gang; Nakamura, Issei

    We present a new lattice Monte Carlo method for simulating systems involving dielectric contrast between different species by modifying an algorithm originally proposed by Maggs et al. The original algorithm is known to generate attractive interactions between particles that have a dielectric constant different from that of the solvent. Here we show that this attractive force is spurious, arising from an incorrectly biased statistical weight caused by the particle motion during the Monte Carlo moves. We propose a new, simple algorithm to resolve this erroneous sampling. We demonstrate the application of our algorithm by simulating an uncharged polymer in a solvent with a different dielectric constant. Further, we show that the electrostatic fields in ionic crystals obtained from our simulations with a relatively small simulation box correspond well with results from the analytical solution. Thus, our Monte Carlo method avoids the need for the Ewald summation in conventional simulation methods for charged systems. This work was supported by the National Natural Science Foundation of China (21474112 and 21404103). We are grateful to the Computing Center of Jilin Province for essential support.

  2. Effects of Hourly, Low-Incentive, and High-Incentive Pay on Simulated Work Productivity: Initial Findings with a New Laboratory Method

    ERIC Educational Resources Information Center

    Oah, Shezeen; Lee, Jang-Han

    2011-01-01

    The failures of previous studies to demonstrate productivity differences across different percentages of incentive pay may be partially due to insufficient simulation fidelity. The present study compared the effects of different percentages of incentive pay using a more advanced simulation method. Three payment methods were tested: hourly,…

  3. Atomistic hybrid DSMC/NEMD method for nonequilibrium multiscale simulations

    SciTech Connect

    Gu, Kai; Watkins, Charles B.; Koplik, Joel

    2010-03-01

    A multiscale hybrid method for coupling the direct simulation Monte Carlo (DSMC) method to the nonequilibrium molecular dynamics (NEMD) method is introduced. The method addresses Knudsen layer type gas flows within a few mean free paths of an interface or about an object with dimensions of the order of a few mean free paths. It employs the NEMD method to resolve nanoscale phenomena closest to the interface along with coupled DSMC simulation of the remainder of the Knudsen layer. The hybrid DSMC/NEMD method is a particle based algorithm without a buffer zone. It incorporates a new, modified generalized soft sphere (MGSS) molecular collision model to improve the poor computational efficiency of the traditional generalized soft sphere (GSS) model and to achieve DSMC compatibility with Lennard-Jones NEMD molecular interactions. An equilibrium gas, a Fourier thermal flow, and an oscillatory Couette flow are simulated to validate the method. The method shows good agreement with Maxwell-Boltzmann theory for the equilibrium system, Chapman-Enskog theory for Fourier flow, and pure DSMC simulations for oscillatory Couette flow. Speedup in CPU time of the hybrid solver is benchmarked against a pure NEMD solver baseline for different system sizes and solver domain partitions. Finally, the hybrid method is applied to investigate interaction of argon gas with solid surface molecules in a parametric study of the influence of wetting effects and solid molecular mass on energy transfer and thermal accommodation coefficients. It is determined that wetting effect strength and solid molecular mass have a significant impact on the energy transfer between gas and solid phases and the thermal accommodation coefficient.

  4. Parallel methods for dynamic simulation of multiple manipulator systems

    NASA Technical Reports Server (NTRS)

    Mcmillan, Scott; Sadayappan, P.; Orin, David E.

    1993-01-01

    In this paper, efficient dynamic simulation algorithms for a system of m manipulators, cooperating to manipulate a large load, are developed; their performance, using two possible forms of parallelism on a general-purpose parallel computer, is investigated. One form, temporal parallelism, is obtained with the use of parallel numerical integration methods. A speedup of 3.78 on four processors of a CRAY Y-MP8 was achieved with a parallel four-point block predictor-corrector method for the simulation of a four-manipulator system. These multi-point methods suffer from reduced accuracy, and when comparing these runs with a serial integration method, the speedup can be as low as 1.83 for simulations with the same accuracy. To regain the performance lost due to accuracy problems, a second form of parallelism is employed. Spatial parallelism allows most of the dynamics of each manipulator chain to be computed simultaneously. Used exclusively in the four processor case, this form of parallelism in conjunction with a serial integration method results in a speedup of 3.1 on four processors over the best serial method. In cases where there are either more processors available or fewer chains in the system, the multi-point parallel integration methods are still advantageous despite the reduced accuracy because both forms of parallelism can then combine to generate more parallel tasks and achieve greater effective speedups. This paper also includes results for these cases.
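The speedup figures quoted above translate directly into parallel efficiency using the standard definitions (not anything specific to this paper): speedup S = T_serial / T_parallel and efficiency E = S / p for p processors.

```python
def speedup_and_efficiency(t_serial, t_parallel, n_procs):
    """Classic parallel-performance metrics: speedup S = T_serial / T_parallel
    and efficiency E = S / p, the fraction of ideal linear scaling achieved."""
    s = t_serial / t_parallel
    return s, s / n_procs

# Normalizing T_parallel to 1, the reported 3.78x speedup on 4 processors
# corresponds to ~94.5% efficiency; the accuracy-matched 1.83x figure, to ~45.8%.
s_hi, e_hi = speedup_and_efficiency(3.78, 1.0, 4)
s_lo, e_lo = speedup_and_efficiency(1.83, 1.0, 4)
```

The gap between the two efficiencies is the cost of the multi-point method's reduced accuracy, which motivates the spatial-parallelism alternative discussed in the abstract.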

  5. Discrete Stochastic Simulation Methods for Chemically Reacting Systems

    PubMed Central

    Cao, Yang; Samuels, David C.

    2012-01-01

    Discrete stochastic chemical kinetics describe the time evolution of a chemically reacting system by taking into account the fact that in reality chemical species are present with integer populations and exhibit some degree of randomness in their dynamical behavior. In recent years, with the development of new techniques to study biochemistry dynamics in a single cell, there have been an increasing number of studies using this approach to chemical kinetics in cellular systems, where the small copy number of some reactant species in the cell may lead to deviations from the predictions of the deterministic differential equations of classical chemical kinetics. This chapter reviews the fundamental theory related to stochastic chemical kinetics and several simulation methods that are based on that theory. We focus on non-stiff biochemical systems and the two most important discrete stochastic simulation methods: Gillespie's Stochastic Simulation Algorithm (SSA) and the tau-leaping method. Different implementation strategies of these two methods are discussed. Then we recommend a relatively simple and efficient strategy that combines the strengths of the two methods: the hybrid SSA/tau-leaping method. The implementation details of the hybrid strategy are given here and a related software package is introduced. Finally, the hybrid method is applied to simple biochemical systems as a demonstration of its application. PMID:19216925
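A minimal sketch of Gillespie's direct-method SSA as reviewed above, applied to a simple hypothetical isomerisation A -> B (the reaction and rate constant are illustrative, not from the chapter):

```python
import numpy as np

rng = np.random.default_rng(1)

def ssa(x0, stoich, propensities, t_end):
    """Gillespie's direct-method SSA: draw the waiting time tau ~ Exp(a0),
    pick the firing channel j with probability a_j / a0, and apply its
    stoichiometry; repeat until t_end or until no channel can fire."""
    t, x = 0.0, np.array(x0, dtype=int)
    while t < t_end:
        a = propensities(x)
        a0 = a.sum()
        if a0 == 0:                       # system exhausted
            break
        t += rng.exponential(1.0 / a0)    # waiting time to the next event
        j = rng.choice(len(a), p=a / a0)  # which channel fires
        x = x + stoich[j]
    return x

# Hypothetical irreversible isomerisation A -> B with rate constant c = 0.5
stoich = np.array([[-1, 1]])
result = ssa([100, 0], stoich, lambda x: np.array([0.5 * x[0]]), t_end=20.0)
```

The hybrid strategy recommended in the chapter switches between this exact event-by-event loop and tau-leaping depending on how large the propensities are.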

  6. Design, simulation and evaluation of advanced display concepts for the F-16 control configured vehicle

    NASA Technical Reports Server (NTRS)

    Klein, R. W.; Hollister, W. M.

    1982-01-01

    Advanced display concepts to augment the tracking ability of the F-16 Control Configured Vehicle (CCV) were designed, simulated, and evaluated. A fixed-base simulator was modified to represent the F-16 CCV. An isometric sidearm control stick and two-axis CCV thumb button were installed in the cockpit. The forward cockpit CRT was programmed to present an external scene (numbered runway, horizon) and the designed Heads Up Display. The cockpit interior was modified to represent a fighter, and the F-16 CCV dynamics and direct lift and side force modes were programmed. Compensatory displays were designed from man-machine considerations. Pilots evaluated the Heads Up Display and compensatory displays during simulated descents in the presence of several levels of filtered, zero-mean wind gusts. During a descent from 2500 feet to the runway, the pilots tracked a point on the runway utilizing the basic F-16, F-16 CCV, and F-16 CCV with advanced displays. Substantial tracking improvements resulted utilizing the CCV modes, and the displays were found to even further enhance the tracking ability of the F-16 CCV.

  7. Methods and Applications for Advancing Distance Education Technologies: International Issues and Solutions

    ERIC Educational Resources Information Center

    Syed, Mahbubur Rahman, Ed.

    2009-01-01

    The emerging field of advanced distance education delivers academic courses across time and distance, allowing educators and students to participate in a convenient learning method. "Methods and Applications for Advancing Distance Education Technologies: International Issues and Solutions" demonstrates communication technologies, intelligent…

  8. Simulation models of ecological economics developed with energy language methods

    SciTech Connect

    Odum, H.T. (Dept. of Environmental Engineering Sciences)

    1989-08-01

    The energy-systems language method of modelling and simulation, because of its energy-constrained rules, is a means for transferring homologous concepts between levels of the hierarchies of nature. Mathematics of self-organization may justify emulation as the simulation of a systems overview without details. Here, these methods are applied to the new fields of ecological economics and ecological engineering. Since the vitality of national economies depends on the symbiotic coupling of environmental resources and human economic behavior, the energy language is adapted to develop overview models of nations relevant to public policies. An overview model of a developing nation is given as an example, with simulations for alternative policies. Maximum economic vitality was obtained with trade for external resources, but ultimate economic carrying capacity and standard of living were determined by indigenous resources, optimum utilization, and absence of foreign debt.

  9. Development of Computational Approaches for Simulation and Advanced Controls for Hybrid Combustion-Gasification Chemical Looping

    SciTech Connect

    Joshi, Abhinaya; Lou, Xinsheng; Neuschaefer, Carl; Chaudry, Majid; Quinn, Joseph

    2012-07-31

    This document provides the results of the project through September 2009. The Phase I project has recently been extended from September 2009 to March 2011. The project extension will begin work on Chemical Looping (CL) Prototype modeling and advanced control design exploration in preparation for a scale-up phase. The results to date include: successful development of dual loop chemical looping process models and dynamic simulation software tools, development and test of several advanced control concepts and applications for Chemical Looping transport control and investigation of several sensor concepts and establishment of two feasible sensor candidates recommended for further prototype development and controls integration. There are three sections in this summary and conclusions. Section 1 presents the project scope and objectives. Section 2 highlights the detailed accomplishments by project task area. Section 3 provides conclusions to date and recommendations for future work.

  10. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    SciTech Connect

    Diachin, L F; Garaizar, F X; Henson, V E; Pope, G

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

  11. Numeric Modified Adomian Decomposition Method for Power System Simulations

    SciTech Connect

    Dimitrovski, Aleksandar D; Simunovic, Srdjan; Pannala, Sreekanth

    2016-01-01

    This paper investigates the applicability of the numeric Wazwaz El Sayed modified Adomian Decomposition Method (WES-ADM) for time domain simulation of power systems. WES-ADM is a numerical method based on a modified Adomian decomposition (ADM) technique. WES-ADM is a numerical approximation method for the solution of nonlinear ordinary differential equations. The non-linear terms in the differential equations are approximated using Adomian polynomials. In this paper WES-ADM is applied to time domain simulations of multimachine power systems. The WECC 3-generator, 9-bus system and the IEEE 10-generator, 39-bus system have been used to test the applicability of the approach. Several fault scenarios have been tested. It has been found that the proposed approach is faster than the trapezoidal method with comparable accuracy.
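The underlying Adomian decomposition (of which WES-ADM is a modification) can be illustrated on a scalar test problem; this sketch is generic and not the WES-ADM scheme itself. For x' = -x^2 with x(0) = 1, the exact solution 1/(1+t) has the series 1 - t + t^2 - t^3 + ..., and for a quadratic nonlinearity the Adomian polynomials reduce to A_n = sum over i+j=n of x_i * x_j:

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (index = power of t)."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0.0) + (q[i] if i < len(q) else 0.0)
            for i in range(n)]

def poly_integrate(p):
    """Antiderivative with zero constant term: t^k -> t^(k+1)/(k+1)."""
    return [0.0] + [c / (k + 1) for k, c in enumerate(p)]

def adm_series(n_terms):
    """ADM series for x' = -x**2, x(0) = 1: each new term is
    x_{n+1} = -integral(A_n) with A_n = sum_{i+j=n} x_i * x_j."""
    terms = [[1.0]]                     # x0 comes from the initial condition
    for n in range(n_terms - 1):
        A = [0.0]
        for i in range(n + 1):
            A = poly_add(A, poly_mul(terms[i], terms[n - i]))
        terms.append([-c for c in poly_integrate(A)])
    series = [0.0]
    for p in terms:
        series = poly_add(series, p)
    return series

coeffs = adm_series(4)   # coefficients of 1 - t + t**2 - t**3
```

WES-ADM modifies how the decomposition is evaluated numerically per time step, but the term-by-term construction above is the core idea being accelerated.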

  12. Vectorization of a particle simulation method for hypersonic rarefied flow

    NASA Technical Reports Server (NTRS)

    Mcdonald, Jeffrey D.; Baganoff, Donald

    1988-01-01

    An efficient particle simulation technique for hypersonic rarefied flows is presented at an algorithmic and implementation level. The implementation is for a vector computer architecture, specifically the Cray-2. The method models an ideal diatomic Maxwell molecule with three translational and two rotational degrees of freedom. Algorithms are designed specifically for compatibility with fine grain parallelism by reducing the number of data dependencies in the computation. By insisting on this compatibility, the method is capable of performing simulation on a much larger scale than previously possible. A two-dimensional simulation of supersonic flow over a wedge is carried out for the near-continuum limit where the gas is in equilibrium and the ideal solution can be used as a check on the accuracy of the gas model employed in the method. Also, a three-dimensional, Mach 8, rarefied flow about a finite-span flat plate at a 45 degree angle of attack was simulated. It utilized over 10^7 particles carried through 400 discrete time steps in less than one hour of Cray-2 CPU time. This problem was chosen to exhibit the capability of the method in handling a large number of particles and a true three-dimensional geometry.

  13. A multiscale quantum mechanics/electromagnetics method for device simulations.

    PubMed

    Yam, ChiYung; Meng, Lingyi; Zhang, Yu; Chen, GuanHua

    2015-04-01

    Multiscale modeling has become a popular tool for research in areas including materials science, microelectronics, biology, and chemistry. In this tutorial review, we describe a newly developed multiscale computational method incorporating quantum mechanics into electronic device modeling, with the electromagnetic environment included through classical electrodynamics. In the quantum mechanics/electromagnetics (QM/EM) method, the regions of the system where active electron scattering processes take place are treated quantum mechanically, while the surroundings are described by Maxwell's equations and a semiclassical drift-diffusion model. The QM and EM models are solved in their respective regions of the system in a self-consistent manner. Potential distributions and current densities at the interface between the QM and EM regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. The method is illustrated in the simulation of several realistic systems. In the case of junctionless field-effect transistors, transfer characteristics are obtained and good agreement between experiments and simulations is achieved. Optical properties of a tandem photovoltaic cell are studied, and the simulations demonstrate that multiple QM regions are coupled through the classical EM model. Finally, the study of a carbon nanotube-based molecular device shows the accuracy and efficiency of the QM/EM method.

  14. Training toward Advanced 3D Seismic Methods for CO2 Monitoring, Verification, and Accounting

    SciTech Connect

    Christopher Liner

    2012-05-31

    The objective of our work is graduate and undergraduate student training related to improved 3D seismic technology that addresses key challenges related to monitoring movement and containment of CO{sub 2}, specifically better quantification and sensitivity for mapping of caprock integrity, fractures, and other potential leakage pathways. We utilize data and results developed through a previous DOE-funded CO{sub 2} characterization project (DE-FG26-06NT42734) at the Dickman Field of Ness County, KS. Dickman is a type locality for the geology that will be encountered for CO{sub 2} sequestration projects from northern Oklahoma across the U.S. midcontinent to Indiana and Illinois. Since its discovery in 1962, the Dickman Field has produced about 1.7 million barrels of oil from porous Mississippian carbonates with a small structural closure at about 4400 ft drilling depth. Project data include 3.3 square miles of 3D seismic data and 142 wells, with log, some core, and oil/water production data available. Only two wells penetrate the deep saline aquifer. In a previous DOE-funded project, geological and seismic data were integrated to create a geological property model and a flow simulation grid. We believe that sequestration of CO{sub 2} will largely occur in areas of relatively flat geology and simple near surface, similar to Dickman. The challenge is not complex geology, but development of improved, lower-cost methods for detecting natural fractures and subtle faults. Our project used numerical simulation to test methods of gathering multicomponent, full azimuth data ideal for this purpose. Our specific objectives were to apply advanced seismic methods to aid in quantifying reservoir properties and lateral continuity of CO{sub 2} sequestration targets. The purpose of the current project is graduate and undergraduate student training related to improved 3D seismic technology that addresses key challenges related to monitoring movement and containment of CO{sub 2

  15. Projected role of advanced computational aerodynamic methods at the Lockheed-Georgia company

    NASA Technical Reports Server (NTRS)

    Lores, M. E.

    1978-01-01

    Experience with advanced computational methods being used at the Lockheed-Georgia Company to aid in the evaluation and design of new and modified aircraft indicates that large and specialized computers will be needed to make advanced three-dimensional viscous aerodynamic computations practical. The Numerical Aerodynamic Simulation Facility should be used to provide a tool for designing better aerospace vehicles while at the same time reducing development costs, by performing computations using Navier-Stokes equations solution algorithms and permitting less sophisticated but nevertheless complex calculations to be made efficiently. Configuration definition procedures and data output formats can probably best be defined in cooperation with industry; therefore, the computer should handle many remote terminals efficiently. The capability of transferring data to and from other computers needs to be provided. Because of the significant amount of input and output associated with 3-D viscous flow calculations, and because of the exceedingly fast computation speed envisioned for the computer, special attention should be paid to providing rapid, diversified, and efficient input and output.

  16. Toward faster OPC convergence: advanced analysis for OPC iterations and simulation environment

    NASA Astrophysics Data System (ADS)

    Bahnas, Mohamed; Al-Imam, Mohamed; Tawfik, Tamer

    2008-10-01

    Achieving faster turn-around time (TAT) is one of the most attractive objectives for silicon wafer manufacturers, regardless of the technology node they are processing. This is valid for all active technology nodes, from 130 nm to the cutting edge. Several approaches have been adopted to cut down OPC simulation runtime without sacrificing OPC output quality; among them is the use of stronger CPU power and hardware acceleration, which makes good use of advancing processing technology. Another favorable approach to cutting down the runtime is to look deeper into the OPC algorithm and the implemented OPC recipe. The OPC algorithm includes the convergence iterations and the distribution of simulation sites, and the OPC recipe defines how to tune the OPC knobs to use the implemented algorithm efficiently. Many previous works have monitored OPC convergence through the iterations and analyzed the size of the shift per iteration; similarly, several works have estimated the simulation capacity needed for all these iterations and how to reduce it. The scope of the work presented here is an attempt to decrease the number of optical simulations by reducing the number of control points per site, without affecting OPC accuracy. The concept is supported by extensive simulation results and analysis. Implementing this flow demonstrated an achievable reduction in simulation runtime, which is reflected in faster TAT. Beyond runtime optimization, it also adds intelligence to the sparse OPC engine by eliminating the need to specify the optimum simulation site length.

  17. Simulation on the Measurement Method of Geometric Distortion of Telescopes

    NASA Astrophysics Data System (ADS)

    Fan, Li; Shu-lin, Ren

    2016-07-01

    The accurate measurement of telescope geometric distortion is conducive to improving the astrometric positioning accuracy of telescopes, which is of significant importance for many areas of astronomy, such as studies of stellar clusters, natural satellites, asteroids, comets, and other celestial bodies in the solar system. For this reason, previous investigators developed an iterative self-calibration method that measures telescope geometric distortion from dithered observations of a dense star field, and achieved good results. However, the previous work did not constrain the star field density or the dithering mode, but empirically chose favorable conditions (for example, a denser star field and a larger number of dithers) to observe, which took up much observing time and led to rather low efficiency. In order to explore the validity of the self-calibration method and optimize its observational conditions, it is necessary to carry out the corresponding simulations. In this paper, we first introduce the self-calibration method in detail; then, through simulations, we verify its effectiveness and further optimize the observational conditions, such as the star field density and the number of dithers, to achieve a higher accuracy of geometric distortion measurement. Finally, considering the practical application of correcting the geometric distortion effect, we analyze the relationship between the number of reference stars in the field of view and the astrometric accuracy by means of simulations.
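
    As a much-simplified illustration of the dither idea, the 1-D toy below recovers a known quadratic distortion coefficient from two dithered exposures by a linear least-squares fit. The distortion law, coefficient, and noise level are all invented for the example, and the real self-calibration method solves for the star positions and the distortion jointly and iteratively rather than assuming the dither offset is known.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D distortion d(x) = a*x^2 (coefficient invented for illustration);
# the same stars are observed twice with a known dither offset 'delta'.
a_true, delta = 1.5e-6, 50.0
s = rng.uniform(0.0, 1000.0, size=200)        # undistorted pixel positions
noise = rng.normal(0.0, 0.01, size=(2, s.size))

m1 = s + a_true * s**2 + noise[0]                      # exposure 1
m2 = (s + delta) + a_true * (s + delta)**2 + noise[1]  # dithered exposure 2

# m2 - m1 = delta + a*(2*s*delta + delta^2): linear in s, so a plain
# least-squares line fit recovers the distortion coefficient.
slope, intercept = np.polyfit(s, m2 - m1, 1)
a_est = slope / (2.0 * delta)
print(a_est)   # close to a_true
```

    The key point the toy shares with the real method is that dithering turns an unknown static distortion into a measurable position-dependent shift between exposures.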

  18. Using an Advance Organizer to Improve Knowledge Application by Medical Students in Computer-Based Clinical Simulations.

    ERIC Educational Resources Information Center

    Krahn, Corrie G.; Blanchaer, Marcel C.

    1986-01-01

    This study investigated the efficacy of using the advance organizer as a device to improve medical students' understanding of a clinical case simulation on the microcomputer and to enhance performance on a posttest. Advance organizers were found to be effective and most consistent with Mayer's assimilation theory. (MBR)

  19. Crystal level simulations using Eulerian finite element methods

    SciTech Connect

    Becker, R; Barton, N R; Benson, D J

    2004-02-06

    Over the last several years, significant progress has been made in the use of crystal level material models in simulations of forming operations. However, in Lagrangian finite element approaches simulation capabilities are limited in many cases by mesh distortion associated with deformation heterogeneity. Contexts in which such large distortions arise include: bulk deformation to strains approaching or exceeding unity, especially in highly anisotropic or multiphase materials; shear band formation and intersection of shear bands; and indentation with sharp indenters. Investigators have in the past used Eulerian finite element methods with material response determined from crystal aggregates to study steady state forming processes. However, Eulerian and Arbitrary Lagrangian-Eulerian (ALE) finite element methods have not been widely utilized for simulation of transient deformation processes at the crystal level. The advection schemes used in Eulerian and ALE codes control mesh distortion and allow for simulation of much larger total deformations. We will discuss material state representation issues related to advection and will present results from ALE simulations.
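
    The advection (remap) step that such Eulerian and ALE codes rely on can be sketched in one dimension; the first-order upwind update below is a minimal stand-in, not the authors' scheme.

```python
import numpy as np

# Minimal sketch of the kind of advection (remap) step an Eulerian/ALE
# code applies to material state fields: first-order upwind on a
# periodic 1-D grid, with velocity v > 0 and CFL = 0.5.
nx, L = 200, 1.0
dx = L / nx
v = 1.0
dt = 0.5 * dx / v
x = (np.arange(nx) + 0.5) * dx

q = np.exp(-200.0 * (x - 0.25)**2)   # a state field (e.g. a stored strain)
mass0 = q.sum() * dx

for _ in range(100):
    # upwind update: q_i <- q_i - (v*dt/dx) * (q_i - q_{i-1})
    q = q - (v * dt / dx) * (q - np.roll(q, 1))

mass1 = q.sum() * dx
print(mass0, mass1)   # conserved up to rounding
```

    First-order upwind is diffusive but conservative and positivity-preserving at CFL ≤ 1, which is why practical remap schemes start from fluxes of this form and add higher-order corrections.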

  20. Advanced 3D inverse method for designing turbomachine blades

    SciTech Connect

    Dang, T.

    1995-10-01

    To meet the goal of 60% plant-cycle efficiency or better set in the ATS Program for baseload utility scale power generation, several critical technologies need to be developed. One such need is the improvement of component efficiencies. This work addresses the issue of improving the performance of turbomachine components in gas turbines through the development of an advanced three-dimensional, viscous blade design system. This technology is needed to replace some elements in current design systems that are based on outdated technology.

  1. A method for increasing elders' use of advance directives.

    PubMed

    Luptak, M K; Boult, C

    1994-06-01

    Most published studies report that few elderly people have recorded advance directives (AD). We studied the effectiveness of an interdisciplinary intervention designed to help ambulatory frail elders to record AD. In collaboration with physicians and a trained lay volunteer, a social worker provided information and counseling to the elderly subjects, to their families, and to their proxies in a series of visits to a geriatric evaluation and management (GEM) clinic. Seventy-one percent of the subjects recorded AD. Of these, 96% named a proxy, and 83% recorded specific treatment preferences.

  2. Review of available methods of simulation training to facilitate surgical education.

    PubMed

    Bashankaev, Badma; Baido, Sergey; Wexner, Steven D

    2011-01-01

    The old paradigm of "see one, do one, teach one" has now changed to "see several, learn the skills in simulation, do one, teach one." Over the past 30 years, modern medicine has undergone significant revolutions from earlier models, made possible by technological advances. Scientific and technological progress has enabled these advances not only by increasing the complexity of procedures, but also by providing complex methods of training to perform these sophisticated procedures. Simulators in training labs have been widely embraced outside the operating room, with advanced cardiac life support using hands-on models (the CPR "dummy") as well as a fusion with computer-based testing for examinations ranging from the United States medical licensure exam to the examinations administered by the American Board of Surgery and the American Board of Colon and Rectal Surgery. Thus, the development of training methods that test both technical skills and clinical acumen may be essential to help achieve both safety and financial goals. PMID:20552373

  3. System and Method for Finite Element Simulation of Helicopter Turbulence

    NASA Technical Reports Server (NTRS)

    McFarland, R. E. (Inventor); Dulsenberg, Ken (Inventor)

    1999-01-01

    The present invention provides a turbulence model developed for blade-element helicopter simulation. This model uses an innovative temporal and geometrical distribution algorithm that preserves the statistical characteristics of the turbulence spectra over the rotor disc, while providing velocity components in real time to each of five blade-element stations along each of four blades, for a total of twenty blade-element stations. The simulator system includes a software implementation of flight dynamics that adheres to the guidelines for turbulence set forth in military specifications. One of the features of the present simulator system is that it applies simulated turbulence to the rotor blades of the helicopter, rather than to its center of gravity. The simulator system accurately models the rotor penetration into a gust field. It includes time correlation between the front and rear of the main rotor, as well as between the side forces felt at the center of gravity and at the tail rotor. It also includes features for added realism, such as patchy turbulence and vertical gusts into which the rotor disc penetrates. These features are realized by a unique real-time implementation of the turbulence filters. The new simulator system uses two arrays, one on either side of the main rotor, to record the turbulence field and to produce time correlation from the front to the rear of the rotor disc. Gaussian interpolation between the two arrays maintains the statistical properties of the turbulence across the rotor disc. The present simulator system and method may be used in future and existing real-time helicopter simulations with minimal increase in computational workload.
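
    The shaping-filter idea behind such real-time turbulence generation can be sketched with a first-order recursion that gives white noise an exponential time correlation, as in Dryden-type models. The intensity, length scale, and airspeed below are assumed values, and the patent's rotor-disc distribution logic is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

sigma_u = 1.5      # turbulence intensity, ft/s (assumed)
L_u = 1750.0       # turbulence length scale, ft (assumed)
V = 200.0          # airspeed, ft/s (assumed)
dt = 0.02          # simulation frame time, s

a = np.exp(-V * dt / L_u)              # one-step correlation coefficient
b = sigma_u * np.sqrt(1.0 - a * a)     # keeps the output variance at sigma_u^2

n = 500_000
w = rng.normal(0.0, 1.0, size=n)       # white-noise input
u = np.empty(n)
u[0] = 0.0
for k in range(1, n):                  # real-time recursion: one sample per frame
    u[k] = a * u[k - 1] + b * w[k]

print(u.std())   # close to sigma_u
```

    Because each output sample depends only on the previous one and a fresh noise draw, the filter is cheap enough to run per blade-element station every simulation frame.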

  4. [Research advances in simulating regional crop growth under water stress by remote sensing].

    PubMed

    Zhang, Li; Wang, Shili; Ma, Yuping

    2005-06-01

    It is of practical significance to simulate the regional crop growth under water stress, especially at regional scale. Combined with remote sensing information, crop growth simulation model could provide an effective way to estimate the regional crop growth, development and yield formation under water stress. In this paper, related research methods and results were summarized, and some problems needed to be further studied and resolved were discussed.

  5. Advanced thermal energy management: A thermal test bed and heat pipe simulation

    NASA Technical Reports Server (NTRS)

    Barile, Ronald G.

    1986-01-01

    Work initiated on a common-module thermal test simulation was continued, and a second project on heat pipe simulation was begun. The test bed, constructed from surplus Skylab equipment, was modeled and solved for various thermal load and flow conditions. Low thermal load caused the radiator fluid, Coolanol 25, to thicken at low temperature; this was avoided by using a regenerator heat exchanger. Other possible solutions modeled include a radiator heater and shunting heat from the central thermal bus to the radiator. Also, module air temperature can become excessive with a high avionics load. A second project concerning advanced heat pipe concepts was initiated. A program was written which calculates fluid physical properties, liquid and vapor pressure in the evaporator and condenser, fluid flow rates, and thermal flux. The program is directed at evaluating newer heat pipe wicks and geometries, especially water in an artery surrounded by six vapor channels. Effects of temperature, groove and slot dimensions, and wick properties are reported.
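
    The capillary-limit balance at the core of such a heat pipe program can be sketched as follows. The property and geometry values are assumed, ballpark numbers for a water heat pipe, not data from the report, and vapor and gravity pressure losses are neglected.

```python
# Capillary heat-transport limit: the wick's capillary pressure 2*sigma/r_c
# balances the viscous liquid pressure drop mu_l*L_eff*Q/(K*A_w*rho_l*h_fg).
# All values below are assumed, ballpark numbers for water near 100 C.
sigma = 0.059     # surface tension, N/m
rho_l = 958.0     # liquid density, kg/m^3
mu_l  = 2.8e-4    # liquid viscosity, Pa*s
h_fg  = 2.26e6    # latent heat of vaporization, J/kg
K     = 1.0e-10   # wick permeability, m^2 (assumed)
A_w   = 1.0e-5    # wick cross-sectional area, m^2 (assumed)
L_eff = 0.3       # effective transport length, m (assumed)

def q_capillary(r_c):
    """Max heat rate (W) for capillary pore radius r_c (m)."""
    dp_cap = 2.0 * sigma / r_c                      # available capillary pressure
    return dp_cap * K * A_w * rho_l * h_fg / (mu_l * L_eff)

print(q_capillary(50e-6), q_capillary(100e-6))   # smaller pores carry more heat
```

    The limit scales as 1/r_c: halving the pore radius doubles the available capillary pressure and hence the transportable heat, which is why wick geometry dominates this trade-off.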

  6. Neural network setpoint control of an advanced test reactor experiment loop simulation

    SciTech Connect

    Cordes, G.A.; Bryan, S.R.; Powell, R.H.; Chick, D.R.

    1990-09-01

    This report describes the design, implementation, and application of artificial neural networks to achieve temperature and flow rate control for a simulation of a typical experiment loop in the Advanced Test Reactor (ATR) located at the Idaho National Engineering Laboratory (INEL). The goal of the project was to research multivariate, nonlinear control using neural networks. A loop simulation code was adapted for the project and used to create a training set and test the neural network controller for comparison with the existing loop controllers. The results for three neural network designs are documented and compared with existing loop controller action. The neural network was shown to be as accurate at loop control as the classical controllers in the operating region represented by the training set. 9 refs., 28 figs., 2 tabs.
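
    As a toy stand-in for the idea of neural-network setpoint control, the sketch below trains a small network (random tanh features with a least-squares readout, chosen so training is deterministic) to invert an assumed static plant y = tanh(u), so that feeding the network a setpoint yields the control that achieves it. The report's networks, loop dynamics, and training procedure are far richer than this.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical static plant: y = tanh(u). The network learns u = f(y_set)
# so that plant(network(setpoint)) tracks the setpoint.
def features(y, W, b):
    return np.tanh(np.outer(y, W) + b)   # random tanh hidden layer

W = rng.normal(0.0, 2.0, size=32)        # fixed random hidden weights
b = rng.normal(0.0, 1.0, size=32)

y_train = np.linspace(-0.9, 0.9, 181)    # setpoints within the plant's range
u_train = np.arctanh(y_train)            # control that exactly achieves them
w_out, *_ = np.linalg.lstsq(features(y_train, W, b), u_train, rcond=None)

# Closed-loop check on setpoints not in the training grid
y_set = np.linspace(-0.85, 0.85, 77)
u_cmd = features(y_set, W, b) @ w_out
err = np.max(np.abs(np.tanh(u_cmd) - y_set))
print(err)
```

    Solving only the output layer by least squares sidesteps iterative training entirely; the project's networks were instead trained on data generated by the loop simulation code.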

  7. Numerical simulation of the blast impact problem using the Direct Simulation Monte Carlo (DSMC) method

    NASA Astrophysics Data System (ADS)

    Sharma, Anupam; Long, Lyle N.

    2004-10-01

    A particle approach using the Direct Simulation Monte Carlo (DSMC) method is used to solve the problem of blast impact with structures. A novel approach to model the solid boundary condition for particle methods is presented. The solver is validated against an analytical solution of the Riemann shock-tube problem and against experiments on the interaction of a planar shock with a square cavity. Blast impact simulations are performed for two model shapes, a box and an I-shaped beam, assuming that the solid body does not deform. The solver uses a domain decomposition technique to run in parallel. The parallel performance of the solver on two Beowulf clusters is also presented.
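
    The collision step at the heart of DSMC can be sketched as follows: particle pairs collide as hard spheres, so the relative speed is preserved while its direction is redrawn uniformly on the sphere, conserving momentum and energy pair by pair. Cells, cross-section-based pair selection, and the paper's solid boundary model are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 2000
v = rng.normal(0.0, 1.0, size=(n, 3))    # thermal spread
v[: n // 2, 0] += 5.0                    # two interpenetrating streams

def kinetic_energy(v):
    return 0.5 * np.sum(v * v)           # equal-mass particles, m = 1

e0 = kinetic_energy(v)
for _ in range(10):                      # collision rounds
    idx = rng.permutation(n)
    for i, j in zip(idx[0::2], idx[1::2]):
        vcm = 0.5 * (v[i] + v[j])                # center-of-mass velocity
        g = np.linalg.norm(v[i] - v[j])          # relative speed (preserved)
        ct = rng.uniform(-1.0, 1.0)              # random scattering direction,
        st = np.sqrt(1.0 - ct * ct)              # uniform on the unit sphere
        ph = rng.uniform(0.0, 2.0 * np.pi)
        gdir = np.array([st * np.cos(ph), st * np.sin(ph), ct])
        v[i] = vcm + 0.5 * g * gdir
        v[j] = vcm - 0.5 * g * gdir
e1 = kinetic_energy(v)
print(e0, e1)   # identical up to rounding: collisions conserve energy
```

    Repeated collision rounds drive the two streams toward a single Maxwellian; a production DSMC code would add spatial cells and select pairs with a rate proportional to the collision cross section times relative speed.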

  8. Computer simulations of enzyme catalysis: methods, progress, and insights.

    PubMed

    Warshel, Arieh

    2003-01-01

    Understanding the action of enzymes on an atomistic level is one of the important aims of modern biophysics. This review describes the state of the art in addressing this challenge by simulating enzymatic reactions. It considers different modeling methods including the empirical valence bond (EVB) and more standard molecular orbital quantum mechanics/molecular mechanics (QM/MM) methods. The importance of proper configurational averaging of QM/MM energies is emphasized, pointing out that at present such averages are performed most effectively by the EVB method. It is clarified that all properly conducted simulation studies have identified electrostatic preorganization effects as the source of enzyme catalysis. It is argued that the ability to simulate enzymatic reactions also provides the chance to examine the importance of nonelectrostatic contributions and the validity of the corresponding proposals. In fact, simulation studies have indicated that prominent proposals such as desolvation, steric strain, near attack conformation, entropy traps, and coherent dynamics do not account for a major part of the catalytic power of enzymes. Finally, it is pointed out that although some of the issues are likely to remain controversial for some time, computer modeling approaches can provide a powerful tool for understanding enzyme catalysis.
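
    The structural core of the EVB approach discussed above is small enough to show directly: the ground-state surface is the lowest eigenvalue of a 2x2 Hamiltonian coupling reactant and product diabatic states. The harmonic diabats, energy offset, and coupling below are illustrative values, not parameters fitted to any enzyme.

```python
import numpy as np

# Minimal EVB sketch along one reaction coordinate x: two diabatic
# surfaces (reactant and product) mixed by a constant coupling h12.
def evb_ground(x, h12=0.5):
    e1 = 0.5 * (x + 1.0) ** 2          # reactant diabat (illustrative)
    e2 = 0.5 * (x - 1.0) ** 2 + 0.2    # product diabat; offset = reaction energy
    h = np.array([[e1, h12], [h12, e2]])
    return np.linalg.eigvalsh(h)[0]    # adiabatic ground state

xs = np.linspace(-2.0, 2.0, 401)
surface = np.array([evb_ground(x) for x in xs])
print(surface[200])   # barrier-top region at x = 0
```

    In a real EVB simulation the diabatic energies are environment-dependent, so the same 2x2 diagonalization, evaluated inside the enzyme and in water, exposes the electrostatic preorganization contribution the review emphasizes.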

  9. CFD Simulations of a Regenerative Process for Carbon Dioxide Capture in Advanced Gasification Based Power Systems

    SciTech Connect

    Arastoopour, Hamid; Abbasian, Javad

    2014-07-31

    The method of moments, in the form of the Finite size domain Complete set of trial functions Method Of Moments (FCMOM), was used to solve the population balance equations. The PBE model was implemented in a commercial CFD code, Ansys Fluent 13.0. The code was used to test the model in some simple cases, and the results were verified against analytical solutions available in the literature. Furthermore, the code was used to simulate CO2 capture in a packed bed, and the results were in excellent agreement with the experimental data obtained in the packed bed. The National Energy Technology Laboratory (NETL) Carbon Capture Unit (C2U) design was used to simulate the hydrodynamics of the cold-flow gas/solid system (Clark et al.58). The results indicate that the pressure drop predicted by the model is in good agreement with the experimental data. Furthermore, the model was shown to be able to predict the chugging behavior observed during the experiment. The model was used as a base case for simulations of reactive flow at elevated pressures and temperatures. The results indicate that by controlling the solid circulation rate, up to 70% CO2 removal can be achieved, and that the solid holdup in the riser is one of the main factors controlling the extent of CO2 removal. The CFD/PBE simulation model indicates that with a simulated syngas composition of 20% CO2, 20% H2O, 30% CO, and 30% H2, the composition (wet basis) at the reactor outlet corresponds to about 60% CO2 capture, with an exit gas containing 65% H2. A preliminary base-case design was developed for a regenerative MgO-based pre-combustion carbon capture process for a 500 MW IGCC power plant. To minimize the external energy requirement, an extensive heat integration network was developed in Aspen/HYSYS® to produce the steam required in the regenerator and for heat integration. In this process, liquid CO2 produced at 50 atm can easily be pumped and sequestered or stored. The preliminary economic analyses indicate that the

  10. Kinetic Plasma Simulation Using a Quadrature-based Moment Method

    NASA Astrophysics Data System (ADS)

    Larson, David J.

    2008-11-01

    The recently developed quadrature-based moment method [Desjardins, Fox, and Villedieu, J. Comp. Phys. 227 (2008)] is an interesting alternative to standard Lagrangian particle simulations. The two-node quadrature formulation allows multiple flow velocities within a cell, thus correctly representing crossing particle trajectories and lower-order velocity moments without resorting to Lagrangian methods. Instead of following many particles per cell, the Eulerian transport equations are solved for selected moments of the kinetic equation. The moments are then inverted to obtain a discrete representation of the velocity distribution function. Potential advantages include reduced computational cost, elimination of statistical noise, and a simpler treatment of collisional effects. We present results obtained using the quadrature-based moment method applied to the Vlasov equation in simple one-dimensional electrostatic plasma simulations. In addition we explore the use of the moment inversion process in modeling collisional processes within the Complex Particle Kinetics framework.
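
    The two-node moment inversion the abstract describes has a closed form in one dimension: given the raw moments m0..m3, the weights and abscissas of a two-delta velocity distribution follow from the variance and skewness. A sketch of just the inversion (the transport and collision machinery of the cited method is not reproduced):

```python
import numpy as np

def two_node(m0, m1, m2, m3):
    """Two-node quadrature from raw moments m0..m3 (requires variance > 0)."""
    mean = m1 / m0
    e2 = m2 / m0 - mean**2                                  # variance
    e3 = m3 / m0 - 3.0 * mean * (m2 / m0) + 2.0 * mean**3   # 3rd central moment
    q = e3 / e2**1.5                                        # skewness
    r = np.sqrt(q * q + 4.0)
    v1 = 0.5 * (q - r) * np.sqrt(e2)     # abscissas relative to the mean
    v2 = 0.5 * (q + r) * np.sqrt(e2)
    w1 = m0 * v2 / (v2 - v1)             # weights reproducing m0..m3 exactly
    w2 = -m0 * v1 / (v2 - v1)
    return (w1, mean + v1), (w2, mean + v2)

# Moments of a unit Gaussian: m0 = 1, m1 = 0, m2 = 1, m3 = 0
(w1, u1), (w2, u2) = two_node(1.0, 0.0, 1.0, 0.0)
print(w1, u1, w2, u2)   # weights 0.5 at abscissas -1 and +1
```

    Representing the distribution by two moving delta functions is what lets the method capture crossing particle trajectories that a single-velocity Eulerian model would smear into an unphysical average.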

  11. Simulation of extrudate swell using an extended finite element method

    NASA Astrophysics Data System (ADS)

    Choi, Young Joon; Hulsen, Martien A.

    2011-09-01

    An extended finite element method (XFEM) is presented for the simulation of extrudate swell. A temporary arbitrary Lagrangian-Eulerian (ALE) scheme is incorporated to cope with the movement of the free surface. The main advantage of the proposed method is that the movement of the free surface can be simulated on a fixed Eulerian mesh without any need of re-meshing. The swell ratio of an upper-convected Maxwell fluid is compared with those of the moving boundary-fitted mesh problems of the conventional ALE technique, and those of Crochet & Keunings (1980). The proposed XFEM combined with the temporary ALE scheme can provide similar accuracy to the boundary-fitted mesh problems for low Deborah numbers. For high Deborah numbers, the method seems to be more stable for the extrusion problem.

  12. A finite mass based method for Vlasov-Poisson simulations

    NASA Astrophysics Data System (ADS)

    Larson, David; Young, Christopher

    2014-10-01

    A method for the numerical simulation of plasma dynamics using discrete particles is introduced. The shape function kinetics (SFK) method is based on decomposing the mass into discrete particles using shape functions of compact support. The particle positions and shape evolve in response to internal velocity spread and external forces. Remapping is necessary in order to maintain accuracy and two strategies for remapping the particles are discussed. Numerical simulations of standard test problems illustrate the advantages of the method which include very low noise compared to the standard particle-in-cell technique, inherent positivity, large dynamic range, and ease of implementation. This work was performed under the auspices of the U.S. Department of Energy by LLNL under Contract DE-AC52-07NA27344. C. V. Young acknowledges the support of the DOE NNSA Stewardship Science Graduate Fellowship under Contract DE-FC52-08NA28752.
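
    A minimal sketch of the reconstruction idea: carrying mass on smooth shape functions (Gaussians here, though the paper specifies compact support) gives a far less noisy density than a delta-function histogram built from the same samples. The evolution and remapping steps of the SFK method are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

n, sigma = 400, 0.05                     # particle count and shape width (assumed)
centers = rng.normal(0.0, 1.0, size=n)   # sample an underlying unit Gaussian
mass = 1.0 / n                           # equal mass per particle

x = np.linspace(-4.0, 4.0, 801)
dx = x[1] - x[0]

# Smooth reconstruction: density = sum of normalized Gaussian shapes
rho_sfk = sum(mass * np.exp(-0.5 * ((x - c) / sigma) ** 2)
              / (sigma * np.sqrt(2.0 * np.pi)) for c in centers)

# Delta-function (PIC-style histogram) reconstruction from the same samples
hist, edges = np.histogram(centers, bins=len(x), range=(-4.0, 4.0))
rho_pic = hist * mass / (edges[1] - edges[0])

print(rho_sfk.sum() * dx)   # total mass stays ~ 1
```

    The smooth reconstruction also stays non-negative by construction, which is the inherent positivity the abstract highlights.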

  13. Non-equilibrium Green function method: theory and application in simulation of nanometer electronic devices

    NASA Astrophysics Data System (ADS)

    Do, Van-Nam

    2014-09-01

    We review fundamental aspects of the non-equilibrium Green function method in the simulation of nanometer electronic devices. The method is implemented into our recently developed computer package OPEDEVS to investigate transport properties of electrons in nano-scale devices and low-dimensional materials. Concretely, we present the definition of the four real-time Green functions: the retarded, advanced, lesser, and greater functions. Basic relations among these functions and their equations of motion are also presented in detail as the basis for analytical and numerical calculations. In particular, we review in detail two recursive algorithms, which are implemented in OPEDEVS to solve the Green functions defined in finite-size opened systems and in the surface layer of semi-infinite homogeneous ones. Operation of the package is then illustrated through the simulation of the transport characteristics of a typical semiconductor device structure, the resonant tunneling diode.
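
    For the simplest concrete case of such a transport calculation, the sketch below computes the transmission through a single-site device between two semi-infinite 1-D tight-binding leads from the retarded Green function; this is textbook NEGF, not code from OPEDEVS. A perfect chain must give T(E) = 1 inside the lead band.

```python
import numpy as np

t = 1.0   # lead hopping; on-site energies zero; valid for |E| < 2t

def transmission(E, eps_device=0.0):
    # Retarded surface Green function of a semi-infinite chain
    g = (E - 1j * np.sqrt(4.0 * t**2 - E**2)) / (2.0 * t**2)
    sigma = t**2 * g                          # lead self-energy (equal leads)
    gamma = 1j * (sigma - np.conj(sigma))     # broadening Gamma = i(Sigma - Sigma^+)
    G = 1.0 / (E - eps_device - 2.0 * sigma)  # retarded device Green function
    return (gamma * abs(G)**2 * gamma).real   # T = Gamma_L |G|^2 Gamma_R

print(transmission(0.0))       # 1 for a perfect chain
print(transmission(0.5, 1.0))  # below 1 once the site is detuned
```

    Detuning the device site opens a resonance structure, which is the single-site caricature of the resonant tunneling diode the review simulates.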

  14. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC).

    SciTech Connect

    Schultz, Peter Andrew

    2011-12-01

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontinuum scale and for the constitutive models they inform or generate. This report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum scale modeling and simulation (M&S), and a plan to incrementally incorporate effective V&V into subcontinuum scale M&S destined for use in the NEAMS Waste IPSC work flow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum scale phenomena.

  15. A demonstration of motion base design alternatives for the National Advanced Driving Simulator

    NASA Technical Reports Server (NTRS)

    Mccauley, Michael E.; Sharkey, Thomas J.; Sinacori, John B.; Laforce, Soren; Miller, James C.; Cook, Anthony

    1992-01-01

    A demonstration of the capability of NASA's Vertical Motion Simulator (VMS) to simulate two alternative motion base designs for the National Advanced Driving Simulator (NADS) is reported. The VMS is located at ARC. The motion base conditions used in this demonstration were as follows: (1) a large translational motion base; and (2) a motion base design with limited translational capability, representative of a typical synergistic motion platform. These alternatives were selected to test the prediction that large-amplitude translational motion would result in a lower incidence or severity of simulator-induced sickness (SIS) than would a limited translational motion base. A total of 10 drivers performed two tasks, slaloms and quick stops, using each of the motion bases. Physiological, objective, and subjective measures were collected. No reliable differences in SIS between the motion base conditions were found in this demonstration. However, in light of the cost considerations and engineering challenges associated with implementing a large translational motion base, performance of a formal study is recommended.

  16. Space-based radar representation in the advanced warfighting simulation (AWARS)

    NASA Astrophysics Data System (ADS)

    Phend, Andrew E.; Buckley, Kathryn; Elliott, Steven R.; Stanley, Page B.; Shea, Peter M.; Rutland, Jimmie A.

    2004-09-01

    Space and orbiting systems impact multiple battlefield operating systems (BOS). Space support to current operations is a prime example of how the United States fights. Satellite-aided munitions, communications, navigation, and weather systems combine to achieve military objectives in a relatively short amount of time. Through representation of space capabilities within models and simulations, the military will have the ability to train and educate officers and soldiers to fight from the high ground of space, or to conduct analysis and determine the requirements or utility of transformed forces empowered with advanced space-based capabilities. The Army Vice Chief of Staff acknowledged deficiencies in space modeling and simulation during the September 2001 Space Force Management Analysis Review (FORMAL) and directed that a multi-disciplinary team be established to recommend a service-wide roadmap to address the shortcomings. A Focus Area Collaborative Team (FACT), led by the U.S. Army Space & Missile Defense Command with participation across the Army, confirmed the weaknesses in scope, consistency, correctness, completeness, availability, and usability of space modeling and simulation (M&S) for Army applications. The FACT addressed the need to develop a roadmap to remedy space M&S deficiencies using a highly parallelized process and schedule designed to support a recommendation during the Sep 02 meeting of the Army Model and Simulation Executive Council (AMSEC).

  17. A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Owen, Jeffrey E.

    1988-01-01

    A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code since the need for a high-level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer, without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.

  18. An advanced deterministic method for spent fuel criticality safety analysis

    SciTech Connect

    DeHart, M.D.

    1998-01-01

    Over the past two decades, criticality safety analysts have come to rely to a large extent on Monte Carlo methods for criticality calculations. Monte Carlo has become popular because of its capability to model complex, non-orthogonal configurations of fissile materials, typical of real-world problems. Over the last few years, however, interest in deterministic transport methods has been revived, due to shortcomings in the stochastic nature of Monte Carlo approaches for certain types of analyses. Specifically, deterministic methods are superior to stochastic methods for calculations requiring accurate neutron density distributions or differential fluxes. Although Monte Carlo methods are well suited for eigenvalue calculations, they lack the localized detail necessary to assess uncertainties and sensitivities important in determining a range of applicability. Monte Carlo methods are also inefficient as a transport solution for multiple pin depletion methods. Discrete ordinates methods have long been recognized as one of the most rigorous and accurate approximations used to solve the transport equation. However, until recently, geometric constraints in finite differencing schemes have made discrete ordinates methods impractical for non-orthogonal configurations such as reactor fuel assemblies. The development of an extended step characteristic (ESC) technique removes the grid structure limitations of traditional discrete ordinates methods. The NEWT computer code, a discrete ordinates code built upon the ESC formalism, is being developed as part of the SCALE code system. This paper will demonstrate the power, versatility, and applicability of NEWT as a state-of-the-art solution for current computational needs.
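The abstract names the extended step characteristic formalism but gives no implementation detail. As a hedged illustration of the kind of transport iteration NEWT generalizes to 2D unstructured cells, the sketch below solves a 1D slab with a step-characteristic cell solve and isotropic scattering; the cross sections, source, and grid are invented, not from the paper.

```python
import numpy as np

# Minimal 1D slab discrete-ordinates (S_N) sweep with isotropic scattering
# and a step-characteristic cell solve.  All parameters are illustrative.
nx, n_ang = 50, 8                       # cells, discrete directions
dx = 4.0 / nx                           # 4 cm slab
sigma_t, sigma_s = 1.0, 0.5             # total / scattering cross sections
q_ext = 1.0                             # uniform isotropic external source

mu, w = np.polynomial.legendre.leggauss(n_ang)   # angular quadrature
phi = np.zeros(nx)                      # scalar flux
for _ in range(200):                    # source (scattering) iteration
    src = 0.5 * (sigma_s * phi + q_ext)          # per-direction source
    phi_new = np.zeros(nx)
    for m in range(n_ang):
        psi_in = 0.0                    # vacuum boundaries
        cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
        tau = sigma_t * dx / abs(mu[m])          # cell optical thickness
        for i in cells:                 # step-characteristic sweep
            psi_out = (psi_in * np.exp(-tau)
                       + src[i] / sigma_t * (1.0 - np.exp(-tau)))
            psi_avg = src[i] / sigma_t - (psi_out - psi_in) / tau
            phi_new[i] += w[m] * psi_avg
            psi_in = psi_out
    if np.max(np.abs(phi_new - phi)) < 1e-8:
        phi = phi_new
        break
    phi = phi_new
```

The source iteration converges quickly here because the scattering ratio is only 0.5; production codes such as NEWT layer acceleration schemes on top of sweeps like this.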

  19. Foot-ankle simulators: A tool to advance biomechanical understanding of a complex anatomical structure.

    PubMed

    Natsakis, Tassos; Burg, Josefien; Dereymaeker, Greta; Jonkers, Ilse; Vander Sloten, Jos

    2016-05-01

    In vitro gait simulations have been available to researchers for more than two decades and have become an invaluable tool for understanding fundamental foot-ankle biomechanics. This has been realised through several incremental technological and methodological developments, such as the actuation of muscle tendons, the increase in controlled degrees of freedom and the use of advanced control schemes. Furthermore, in vitro experimentation enabled performing highly repeatable and controllable simulations of gait during simultaneous measurement of several biomechanical signals (e.g. bone kinematics, intra-articular pressure distribution, bone strain). Such signals cannot always be captured in detail using in vivo techniques, and the importance of in vitro experimentation is therefore highlighted. The information provided by in vitro gait simulations enabled researchers to answer numerous clinical questions related to pathology, injury and surgery. In this article, first an overview of the developments in design and methodology of the various foot-ankle simulators is presented. Furthermore, an overview of the conducted studies is outlined and an example of a study aiming at understanding the differences in kinematics of the hindfoot, ankle and subtalar joints after total ankle arthroplasty is presented. Finally, the limitations and future perspectives of in vitro experimentation and in particular of foot-ankle gait simulators are discussed. It is expected that the biofidelic nature of the controllers will be improved in order to make them more subject-specific and to link foot motion to the simulated behaviour of the entire missing body, providing additional information for understanding the complex anatomical structure of the foot. PMID:27160562

  20. The gap-tooth method in particle simulations

    NASA Astrophysics Data System (ADS)

    Gear, C. William; Li, Ju; Kevrekidis, Ioannis G.

    2003-09-01

    We explore the gap-tooth method for multiscale modeling of systems represented by microscopic physics-based simulators, when coarse-grained evolution equations are not available in closed form. A biased random walk particle simulation, motivated by the viscous Burgers equation, serves as an example. We construct macro-to-micro (lifting) and micro-to-macro (restriction) operators, and drive the coarse time-evolution by particle simulations in appropriately coupled microdomains (“teeth”) separated by large spatial gaps. A macroscopically interpolative mechanism for communication between the teeth at the particle level is introduced. The results demonstrate the feasibility of a “closure-on-demand” approach to solving some hydrodynamics problems.
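The lifting and restriction operators described above can be sketched for a density field on [0, 1): particles are created only inside narrow "teeth" centred on a coarse grid (lifting), evolved briefly by a biased random walk, and tooth-average densities are recovered by counting (restriction). All numbers are invented for illustration, and the inter-tooth coupling mechanism from the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

n_teeth, tooth_w = 10, 0.02
centers = (np.arange(n_teeth) + 0.5) / n_teeth
u_coarse = 1.0 + 0.5 * np.sin(2 * np.pi * centers)   # initial coarse field
n_per_unit = 20000                                   # particles per unit density

def lift(u):
    """Macro -> micro: sample particles uniformly within each tooth."""
    return [rng.uniform(c - tooth_w / 2, c + tooth_w / 2,
                        size=rng.poisson(ui * n_per_unit * tooth_w))
            for c, ui in zip(centers, u)]

def restrict(parts):
    """Micro -> macro: count particles still inside each tooth."""
    return np.array([np.sum(np.abs(p - c) <= tooth_w / 2) / (n_per_unit * tooth_w)
                     for c, p in zip(centers, parts)])

def micro_step(parts, dt=1e-5, nu=0.01):
    """Biased random walk: density-dependent drift plus diffusion."""
    out = []
    for c, p in zip(centers, parts):
        u_est = p.size / (n_per_unit * tooth_w)      # local density estimate
        out.append(p - 0.5 * u_est * dt
                   + np.sqrt(2 * nu * dt) * rng.normal(size=p.size))
    return out

particles = lift(u_coarse)
u_round = restrict(particles)          # lifting/restriction consistency check
for _ in range(5):                     # short burst of microscopic evolution
    particles = micro_step(particles)
u_new = restrict(particles)
```

A real gap-tooth scheme closes this loop: tooth boundary conditions are interpolated macroscopically across the gaps, and the restricted averages drive the next coarse time step.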

  1. The molecular basis of social behavior: models, methods and advances.

    PubMed

    LeBoeuf, Adria C; Benton, Richard; Keller, Laurent

    2013-02-01

    Elucidating the molecular and neural basis of complex social behaviors such as communal living, division of labor and warfare requires model organisms that exhibit these multi-faceted behavioral phenotypes. Social insects, such as ants, bees, wasps and termites, are attractive models to address this problem, with rich ecological and ethological foundations. However, their atypical systems of reproduction have hindered application of classical genetic approaches. In this review, we discuss how recent advances in social insect genomics, transcriptomics, and functional manipulations have enhanced our ability to observe and perturb gene expression, physiology and behavior in these species. Such developments begin to provide an integrated view of the molecular and cellular underpinnings of complex social behavior. PMID:22995551

  2. Recent advances in neutral particle transport methods and codes

    NASA Astrophysics Data System (ADS)

    Azmy, Yousry Y.

    1997-02-01

    An overview of Oak Ridge National Laboratory's (ORNL) 3D neutral particle transport code, TORT, is presented. Special features of the code that make it invaluable for large applications are summarized for the prospective user. Advanced capabilities currently under development and installation in the production release of TORT are discussed in some detail. These include: multitasking on Cray platforms running the UNICOS operating system; an adjacent-cell preconditioning acceleration scheme; and graphics codes for displaying computed quantities such as the flux. Further developments for TORT and its companion codes to enhance its present capabilities, as well as expand its range of applications, will be discussed. Speculation on the next generation of neutral particle transport codes at ORNL, especially regarding unstructured grids and high-order spatial approximations, will also be mentioned.

  3. Advances in microfluidics-based experimental methods for neuroscience research.

    PubMed

    Park, Jae Woo; Kim, Hyung Joon; Kang, Myeong Woo; Jeon, Noo Li

    2013-02-21

    The application of microfluidics to neuroscience has always appealed to neuroscientists because of the capability to control the cellular microenvironment in both a spatial and temporal manner. Recently, there has been rapid development of biological micro-electro-mechanical systems (BioMEMS) for both fundamental and applied neuroscience research. In this review, we will discuss the applications of BioMEMS to various topics in the field of neuroscience. The purpose of this review is to summarise recent advances in the components and design of BioMEMS devices, in vitro disease models, electrophysiology and neural stem cell research. We envision that microfluidics will play a key role in future neuroscience research, both fundamental and applied.

  4. Recent advances in neutral particle transport methods and codes

    SciTech Connect

    Azmy, Y.Y.

    1996-06-01

    An overview of ORNL's three-dimensional neutral particle transport code, TORT, is presented. Special features of the code that make it invaluable for large applications are summarized for the prospective user. Advanced capabilities currently under development and installation in the production release of TORT are discussed; they include: multitasking on Cray platforms running the UNICOS operating system; an adjacent-cell preconditioning acceleration scheme; and graphics codes for displaying computed quantities such as the flux. Further developments for TORT and its companion codes to enhance its present capabilities, as well as expand its range of applications, are discussed. Speculation on the next generation of neutral particle transport codes at ORNL, especially regarding unstructured grids and high-order spatial approximations, is also mentioned.

  5. Simulated scaling method for localized enhanced sampling and simultaneous "alchemical" free energy simulations: a general method for molecular mechanical, quantum mechanical, and quantum mechanical/molecular mechanical simulations.

    PubMed

    Li, Hongzhi; Fajer, Mikolai; Yang, Wei

    2007-01-14

    A potential scaling version of simulated tempering is presented to efficiently sample configuration space in a localized region. The present "simulated scaling" method is developed with a Wang-Landau type of updating scheme in order to quickly flatten the distributions in the scaling parameter lambda_m space. This proposal is meaningful for a broad range of biophysical problems in which localized sampling is required. Besides its superior capability and robustness in localized conformational sampling, this simulated scaling method can also naturally lead to efficient "alchemical" free energy predictions when a dual-topology alchemical hybrid potential is applied; thereby, both the chemically and conformationally distinct portions of the two end-point chemical states can be sampled efficiently at the same time. As demonstrated in this work, the present method is also feasible for quantum mechanical and quantum mechanical/molecular mechanical simulations.
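The core mechanics can be sketched as a random walk over a discrete ladder of scaling parameters lambda_m, with Wang-Landau updates to log-weights so that visits spread over the whole ladder. The one-particle double-well "system" and every parameter below are invented for illustration; they are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

lambdas = np.linspace(0.1, 1.0, 10)       # scaling ladder lambda_m
beta = 1.0

def energy(x):                            # toy double-well potential
    return 5.0 * (x ** 2 - 1.0) ** 2

x, m = 1.0, len(lambdas) - 1              # start at the full potential
g = np.zeros_like(lambdas)                # Wang-Landau log-weights
hist = np.zeros_like(lambdas)             # histogram for the flatness check
total_hist = np.zeros_like(lambdas)
f = 1.0                                   # WL modification factor (log units)

n_steps = 100000
for step in range(n_steps):
    # configuration move at fixed lambda_m on the scaled potential
    x_new = x + rng.normal(scale=0.3)
    d_e = lambdas[m] * (energy(x_new) - energy(x))
    if rng.random() < np.exp(min(0.0, -beta * d_e)):
        x = x_new
    # lambda move, biased by the WL weights
    m_new = min(max(m + rng.choice([-1, 1]), 0), len(lambdas) - 1)
    dlog = -beta * (lambdas[m_new] - lambdas[m]) * energy(x) + g[m] - g[m_new]
    if rng.random() < np.exp(min(0.0, dlog)):
        m = m_new
    g[m] += f                             # penalize the current lambda state
    hist[m] += 1
    total_hist[m] += 1
    # halve f once the histogram is roughly flat, then restart the histogram
    if step % 10000 == 9999 and hist.min() > 0.7 * hist.mean():
        f *= 0.5
        hist[:] = 0
```

In the alchemical setting, lambda would instead scale the dual-topology hybrid potential between two end states, and free energy differences follow from the converged weights g.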

  6. A driver linac for the Advanced Exotic Beam Laboratory : physics design and beam dynamics simulations.

    SciTech Connect

    Ostroumov, P. N.; Mustapha, B.; Nolen, J.; Physics

    2007-01-01

    The Advanced Exotic Beam Laboratory (AEBL) being developed at ANL consists of an 833 MV heavy-ion driver linac capable of producing uranium ions up to 200 MeV/u and protons to 580 MeV with 400 kW beam power. We have designed all accelerator components including a two-charge-state LEBT, an RFQ, a MEBT, a superconducting linac, a stripper station and a chicane. We present the results of an optimized linac design and end-to-end simulations including machine errors and detailed beam loss analysis. The AEBL has been proposed at ANL as a reduced-scale version of the original Rare Isotope Accelerator (RIA) project, with about half the cost but the same beam power. AEBL will address 90% or more of RIA physics but with reduced multi-user capabilities. The focus of this paper is the physics design and beam dynamics simulations of the AEBL driver linac. The reported results are for a multiple-charge-state U-238 beam.

  7. Ductile damage prediction in metal forming processes: Advanced modeling and numerical simulation

    NASA Astrophysics Data System (ADS)

    Saanouni, K.

    2013-05-01

    This paper describes the needs of modern virtual metal forming, including both sheet and bulk metal forming of mechanical components. These concern the advanced modeling of thermo-mechanical behavior, including the multiphysical phenomena and their interaction or strong coupling, as well as the associated numerical aspects using fully adaptive simulation strategies. First, a survey of advanced constitutive equations accounting for the main thermomechanical phenomena, such as thermo-elasto-plastic finite strains with isotropic and kinematic hardening fully coupled with ductile damage, will be presented. Only the macroscopic phenomenological approach with state variables (monoscale approach) will be discussed, in the general framework of rational thermodynamics for generalized micromorphic continua. The micro-macro (multi-scale) approach in the framework of polycrystalline inelasticity is not presented here for brevity but will be covered in the oral presentation. The main numerical aspects related to the resolution of the associated initial and boundary value problem will be outlined. A fully adaptive numerical methodology will be briefly described and some numerical examples will be given in order to show the high predictive capabilities of this adaptive methodology for virtual metal forming simulations.

  8. Advanced Methods for Aircraft Engine Thrust and Noise Benefits: Nozzle-Inlet Flow Analysis

    NASA Technical Reports Server (NTRS)

    Morgan, Morris H.; Gilinsky, Mikhail M.

    2001-01-01

    Three connected sub-projects were conducted under the reported project. In part, these sub-projects extend work conducted by the HU/FM&AL under two other NASA grants. The fundamental idea uniting these projects is to use untraditional 3D corrugated nozzle designs and additional methods for exhaust jet noise reduction without appreciable thrust loss, and even with thrust augmentation. Such additional approaches are: (1) to add some solid, fluid, or gas mass at discrete locations to the main supersonic gas stream to minimize the negative influence of strong shock waves forming in propulsion systems; this mass addition may be accompanied by heat addition to the main stream as a result of fuel combustion, or by cooling of this stream as a result of liquid mass evaporation and boiling; (2) to use porous or permeable nozzles and additional shells at the nozzle exit for preliminary cooling of the hot exhaust jet and pressure compensation for off-design conditions (a so-called continuous ejector with small mass flow rate); and (3) to propose and analyze new effective methods of fuel injection into the flow stream in air-breathing engines. Note that all these problems were formulated based on detailed descriptions of the main experimental facts observed at NASA Glenn Research Center. Basically, the HU/FM&AL Team has been involved in joint research with the purpose of finding theoretical explanations for experimental facts and creating accurate numerical simulation techniques and prediction theory for current problems in propulsion systems addressed by NASA and Navy agencies. The research is focused on a wide range of problems in the propulsion field, as well as on experimental testing and theoretical and numerical simulation analysis for advanced aircraft and rocket engines. The FM&AL Team uses analytical methods, numerical simulations, and possible experimental tests at the Hampton University campus.
We will present some management activity

  9. Adherence to Scientific Method while Advancing Exposure Science

    EPA Science Inventory

    Paul Lioy was simultaneously a staunch adherent to the scientific method and an innovator of new ways to conduct science, particularly related to human exposure. Current challenges to science and the application of the scientific method are presented as they relate to the approaches...

  10. Comparison of advanced distillation control methods. Second annual report

    SciTech Connect

    Riggs, J.B.

    1996-11-01

    Detailed dynamic simulations of two industrial distillation columns (a propylene/propane splitter and a xylene/toluene column) have been used to study the issue of configuration selection for diagonal PI dual composition control. Auto Tune Variation (ATV) identification with on-line detuning was used for tuning the diagonal proportional integral (PI) composition controllers. Each configuration was evaluated with respect to steady-state relative gain array (RGA) values, sensitivity to feed composition changes, and open loop dynamic performance. Each configuration was tuned using setpoint changes over a wide range of operation for robustness and tested for feed composition upsets. Overall, configuration selection was shown to have a dominant effect upon control performance. Configuration analysis tools (e.g., RGA, condition number, disturbance sensitivity) were found to reject configuration choices that are obviously poor, but were unable to critically differentiate between the remaining viable choices. Configuration selection guidelines are given, although it is demonstrated that the most reliable configuration selection approach is based upon testing the viable configurations using dynamic column simulators.
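The ATV relay-feedback test used above can be sketched on a hypothetical first-order-plus-dead-time process: an ideal relay drives the loop into a limit cycle whose amplitude and period yield the ultimate gain and period, from which PI settings follow. Tyreus-Luyben settings are shown as one common pairing with ATV; the process parameters and relay amplitude below are invented for illustration.

```python
import numpy as np

# Hypothetical first-order-plus-dead-time process under an ideal relay.
K, tau, theta = 2.0, 10.0, 2.0        # gain, time constant, dead time (made up)
dt, h = 0.01, 0.05                    # integration step, relay amplitude
n = int(200 / dt)
delay = int(theta / dt)
u_hist = np.zeros(n)
y = np.zeros(n)
for k in range(1, n):
    u_hist[k] = -h if y[k - 1] > 0 else h            # ideal relay feedback
    u_del = u_hist[k - delay] if k >= delay else 0.0  # dead time
    y[k] = y[k - 1] + dt * (-y[k - 1] + K * u_del) / tau

# measure the limit-cycle amplitude and period from the settled tail
tail = y[n // 2:]
a = (tail.max() - tail.min()) / 2.0
crossings = np.where(np.diff(np.sign(tail)) > 0)[0]  # upward zero crossings
Pu = np.mean(np.diff(crossings)) * dt                # ultimate period
Ku = 4.0 * h / (np.pi * a)                           # ultimate gain (describing fn)
Kc, tau_I = Ku / 3.2, 2.2 * Pu                       # Tyreus-Luyben PI settings
```

The relay amplitude h would be chosen small enough to keep the column near its operating point; the on-line detuning step mentioned in the abstract then adjusts these settings against interaction between the two composition loops.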

  11. Comparison of advanced distillation control methods. Second annual report

    SciTech Connect

    1996-11-01

    Detailed dynamic simulations of three industrial distillation columns (a propylene/propane splitter, a xylene/toluene column, and a depropanizer) have been used to study the issue of configuration selection for diagonal PI dual composition control. ATV identification with on-line detuning was used for tuning the diagonal PI composition controllers. Each configuration was evaluated with respect to steady-state RGA values, sensitivity to feed composition changes, and open loop dynamic performance. Each configuration was tuned using setpoint changes over a wide range of operation for robustness and tested for feed composition upsets. Overall, configuration selection was shown to have a dominant effect upon control performance. Configuration analysis tools (e.g., RGA, condition number, disturbance sensitivity) were found to reject configuration choices that are obviously poor, but were unable to critically differentiate between the remaining viable choices. Configuration selection guidelines are given, although it is demonstrated that the most reliable configuration selection approach is based upon testing the viable configurations using dynamic column simulators.

  12. [Recent advances in sample preparation methods of plant hormones].

    PubMed

    Wu, Qian; Wang, Lus; Wu, Dapeng; Duan, Chunfeng; Guan, Yafeng

    2014-04-01

    Plant hormones are a group of naturally occurring trace substances which play a crucial role in controlling plant development, growth and environmental response. With the development of chromatography and mass spectrometry techniques, chromatographic analysis has become a widely used approach for plant hormone analysis. Among the steps of chromatographic analysis, sample preparation is undoubtedly the most vital one. Thus, a highly selective and efficient sample preparation method is critical for accurate identification and quantification of phytohormones. For the three major kinds of plant hormones, namely acidic and basic plant hormones, brassinosteroids, and plant polypeptides, the sample preparation methods are reviewed in turn, with emphasis on recently developed methods. The review covers novel methods, devices, extraction materials and derivatization reagents for sample preparation in phytohormone analysis. Related work from our group is also included. Finally, future developments in this field are discussed.

  13. Advances in spectroscopic methods for quantifying soil carbon

    USGS Publications Warehouse

    Liebig, Mark; Franzluebbers, Alan J.; Follett, Ronald F.; Hively, W. Dean; Reeves, James B.; McCarty, Gregory W.; Calderon, Francisco

    2012-01-01

    The gold standard for soil C determination is combustion. However, this method requires expensive consumables, is limited to determination of total carbon, and is limited in the number of samples that can be processed (~100/d). With increased interest in soil C sequestration, faster methods are needed; hence the interest in methods based on diffuse reflectance spectroscopy in the visible, near-infrared or mid-infrared ranges, using either proximal or remote sensing. These methods have the ability to analyze more samples (2 to 3X/d) or huge areas (imagery) and to determine multiple analytes simultaneously, but they require calibrations relating spectral and reference data and have specific problems; e.g., remote sensing is capable of scanning entire watersheds, thus reducing the sampling needed, but is limited to the surface layer of tilled soils and by the difficulty of obtaining proper calibration reference values. The objective of this discussion is to present the state of spectroscopic methods for soil C determination.

  14. An Efficient Ray-Tracing Method for Determining Terrain Intercepts in EDL Simulations

    NASA Technical Reports Server (NTRS)

    Shidner, Jeremy D.

    2016-01-01

    The calculation of a ray's intercept from an arbitrary point in space to a prescribed surface is a common task in computer simulations. The arbitrary point often represents an object that is moving according to the simulation, while the prescribed surface is fixed in a defined frame. For detailed simulations, this surface becomes complex, taking the form of real-world features such as mountains, craters or valleys, which require more advanced methods to accurately calculate a ray's intercept location. Incorporation of these complex surfaces has commonly been implemented in graphics systems that utilize highly optimized graphics processing units to analyze such features. This paper proposes a simplified method that does not require computationally intensive graphics solutions, but rather an optimized ray-tracing method for an assumed terrain dataset. This approach was developed for the Mars Science Laboratory mission, which landed on the complex terrain of Gale Crater. First, this paper begins with a discussion of the simulation used to implement the model and the applicability of finding surface intercepts with respect to atmosphere modeling, altitude determination, radar modeling, and contact forces influencing vehicle dynamics. Next, the derivation and assumptions of the intercept finding method are presented. Key assumptions are noted, making the routines specific to only certain types of surface data sets that are equidistantly spaced in longitude and latitude. The derivation of the method relies on ray-tracing, requiring discussion of the formulation of the ray with respect to the terrain datasets. Further discussion includes techniques for ray initialization in order to optimize the intercept search. Then, the model's implementation for various new applications in the simulation is demonstrated. Finally, a validation of the accuracy is presented along with the corresponding data sets used in the validation.
A performance summary of the method will be shown using
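The abstract's intercept search can be illustrated with a hedged sketch: march along the ray in coarse steps until the ray height drops below the bilinearly interpolated terrain height, then bisect for the crossing. A flat Cartesian heightfield stands in for the equidistant longitude/latitude DEM the paper assumes; the grid and rays below are invented.

```python
import numpy as np

xs = ys = np.linspace(0.0, 10.0, 101)              # equidistant 0.1-spaced grid
# A single Gaussian "mountain" as a stand-in terrain dataset.
H = 2.0 * np.exp(-((xs[None, :] - 5) ** 2 + (ys[:, None] - 5) ** 2) / 4.0)

def terrain_h(x, y):
    """Bilinear interpolation of the heightfield at (x, y)."""
    i = np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2)
    j = np.clip(np.searchsorted(ys, y) - 1, 0, len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * H[j, i] + tx * (1 - ty) * H[j, i + 1]
            + (1 - tx) * ty * H[j + 1, i] + tx * ty * H[j + 1, i + 1])

def intercept(origin, direction, s_max=20.0, ds=0.05, tol=1e-6):
    """Return the terrain intercept point, or None if the ray clears it."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    s_lo = 0.0
    for s in np.arange(ds, s_max, ds):             # coarse march along the ray
        p = o + s * d
        if p[2] - terrain_h(p[0], p[1]) < 0.0:     # crossed below the surface
            while s - s_lo > tol:                  # bisection refinement
                mid = 0.5 * (s_lo + s)
                pm = o + mid * d
                if pm[2] > terrain_h(pm[0], pm[1]):
                    s_lo = mid
                else:
                    s = mid
            return o + s * d
        s_lo = s
    return None

hit = intercept(origin=[0.0, 5.0, 3.0], direction=[1.0, 0.0, -0.3])
```

The paper's actual method adds optimizations (ray initialization, dataset-specific stepping in longitude/latitude); this sketch only shows the march-then-refine structure.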

  15. Remembrance of phases past: An autoregressive method for generating realistic atmospheres in simulations

    NASA Astrophysics Data System (ADS)

    Srinath, Srikar; Poyneer, Lisa A.; Rudy, Alexander R.; Ammons, S. M.

    2014-08-01

    The advent of expensive, large-aperture telescopes and complex adaptive optics (AO) systems has strengthened the need for detailed simulation of such systems from the top of the atmosphere down to the control algorithms. The credibility of any simulation is underpinned by the quality of the atmosphere model used for introducing phase variations into the incident photons. Hitherto, simulations which incorporate wind layers have relied upon phase-screen generation methods that tax the computation and memory capacities of the platforms on which they run. This places limits on parameters of a simulation, such as exposure time or resolution, thus compromising its utility. As aperture sizes and fields of view increase, the problem will only get worse. We present an autoregressive method for evolving atmospheric phase that is efficient in its use of computation resources and allows for variability in the power contained in the frozen-flow or stochastic components of the atmosphere. Users have the flexibility of generating atmosphere datacubes in advance of runs, where memory constraints allow, to save on computation time, or of computing the phase at each time step for long exposure times. Preliminary tests of model atmospheres generated using this method show power spectral density and rms phase in accordance with established metrics for Kolmogorov models.
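The autoregressive idea can be sketched per spatial Fourier mode: each mode evolves as an AR(1) process whose deterministic part is a frozen-flow translation and whose innovation keeps the mode near a Kolmogorov-like variance, with the AR coefficient setting the frozen-flow versus stochastic ("boiling") mix. All scales below (screen size, wind, alpha, the power law) are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 64                                  # screen size in pixels
dx, v, dt = 0.1, (5.0, 0.0), 0.01       # pixel size [m], wind [m/s], step [s]
alpha = 0.999                           # memory: alpha -> 1 is pure frozen flow
fx = np.fft.fftfreq(n, dx)
kx, ky = np.meshgrid(fx, fx, indexing="xy")
k2 = kx ** 2 + ky ** 2
k2[0, 0] = np.inf                       # suppress the undefined piston mode
psd = k2 ** (-11.0 / 6.0)               # Kolmogorov-like power law
shift = np.exp(-2j * np.pi * dt * (kx * v[0] + ky * v[1]))  # frozen-flow phase

def innovation():
    """Complex white noise shaped to the target per-mode variance."""
    w = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.sqrt(psd / 2.0) * w

modes = innovation()                    # start at the stationary amplitude
screens = []
for _ in range(100):
    # AR(1) update: translate, forget a little, replenish with fresh noise
    modes = alpha * shift * modes + np.sqrt(1 - alpha ** 2) * innovation()
    screens.append(np.fft.ifft2(modes).real)
screens = np.array(screens)
```

Because each step is one FFT-sized update, long exposures can be generated on the fly instead of precomputing a giant translated screen, which is the memory saving the abstract describes.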

  16. Experiences using DAKOTA stochastic expansion methods in computational simulations.

    SciTech Connect

    Templeton, Jeremy Alan; Ruthruff, Joseph R.

    2012-01-01

    Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experiment data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results as to the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels for the methodologies that may be needed to achieve convergence.
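DAKOTA's interfaces are not shown in the abstract, so the following is a generic, hedged sketch of the underlying stochastic expansion technique: a non-intrusive polynomial chaos expansion of an invented response f(xi) = exp(0.3*xi) with xi ~ N(0,1), projected onto probabilists' Hermite polynomials via Gauss-Hermite quadrature, with mean and variance read off the coefficients.

```python
import numpy as np
from math import factorial, exp

order, nq = 8, 20
nodes, weights = np.polynomial.hermite_e.hermegauss(nq)
weights = weights / weights.sum()                 # normalize to a N(0,1) rule

f = lambda xi: np.exp(0.3 * xi)                   # toy response (invented)
fx = f(nodes)

# spectral projection: c_k = E[f He_k] / E[He_k^2], with E[He_k^2] = k!
coeffs = []
for k in range(order + 1):
    He_k = np.polynomial.hermite_e.hermeval(nodes, [0] * k + [1])
    coeffs.append(np.sum(weights * fx * He_k) / factorial(k))
coeffs = np.array(coeffs)

pce_mean = coeffs[0]
pce_var = sum(coeffs[k] ** 2 * factorial(k) for k in range(1, order + 1))

# analytic lognormal moments for comparison
true_mean = exp(0.3 ** 2 / 2)
true_var = (exp(0.3 ** 2) - 1) * exp(0.3 ** 2)
```

The expansion-order question the report studies shows up directly here: truncating at a lower order degrades the variance estimate first, while the mean (the zeroth coefficient) converges almost immediately.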

  17. Lattice Boltzmann Method for Spacecraft Propellant Slosh Simulation

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.; Powers, Joseph F.; Yang, Hong Q.

    2015-01-01

    A scalable computational approach to the simulation of propellant tank sloshing dynamics in microgravity is presented. In this work, we use the lattice Boltzmann equation (LBE) to approximate the behavior of two-phase, single-component isothermal flows at very low Bond numbers. Through the use of a non-ideal gas equation of state and a modified multiple relaxation time (MRT) collision operator, the proposed method can simulate thermodynamically consistent phase transitions at temperatures and density ratios consistent with typical spacecraft cryogenic propellants, for example, liquid oxygen. Determination of the tank forces and moments relies upon the global momentum conservation of the fluid domain, and a parametric wall wetting model allows tuning of the free surface contact angle. Development of the interface is implicit and no interface tracking approach is required. Numerical examples illustrate the method's application to predicting bulk fluid motion including lateral propellant slosh in low-g conditions.
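The paper's two-phase MRT scheme is well beyond a short sketch, but the core stream-and-collide structure of the lattice Boltzmann method can be shown with a minimal single-phase D2Q9 BGK model. The test problem, a viscously decaying periodic shear wave, and all lattice-unit parameters are invented for illustration.

```python
import numpy as np

nx = ny = 32
tau_r = 0.8                                   # BGK relaxation time
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])      # D2Q9 velocities
w = np.array([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)      # lattice weights

def equilibrium(rho, ux, uy):
    """Second-order Maxwellian equilibrium distributions."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux ** 2 + uy ** 2
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

rho = np.ones((ny, nx))
ux = 0.05 * np.sin(2 * np.pi * np.arange(ny) / ny)[:, None] + np.zeros((ny, nx))
uy = np.zeros((ny, nx))
f = equilibrium(rho, ux, uy)
u0_amp = np.abs(ux).max()

for _ in range(200):
    rho = f.sum(axis=0)                                  # density moment
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho     # momentum moments
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau_r         # BGK collision
    for i in range(9):                                   # periodic streaming
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=1), c[i, 1], axis=0)
```

The paper replaces the single relaxation time with an MRT operator and a non-ideal equation of state to get stable liquid-vapor interfaces; the moment-based force and momentum bookkeeping it uses for slosh loads builds on exactly the conservation this sketch exhibits.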

  18. Lattice Boltzmann Method for Spacecraft Propellant Slosh Simulation

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.; Powers, Joseph F.; Yang, Hong Q

    2015-01-01

    A scalable computational approach to the simulation of propellant tank sloshing dynamics in microgravity is presented. In this work, we use the lattice Boltzmann equation (LBE) to approximate the behavior of two-phase, single-component isothermal flows at very low Bond numbers. Through the use of a non-ideal gas equation of state and a modified multiple relaxation time (MRT) collision operator, the proposed method can simulate thermodynamically consistent phase transitions at temperatures and density ratios consistent with typical spacecraft cryogenic propellants, for example, liquid oxygen. Determination of the tank forces and moments is based upon a novel approach that relies on the global momentum conservation of the closed fluid domain, and a parametric wall wetting model allows tuning of the free surface contact angle. Development of the interface is implicit and no interface tracking approach is required. A numerical example illustrates the method's application to prediction of bulk fluid behavior during a spacecraft ullage settling maneuver.

  19. Calibration of three rainfall simulators with automatic measurement methods

    NASA Astrophysics Data System (ADS)

    Roldan, Margarita

    2010-05-01

    The rainfall erosivity is the potential ability of rain to cause erosion. It is a function of the physical characteristics of rainfall (Hudson, 1971). Most expressions describing erosivity are related to kinetic energy or momentum, and thus to drop mass or size and fall velocity. Therefore, research on the factors determining erosivity leads to the necessity of studying the relation between fall height and fall velocity for different drop sizes generated in a rainfall simulator (Epema, G.F. and Riezebos, H.Th., 1983). Rainfall simulators are one of the most widely used tools for erosion studies and are used to determine fall velocity and drop size, allowing repeated and multiple measurements. The main reason for the use of rainfall simulation as a research tool is to reproduce in a controlled way the behaviour expected in the natural environment. However, when simulated rain is compared with natural rain, there is often a lack of correspondence between the two, and this can cast doubt on the validity of the data because the characteristics of natural rain are not adequately represented in rainfall simulation research (Dunkerley, D., 2008). Many rainfall simulations use high rain rates that do not resemble natural rain events, and such measurements are not comparable. And besides, the intensity is related to the kinetic energy, which

  20. Assessment of Crack Detection in Cast Austenitic Piping Components Using Advanced Ultrasonic Methods.

    SciTech Connect

    Anderson, Michael T.; Crawford, Susan L.; Cumblidge, Stephen E.; Diaz, Aaron A.; Doctor, Steven R.

    2007-01-01

    Studies conducted at the Pacific Northwest National Laboratory (PNNL) in Richland, Washington, have focused on developing and evaluating the reliability of nondestructive examination (NDE) approaches for inspecting coarse-grained, cast stainless steel reactor components. The objective of this work is to provide information to the United States Nuclear Regulatory Commission (NRC) on the utility, effectiveness and limitations of ultrasonic testing (UT) inspection techniques as related to the in-service inspection of primary system piping components in pressurized water reactors (PWRs). Cast stainless steel pipe specimens were examined that contain thermal and mechanical fatigue cracks located close to the weld roots and have inside/outside surface geometrical conditions that simulate several PWR primary piping configurations. In addition, segments of vintage centrifugally cast piping were also examined to understand inherent acoustic noise and scattering due to grain structures and determine consistency of UT responses from different locations. The advanced UT methods were applied from the outside surface of these specimens using automated scanning devices and water coupling. The low-frequency ultrasonic method employed a zone-focused, multi-incident angle inspection protocol (operating at 250-450 kHz) coupled with a synthetic aperture focusing technique (SAFT) for improved signal-to-noise and advanced imaging capabilities. The phased array approach was implemented with a modified instrument operating at 500 kHz and composite volumetric images of the specimens were generated. Results from laboratory studies for assessing detection, localization and sizing effectiveness are discussed in this paper.

  1. Parallel processing of real-time dynamic systems simulation on OSCAR (Optimally SCheduled Advanced multiprocessoR)

    NASA Technical Reports Server (NTRS)

    Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke

    1989-01-01

    Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, the same calculations are generally repeated at every time step. However, the Do-all and Do-across techniques cannot be applied to parallel processing of the simulation, since data dependencies exist from the end of one iteration to the beginning of the next, and furthermore data input and data output are required every sampling period. Therefore, parallelism inside the calculation required for a single time step, i.e., a large basic block consisting of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each consisting of one or more floating point operations, are generated to extract the parallelism from the calculation and are assigned to processors using optimal static scheduling at compile time, in order to reduce the large run-time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantages of static scheduling algorithms to the maximum extent.
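
    The static assignment of near-fine-grain tasks described above can be illustrated with a greatly simplified greedy list scheduler; this is a sketch of the general idea, not OSCAR's actual optimal scheduling algorithm (tasks, durations, and dependencies are hypothetical):

```python
def static_schedule(durations, deps, n_procs):
    """Greedy list scheduling: repeatedly pick the longest ready task
    and place it on the processor that becomes free earliest."""
    n = len(durations)
    finish = [0.0] * n            # finish time of each task
    proc_free = [0.0] * n_procs   # time at which each processor is free
    assignment = [-1] * n
    done = [False] * n
    while not all(done):
        # tasks whose data dependencies have all completed
        ready = [t for t in range(n)
                 if not done[t] and all(done[p] for p in deps[t])]
        t = max(ready, key=lambda k: durations[k])
        p = min(range(n_procs), key=lambda k: proc_free[k])
        start = max(proc_free[p],
                    max((finish[q] for q in deps[t]), default=0.0))
        finish[t] = start + durations[t]
        proc_free[p] = finish[t]
        assignment[t] = p
        done[t] = True
    return assignment, max(finish)

# four tasks; task 3 needs the results of tasks 0 and 1; two processors
assignment, makespan = static_schedule(
    [2.0, 2.0, 1.0, 3.0], {0: [], 1: [], 2: [], 3: [0, 1]}, 2)
```

    Doing this assignment once at compile time, as OSCAR does, removes the run-time scheduling overhead that would otherwise dominate at such fine task granularity.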

  2. Preface: Special Topic Section on Advanced Electronic Structure Methods for Solids and Surfaces

    SciTech Connect

    Michaelides, Angelos; Martinez, Todd J.; Alavi, Ali; Kresse, Georg

    2015-09-14

    This Special Topic section on Advanced Electronic Structure Methods for Solids and Surfaces contains a collection of research papers that showcase recent advances in the high accuracy prediction of materials and surface properties. It provides a timely snapshot of a growing field that is of broad importance to chemistry, physics, and materials science.

  3. Integrating Advanced High School Chemistry Research with Organic Chemistry and Instrumental Methods of Analysis

    ERIC Educational Resources Information Center

    Kennedy, Brian J.

    2008-01-01

    This paper describes and discusses the unique chemistry course opportunities beyond the advanced placement level that are available at a science and technology magnet high school. Students may select entry-level courses such as honors and advanced placement chemistry; they may also take electives in organic chemistry with instrumental methods of analysis;…

  4. Investigation of advancing front method for generating unstructured grid

    NASA Astrophysics Data System (ADS)

    Thomas, A. M.; Tiwari, S. N.

    1992-06-01

    The advancing front technique is used to generate unstructured grids about simple aerodynamic geometries. Unstructured grids are generated using the VGRID2D and VGRID3D software. The specific problems considered are a NACA 0012 airfoil, a biplane consisting of two NACA 0012 airfoils, a four-element airfoil in its landing configuration, and an ONERA M6 wing. Inviscid time-dependent solutions are computed on these geometries using USM3D, and the results are compared with standard test results obtained by other investigators. A grid convergence study is conducted for the NACA 0012 airfoil and compared with a structured grid. The structured grid is generated using the GRIDGEN software, and inviscid solutions are computed using the CFL3D flow solver. The unstructured-grid results for the NACA 0012 airfoil showed an asymmetric distribution of flow quantities, and a fine grid distribution was required to remove this asymmetry. The structured grid, on the other hand, predicted a very symmetric distribution; however, when the total numbers of points needed to obtain the same results were compared, the structured grid required more grid points.

  5. Health, wealth, and air pollution: advancing theory and methods.

    PubMed Central

    O'Neill, Marie S; Jerrett, Michael; Kawachi, Ichiro; Levy, Jonathan I; Cohen, Aaron J; Gouveia, Nelson; Wilkinson, Paul; Fletcher, Tony; Cifuentes, Luis; Schwartz, Joel

    2003-01-01

    The effects of both ambient air pollution and socioeconomic position (SEP) on health are well documented. A limited number of recent studies suggest that SEP may itself play a role in the epidemiology of disease and death associated with exposure to air pollution. Together with evidence that poor and working-class communities are often more exposed to air pollution, these studies have stimulated discussion among scientists, policy makers, and the public about the differential distribution of the health impacts from air pollution. Science and public policy would benefit from additional research that integrates the theory and practice from both air pollution and social epidemiologies to gain a better understanding of this issue. In this article we aim to promote such research by introducing readers to methodologic and conceptual approaches in the fields of air pollution and social epidemiology; by proposing theories and hypotheses about how air pollution and socioeconomic factors may interact to influence health, drawing on studies conducted worldwide; by discussing methodologic issues in the design and analysis of studies to determine whether health effects of exposure to ambient air pollution are modified by SEP; and by proposing specific steps that will advance knowledge in this field, fill information gaps, and apply research results to improve public health in collaboration with affected communities. PMID:14644658

  6. Comparison of advanced distillation control methods. Third annual report

    SciTech Connect

    Riggs, J.B.

    1997-07-01

    Detailed dynamic simulations of three industrial distillation columns (a propylene/propane splitter, a xylene/toluene column, and a depropanizer) have been used to study the issue of configuration selection for diagonal PI dual composition controls, feedforward from a feed composition analyzer, and decouplers. Auto Tune Variation (ATV) identification with on-line detuning for setpoint changes was used for tuning the diagonal proportional integral (PI) composition controls. In addition, robustness tests were conducted by inducing reboiler duty upsets. For single composition control, the (L, V) configuration was found to be best. For dual composition control, the optimum configuration changes from one column to another. Moreover, the use of analysis tools, such as RGA, appears to be of little value in identifying the optimum configuration for dual composition control. Using feedforward from a feed composition analyzer and using decouplers are shown to offer significant advantages for certain specific cases.

  7. Advances in spectroscopic methods for quantifying soil carbon

    USGS Publications Warehouse

    Reeves, James B.; McCarty, Gregory W.; Calderon, Francisco; Hively, W. Dean

    2012-01-01

    The current gold standard for soil carbon (C) determination is elemental C analysis using dry combustion. However, this method requires expensive consumables, is limited by the number of samples that can be processed (~100/d), and is restricted to the determination of total carbon. With increased interest in soil C sequestration, faster methods of analysis are needed, and there is growing interest in methods based on diffuse reflectance spectroscopy in the visible, near-infrared or mid-infrared spectral ranges. These spectral methods can decrease analytical requirements and speed sample processing, be applied to large landscape areas using remote sensing imagery, and be used to predict multiple analytes simultaneously. However, the methods require localized calibrations to establish the relationship between spectral data and reference analytical data, and also have additional, specific problems. For example, remote sensing is capable of scanning entire watersheds for soil carbon content but is limited to the surface layer of tilled soils and may require difficult and extensive field sampling to obtain proper localized calibration reference values. The objective of this chapter is to discuss the present state of spectroscopic methods for determination of soil carbon.
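
    The localized calibrations mentioned above amount to regressing reference analytical values onto spectral data. A minimal univariate sketch (real calibrations use multivariate methods such as partial least squares on full spectra; the absorbance and carbon values below are hypothetical):

```python
def linear_calibration(x, y):
    """Ordinary least squares fit y = a*x + b: the simplest localized
    calibration relating spectral data to reference values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# hypothetical absorbance readings vs. dry-combustion reference carbon (%)
a, b = linear_calibration([0.10, 0.20, 0.30, 0.40], [1.0, 2.0, 3.0, 4.0])
carbon_pred = a * 0.25 + b     # predict carbon for a new sample's spectrum
```

    The calibration step is exactly where the "localized" caveat bites: coefficients fitted on one soil population need not transfer to another.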

  8. Numerical approximation of a nonlinear delay-advance functional differential equation by a finite element method

    NASA Astrophysics Data System (ADS)

    Teodoro, M. F.

    2012-09-01

    We are particularly interested in the numerical solution of functional differential equations with symmetric delay and advance. In this work, we consider a nonlinear forward-backward equation, the FitzHugh-Nagumo equation. A scheme is presented which extends the algorithm introduced in [1]. A computational method using Newton's method, the finite element method, and the method of steps is developed.
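
    The scheme combines Newton's method with finite elements and the method of steps; the Newton kernel alone can be sketched as follows (the cubic test function merely echoes a FitzHugh-Nagumo-style reaction term and is not the paper's actual system):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Scalar Newton iteration: x <- x - f(x)/f'(x) until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)
    return x

# cubic with roots 0, 0.2 and 1, echoing a FitzHugh-Nagumo reaction term
f = lambda v: v * (v - 0.2) * (v - 1.0)
df = lambda v: 3.0 * v**2 - 2.4 * v + 0.2
root = newton(f, df, 0.9)
```

    In the full scheme each method-of-steps interval yields a nonlinear algebraic system from the finite element discretization, and an iteration of this kind solves it.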

  9. Rare-event simulation methods for equilibrium and non-equilibrium events

    NASA Astrophysics Data System (ADS)

    Ziff, Robert

    2014-03-01

    Rare events are those that occur with a very low probability in experiment, or are common but difficult to sample using standard computer simulation techniques. Such processes require advanced methods in order to obtain useful results in reasonable amounts of computer time. We discuss some of those techniques here, including the "barrier" method, splitting methods, and a Forward-Flux Sampling in Time (FFST) algorithm, and apply them to measure the nucleation times of the first-order transition in the Ziff-Gulari-Barshad model of surface catalysis, including nucleation in finite equilibrium states, which are measured to occur with probabilities as low as 10^(-40). We also study the transitions in the Maier-Stein model of chemical kinetics, and use the methods to find the harmonic measure in percolation and Diffusion-Limited Aggregation (DLA) clusters. Co-authors: David Adams, Google, and Leonard Sander, University of Michigan.
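
    A minimal sketch of the splitting idea on a toy model: the probability of exceeding a high level is estimated as the product of conditional stage probabilities, each of which is easy to sample. The Exp(1) variable is chosen because its memorylessness makes restarting survivors at each level exact; this is a generic splitting sketch, not the FFST algorithm itself:

```python
import random

def splitting_estimate(levels, n_per_stage, seed=0):
    """Multilevel splitting on a memoryless toy model: estimate
    P(X > levels[-1]) for X ~ Exp(1) as a product of conditional
    stage probabilities, restarting survivors at each level."""
    rng = random.Random(seed)
    prob, current = 1.0, 0.0
    for level in levels:
        hits = sum(1 for _ in range(n_per_stage)
                   if current + rng.expovariate(1.0) > level)
        prob *= hits / n_per_stage
        current = level
    return prob

p_hat = splitting_estimate([1.0, 2.0, 3.0], 20000)  # true value exp(-3) ~ 0.0498
```

    Each stage probability is moderate (~0.37 here), so the product reaches a small probability without ever needing a single direct hit of the rare event; the same principle scales to probabilities like 10^(-40).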

  10. The Local Variational Multiscale Method for Turbulence Simulation.

    SciTech Connect

    Collis, Samuel Scott; Ramakrishnan, Srinivas

    2005-05-01

    Accurate and efficient turbulence simulation in complex geometries is a formidable challenge. Traditional methods are often limited by low accuracy and/or restrictions to simple geometries. We explore the merger of Discontinuous Galerkin (DG) spatial discretizations with Variational Multi-Scale (VMS) modeling, termed Local VMS (LVMS), to overcome these limitations. DG spatial discretizations support arbitrarily high-order accuracy on unstructured grids amenable to complex geometries. Furthermore, the high-order, hierarchical representation within DG provides a natural framework for the a priori scale separation crucial for VMS implementation. We show that the combined benefits of DG and VMS within the LVMS method lead to a promising new approach to LES for use in complex geometries. The efficacy of LVMS for turbulence simulation is assessed by application to fully-developed turbulent channel flow. First, a detailed spatial resolution study is undertaken to record the effects of the DG discretization on turbulence statistics. Here, the local hp-refinement capabilities of DG are exploited to obtain reliable low-order statistics efficiently. Likewise, resolution guidelines for simulating wall-bounded turbulence using DG are established. We also explore the influence of enforcing Dirichlet boundary conditions indirectly through numerical fluxes in DG, which allows the solution to jump (slip) at the channel walls. These jumps are effective in simulating the influence of the wall commensurate with the local resolution, and this feature of DG is effective in mitigating near-wall resolution requirements. In particular, we show that by locally modifying the numerical viscous flux used at the wall, we are able to regulate the near-wall slip through a penalty that leads to improved shear-stress predictions. This work demonstrates the potential of the numerical viscous flux to act as a numerically consistent wall-model, and this success warrants future research. As in any high-order numerical method, some

  11. Individualized feedback during simulated laparoscopic training: a mixed methods study

    PubMed Central

    Weurlander, Maria; Hedman, Leif; Nisell, Henry; Lindqvist, Pelle G.; Felländer-Tsai, Li; Enochsson, Lars

    2015-01-01

    Objectives This study aimed to explore the value of individualized feedback on performance, flow and self-efficacy during simulated laparoscopy. Furthermore, we wished to explore attitudes towards feedback and simulator training among medical students. Methods Sixteen medical students were included in the study and randomized to laparoscopic simulator training with or without feedback. A teacher provided individualized feedback continuously throughout the procedures to the target group. Validated questionnaires and scales were used to evaluate self-efficacy and flow. The Mann-Whitney U test was used to evaluate differences between groups regarding laparoscopic performance (instrument path length), self-efficacy and flow. Qualitative data was collected by group interviews and interpreted using inductive thematic analyses. Results Sixteen students completed the simulator training and questionnaires. Instrument path length was shorter in the feedback group (median 3.9 m; IQR: 3.3-4.9) as compared to the control group (median 5.9 m; IQR: 5.0-8.1), p<0.05. Self-efficacy improved in both groups. Eleven students participated in the focus interviews. Participants in the control group expressed that they had fun, whereas participants in the feedback group were more concentrated on the task and also more anxious. Both groups had high ambitions to succeed and also expressed the importance of getting feedback. The authenticity of the training scenario was important for the learning process. Conclusions This study highlights the importance of individualized feedback during simulated laparoscopy training. The next step is to further optimize feedback and to transfer standardized and individualized feedback from the simulated setting to the operating room. PMID:26223033
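
    The group comparison rests on the Mann-Whitney U test, whose statistic is simple to compute by hand. A sketch (the path-length values are hypothetical, chosen near the reported group medians and IQRs):

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample a versus sample b:
    the number of pairs with x < y, counting ties as one half."""
    return sum(1.0 if x < y else 0.5 if x == y else 0.0
               for x in a for y in b)

# hypothetical instrument path lengths (m) near the reported medians/IQRs
feedback = [3.3, 3.9, 4.9]
control = [5.0, 5.9, 8.1]
u_stat = mann_whitney_u(feedback, control)
```

    With every feedback value below every control value, U reaches its maximum of n1*n2 pairs, the strongest possible separation for samples of this size; a significance test would then compare U against its null distribution.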

  12. Particle Splitting: A New Method for SPH Star Formation Simulations

    NASA Astrophysics Data System (ADS)

    Kitsionas, Spyridon

    2003-07-01

    We have invented a new algorithm for use with self-gravitating SPH Star Formation codes. The new method is designed to enable SPH simulations to self-regulate their numerical resolution, i.e. the number of SPH particles; the latter is calculated using the Jeans condition (Bate & Burkert 1997) and the local hydrodynamic conditions of the gas. We apply our SPH with Particle Splitting code to cloud-cloud collision simulations. Chapter 2 lists the properties of our standard SPH code. Chapter 3 discusses the efficiency of the standard code as applied to simulations of rotating, uniform clouds with m=2 density perturbations. Chapter 4 [astro-ph/0203057] describes the new method and the tests to which it has successfully been applied. It also contains the results of applying Particle Splitting to rotating clouds as in Chapter 3, where, with great computational efficiency, we have reproduced the results of FD codes and of SPH simulations with large numbers of particles. Chapter 5 gives a detailed account of the cloud-cloud collisions studied, starting from a variety of initial conditions produced by altering the cloud mass, cloud velocity and the collision impact parameter. In the majority of the cases studied, the collisions produced filaments (similar to those observed in ammonia in nearby Star Forming Regions) or networks of filaments; groups of protostellar cores have been produced by fragmentation of the filaments. The accretion rates at these cores are comparable to those of Class 0 objects. Due to time-step constraints the simulations stop early in their evolution. The star formation efficiency of this mechanism is extrapolated in time and is found to be 10-20%.
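
    The Jeans-condition resolution check (Bate & Burkert 1997) that triggers splitting can be sketched as follows; the textbook Jeans-mass form, the cgs constants, the neighbour count, and the safety factor are illustrative assumptions, not the thesis's exact criterion:

```python
import math

def jeans_mass(temperature_K, density_gcc, mu=2.3):
    """Jeans mass in grams (standard textbook form, cgs units)."""
    k_B = 1.381e-16      # erg/K
    m_H = 1.673e-24      # g
    G = 6.674e-8         # cm^3 g^-1 s^-2
    c_s = math.sqrt(k_B * temperature_K / (mu * m_H))  # isothermal sound speed
    return (math.pi ** 2.5 / 6.0) * c_s ** 3 / math.sqrt(G ** 3 * density_gcc)

def needs_splitting(particle_mass_g, temperature_K, density_gcc,
                    n_neighbours=50, safety=2.0):
    """Split when the local Jeans mass is no longer resolved by the
    SPH kernel mass (hypothetical neighbour count and safety factor)."""
    return safety * n_neighbours * particle_mass_g > jeans_mass(
        temperature_K, density_gcc)

m_j = jeans_mass(10.0, 1e-19)   # ~10 K molecular gas at 1e-19 g/cm^3
```

    As the gas density rises during collapse the Jeans mass falls, so particles that were adequate early on eventually fail the check and are split, which is how the simulation self-regulates its resolution.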

  13. Parallel implementation of the particle simulation method with dynamic load balancing: Toward realistic geodynamical simulation

    NASA Astrophysics Data System (ADS)

    Furuichi, M.; Nishiura, D.

    2015-12-01

    Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are well suited to problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, useful for tracking history-dependent properties (e.g. rheology) of the material. These potential advantages over mesh-based methods offer effective numerical applications to geophysical flows and tectonic processes such as tsunamis with free surfaces and floating bodies, magma intrusion with rock fracture, and shear-zone pattern formation in granular deformation. To investigate such geodynamical problems with particle-based methods, millions to billions of particles are required for a realistic simulation. Parallel computing is therefore important for handling such a huge computational cost. An efficient parallel implementation of the SPH and DEM methods is, however, known to be difficult, especially on distributed-memory architectures. Lagrangian methods inherently exhibit a workload imbalance problem when parallelized with domains fixed in space, because particles move around and workloads change during the simulation. Dynamic load balancing is therefore a key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation technique for the SPH and DEM methods utilizing dynamic load balancing algorithms, aimed at high-resolution simulations over large domains on massively parallel supercomputer systems. Our method treats the imbalance in execution time across MPI processes as the nonlinear term of the parallel domain decomposition and minimizes it with a Newton-like iteration. To allow flexible domain decomposition in space, the slice-grid algorithm is used. Numerical tests show that our
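
    The slice-grid idea can be sketched in one dimension: choose slice boundaries so that each domain holds an equal share of the particles, a static analogue of the dynamic, execution-time-based balancing described above (particle positions are hypothetical):

```python
def slice_grid(positions, n_domains):
    """Place slice boundaries so each domain holds an equal share of
    the particles (1-D sketch of the slice-grid decomposition)."""
    xs = sorted(positions)
    n = len(xs)
    # cut midway between the particles that straddle each quantile
    return [0.5 * (xs[k * n // n_domains - 1] + xs[k * n // n_domains])
            for k in range(1, n_domains)]

# eight particles clustered to the left, divided between two domains
positions = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 2.0, 3.0]
cuts = slice_grid(positions, 2)
```

    In the full method the boundaries move every few steps as particles migrate, and measured execution times, rather than raw particle counts, drive the Newton-like adjustment.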

  14. The advance of non-invasive detection methods in osteoarthritis

    NASA Astrophysics Data System (ADS)

    Dai, Jiao; Chen, Yanping

    2011-06-01

    Osteoarthritis (OA) is one of the most prevalent chronic diseases and adversely affects patients' quality of life and finances. Detection and evaluation technology can provide basic information for early treatment. A variety of imaging methods in OA are reviewed, such as conventional X-ray, computed tomography (CT), ultrasound (US), magnetic resonance imaging (MRI) and near-infrared spectroscopy (NIRS). Among the existing imaging modalities, the spatial resolution of X-ray is extremely high; CT is a three-dimensional method with high density resolution; US, as an evaluation method for knee OA, sensitively discriminates between normal and degenerative cartilage; MRI, a sensitive and nonionizing method, is suitable for the detection of early OA, but it is too expensive for routine use; NIRS is a safe, low-cost modality and is also good at detecting early-stage OA. In short, each method has its own advantages, but NIRS has the broadest application prospects: it is likely to enter daily clinical routine and to become the gold standard for diagnostic detection.

  15. ADVANCED URBANIZED METEOROLOGICAL MODELING AND AIR QUALITY SIMULATIONS WITH CMAQ AT NEIGHBORHOOD SCALES

    EPA Science Inventory

    We present results from a study testing the new boundary layer parameterization method, the canopy drag approach (DA) which is designed to explicitly simulate the effects of buildings, street and tree canopies on the dynamic, thermodynamic structure and dispersion fields in urban...

  16. Design and development of a virtual reality simulator for advanced cardiac life support training.

    PubMed

    Vankipuram, Akshay; Khanal, Prabal; Ashby, Aaron; Vankipuram, Mithra; Gupta, Ashish; DrummGurnee, Denise; Josey, Karen; Smith, Marshall

    2014-07-01

    The use of virtual reality (VR) training tools for medical education could lead to improvements in the skills of clinicians while providing economic incentives for healthcare institutions. The use of VR tools can also mitigate some of the drawbacks currently associated with providing medical training in a traditional clinical environment, such as scheduling conflicts and the need for specialized equipment (e.g., high-fidelity manikins). This paper presents the details of the framework and the development methodology associated with a VR-based training simulator for advanced cardiac life support, a time-critical, team-based medical scenario. In addition, we also report the key findings of a usability study conducted to assess the efficacy of various features of this VR simulator through a postuse questionnaire administered to various care providers. The usability questionnaires were completed by two groups that used two different versions of the VR simulator: one group used the VR trainer with all its features, and the other used a minified version with certain immersive features disabled. We found an increase in usability scores from the minified group to the full VR group.

  17. Annoyance response to simulated advanced turboprop aircraft interior noise containing tonal beats

    NASA Technical Reports Server (NTRS)

    Leatherwood, Jack D.

    1987-01-01

    A study is done to investigate the effects on subjective annoyance of simulated advanced turboprop (ATP) interior noise environments containing tonal beats. The simulated environments consisted of low-frequency tones superimposed on a turbulent-boundary-layer noise spectrum. The variables used in the study included propeller tone frequency (100 to 250 Hz), propeller tone levels (84 to 105 dB), and tonal beat frequency (0 to 1.0 Hz). Results indicated that propeller tones within the simulated ATP environment resulted in increased annoyance response that was fully predictable in terms of the increase in overall sound pressure level due to the tones. Implications for ATP aircraft include the following: (1) the interior noise environment with propeller tones is more annoying than an environment without tones if the tone is present at a level sufficient to increase the overall sound pressure level; (2) the increased annoyance due to the fundamental propeller tone frequency without harmonics is predictable from the overall sound pressure level; and (3) no additional noise penalty due to the perception of single discrete-frequency tones and/or beats was observed.

  18. Efficient Multi-Dimensional Simulation of Quantum Confinement Effects in Advanced MOS Devices

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A.; Ancona, Mario G.; Rafferty, Conor S.; Yu, Zhiping

    2000-01-01

    We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thickness down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short channel effects (in particular, drain-induced barrier lowering) are noticeably increased.

  19. Efficient Multi-Dimensional Simulation of Quantum Confinement Effects in Advanced MOS Devices

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A.; Rafferty, Conor S.; Ancona, Mario G.; Yu, Zhi-Ping

    2000-01-01

    We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thickness down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short channel effects (in particular, drain-induced barrier lowering) are noticeably increased.

  20. Axon voltage-clamp simulations. I. Methods and tests.

    PubMed Central

    Moore, J W; Ramón, F; Joyner, R W

    1975-01-01

    This is the first in a series of four papers in which we present the numerical simulation of the application of the voltage clamp technique to excitable cells. In this paper we describe the application of the Crank-Nicolson (1947) method for the solution of the parabolic partial differential equations that describe a cylindrical cell in which the ionic conductances are functions of voltage and time (Hodgkin and Huxley, 1952). This method is compared with other methods in terms of accuracy and speed of solution for a propagated action potential. In addition, differential equations representing a simple voltage-clamp electronic circuit are presented. Using the voltage clamp circuit equations, we simulate the voltage clamp of a single isopotential membrane patch and show how the parameters of the circuit affect the transient response of the patch to a step change in the control potential. The simulation methods presented in this series of papers allow the evaluation of voltage clamp control of an excitable cell or a syncytium of excitable cells. To the extent that membrane parameters and geometrical factors can be determined, the methods presented here provide solutions for the voltage profile as a function of time. PMID:1174640
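
    The Crank-Nicolson discretization named above can be sketched for the simplest parabolic problem, u_t = u_xx with fixed end values, using the standard Thomas tridiagonal solve; this is a generic sketch, not the papers' full cable-equation system with voltage-dependent conductances:

```python
def crank_nicolson_step(u, alpha):
    """One Crank-Nicolson step for u_t = u_xx with fixed end values,
    where alpha = dt / (2*dx**2).  The implicit tridiagonal system is
    solved with the Thomas algorithm."""
    n = len(u)
    m = n - 2                                    # number of interior points
    # explicit half of the scheme forms the right-hand side
    rhs = [u[i] + alpha * (u[i+1] - 2.0*u[i] + u[i-1]) for i in range(1, n-1)]
    rhs[0] += alpha * u[0]                       # fold in the fixed end values
    rhs[-1] += alpha * u[-1]
    lo, diag, up = -alpha, 1.0 + 2.0 * alpha, -alpha
    # Thomas forward sweep
    cp, dp = [0.0] * m, [0.0] * m
    cp[0], dp[0] = up / diag, rhs[0] / diag
    for i in range(1, m):
        denom = diag - lo * cp[i-1]
        cp[i] = up / denom
        dp[i] = (rhs[i] - lo * dp[i-1]) / denom
    # back substitution into a copy (end values stay fixed)
    new = list(u)
    new[m] = dp[m-1]
    for i in range(m - 2, -1, -1):
        new[i+1] = dp[i] - cp[i] * new[i+2]
    return new

# a linear profile is a steady state, so one step must leave it unchanged
u1 = crank_nicolson_step([0.0, 0.25, 0.5, 0.75, 1.0], 0.4)
```

    Crank-Nicolson's appeal here is that it is second-order accurate in time and unconditionally stable, which matters for the stiff membrane equations the papers actually solve.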

  1. Methods for simulating solute breakthrough curves in pumping groundwater wells

    USGS Publications Warehouse

    Starn, J. Jeffrey; Bagtzoglou, Amvrossios C.; Robbins, Gary A.

    2012-01-01

    In modeling there is always a trade-off between execution time and accuracy. For gradient-based parameter estimation methods, where a simulation model is run repeatedly to populate a Jacobian (sensitivity) matrix, there exists a need for rapid simulation methods of known accuracy that can decrease execution time, and thus make the model more useful without sacrificing accuracy. Convolution-based methods can be executed rapidly for any desired input function once the residence-time distribution is known. The residence-time distribution can be calculated efficiently using particle tracking, but particle tracking can be ambiguous near a pumping well if the grid is too coarse. We present several embedded analytical expressions for improving particle tracking near a pumping well and compare them with a finely gridded finite-difference solution in terms of accuracy and CPU usage. Even though the embedded analytical approach can improve particle tracking near a well, particle methods reduce, but do not eliminate, reliance on a grid because velocity fields typically are calculated on a grid, and additional error is incurred using linear interpolation of velocity. A dilution rate can be calculated for a given grid and pumping well to determine if the grid is sufficiently refined. Embedded analytical expressions increase accuracy but add significantly to CPU usage. Structural error introduced by the numerical solution method may affect parameter estimates.
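
    The convolution-based approach can be sketched directly: once the residence-time distribution is known, the breakthrough curve is the input concentration history convolved with it (the unit-step input and two-bin distribution below are hypothetical):

```python
def breakthrough(input_conc, rtd, dt):
    """Breakthrough curve as the discrete convolution of the input
    concentration history with the residence-time distribution."""
    out = []
    for t in range(len(input_conc)):
        c = sum(input_conc[t - k] * rtd[k]
                for k in range(min(t + 1, len(rtd))))
        out.append(c * dt)
    return out

# a unit step input and a simple two-bin residence-time distribution
curve = breakthrough([1.0] * 5, [0.5, 0.5], 1.0)
```

    This is why the method is so cheap inside a parameter-estimation loop: the expensive particle tracking that yields the residence-time distribution is done once, and each new input function costs only a convolution.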

  2. Some Developments of the Equilibrium Particle Simulation Method for the Direct Simulation of Compressible Flows

    NASA Technical Reports Server (NTRS)

    Macrossan, M. N.

    1995-01-01

    The direct simulation Monte Carlo (DSMC) method is the established technique for the simulation of rarefied gas flows. In some flows of engineering interest, such as occur for aero-braking spacecraft in the upper atmosphere, DSMC can become prohibitively expensive in CPU time because some regions of the flow, particularly on the windward side of blunt bodies, become collision dominated. As an alternative to using a hybrid DSMC and continuum gas solver (Euler or Navier-Stokes solver), this work is aimed at making the particle simulation method efficient in the high density regions of the flow. A high density, infinite collision rate limit of DSMC, the Equilibrium Particle Simulation method (EPSM), was proposed some 15 years ago. EPSM is developed here for the flow of a gas consisting of many different species of molecules and is shown to be computationally efficient (compared to DSMC) for high collision rate flows. It thus offers great potential as part of a hybrid DSMC/EPSM code which could handle flows in the transition regime between rarefied gas flows and fully continuum flows. As a first step towards this goal a pure EPSM code is described. The next step of combining DSMC and EPSM is not attempted here but should be straightforward. EPSM and DSMC are applied to Taylor-Couette flow with Kn = 0.02 and 0.0133, and S(omega) = 3. Toroidal vortices develop for both methods but some differences are found, as might be expected for the given flow conditions. EPSM appears to be less sensitive to the sequence of random numbers used in the simulation than is DSMC and may also be more dissipative. The question of the origin and the magnitude of the dissipation in EPSM is addressed. It is suggested that this analysis is also relevant to DSMC when the usual accuracy requirements on the cell size and decoupling time step are relaxed in the interests of computational efficiency.

  3. Protein Microarrays with Novel Microfluidic Methods: Current Advances

    PubMed Central

    Dixit, Chandra K.; Aguirre, Gerson R.

    2014-01-01

    Microfluidic-based micromosaic technology has allowed the patterning of recognition elements in restricted micrometer-scale areas with high precision. This controlled patterning enabled the development of highly multiplexed arrays for multiple-analyte detection. This arraying technology was first introduced at the beginning of 2001 and holds tremendous potential to revolutionize microarray development and analyte detection. Later, several microfluidic methods were developed for microarray application. In this review we discuss these novel methods and approaches, which leverage the properties of microfluidic technologies to significantly improve various physical aspects of microarray technology, such as enhanced imprinting homogeneity, stability of the immobilized biomolecules, decreased assay times, and reduction of costs and of bulky instrumentation.

  4. Use of advanced particle methods in modeling space propulsion and its supersonic expansions

    NASA Astrophysics Data System (ADS)

    Borner, Arnaud

    This research discusses the use of advanced kinetic particle methods such as Molecular Dynamics (MD) and direct simulation Monte Carlo (DSMC) to model space propulsion systems such as electrospray thrusters and their supersonic expansions. MD simulations are performed to model an electrospray thruster for the ionic liquid (IL) EMIM-BF4 using coarse-grained (CG) potentials. The model initially features a constant electric field applied in the longitudinal direction. Two coarse-grained potentials are compared, and the effective-force CG (EFCG) potential is found to predict the formation of the Taylor cone, the cone-jet, and other extrusion modes, at electric fields and mass flow rates similar to those observed in experiments on an IL-fed capillary-tip-extractor system, better than the simple CG potential does. Later, one-dimensional and fully transient three-dimensional electric fields, the latter obtained by solving Poisson's equation to account for the space charge at each timestep, are computed by coupling the MD model to a Poisson solver. It is found that the inhomogeneous electric field, as well as that of the IL space charge, improves agreement between modeling and experiment. The boundary conditions (BCs) are found to have a substantial impact on the potential and electric field, and the tip BC is introduced and compared to the two previous BCs, named plate and needle, showing good improvement by reducing the unrealistically high radial electric fields generated in the vicinity of the capillary tip. The influence of the different boundary condition models on charged-species currents as a function of the mass flow rate is studied, and it is found that a constant electric field model gives agreement similar to the more rigorous and computationally expensive tip boundary condition at lower flow rates.
However, at higher mass flow rates the MD simulations with the constant electric field produce extruded particles with higher Coulomb energy per ion, consistent with

  5. Multigrid Methods for Fully Implicit Oil Reservoir Simulation

    NASA Technical Reports Server (NTRS)

    Molenaar, J.

    1996-01-01

    In this paper we consider the simultaneous flow of oil and water in reservoir rock. This displacement process is modeled by two basic equations: the material balance or continuity equations and the equation of motion (Darcy's law). For the numerical solution of this system of nonlinear partial differential equations there are two approaches: the fully implicit or simultaneous solution method and the sequential solution method. In the sequential solution method the system of partial differential equations is manipulated to give an elliptic pressure equation and a hyperbolic (or parabolic) saturation equation. In the IMPES approach the pressure equation is first solved, using values for the saturation from the previous time level. Next the saturations are updated by some explicit time stepping method; this implies that the method is only conditionally stable. For the numerical solution of the linear, elliptic pressure equation multigrid methods have become an accepted technique. On the other hand, the fully implicit method is unconditionally stable, but it has the disadvantage that in every time step a large system of nonlinear algebraic equations has to be solved. The most time-consuming part of any fully implicit reservoir simulator is the solution of this large system of equations. Usually this is done by Newton's method. The resulting systems of linear equations are then either solved by a direct method or by some conjugate gradient type method. In this paper we consider the possibility of applying multigrid methods for the iterative solution of the systems of nonlinear equations. There are two ways of using multigrid for this job: either we use a nonlinear multigrid method or we use a linear multigrid method to deal with the linear systems that arise in Newton's method. So far only a few authors have reported on the use of multigrid methods for fully implicit simulations. 
A two-level FAS algorithm is presented for the black-oil equations, and linear multigrid for
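    The nonlinear multigrid idea discussed above can be sketched with a two-grid FAS (Full Approximation Scheme) cycle on a 1D model problem. The smoother, transfer operators, and test equation below are illustrative stand-ins, not the black-oil discretization:

```python
import numpy as np

# Two-grid FAS sketch for the model problem -u'' + u^3 = f on (0,1),
# u(0) = u(1) = 0, discretized by central differences.

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (-(u[2:] - 2*u[1:-1] + u[:-2]) / h**2 + u[1:-1]**3)
    return r

def smooth(u, f, h, sweeps=3):
    # Nonlinear Gauss-Seidel: one scalar Newton step per grid point.
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            F = -(u[i+1] - 2*u[i] + u[i-1]) / h**2 + u[i]**3 - f[i]
            dF = 2.0 / h**2 + 3.0 * u[i]**2
            u[i] -= F / dF
    return u

def restrict(v):   # injection onto coincident coarse points
    return v[::2].copy()

def prolong(v):    # linear interpolation back to the fine grid
    w = np.zeros(2 * len(v) - 1)
    w[::2] = v
    w[1::2] = 0.5 * (v[:-1] + v[1:])
    return w

def fas_two_grid(u, f, h, cycles=20):
    for _ in range(cycles):
        u = smooth(u, f, h)
        uH = restrict(u)
        # FAS coarse right-hand side: f_H = A_H(R u) + R(f - A_h(u))
        fH = restrict(residual(u, f, h))
        H = 2 * h
        fH[1:-1] += -(uH[2:] - 2*uH[1:-1] + uH[:-2]) / H**2 + uH[1:-1]**3
        vH = smooth(uH.copy(), fH, H, sweeps=50)   # approximate coarse solve
        u += prolong(vH - uH)                      # coarse-grid correction
        u = smooth(u, f, h)
    return u

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0, 1, n)
u_exact = np.sin(np.pi * x)
f = (np.pi**2) * u_exact + u_exact**3   # manufactured right-hand side
u = fas_two_grid(np.zeros(n), f, h)
err = np.max(np.abs(u - u_exact))       # dominated by O(h^2) discretization error
```

    The key FAS feature visible here is that the full coarse-grid *solution* (not just a correction) satisfies a modified coarse problem, which is what allows the scheme to handle the nonlinearity without global linearization.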

  6. Application of advanced methods for the prognosis of production energy consumption

    NASA Astrophysics Data System (ADS)

    Stetter, R.; Witczak, P.; Staiger, B.; Spindler, C.; Hertel, J.

    2014-12-01

    This paper, based on a current research project, describes the application of advanced methods frequently used in fault-tolerant control to the prognosis of production energy consumption. Today, the energy a product requires during its operation is the subject of many activities in research and development. However, the energy necessary for the production of goods is often not analysed in comparable depth. In the field of electronics, studies conclude that about 80% of the total energy used by a product stems from its production [1]. The energy consumption in production is determined very early in the product development process by designers and engineers, for example through the selection of raw materials, explicit and implicit requirements concerning the manufacturing and assembly processes, or decisions concerning the product architecture. Today, developers and engineers have at their disposal many design and simulation tools that can help predict the energy consumption during operation relatively accurately. In contrast, tools that predict the energy consumption in production and disposal are not available. This paper presents an explorative study of the use of methods such as Fuzzy Logic to predict production energy consumption early in the product development process.
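    As a rough illustration of how a fuzzy-logic prognosis of production energy might look, the following sketch implements a single Mamdani-style rule. All membership functions, input variables, and energy values are hypothetical, not taken from the project:

```python
# Tiny Mamdani-style fuzzy inference step mapping (hypothetical) early design
# parameters to a production-energy estimate.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def predict_energy(part_mass_kg, machining_time_min):
    # Fuzzify the (hypothetical) inputs.
    mass_high = tri(part_mass_kg, 2.0, 5.0, 8.0)
    time_long = tri(machining_time_min, 30.0, 60.0, 90.0)
    # Rule: IF mass is high AND machining time is long THEN energy is high.
    activation = min(mass_high, time_long)
    # Crude weighted-average defuzzification over a two-term output partition
    # ("moderate" centered at 20 kWh, "high" at 50 kWh).
    w_mod, w_high = 1.0 - activation, activation
    return (w_mod * 20.0 + w_high * 50.0) / (w_mod + w_high)

estimate = predict_energy(5.0, 60.0)   # both memberships peak -> 50.0 kWh
```

    A real prognosis tool would use many rules elicited from production experts and calibrated memberships, but the mechanism — fuzzify, fire rules, defuzzify — is the one sketched here.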

  7. Recent advances in the modeling of plasmas with the Particle-In-Cell methods

    NASA Astrophysics Data System (ADS)

    Vay, Jean-Luc; Lehe, Remi; Vincenti, Henri; Godfrey, Brendan; Lee, Patrick; Haber, Irv

    2015-11-01

    The Particle-In-Cell (PIC) approach is the method of choice for self-consistent simulations of plasmas from first principles. The fundamentals of the PIC method were established decades ago, but improvements or variations are continuously being proposed. We report on several recent advances in PIC-related algorithms, including: (a) detailed analysis of the numerical Cherenkov instability and its remediation, (b) analytic pseudo-spectral electromagnetic solvers in Cartesian and cylindrical (with azimuthal modes decomposition) geometries, (c) arbitrary-order finite-difference and generalized pseudo-spectral Maxwell solvers, (d) novel analysis of Maxwell's solvers' stencil variation and truncation, in application to domain decomposition strategies and implementation of Perfectly Matched Layers in high-order and pseudo-spectral solvers. Work supported by US-DOE Contract DE-AC02-05CH11231 and the US-DOE SciDAC program ComPASS. Used resources of NERSC, supported by US-DOE Contract DE-AC02-05CH11231.
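    The basic PIC cycle these algorithms refine — deposit charge, solve the field, gather it back to particles, push — can be sketched in 1D electrostatic form. This is a generic textbook-style sketch in normalized units (electron charge -1, neutralizing ion background), not the authors' code:

```python
import numpy as np

# Minimal 1D electrostatic PIC step: deposit / field solve / gather / push.
rng = np.random.default_rng(0)
Ng, Np, L, dt = 64, 10000, 2 * np.pi, 0.1
dx = L / Ng
xp = rng.uniform(0, L, Np)          # particle positions
vp = rng.normal(0, 1, Np)           # particle velocities

def pic_step(xp, vp):
    # 1) Deposit charge on the grid with cloud-in-cell (linear) weighting.
    g = xp / dx
    i = np.floor(g).astype(int) % Ng
    w = g - np.floor(g)
    rho = np.zeros(Ng)
    np.add.at(rho, i, 1 - w)
    np.add.at(rho, (i + 1) % Ng, w)
    rho = rho / (Np / Ng) - 1.0      # subtract neutralizing ion background
    # 2) Solve Poisson's equation (periodic) with a spectral (FFT) solver.
    k = 2 * np.pi * np.fft.fftfreq(Ng, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:]**2
    E = -np.real(np.fft.ifft(1j * k * phi_k))
    # 3) Gather the field to particle positions and push (a production code
    #    would use leapfrog rather than this explicit Euler step).
    Ep = (1 - w) * E[i] + w * E[(i + 1) % Ng]
    vp = vp - Ep * dt                # electrons carry charge -1
    xp = (xp + vp * dt) % L
    return xp, vp, rho

xp, vp, rho = pic_step(xp, vp)
```

    The pseudo-spectral Maxwell solvers in the abstract generalize step 2 of this cycle; the Cherenkov-instability work concerns the error terms such discrete cycles introduce for relativistic beams.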

  8. Advanced Methods for the Solution of Differential Equations.

    ERIC Educational Resources Information Center

    Goldstein, Marvin E.; Braun, Willis H.

    This is a textbook, originally developed for scientists and engineers, which stresses the actual solutions of practical problems. Theorems are precisely stated, but the proofs are generally omitted. Sample contents include first-order equations, equations in the complex plane, irregular singular points, and numerical methods. A more recent idea,…

  9. Origins, Methods and Advances in Qualitative Meta-Synthesis

    ERIC Educational Resources Information Center

    Nye, Elizabeth; Melendez-Torres, G. J.; Bonell, Chris

    2016-01-01

    Qualitative research is a broad term encompassing many methods. Critiques of the field of qualitative research argue that while individual studies provide rich descriptions and insights, the absence of connections drawn between studies limits their usefulness. In response, qualitative meta-synthesis serves as a design to interpret and synthesise…

  10. Quadrature Moments Method for the Simulation of Turbulent Reactive Flows

    NASA Technical Reports Server (NTRS)

    Raman, Venkatramanan; Pitsch, Heinz; Fox, Rodney O.

    2003-01-01

    A sub-filter model for reactive flows, namely the DQMOM model, was formulated for Large Eddy Simulation (LES) using the filtered mass density function (FDF). Transport equations required to determine the location and size of the delta peaks were then formulated for a two-peak decomposition of the FDF. The DQMOM scheme was implemented in an existing structured-grid LES solver. Simulations of a scalar shear layer based on an experimental configuration showed that the first and second moments of both reactive and inert scalars are in good agreement with a conventional Lagrangian scheme that evolves the same FDF. Comparisons with LES computations performed using a laminar-chemistry assumption for the reactive scalar show that the new method provides vast improvements at minimal computational cost. Currently, the DQMOM model is being implemented for use with the progress variable/mixture fraction model of Pierce. Comparisons with experimental results and LES simulations using a single environment for the progress variable are planned. Future studies will aim at understanding the effect of increasing the number of environments on predictions.
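    The delta-peak idea behind DQMOM and related quadrature-moment methods can be illustrated by recovering a two-peak (two-node) quadrature from the first four moments of a distribution. The closed form below is the standard two-node construction, shown on a hypothetical test case (standard-normal moments):

```python
import numpy as np

# Represent a distribution by two delta peaks whose weights and abscissas
# reproduce its moments m0..m3 exactly.

def two_node_quadrature(m0, m1, m2, m3):
    mean = m1 / m0
    c2 = m2 / m0 - mean**2                      # central variance
    c3 = m3 / m0 - 3 * mean * c2 - mean**3      # central third moment
    half_skew = c3 / (2 * c2)
    spread = np.sqrt(half_skew**2 + c2)
    x1 = mean + half_skew - spread              # abscissas (peak locations)
    x2 = mean + half_skew + spread
    w1 = m0 * (x2 - mean) / (x2 - x1)           # weights from matching m0, m1
    w2 = m0 - w1
    return (w1, x1), (w2, x2)

# Moments of a standard normal: m0=1, m1=0, m2=1, m3=0.
(w1, x1), (w2, x2) = two_node_quadrature(1.0, 0.0, 1.0, 0.0)
```

    For these symmetric moments the peaks land at +/-1 with weights 1/2. In DQMOM one transports the weights and abscissas directly instead of the moments, which is what the transport equations mentioned in the abstract provide.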

  11. A mixed RKPM/RBF immersed method for FSI simulations

    NASA Astrophysics Data System (ADS)

    Giometto, M.; Fang, J.; Putti, M.; Saetta, A.; Lanzoni, S.; Parlange, M. B.

    2012-04-01

    Simulating fluid-structure interaction still represents a challenging multiphysics application in the framework of Civil and Environmental Engineering. Techniques to couple the two continua have become more and more sophisticated; they all aim to handle the two different frames of reference efficiently and to avoid re-meshing when the element aspect ratio becomes unacceptable (e.g. under large deformations). To overcome such problems we propose a mixed RKPM/RBF immersed method in which a Lagrangian meshless solid domain moves on top of a background Eulerian fluid mesh that spans the entire computational domain. This method is similar to the original Immersed Boundary Method introduced by C. Peskin, except that the structure has the same spatial dimension as the fluid domain, and therefore the effects of the fluid-embedded bodies are summarized in a volumetric source term. The governing equations for the viscous fluid are discretized and solved on a regular Cartesian mesh using a pseudo-spectral approach, while the solid equations are solved by means of RKPM basis functions. The use of a Reproducing Kernel Particle Method for the solid domain enables us to easily handle large deformations without the typical mesh distortion of Finite Element Methods, while still providing sufficient accuracy in the solution. At the moment we are validating the code by running simple-geometry, low-Reynolds-number DNS simulations; a further step will be to include a subgrid-scale model in the fluid formulation and run simulations of turbulent flows at high Reynolds numbers, to study the effects of flexible structures (e.g. trees, bridges, towers) on the Atmospheric Boundary Layer.
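    The volumetric source-term coupling mentioned above can be sketched in 1D using Peskin's smoothed delta function to spread Lagrangian forces onto the Eulerian grid. The solid-point positions and forces below are arbitrary illustrative values:

```python
import numpy as np

# Spread forces carried by Lagrangian solid points onto an Eulerian fluid grid
# as a volumetric source term, via a smoothed (discrete) delta function.

def peskin_delta(r):
    """4-point cosine discrete delta (argument in grid units)."""
    r = np.abs(r)
    return np.where(r < 2.0, 0.25 * (1.0 + np.cos(np.pi * r / 2.0)), 0.0)

N, h = 64, 1.0 / 64
x_grid = (np.arange(N) + 0.5) * h            # Eulerian cell centers
X_solid = np.array([0.40, 0.50, 0.60])       # Lagrangian point positions
F_solid = np.array([1.0, -2.0, 1.5])         # forces at those points

# Spread: f(x_i) = sum_k F_k * delta_h(x_i - X_k), with delta_h of width O(h).
f_grid = np.zeros(N)
for Xk, Fk in zip(X_solid, F_solid):
    f_grid += Fk * peskin_delta((x_grid - Xk) / h) / h

# The discrete delta conserves total force: sum_i f(x_i) * h == sum_k F_k.
total = f_grid.sum() * h
```

    The same delta function is used in the opposite direction to interpolate the fluid velocity back to the solid points, which closes the two-way coupling without any conforming mesh.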

  12. Advanced Methods for Aircraft Engine Thrust and Noise Benefits: Nozzle-Inlet Flow Analysis

    NASA Technical Reports Server (NTRS)

    Gilinsky, Mikhail; Morgan, Morris H.; Povitsky, Alex; Schkolnikov, Natalia; Njoroge, Norman; Coston, Calvin; Blankson, Isaiah M.

    2001-01-01

    The Fluid Mechanics and Acoustics Laboratory at Hampton University (HU/FM&AL), jointly with the NASA Glenn Research Center, has conducted four connected subprojects under the reporting project. Basically, the HU/FM&AL team has been involved in joint research aimed at the theoretical explanation of experimental facts and the creation of accurate numerical simulation techniques and prediction theory for the solution of current problems in propulsion systems of interest to the Navy and NASA. This work is also supported by joint research between the NASA GRC and the Institute of Mechanics at Moscow State University (IM/MSU) in Russia under a CRDF grant. The research is focused on a wide range of problems in the propulsion field, as well as on experimental testing and theoretical and numerical simulation analyses for advanced aircraft and rocket engines. The FM&AL team uses analytical methods, numerical simulations, and, where possible, experimental tests at the Hampton University campus. The fundamental idea uniting these subprojects is to use nontraditional 3D corrugated and composite nozzle and inlet designs and additional methods for exhaust jet noise reduction without essential thrust loss, and even with thrust augmentation. These subprojects are: (1) Aeroperformance and acoustics of Bluebell-shaped and Telescope-shaped designs; (2) An analysis of sharp-edged nozzle exit designs for effective fuel injection into the flow stream in air-breathing engines: triangular-round, diamond-round and other nozzles; (3) Measurement technique improvement for the HU Low Speed Wind Tunnel; a new course in the field of aerodynamics, teaching and training of HU students; experimental tests of Mobius-shaped screws: research and training; (4) Supersonic inlet shape optimization. The main outcomes during this reporting period are: (1) Publications: AIAA Paper #00-3170 was presented at the 36th AIAA/ASME/SAE/ASEE Joint Propulsion Conference, 17-19 June, 2000, Huntsville, AL. The AIAA

  13. Innovative and Advanced Coupled Neutron Transport and Thermal Hydraulic Method (Tool) for the Design, Analysis and Optimization of VHTR/NGNP Prismatic Reactors

    SciTech Connect

    Rahnema, Farzad; Garimeela, Srinivas; Ougouag, Abderrafi; Zhang, Dingkang

    2013-11-29

    This project will develop a 3D, advanced coarse mesh transport method (COMET-Hex) for steady-state and transient analyses in advanced very high-temperature reactors (VHTRs). The project will lead to a coupled neutronics and thermal hydraulic (T/H) core simulation tool with fuel depletion capability. The computational tool will be developed in hexagonal geometry, based solely on transport theory without (spatial) homogenization in complicated 3D geometries. In addition to the hexagonal geometry extension, collaborators will concurrently develop three additional capabilities to increase the code’s versatility as an advanced and robust core simulator for VHTRs. First, the project team will develop and implement a depletion method within the core simulator. Second, the team will develop an elementary (proof-of-concept) 1D time-dependent transport method for efficient transient analyses. The third capability will be a thermal hydraulic method coupled to the neutronics transport module for VHTRs. Current advancements in reactor core design are pushing VHTRs toward greater core and fuel heterogeneity to pursue higher burn-ups, efficiently transmute used fuel, maximize energy production, and improve plant economics and safety. As a result, an accurate and efficient neutron transport method, with capabilities to treat heterogeneous burnable poison effects, is highly desirable for predicting VHTR neutronics performance. This research project’s primary objective is to advance the state of the art for reactor analysis.

  14. Recent advances in high-performance modeling of plasma-based acceleration using the full PIC method

    NASA Astrophysics Data System (ADS)

    Vay, J.-L.; Lehe, R.; Vincenti, H.; Godfrey, B. B.; Haber, I.; Lee, P.

    2016-09-01

    Numerical simulations have been critical in the recent rapid developments of plasma-based acceleration concepts. Among the various available numerical techniques, the particle-in-cell (PIC) approach is the method of choice for self-consistent simulations from first principles. The fundamentals of the PIC method were established decades ago, but improvements or variations are continuously being proposed. We report on several recent advances in PIC-related algorithms that are of interest for application to plasma-based accelerators, including (a) detailed analysis of the numerical Cherenkov instability and its remediation for the modeling of plasma accelerators in laboratory and Lorentz boosted frames, (b) analytic pseudo-spectral electromagnetic solvers in Cartesian and cylindrical (with azimuthal modes decomposition) geometries, and (c) novel analysis of Maxwell's solvers' stencil variation and truncation, in application to domain decomposition strategies and implementation of perfectly matched layers in high-order and pseudo-spectral solvers.

  15. Protein Microarrays with Novel Microfluidic Methods: Current Advances

    PubMed Central

    Dixit, Chandra K.; Aguirre, Gerson R.

    2014-01-01

    Microfluidic-based micromosaic technology has allowed the patterning of recognition elements in restricted micrometer-scale areas with high precision. This controlled patterning enabled the development of highly multiplexed arrays for multiple-analyte detection. This arraying technology was first introduced in early 2001 and holds tremendous potential to revolutionize microarray development and analyte detection. Since then, several microfluidic methods have been developed for microarray applications. In this review we discuss these novel methods and approaches, which leverage the properties of microfluidic technologies to significantly improve various physical aspects of microarray technology, such as imprinting homogeneity, stability of the immobilized biomolecules, assay times, and the cost and bulk of the instrumentation. PMID:27600343

  16. Microcanonical ensemble simulation method applied to discrete potential fluids

    NASA Astrophysics Data System (ADS)

    Sastre, Francisco; Benavides, Ana Laura; Torres-Arenas, José; Gil-Villegas, Alejandro

    2015-09-01

    In this work we extend the applicability of the microcanonical ensemble simulation method, originally proposed to study the Ising model [A. Hüller and M. Pleimling, Int. J. Mod. Phys. C 13, 947 (2002), 10.1142/S0129183102003693], to the case of simple fluids. An algorithm is developed that measures the transition-rate probabilities between macroscopic states; its advantage over conventional Monte Carlo NVT (MC-NVT) simulations is that a continuous range of temperatures is covered in a single run. For a given density, this new algorithm provides the inverse temperature, which can be parametrized as a function of the internal energy, and the isochoric heat capacity is then evaluated through a numerical derivative. As an illustrative example we consider a fluid composed of particles interacting via a square-well (SW) pair potential of variable range. Equilibrium internal energies and isochoric heat capacities are obtained with very high accuracy compared with data obtained from MC-NVT simulations. These results are important in the context of applying the Hüller-Pleimling method to discrete-potential systems, whose properties are based on generalizations of the SW and square-shoulder fluids.
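    The post-processing step described above — obtaining the isochoric heat capacity from a numerical derivative of the parametrized inverse temperature — can be sketched as follows. As a check, beta(E) is taken from the classical ideal gas (units with kB = 1), for which Cv = 3N/2 exactly:

```python
import numpy as np

# Given beta(E) along an isochore, the heat capacity follows from
# Cv = dE/dT = -beta^2 / (dbeta/dE), evaluated by a numerical derivative.

N = 100                                  # number of particles (test case)
E = np.linspace(50.0, 200.0, 400)        # internal-energy grid
beta = 3 * N / (2 * E)                   # ideal gas; a simulation would supply this

dbeta_dE = np.gradient(beta, E)          # second-order central differences
Cv = -beta**2 / dbeta_dE                 # should recover 3N/2 = 150
```

    In the actual method, `beta` would come from the measured transition-rate probabilities rather than a closed form, but the derivative step is the same.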

  17. Study on self-calibration angle encoder using simulation method

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Xue, Zi; Huang, Yao; Wang, Xiaona

    2016-01-01

    Angle measurement technology is very important in precision manufacturing, the optical industry, aerospace, aviation, and navigation. The angle encoder, which uses the concept of `subdivision of the full circle (2π rad = 360°)' and transforms an angle into a number of electronic pulses, is the most common instrument for angle measurement. To improve the accuracy of the angle encoder, a novel self-calibration method was proposed that enables the angle encoder to calibrate itself without an angle reference. For the study of the self-calibration method, an angle deviation curve over 0° to 360° was simulated with equal-weight Fourier components, and the self-calibration algorithm was applied to this deviation curve. The simulation result shows the relationship between the arrangement of multiple reading heads and the Fourier-component distribution of the encoder deviation curve. In addition, an actual self-calibrating angle encoder was calibrated against a polygon angle standard at the National Institute of Metrology, China. The experimental result indicates the actual effect of self-calibration on the Fourier-component distribution of the encoder deviation curve. Finally, the comparison between the simulated and experimental self-calibration results shows good consistency and proves the reliability of the self-calibrating angle encoder.
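    The multi-reading-head principle studied here can be reproduced in a few lines: averaging N equally spaced heads cancels every Fourier component of the deviation curve whose order is not a multiple of N. The deviation curve below is a hypothetical example, not the one used in the paper:

```python
import numpy as np

# Averaging N equally spaced reading heads around the circle suppresses all
# Fourier orders of the encoder deviation except multiples of N.

N_heads = 4
theta = np.linspace(0.0, 2 * np.pi, 3600, endpoint=False)

def deviation(t):
    # Hypothetical deviation curve: several Fourier orders with equal weights.
    return (np.sin(1 * t) + np.sin(2 * t) + np.sin(3 * t)
            + np.sin(4 * t) + np.sin(5 * t) + np.sin(8 * t))

# Average the deviation seen by heads offset by 2*pi/N around the circle.
avg = np.mean([deviation(theta + 2 * np.pi * k / N_heads)
               for k in range(N_heads)], axis=0)

# Only orders 4 and 8 (multiples of N_heads) survive the averaging.
residual = np.sin(4 * theta) + np.sin(8 * theta)
```

    This is why the arrangement of the reading heads determines which Fourier components of the deviation curve the encoder can calibrate out by itself.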

  18. High Performance Parallel Methods for Space Weather Simulations

    NASA Technical Reports Server (NTRS)

    Hunter, Paul (Technical Monitor); Gombosi, Tamas I.

    2003-01-01

    This is the final report of our NASA AISRP grant entitled 'High Performance Parallel Methods for Space Weather Simulations'. The main thrust of the proposal was to achieve significant progress towards new high-performance methods which would greatly accelerate global MHD simulations and eventually make it possible to develop first-principles based space weather simulations which run much faster than real time. We are pleased to report that with the help of this award we made major progress in this direction and developed the first parallel implicit global MHD code with adaptive mesh refinement. The main limitation of all earlier global space physics MHD codes was the explicit time stepping algorithm. Explicit time steps are limited by the Courant-Friedrichs-Lewy (CFL) condition, which essentially ensures that no information travels more than a cell size during a time step. This condition represents a non-linear penalty for highly resolved calculations, since finer grid resolution (and consequently smaller computational cells) not only results in more computational cells, but also in smaller time steps.
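    The CFL argument above can be made concrete with a small scaling calculation; the wave speed, Courant number, and grid sizes are illustrative, not values from an actual space-weather run:

```python
# For an explicit MHD scheme, stability requires dt <= C * dx / v_max, so the
# time step shrinks along with the cell size as the grid is refined.

def cfl_dt(dx_km, v_max_km_s, courant=0.8):
    """Largest stable explicit time step for cell size dx and fastest wave v_max."""
    return courant * dx_km / v_max_km_s

v_fast = 1000.0                       # fast magnetosonic speed, km/s (assumed)
dt_coarse = cfl_dt(1000.0, v_fast)    # 1000-km cells -> 0.8 s
dt_fine = cfl_dt(125.0, v_fast)       # 8x finer cells -> 0.1 s

# In 3D, refining by 8x multiplies the cell count by 8^3 and the number of
# time steps by 8 -- the "non-linear penalty" for highly resolved runs:
work_ratio = 8**3 * (dt_coarse / dt_fine)    # 4096x more work
```

    Implicit time stepping removes the CFL bound, which is why the parallel implicit AMR code described in the report could take much larger steps on refined grids.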

  19. Advanced hybrid particulate collector and method of operation

    SciTech Connect

    Miller, Stanley J.

    2003-04-08

    The present invention provides a device and method for controlling particulate air pollutants that combines filtration and electrostatic collection. The invention includes a chamber housing a plurality of rows of filter elements. Between the rows of filter elements are rows of high-voltage discharge electrodes, and between the rows of discharge electrodes and the rows of filter elements are grounded perforated plates that create electrostatic precipitation zones.

  20. Advanced and In Situ Analytical Methods for Solar Fuel Materials.

    PubMed

    Chan, Candace K; Tüysüz, Harun; Braun, Artur; Ranjan, Chinmoy; La Mantia, Fabio; Miller, Benjamin K; Zhang, Liuxian; Crozier, Peter A; Haber, Joel A; Gregoire, John M; Park, Hyun S; Batchellor, Adam S; Trotochaud, Lena; Boettcher, Shannon W

    2016-01-01

    In situ and operando techniques can play important roles in the development of better performing photoelectrodes, photocatalysts, and electrocatalysts by helping to elucidate crucial intermediates and mechanistic steps. The development of high throughput screening methods has also accelerated the evaluation of relevant photoelectrochemical and electrochemical properties for new solar fuel materials. In this chapter, several in situ and high throughput characterization tools are discussed in detail along with their impact on our understanding of solar fuel materials.