Science.gov

Sample records for advanced simulation methods

  1. Simulation methods for advanced scientific computing

    SciTech Connect

    Booth, T.E.; Carlson, J.A.; Forster, R.A.

    1998-11-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of the project was to create effective new algorithms for solving N-body problems by computer simulation. The authors concentrated on developing advanced classical and quantum Monte Carlo techniques. For simulations of phase transitions in classical systems, they produced a framework generalizing the famous Swendsen-Wang cluster algorithms for Ising and Potts models. For spin-glass-like problems, they demonstrated the effectiveness of an extension of the multicanonical method for the two-dimensional, random bond Ising model. For quantum mechanical systems, they generated a new method to compute the ground-state energy of systems of interacting electrons. They also improved methods to compute excited states when the diffusion quantum Monte Carlo method is used and to compute longer time dynamics when the stationary phase quantum Monte Carlo method is used.
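    The Swendsen-Wang idea mentioned above can be illustrated for the 2-D Ising model: bonds between equal neighboring spins are activated with probability 1 - exp(-2*beta*J), the resulting clusters are identified, and each cluster is flipped with probability 1/2. A minimal sketch (illustrative parameters and a simple union-find, not the authors' code):

```python
import numpy as np

def swendsen_wang_sweep(spins, beta, J=1.0, rng=None):
    """One Swendsen-Wang cluster update of a periodic 2-D Ising lattice."""
    rng = rng or np.random.default_rng()
    L = spins.shape[0]
    p_bond = 1.0 - np.exp(-2.0 * beta * J)
    # Union-find over lattice sites, with path halving
    parent = np.arange(L * L)
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    # Activate bonds between aligned neighbors with probability p_bond
    for x in range(L):
        for y in range(L):
            i = x * L + y
            for nx, ny in [((x + 1) % L, y), (x, (y + 1) % L)]:
                if spins[x, y] == spins[nx, ny] and rng.random() < p_bond:
                    parent[find(i)] = find(nx * L + ny)
    # Flip each cluster independently with probability 1/2
    flip = rng.random(L * L) < 0.5
    for x in range(L):
        for y in range(L):
            if flip[find(x * L + y)]:
                spins[x, y] *= -1
    return spins
```

Because whole clusters flip at once, the update decorrelates configurations near the critical point far faster than single-spin Metropolis sweeps.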

  2. Advanced numerical methods and software approaches for semiconductor device simulation

    SciTech Connect

    CAREY,GRAHAM F.; PARDHANANI,A.L.; BOVA,STEVEN W.

    2000-03-23

    In this article the authors concisely present several modern strategies applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of upwind and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG) methods, entropy variables, transformations, least-squares mixed methods, and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin. The treatment is representative rather than an exhaustive review, and several schemes are mentioned only briefly with appropriate references to the literature. Some of the methods have been applied to the semiconductor device problem, while others are still in the early stages of development for this class of applications. The authors include numerical examples from their recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been realized in this application area, and the authors emphasize the need for further work on analysis, data structures, and software to support adaptivity. Finally, they briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as that in the Sandia National Laboratories framework SIERRA.
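    The Scharfetter-Gummel discretization mentioned above writes the drift-diffusion flux between two nodes using the Bernoulli function, which upwinds the carrier density automatically and reduces smoothly to pure diffusion as the drift vanishes. A minimal 1-D sketch in normalized units (function and parameter names are illustrative):

```python
import numpy as np

def bernoulli(x):
    """B(x) = x / (exp(x) - 1), with a series value near x = 0."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-8
    return np.where(small, 1.0 - x / 2.0,
                    x / np.expm1(np.where(small, 1.0, x)))

def sg_flux(n_left, n_right, v, h, D=1.0):
    """Scharfetter-Gummel flux between two nodes a distance h apart.

    v is the local drift velocity; t = v*h/D is the cell Peclet number.
    The exponential weighting picks the upwind density in the drift limit
    and the central difference in the diffusion limit.
    """
    t = v * h / D
    return (D / h) * (bernoulli(-t) * n_left - bernoulli(t) * n_right)
```

For v = 0 the flux collapses to (D/h)(n_left - n_right); for strong positive drift it tends to v * n_left, the upwind value.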

  3. Advanced Numerical Methods and Software Approaches for Semiconductor Device Simulation

    DOE PAGES

    Carey, Graham F.; Pardhanani, A. L.; Bova, S. W.

    2000-01-01

    In this article we concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of “upwind” and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG), “entropy” variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. We have included numerical examples from our recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and we emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, we briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as those in the Sandia National Laboratories framework SIERRA.

  4. Advanced Method to Estimate Fuel Slosh Simulation Parameters

    NASA Technical Reports Server (NTRS)

    Schlee, Keith; Gangadharan, Sathya; Ristow, James; Sudermann, James; Walker, Charles; Hubert, Carl

    2005-01-01

    The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. The nutation of a spacecraft spinning about its minor axis typically grows exponentially and the rate of growth is characterized by the Nutation Time Constant (NTC). For launch vehicles using spin-stabilized upper stages, fuel slosh in the spacecraft propellant tanks is usually the primary source of energy dissipation. For analytical prediction of the NTC this fuel slosh is commonly modeled using simple mechanical analogies such as pendulums or rigid rotors coupled to the spacecraft. Identifying model parameter values which adequately represent the sloshing dynamics is the most important step in obtaining an accurate NTC estimate. Analytic determination of the slosh model parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices and elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the equations of motion for the mechanical analog are hand-derived, evaluated, and their results are compared with the experimental results. The proposed research is an effort to automate the process of identifying the parameters of the slosh model using a MATLAB/SimMechanics-based computer simulation of the experimental setup. Different parameter estimation and optimization approaches are evaluated and compared in order to arrive at a reliable and effective parameter identification process. To evaluate each parameter identification approach, a simple one-degree-of-freedom pendulum experiment is constructed and motion is induced using an electric motor. By applying the
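    For the one-degree-of-freedom pendulum experiment described above, the frequency and damping of the mechanical analog can be recovered from a free-decay response by the logarithmic-decrement method, a common starting point before full optimization-based identification. A hedged sketch with synthetic data (all parameter values are illustrative, not flight data):

```python
import numpy as np

def identify_decay(t, x):
    """Estimate damped frequency and damping ratio from a free-decay
    signal using successive positive peaks (logarithmic decrement)."""
    peaks = [i for i in range(1, len(x) - 1)
             if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] > 0]
    T_d = np.mean(np.diff(t[peaks]))                       # damped period
    delta = np.mean(np.log(x[peaks][:-1] / x[peaks][1:]))  # log decrement
    zeta = delta / np.sqrt(4 * np.pi**2 + delta**2)
    omega_d = 2 * np.pi / T_d
    return omega_d, zeta

# Synthetic "measurement": damped oscillation with known parameters
t = np.arange(0.0, 10.0, 0.001)
zeta_true, omega_n = 0.05, 2 * np.pi
omega_d_true = omega_n * np.sqrt(1 - zeta_true**2)
x = np.exp(-zeta_true * omega_n * t) * np.cos(omega_d_true * t)

omega_d_est, zeta_est = identify_decay(t, x)
```

Closing the loop as in the proposed research would then mean wrapping a simulation of the full mechanical analog in an optimizer that minimizes the mismatch with measured forces.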

  5. Investigation of advanced fault insertion and simulator methods

    NASA Technical Reports Server (NTRS)

    Dunn, W. R.; Cottrell, D.

    1986-01-01

    The cooperative agreement partly supported research leading to the open-literature publication cited. Additional efforts under the agreement included research into fault modeling of semiconductor devices. Results of this research are presented in this report which is summarized in the following paragraphs. As a result of the cited research, it appears that semiconductor failure mechanism data is abundant but of little use in developing pin-level device models. Failure mode data on the other hand does exist but is too sparse to be of any statistical use in developing fault models. What is significant in the failure mode data is that, unlike classical logic, MSI and LSI devices do exhibit more than 'stuck-at' and open/short failure modes. Specifically they are dominated by parametric failures and functional anomalies that can include intermittent faults and multiple-pin failures. The report discusses methods of developing composite pin-level models based on extrapolation of semiconductor device failure mechanisms, failure modes, results of temperature stress testing and functional modeling. Limitations of this model particularly with regard to determination of fault detection coverage and latency time measurement are discussed. Indicated research directions are presented.

  6. Advanced Numerical methods for F. E. Simulation of Metal Forming Processes

    NASA Astrophysics Data System (ADS)

    Chenot, Jean-Loup; Bernacki, Marc; Fourment, Lionel; Ducloux, Richard

    2010-06-01

    The classical scientific basis for finite element modeling of metal forming processes is first recalled. Several developments in advanced topics are summarized: adaptive and anisotropic remeshing, parallel solving, and multi-material deformation. More recent research in numerical analysis is outlined, including multigrid and multi-mesh methods devoted mainly to decreasing computation time, and automatic optimization methods for faster and more effective design of forming processes. The link between forming simulation and structural computations is considered, with emphasis on the need to predict the final mechanical properties. Finally, a brief account of computation at the micro-scale level is given.

  7. On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods

    PubMed Central

    Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.

    2011-01-01

    We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups of 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex, data-rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
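    The population-based algorithms described above map well to GPUs because every chain executes the same instructions on different data. As a rough illustration of that structure (NumPy vectorization standing in for GPU execution; the target density and parameters are illustrative), many independent Metropolis chains can be advanced in lockstep:

```python
import numpy as np

def parallel_metropolis(n_chains=2000, n_steps=3000, step=1.0, seed=0):
    """Advance many independent Metropolis chains on a standard normal
    target. Each step is a single vectorized operation over all chains,
    the same data-parallel shape a GPU kernel would exploit.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_chains)        # one state per chain
    log_p = -0.5 * x**2                  # unnormalized log N(0, 1)
    for _ in range(n_steps):
        prop = x + step * rng.normal(size=n_chains)
        log_p_prop = -0.5 * prop**2
        accept = np.log(rng.random(n_chains)) < log_p_prop - log_p
        x = np.where(accept, prop, x)
        log_p = np.where(accept, log_p_prop, log_p)
    return x

samples = parallel_metropolis()
```

On a GPU the per-step arithmetic is identical; only the array back-end changes, which is why such population-based samplers parallelize so cheaply.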

  8. On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods.

    PubMed

    Lee, Anthony; Yau, Christopher; Giles, Michael B; Doucet, Arnaud; Holmes, Christopher C

    2010-12-01

    We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups of 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex, data-rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design.

  9. Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics

    NASA Technical Reports Server (NTRS)

    Stowers, S. T.; Bass, J. M.; Oden, J. T.

    1993-01-01

    A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows, with the ultimate goal of numerically modeling the complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies developed are classified as adaptive methods: they use error estimation techniques to approximate the local numerical error and automatically refine or unrefine the mesh so as to deliver a given level of accuracy. The result is a scheme which attempts to produce the best possible results with the least number of grid points, degrees of freedom, and operations. These types of schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.
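    The h-adaptive loop described here (estimate the local error, then refine where the estimate is large) can be sketched in one dimension; the slope-jump indicator and threshold below are illustrative, not the authors' scheme:

```python
import numpy as np

def refine_mesh(xs, f, tol, max_passes=20):
    """Repeatedly split intervals adjacent to a large jump in slope
    (a crude local error indicator). Returns the refined nodes."""
    xs = np.asarray(xs, dtype=float)
    for _ in range(max_passes):
        fx = f(xs)
        slopes = np.diff(fx) / np.diff(xs)
        jumps = np.abs(np.diff(slopes))       # indicator at interior nodes
        mark = np.zeros(len(xs) - 1, dtype=bool)
        mark[:-1] |= jumps > tol              # interval left of the node
        mark[1:] |= jumps > tol               # interval right of the node
        if not mark.any():
            break                             # error criterion met everywhere
        mids = 0.5 * (xs[:-1] + xs[1:])[mark]
        xs = np.sort(np.concatenate([xs, mids]))
    return xs

# shock-like profile: refinement should cluster nodes near x = 0
xs = refine_mesh(np.linspace(-1, 1, 11), lambda x: np.tanh(20 * x), tol=1.0)
```

The same estimate-mark-refine cycle, with rigorous a posteriori error estimators in place of the slope jump, is the core of the production schemes the abstract describes.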

  10. A review of recent advances in the spherical harmonics expansion method for semiconductor device simulation.

    PubMed

    Rupp, K; Jungemann, C; Hong, S-M; Bina, M; Grasser, T; Jüngel, A

    The Boltzmann transport equation is commonly considered to be the best semi-classical description of carrier transport in semiconductors, providing precise information about the distribution of carriers with respect to time (one dimension), location (three dimensions), and momentum (three dimensions). However, numerical solutions for the seven-dimensional carrier distribution functions are very demanding. The most common solution approach is the stochastic Monte Carlo method, because the gigabytes of memory required by deterministic direct solution approaches have not been available until recently. As a remedy, the higher accuracy provided by solutions of the Boltzmann transport equation is often exchanged for lower computational expense by using simpler models based on macroscopic quantities such as carrier density and mean carrier velocity. Recent developments for the deterministic spherical harmonics expansion method have reduced the computational cost for solving the Boltzmann transport equation, enabling the computation of carrier distribution functions even for spatially three-dimensional device simulations within minutes to hours. We summarize recent progress for the spherical harmonics expansion method and show that small currents, reasonable execution times, and rare events such as low-frequency noise, which are all hard or even impossible to simulate with the established Monte Carlo method, can be handled in a straightforward manner. The applicability of the method for important practical applications is demonstrated for noise simulation, small-signal analysis, hot-carrier degradation, and avalanche breakdown.
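    The memory argument above can be made concrete with a back-of-the-envelope estimate: a direct deterministic discretization stores the distribution function on a six-dimensional phase-space grid (three space, three momentum), while a spherical harmonics expansion replaces the momentum dimensions with an energy coordinate plus a few harmonics. All grid sizes below are assumptions chosen purely for illustration:

```python
# Illustrative memory comparison with 8-byte floats (sizes are assumptions)
n = 32                                  # points per dimension

# direct grid: f(x, y, z, kx, ky, kz)
direct_bytes = n**6 * 8

# SHE representation: f(x, y, z; 100 energy points, 4 harmonics)
she_bytes = n**3 * 100 * 4 * 8

direct_gib = direct_bytes / 2**30       # 8 GiB for the direct grid
she_gib = she_bytes / 2**30             # well under 1 GiB for SHE
```

Even at this coarse resolution the direct grid costs gigabytes, and every doubling of the per-dimension resolution multiplies it by 64, which is why the expansion approach pays off so quickly.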

  11. Advanced methods in global gyrokinetic full f particle simulation of tokamak transport

    SciTech Connect

    Ogando, F.; Heikkinen, J. A.; Henriksson, S.; Janhunen, S. J.; Kiviniemi, T. P.; Leerink, S.

    2006-11-30

    A new full f nonlinear gyrokinetic simulation code, named ELMFIRE, has been developed for simulating transport phenomena in tokamak plasmas. The code is based on a gyrokinetic particle-in-cell algorithm, which can consider electrons and ions jointly or separately, as well as arbitrary impurities. The implicit treatment of the ion polarization drift and the use of full f methods allow for simulations of strongly perturbed plasmas including wide orbit effects, steep gradients and rapid dynamic changes. This article presents in more detail the algorithms incorporated into ELMFIRE, as well as benchmarking comparisons to both neoclassical theory and other codes. ELMFIRE calculates plasma dynamics by following the evolution of a number of sample particles. Because it uses a stochastic algorithm, its results are influenced by statistical noise, and the effect of noise on relevant quantities is analyzed. Turbulence spectra of FT-2 plasmas have been calculated with ELMFIRE, giving results consistent with experimental data.
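    The statistical noise discussed above follows from the 1/sqrt(N) scaling of particle-sampling error in any particle-in-cell estimate. A small illustration (not ELMFIRE itself): estimating a known density profile from N sampled markers and checking that the error shrinks as N grows.

```python
import numpy as np

def density_error(n_particles, n_bins=32, seed=1):
    """RMS error of a histogram estimate of a known uniform density."""
    rng = np.random.default_rng(seed)
    x = rng.random(n_particles)                  # marker positions in [0, 1)
    hist, _ = np.histogram(x, bins=n_bins, range=(0.0, 1.0), density=True)
    return np.sqrt(np.mean((hist - 1.0) ** 2))   # true density is 1

err_small = density_error(1_000)
err_large = density_error(100_000)
```

A hundredfold increase in markers buys roughly a tenfold reduction in noise, which is why full f codes need very large particle counts to resolve small fluctuating quantities.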

  12. Advanced simulation of digital filters

    NASA Astrophysics Data System (ADS)

    Doyle, G. S.

    1980-09-01

    An Advanced Simulation of Digital Filters (ASDF) has been implemented on the IBM 360/67 computer utilizing Tektronix hardware and software. The program package is appropriate for use by persons beginning their study of digital signal processing or for filter analysis. The ASDF programs provide the user with an interactive method by which filter pole and zero locations can be manipulated. Graphical output on both the Tektronix graphics screen and the Versatec plotter is provided to observe the effects of pole-zero movement.
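    Observing the effect of pole-zero movement amounts to evaluating the transfer function on the unit circle; a minimal sketch (the filter chosen is illustrative, not the ASDF code):

```python
import numpy as np

def freq_response(zeros, poles, gain, n_points=512):
    """|H(e^{jw})| of a digital filter given its zeros and poles."""
    w = np.linspace(0.0, np.pi, n_points)
    z = np.exp(1j * w)                    # points on the unit circle
    H = gain * np.ones_like(z)
    for q in zeros:
        H *= (z - q)
    for p in poles:
        H /= (z - p)
    return w, np.abs(H)

# a DC notch: zero on the unit circle at z = 1, pole just inside at 0.95
w, mag = freq_response(zeros=[1.0], poles=[0.95], gain=1.0)
```

Dragging the pole toward the zero narrows the notch, which is exactly the kind of interactive experiment the package supports.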

  13. Data Collection Methods for Validation of Advanced Multi-Resolution Fast Reactor Simulations

    SciTech Connect

    Tokuhiro, Akiro; Ruggles, Art; Pointer, David

    2015-01-22

    In pool-type Sodium Fast Reactors (SFR) the regions most susceptible to thermal striping are the upper instrumentation structure (UIS) and the intermediate heat exchanger (IHX). This project experimentally and computationally (CFD) investigated the thermal mixing in the region exiting the reactor core to the UIS. The thermal mixing phenomenon was simulated using two vertical jets at different velocities and temperatures, prototypic of two adjacent channels out of the core. Thermal jet mixing of anticipated flows at different temperatures and velocities was investigated. Velocity profiles were measured throughout the flow region using Ultrasonic Doppler Velocimetry (UDV), and temperatures along the geometric centerline between the jets were recorded using a thermocouple array. CFD simulations, using COMSOL, were used initially to understand the flow, then to design the experimental apparatus, and finally to compare simulation results with measurements characterizing the flows. The experimental results and CFD simulations show that the flow field is characterized by three regions with respective transitions, namely convective mixing, (flow direction) transitional, and post-mixing. Both experiments and CFD simulations support this observation. For the anticipated SFR conditions the flow is momentum dominated and thus thermal mixing is limited due to the short flow length from the exit of the core to the bottom of the UIS. This means that there will be thermal striping at any surface where poorly mixed streams impinge; unless lateral mixing is actively promoted out of the core, thermal striping will prevail. Furthermore, we note that CFD can be considered a 'separate-effects (computational) test' and is recommended as part of any integral analysis. Poorly mixed streams then have potential impact on the rest of the SFR design and scaling, especially placement of internal components, such as the IHX, that may see poorly mixed

  14. Advanced electromagnetic gun simulation

    NASA Astrophysics Data System (ADS)

    Brown, J. L.; George, E. B.; Lippert, J. R.; Balius, A. R.

    1986-11-01

    The architecture, software and application of a simulation system for evaluating electromagnetic gun (EMG) operability, maintainability, test data and performance tradeoffs are described. The system features a generic preprocessor designed for handling the large data rates necessary for EMG simulations. The preprocessor and postprocessor operate independent of the EMG simulation, which is viewed through windows by the user, who can then select the areas of the simulation desired. The simulation considers a homopolar generator, busbars, pulse shaping coils, the barrel, switches, and prime movers. In particular, account is taken of barrel loading by the magnetic field, Lorentz force and plasma pressure.

  15. Advanced Spacecraft EM Modelling Based on Geometric Simplification Process and Multi-Methods Simulation

    NASA Astrophysics Data System (ADS)

    Leman, Samuel; Hoeppe, Frederic

    2016-05-01

    This paper is about the first results of a new generation of ElectroMagnetic (EM) methodology applied to spacecraft systems modelling in the low frequency range (the system's dimensions are of the same order of magnitude as the wavelength). This innovative approach aims at implementing appropriate simplifications of the real system based on the identification of the dominant electrical and geometrical parameters driving the global EM behaviour. One rigorous but expensive simulation is performed to quantify the error introduced by the use of simpler multi-models. If both the speed-up of the simulation time and the quality of the EM response are satisfactory, uncertainty simulation can be performed based on the simple-model library, implemented in a flexible and robust Kron's network formalism. This methodology is expected to open up new perspectives concerning fast parametric analysis and deep understanding of system behaviour. It will ensure the identification of the main radiated and conducted coupling paths and the sensitive EM parameters in order to optimize the protections and to control the disturbance sources in spacecraft design phases.

  16. Advanced Usability Evaluation Methods

    DTIC Science & Technology

    2007-04-01

    Andre, Terence S., Lt Col, USAF; Schurig, Margaret, Human Factors Design Specialist, The Boeing Co. Excerpt cites: Eye tracking in usability evaluation: A practitioner's guide. In J. Hyönä, R. Radach, & H. Deubel (Eds.), The mind's eye: Cognitive and applied...

  17. Advanced simulation methods to detect resonant frequency stack up in focal plane design

    NASA Astrophysics Data System (ADS)

    Adams, Craig; Malone, Neil R.; Torres, Raymond; Fajardo, Armando; Vampola, John; Drechsler, William; Parlato, Russell; Cobb, Christopher; Randolph, Max; Chiourn, Surath; Swinehart, Robert

    2014-09-01

    Wire used to connect focal plane electrical connections to external circuitry can be modeled using the length, diameter, and loop height to determine the resonant frequency. The design of the adjacent electronics board and mounting platform can also be analyzed. The combined resonant frequency analysis can then be used to decouple the different component resonant frequencies and eliminate the potential for metal fatigue in the wires. It is important to note that the nominal maximum stress values that cause metal fatigue can be much less than the ultimate tensile stress limit or the yield stress limit, and are degraded further at resonant frequencies. It is critical that tests be done to qualify designs that are not easily simulated due to material property variation and complex structures. Sine wave vibration testing is a critical component of qualification vibration and provides the highest accuracy in determining the resonant frequencies, which can then be reduced or decoupled, improving the structural performance of the focal plane assembly through small changes in design, damping, or modern space material selection. Vibration flow-down from higher levels of assembly needs to account for intermediary hardware, which may amplify or attenuate the full-up system vibration profile. A simple pass-through of vibration requirements may result in over-test or in missing amplified resonant frequencies that can cause system failure. Examples are shown of metal wire fatigue, such as discoloration and microscopic cracks, which are visible at the submicron level by use of a scanning electron microscope. While it is important to model and test resonant frequencies, the focal plane must also be constrained such that coefficient of thermal expansion mismatches can be accommodated without overstressing the FPA.
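    A first-cut version of the wire resonance analysis described above treats the bond wire as a clamped-clamped beam, so the fundamental bending frequency scales with diameter and inversely with length squared. A hedged sketch (the material values and dimensions are illustrative, and real loop geometry shifts the result):

```python
import math

def wire_fundamental_hz(length_m, diameter_m, E=79e9, rho=19300.0):
    """Fundamental bending frequency of a clamped-clamped round wire.

    f1 = (lambda1^2 / 2 pi) * sqrt(E I / (rho A L^4)),
    with lambda1^2 ~ 22.37 for clamped-clamped ends.
    Defaults are rough values for gold (E in Pa, rho in kg/m^3).
    """
    I = math.pi * diameter_m**4 / 64.0    # area moment of a circular section
    A = math.pi * diameter_m**2 / 4.0     # cross-sectional area
    lam2 = 22.37
    return (lam2 / (2.0 * math.pi)) * math.sqrt(
        E * I / (rho * A * length_m**4))

# a 25 um gold wire over a 2 mm span (illustrative dimensions)
f = wire_fundamental_hz(2e-3, 25e-6)
```

Since f scales as d / L^2, shortening the span or thickening the wire are the quickest ways to move a wire resonance away from a platform mode.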

  18. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... simulator instructors and check airmen must include training policies and procedures, instruction methods... Simulation This appendix provides guidelines and a means for achieving flightcrew training in advanced... simulator, as appropriate. Advanced Simulation Training Program For an operator to conduct Level C or...

  19. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... simulator instructors and check airmen must include training policies and procedures, instruction methods... Simulation This appendix provides guidelines and a means for achieving flightcrew training in advanced... simulator, as appropriate. Advanced Simulation Training Program For an operator to conduct Level C or...

  20. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... simulator instructors and check airmen must include training policies and procedures, instruction methods... Simulation This appendix provides guidelines and a means for achieving flightcrew training in advanced... simulator, as appropriate. Advanced Simulation Training Program For an operator to conduct Level C or...

  1. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... simulator instructors and check airmen must include training policies and procedures, instruction methods... Simulation This appendix provides guidelines and a means for achieving flightcrew training in advanced... simulator, as appropriate. Advanced Simulation Training Program For an operator to conduct Level C or...

  2. Advanced concepts flight simulation facility.

    PubMed

    Chappell, S L; Sexton, G A

    1986-12-01

    The cockpit environment is changing rapidly. New technology allows airborne computerised information, flight automation and data transfer with the ground. By 1995, not only will the pilot's task have changed, but also the tools for doing that task. To provide knowledge and direction for these changes, the National Aeronautics and Space Administration (NASA) and the Lockheed-Georgia Company have completed three identical Advanced Concepts Flight Simulation Facilities. Many advanced features have been incorporated into the simulators, e.g., cathode ray tube (CRT) displays of flight and systems information operated via touch-screen or voice, print-outs of clearances, cockpit traffic displays, and current databases containing navigational charts, weather and flight plan information, and fuel-efficient autopilot control from take-off to touchdown. More importantly, this cockpit is a versatile test bed for studying displays, controls, procedures and crew management in a full-mission context. The facility also has an air traffic control simulation, with radio and data communications, and an outside visual scene with variable weather conditions. These provide a veridical flight environment in which to accurately evaluate advanced concepts in flight stations.

  3. Recent advances in lattice Boltzmann methods

    SciTech Connect

    Chen, S.; Doolen, G.D.; He, X.; Nie, X.; Zhang, R.

    1998-12-31

    In this paper, the authors briefly present the basic principles of the lattice Boltzmann method and summarize recent advances of the method, including the application of the lattice Boltzmann method to fluid flows in MEMS and to simulation of multiphase mixing and turbulence.
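    The basic principle summarized above is a stream step followed by a BGK collision toward a local equilibrium. A compact D2Q9 sketch on a periodic domain (grid size, relaxation time, and initial condition are illustrative):

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Second-order Maxwellian equilibrium for each lattice direction."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

def lbm_run(nx=32, ny=32, tau=0.8, steps=100):
    """Stream-and-collide on a periodic box, starting from a small shear."""
    ux = 0.05 * np.sin(2 * np.pi * np.arange(ny) / ny)[None, :] \
         * np.ones((nx, ny))
    uy = np.zeros((nx, ny))
    rho = np.ones((nx, ny))
    f = equilibrium(rho, ux, uy)
    for _ in range(steps):
        # streaming: shift each population along its lattice velocity
        for i in range(9):
            f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
        # macroscopic moments
        rho = f.sum(axis=0)
        ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
        uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
        # BGK collision: relax toward local equilibrium
        f += (equilibrium(rho, ux, uy) - f) / tau
    return f, rho

f, rho = lbm_run()
```

Both steps are purely local or nearest-neighbor, which is the property that makes the method easy to parallelize and to extend to the multiphase and turbulent flows the paper surveys.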

  4. Towards Direct Numerical Simulation of mass and energy fluxes at the soil-atmospheric interface with advanced Lattice Boltzmann methods

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Krafczyk, Manfred; Geier, Martin; Schönherr, Martin

    2014-05-01

    The quantification of soil evaporation and of soil water content dynamics near the soil surface are critical in the physics of land-surface processes on many scales and are dominated by multi-component and multi-phase mass and energy fluxes between the ground and the atmosphere. Although it is widely recognized that both liquid and gaseous water movement are fundamental factors in the quantification of soil heat flux and surface evaporation, their computation has only started to be taken into account using simplified macroscopic models. As the flow field over the soil can be safely considered as turbulent, it would be natural to study the detailed transient flow dynamics by means of Large Eddy Simulation (LES [1]) where the three-dimensional flow field is resolved down to the laminar sub-layer. Yet this requires very fine resolved meshes allowing a grid resolution of at least one order of magnitude below the typical grain diameter of the soil under consideration. In order to gain reliable turbulence statistics, up to several hundred eddy turnover times have to be simulated which adds up to several seconds of real time. Yet, the time scale of the receding saturated water front dynamics in the soil is on the order of hours. Thus we are faced with the task of solving a transient turbulent flow problem including the advection-diffusion of water vapour over the soil-atmospheric interface represented by a realistic tomographic reconstruction of a real porous medium taken from laboratory probes. Our flow solver is based on the Lattice Boltzmann method (LBM) [2] which has been extended by a Cumulant approach similar to the one described in [3,4] to minimize the spurious coupling between the degrees of freedom in previous LBM approaches and can be used as an implicit LES turbulence model due to its low numerical dissipation and increased stability at high Reynolds numbers. 
The kernel has been integrated into the research code Virtualfluids [5] and delivers up to 30% of the

  5. Computational Aerodynamic Simulations of an 840 ft/sec Tip Speed Advanced Ducted Propulsor Fan System Model for Acoustic Methods Assessment and Development

    NASA Technical Reports Server (NTRS)

    Tweedt, Daniel L.

    2014-01-01

    Computational Aerodynamic simulations of an 840 ft/sec tip speed, Advanced Ducted Propulsor fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed and validated. The fan system is a sub-scale, lownoise research fan/nacelle model that has undergone extensive experimental testing in the 9- by 15- foot Low Speed Wind Tunnel at the NASA Glenn Research Center, resulting in quality, detailed aerodynamic and acoustic measurement data. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating conditions simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, excluding a long core duct section downstream of the core inlet guide vane. As a result, only fan rotational speed and system bypass ratio, set by specifying static pressure downstream of the core inlet guide vane row, were adjusted in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measurement values. The computed blade row flow fields for all five fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the computed flow fields reveals no excessive boundary layer separations or related secondary-flow problems. A few spanwise comparisons between

  6. Advanced Space Shuttle simulation model

    NASA Technical Reports Server (NTRS)

    Tatom, F. B.; Smith, S. R.

    1982-01-01

A non-recursive model (based on von Karman spectra) for atmospheric turbulence along the flight path of the shuttle orbiter was developed. It provides for simulation of instantaneous vertical and horizontal gusts at the vehicle center of gravity, and also for simulation of instantaneous gust gradients. Based on this model, the time series for both gusts and gust gradients were generated and stored on a series of magnetic tapes, entitled Shuttle Simulation Turbulence Tapes (SSTT). The time series are designed to represent atmospheric turbulence from ground level to an altitude of 120,000 meters. A description of the turbulence generation procedure is provided. The results of validating the simulated turbulence are described. Conclusions and recommendations are presented. One-dimensional von Karman spectra are tabulated, and the minimum frequency simulated is discussed. The results of spectral and statistical analyses of the SSTT are presented.
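The non-recursive, spectrum-based synthesis described above can be sketched in a few lines: shape complex white noise by the square root of a one-dimensional von Karman power spectral density and inverse-transform it into a gust time series. This is a hypothetical minimal illustration (NumPy, frozen-turbulence conversion of spatial to temporal frequency), not the SSTT generation code; `sigma`, `L`, and `V` are assumed turbulence-intensity, length-scale, and airspeed parameters.

```python
import numpy as np

def von_karman_psd(freq, sigma, L, V):
    # One-dimensional von Karman longitudinal spectrum, converted from
    # spatial to temporal frequency via the frozen-turbulence hypothesis
    # (Omega = 2*pi*f/V); sigma: rms gust, L: length scale, V: airspeed.
    omega = 2.0 * np.pi * freq / V
    return (sigma**2 * (2.0 * L / np.pi)
            / (1.0 + (1.339 * L * omega)**2)**(5.0 / 6.0)) / V

def simulate_gusts(n, dt, sigma=1.0, L=762.0, V=100.0, seed=0):
    # Non-recursive (spectral) gust synthesis: shape complex white noise
    # by sqrt(PSD) and inverse-transform to a real time series.
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, dt)
    psd = von_karman_psd(freqs, sigma, L, V)
    psd[0] = 0.0                         # zero-mean gusts: drop the DC bin
    noise = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
    amp = np.sqrt(psd * n / (2.0 * dt))  # one-sided PSD -> FFT amplitude
    return np.fft.irfft(noise * amp, n=n)
```

The generated series has zero mean by construction, and its sample spectrum follows the prescribed von Karman shape up to the Nyquist frequency.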

  7. Advances in Adaptive Control Methods

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2009-01-01

This poster presentation describes recent advances in adaptive control technology developed by NASA. Optimal Control Modification is a novel adaptive law that can improve the performance and robustness of adaptive control systems. A new technique has been developed to provide an analytical method for computing the time-delay stability margin of adaptive control systems.

  8. Editorial: biotech methods and advances.

    PubMed

    Jungbauer, Alois

    2013-01-01

    This annual Methods and Advances Special Issue of Biotechnology Journal contains a selection of cutting-edge research and review articles with a particular emphasis on vertical process understanding – read more in this editorial by Prof. Alois Jungbauer, BTJ co-Editor-in-Chief.

  9. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... check airmen must include training policies and procedures, instruction methods and techniques... and a means for achieving flightcrew training in advanced airplane simulators. The requirements in... Simulation Training Program For an operator to conduct Level C or D training under this appendix all...

  10. Advanced Vadose Zone Simulations Using TOUGH

    SciTech Connect

    Finsterle, S.; Doughty, C.; Kowalsky, M.B.; Moridis, G.J.; Pan,L.; Xu, T.; Zhang, Y.; Pruess, K.

    2007-02-01

The vadose zone can be characterized as a complex subsurface system in which intricate physical and biogeochemical processes occur in response to a variety of natural forcings and human activities. This makes it difficult to describe, understand, and predict the behavior of this specific subsurface system. The TOUGH nonisothermal multiphase flow simulators are well-suited to perform advanced vadose zone studies. The conceptual models underlying the TOUGH simulators are capable of representing features specific to the vadose zone, and of addressing a variety of coupled phenomena. Moreover, the simulators are integrated into software tools that enable advanced data analysis, optimization, and system-level modeling. We discuss fundamental and computational challenges in simulating vadose zone processes, review recent advances in modeling such systems, and demonstrate some capabilities of the TOUGH suite of codes using illustrative examples.

  11. Simulating protein dynamics: Novel methods and applications

    NASA Astrophysics Data System (ADS)

    Vishal, V.

This Ph.D. dissertation describes several methodological advances in molecular dynamics (MD) simulations. Methods like Markov State Models can be used effectively in combination with distributed computing to obtain long-timescale behavior from an ensemble of short simulations. Advanced computing architectures like graphics processors can be used to greatly extend the scope of MD. Applications of MD techniques to problems like Alzheimer's disease and fundamental questions in protein dynamics are described.
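The Markov State Model idea referenced above (many short trajectories pooled into a transition matrix whose eigenvalues give long-time kinetics) can be sketched as follows. This is a generic illustration assuming already-discretized state trajectories, not code from the dissertation.

```python
import numpy as np

def msm_transition_matrix(trajs, n_states, lag=1):
    # Pool transition counts at the chosen lag time over an ensemble of
    # short discrete-state trajectories, then row-normalize.
    counts = np.zeros((n_states, n_states))
    for traj in trajs:
        for i, j in zip(traj[:-lag], traj[lag:]):
            counts[i, j] += 1.0
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0.0] = 1.0            # leave unvisited states as zero rows
    return counts / rows

def implied_timescales(T, lag_dt):
    # Relaxation timescales from the non-unit eigenvalues of T:
    # t_i = -lag_dt / ln(lambda_i).
    ev = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    return -lag_dt / np.log(ev[1:])
```

Because each short trajectory contributes counts independently, the estimate improves as trajectories are added, which is what makes the approach a natural fit for distributed computing.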

  12. Advances in atomic oxygen simulation

    NASA Technical Reports Server (NTRS)

    Froechtenigt, Joseph F.; Bareiss, Lyle E.

    1990-01-01

Atomic oxygen (AO) present in the atmosphere at orbital altitudes of 200 to 700 km has been shown to degrade various exposed materials on Shuttle flights. The relative velocity of the AO with respect to the spacecraft, together with the AO density, combine to yield an environment consisting of a 5 eV beam energy with a flux of 10(exp 14) to 10(exp 15) oxygen atoms/sq cm/s. An AO ion beam apparatus that produces flux levels and energies similar to those encountered by spacecraft in low Earth orbit (LEO) has been in existence since 1987. Test data were obtained from the interaction of the AO ion beam with materials used in space applications (carbon, silver, Kapton) and with several special coatings of interest deposited on various surfaces. The ultimate design goal of the AO beam simulation device is to produce neutral AO at sufficient flux levels to replicate on-orbit conditions. A newly acquired mass spectrometer with energy discrimination has allowed 5 eV neutral oxygen atoms to be separated and detected from the background of thermal oxygen atoms of approximately 0.2 eV. Neutralization of the AO ion beam at 5 eV was demonstrated at the Martin Marietta AO facility.
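The quoted 5 eV beam energy is simply the kinetic energy of an oxygen atom at LEO orbital relative velocity, which a quick back-of-the-envelope check confirms (the constants are standard values; the 7.8 km/s relative velocity is a typical assumed LEO figure, not taken from this abstract):

```python
def ram_energy_eV(mass_amu, velocity_m_s):
    # Kinetic energy (in eV) of an atom striking a surface at the given
    # relative velocity: E = 1/2 m v^2.
    AMU = 1.66053906660e-27   # kg per atomic mass unit
    EV = 1.602176634e-19      # joules per electron-volt
    return 0.5 * mass_amu * AMU * velocity_m_s**2 / EV

# Atomic oxygen (16 u) at a typical LEO relative velocity of ~7.8 km/s
energy = ram_energy_eV(16.0, 7.8e3)   # comes out near 5 eV
```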

  13. Center for Advanced Modeling and Simulation Intern

    SciTech Connect

    Gertman, Vanessa

    2010-01-01

    Some interns just copy papers and seal envelopes. Not at INL! Check out how Vanessa Gertman, an INL intern working at the Center for Advanced Modeling and Simulation, spent her summer working with some intense visualization software. Lots more content like this is available at INL's facebook page http://www.facebook.com/idahonationallaboratory.

  14. Center for Advanced Modeling and Simulation Intern

    ScienceCinema

    Gertman, Vanessa

    2016-07-12

    Some interns just copy papers and seal envelopes. Not at INL! Check out how Vanessa Gertman, an INL intern working at the Center for Advanced Modeling and Simulation, spent her summer working with some intense visualization software. Lots more content like this is available at INL's facebook page http://www.facebook.com/idahonationallaboratory.

  15. Simulation Credibility: Advances in Verification, Validation, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Mehta, Unmeel B. (Editor); Eklund, Dean R.; Romero, Vicente J.; Pearce, Jeffrey A.; Keim, Nicholas S.

    2016-01-01

    Decision makers and other users of simulations need to know quantified simulation credibility to make simulation-based critical decisions and effectively use simulations, respectively. The credibility of a simulation is quantified by its accuracy in terms of uncertainty, and the responsibility of establishing credibility lies with the creator of the simulation. In this volume, we present some state-of-the-art philosophies, principles, and frameworks. The contributing authors involved in this publication have been dedicated to advancing simulation credibility. They detail and provide examples of key advances over the last 10 years in the processes used to quantify simulation credibility: verification, validation, and uncertainty quantification. The philosophies and assessment methods presented here are anticipated to be useful to other technical communities conducting continuum physics-based simulations; for example, issues related to the establishment of simulation credibility in the discipline of propulsion are discussed. We envision that simulation creators will find this volume very useful to guide and assist them in quantitatively conveying the credibility of their simulations.
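A minimal example of the uncertainty-quantification step described above, propagating input uncertainty through a model by Monte Carlo sampling and reporting an interval rather than a point value, might look like the following sketch (the drag-like toy model and its parameter values are invented for illustration):

```python
import numpy as np

def propagate_uncertainty(model, mean, std, n_samples=10000, seed=0):
    # Monte Carlo uncertainty propagation: sample uncertain inputs,
    # evaluate the model, and summarize the output distribution.
    rng = np.random.default_rng(seed)
    x = rng.normal(mean, std, size=(n_samples, len(mean)))
    y = np.array([model(xi) for xi in x])
    lo, hi = np.percentile(y, [2.5, 97.5])
    return y.mean(), y.std(ddof=1), (lo, hi)

# Toy model: a drag-like quantity 0.5 * rho * v^2 with uncertain rho and v
mean_out, std_out, ci = propagate_uncertainty(
    lambda p: 0.5 * p[0] * p[1]**2, mean=[1.2, 50.0], std=[0.05, 2.0])
```

Reporting the interval `(lo, hi)` alongside the mean is one simple way a simulation creator can quantitatively convey credibility rather than a bare point prediction.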

  16. Advanced Civil Transport Simulator Cockpit View

    NASA Technical Reports Server (NTRS)

    1992-01-01

The Advanced Civil Transport Simulator (ACTS) is a futuristic aircraft cockpit simulator designed to provide full-mission capabilities for researching issues that will affect future transport aircraft flight stations and crews. The objective is to heighten the pilot's situation awareness through improved information availability and ease of interpretation in order to reduce the possibility of misinterpreted data. The simulator's five 13-inch cathode ray tubes are designed to display flight information in a logical, easy-to-see format. Two color flat-panel Control Display Units with touch-sensitive screens provide monitoring and modification of aircraft parameters, flight plans, flight computers, and aircraft position. Three collimated visual display units have been installed to provide out-the-window scenes via the Computer Generated Image system. The major research objectives are to examine needs for transfer of information to and from the flight crew; study the use of advanced controls and displays for all-weather flying; explore ideas for using computers to help the crew in decision making; and study visual scanning and reach behavior under different conditions with various levels of automation and flight deck arrangements.

  17. Onyx-Advanced Aeropropulsion Simulation Framework Created

    NASA Technical Reports Server (NTRS)

    Reed, John A.

    2001-01-01

The Numerical Propulsion System Simulation (NPSS) project at the NASA Glenn Research Center is developing a new software environment for analyzing and designing aircraft engines and, eventually, space transportation systems. Its purpose is to dramatically reduce the time, effort, and expense necessary to design and test jet engines by creating sophisticated computer simulations of an aerospace object or system (refs. 1 and 2). Through a university grant as part of that effort, researchers at the University of Toledo have developed Onyx, an extensible Java-based (Sun Microsystems, Inc.), object-oriented simulation framework, to investigate how advanced software design techniques can be successfully applied to aeropropulsion system simulation (refs. 3 and 4). The design of Onyx's architecture enables users to customize and extend the framework to add new functionality or adapt simulation behavior as required. It exploits object-oriented technologies, such as design patterns, domain frameworks, and software components, to develop a modular system in which users can dynamically replace components with others having different functionality.

  18. Novel methods for molecular dynamics simulations.

    PubMed

    Elber, R

    1996-04-01

    In the past year, significant progress was made in the development of molecular dynamics methods for the liquid phase and for biological macromolecules. Specifically, faster algorithms to pursue molecular dynamics simulations were introduced and advances were made in the design of new optimization algorithms guided by molecular dynamics protocols. A technique to calculate the quantum spectra of protein vibrations was introduced.
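Most of the faster MD algorithms the review refers to build on symplectic integrators such as velocity Verlet; a minimal sketch for a one-dimensional harmonic oscillator is shown below (illustrative only, not code from the review):

```python
def velocity_verlet(x, v, force, mass, dt, steps):
    # Symplectic velocity-Verlet integration: the standard time-stepping
    # scheme underlying most molecular dynamics codes.
    a = force(x) / mass
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt
        a_new = force(x) / mass
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

# Unit harmonic oscillator (F = -x, m = 1), integrated to t = 10
x, v = velocity_verlet(1.0, 0.0, lambda x: -x, 1.0, 0.01, 1000)
```

The symplectic property is what keeps the total energy 0.5*(x^2 + v^2) bounded near its initial value of 0.5 over long runs, rather than drifting as with naive Euler stepping.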

  19. Parallel methods for the flight simulation model

    SciTech Connect

    Xiong, Wei Zhong; Swietlik, C.

    1994-06-01

The Advanced Computer Applications Center (ACAC) has been involved in evaluating advanced parallel-architecture computers and the applicability of these machines to computer simulation models. The advanced systems investigated include parallel machines with shared-memory and distributed architectures consisting of an eight-processor Alliant FX/8, a twenty-four-processor Sequent Symmetry, a Cray X-MP, an IBM RISC 6000 model 550, and the Intel Touchstone eight-processor Gamma and 512-processor Delta machines. Since parallelizing a truly efficient application program for a parallel machine is a difficult task, implementation on these machines in a realistic setting has been largely overlooked. The ACAC has developed considerable expertise in optimizing and parallelizing application models on a collection of advanced multiprocessor systems. One such application model is the Flight Simulation Model, which uses a set of differential equations to describe the flight characteristics of a launched missile by means of a trajectory. The Flight Simulation Model was written in the FORTRAN language with approximately 29,000 lines of source code. Depending on the number of trajectories, the computation can require several hours to a full day of CPU time on a DEC VAX 8650 system. There is an impetus to reduce the execution time and utilize the advanced parallel-architecture computing environment available. ACAC researchers developed a parallel method that allows the Flight Simulation Model to run in parallel on the multiprocessor system. For the benchmark data tested, the parallel Flight Simulation Model implemented on the Alliant FX/8 achieved nearly linear speedup. In this paper, we describe a parallel method for the Flight Simulation Model. We believe the method presented in this paper provides a general concept for the design of parallel applications. This concept, in most cases, can be adapted to many other sequential application programs.
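The trajectory-level parallelism described above, independent trajectories farmed out to processors with near-linear speedup, can be sketched in modern Python. A thread pool stands in here for the multiprocessor hardware, and `fly` is an invented placeholder for one trajectory integration; a real CPU-bound ensemble would use processes or MPI rather than threads.

```python
from concurrent.futures import ThreadPoolExecutor

def fly(params):
    # Invented placeholder for one trajectory integration: a toy point
    # mass decelerating at a constant rate; returns the final range.
    x, v = 0.0, params["v0"]
    for _ in range(params["steps"]):
        x += v * params["dt"]
        v -= params["decel"] * params["dt"]
    return x

def run_trajectories(param_sets, workers=4):
    # Trajectories are mutually independent, so the ensemble is
    # embarrassingly parallel: map them over a worker pool.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fly, param_sets))
```

Because no state is shared between trajectories, the parallel result is identical to the serial one, which is why this style of decomposition scales nearly linearly.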

  20. Software Framework for Advanced Power Plant Simulations

    SciTech Connect

    John Widmann; Sorin Munteanu; Aseem Jain; Pankaj Gupta; Mark Moales; Erik Ferguson; Lewis Collins; David Sloan; Woodrow Fiveland; Yi-dong Lang; Larry Biegler; Michael Locke; Simon Lingard; Jay Yun

    2010-08-01

    This report summarizes the work accomplished during the Phase II development effort of the Advanced Process Engineering Co-Simulator (APECS). The objective of the project is to develop the tools to efficiently combine high-fidelity computational fluid dynamics (CFD) models with process modeling software. During the course of the project, a robust integration controller was developed that can be used in any CAPE-OPEN compliant process modeling environment. The controller mediates the exchange of information between the process modeling software and the CFD software. Several approaches to reducing the time disparity between CFD simulations and process modeling have been investigated and implemented. These include enabling the CFD models to be run on a remote cluster and enabling multiple CFD models to be run simultaneously. Furthermore, computationally fast reduced-order models (ROMs) have been developed that can be 'trained' using the results from CFD simulations and then used directly within flowsheets. Unit operation models (both CFD and ROMs) can be uploaded to a model database and shared between multiple users.

  1. Recent advances in superconducting-mixer simulations

    NASA Technical Reports Server (NTRS)

    Withington, S.; Kennedy, P. R.

    1992-01-01

Over the last few years, considerable progress has been made in the development of techniques for fabricating high-quality superconducting circuits, and this success, together with major advances in the theoretical understanding of quantum detection and mixing at millimeter and submillimeter wavelengths, has made the development of CAD techniques for superconducting nonlinear circuits an important new enterprise. For example, arrays of quasioptical mixers are now being manufactured, where the antennas, matching networks, filters and superconducting tunnel junctions are all fabricated by depositing niobium and a variety of oxides on a single quartz substrate. There are no adjustable tuning elements on these integrated circuits, and therefore, one must be able to predict their electrical behavior precisely. This requirement, together with a general interest in the generic behavior of devices such as direct detectors and harmonic mixers, has led us to develop a range of CAD tools for simulating the large-signal, small-signal, and noise behavior of superconducting tunnel junction circuits.

  2. Recent advances of strong-strong beam-beam simulation

    SciTech Connect

    Qiang, Ji; Furman, Miguel A.; Ryne, Robert D.; Fischer, Wolfram; Ohmi,Kazuhito

    2004-09-15

In this paper, we report on recent advances in strong-strong beam-beam simulation. Numerical methods used in the calculation of the beam-beam forces are reviewed. A new computational method to solve the Poisson equation on a nonuniform grid is presented. This method reduces the computational cost by half compared with the standard FFT-based method on a uniform grid. It is also more accurate than the standard method for colliding beams with a low transverse aspect ratio. In applications, we present the study of coherent modes with multi-bunch, multi-collision beam-beam interactions at RHIC. We also present the strong-strong simulation of the luminosity evolution at KEKB with and without a finite crossing angle.
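For reference, the standard FFT-based Poisson solve on a uniform periodic grid, the baseline that the paper's nonuniform-grid method improves upon, reduces to a division by |k|^2 in Fourier space. A hypothetical NumPy sketch:

```python
import numpy as np

def poisson_fft(rho, dx):
    # Solve laplacian(phi) = -rho on a periodic uniform grid with FFTs:
    # in Fourier space, phi_k = rho_k / |k|^2.
    ny, nx = rho.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, dx)
    k2 = kx[None, :]**2 + ky[:, None]**2
    k2[0, 0] = 1.0                    # avoid 0/0; gauge is fixed below
    phi_k = np.fft.fft2(rho) / k2
    phi_k[0, 0] = 0.0                 # pick the zero-mean solution
    return np.real(np.fft.ifft2(phi_k))
```

With a manufactured right-hand side built from a single Fourier mode, this solver reproduces the exact solution to machine precision, which makes it a convenient correctness baseline for any improved method.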

  3. Interactive visualization to advance earthquake simulation

    USGS Publications Warehouse

    Kellogg, L.H.; Bawden, G.W.; Bernardin, T.; Billen, M.; Cowgill, E.; Hamann, B.; Jadamec, M.; Kreylos, O.; Staadt, O.; Sumner, D.

    2008-01-01

The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, evaluate the underlying models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth's surface and interior. Virtual mapping tools allow virtual "field studies" in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method's strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists who are trained to interpret the often limited geological and geophysical data available from field observations. © Birkhäuser 2008.

  4. An advanced Gibbs-Duhem integration method: theory and applications.

    PubMed

    van 't Hof, A; Peters, C J; de Leeuw, S W

    2006-02-07

The conventional Gibbs-Duhem integration method is very convenient for the prediction of phase equilibria of both pure components and mixtures. However, it turns out to be inefficient. The method requires a number of lengthy simulations to predict the state conditions at which phase coexistence occurs. This number is not known from the outset of the numerical integration process. Furthermore, the molecular configurations generated during the simulations are merely used to predict the coexistence condition and not the liquid- and vapor-phase densities and mole fractions at coexistence. In this publication, an advanced Gibbs-Duhem integration method is presented that overcomes the above-mentioned disadvantage and inefficiency. The advanced method is a combination of Gibbs-Duhem integration and multiple-histogram reweighting. Application of multiple-histogram reweighting enables the substitution of the unknown number of simulations by a fixed and predetermined number. The advanced method has a retroactive nature; a current simulation improves the predictions of previously computed coexistence points as well. The advanced Gibbs-Duhem integration method has been applied for the prediction of vapor-liquid equilibria of a number of binary mixtures. The method turned out to be very convenient, much faster than the conventional method, and provided smooth simulation results. As the employed force fields perfectly predict pure-component vapor-liquid equilibria, the binary simulations were very well suited for testing the performance of different sets of combining rules. Employing Lorentz-Hudson-McCoubrey combining rules for interactions between unlike molecules, as opposed to Lorentz-Berthelot combining rules for all interactions, considerably improved the agreement between experimental and simulated data.
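The combining rules compared in the study are short closed-form expressions. Lorentz-Berthelot takes the arithmetic mean of the diameters and the geometric mean of the well depths; one common statement of the Hudson-McCoubrey modification additionally scales the well depth by an ionization-potential factor. The exact Hudson-McCoubrey form varies across sources, so treat the second function as a hedged sketch rather than the paper's formula.

```python
import math

def lorentz_berthelot(sig_a, eps_a, sig_b, eps_b):
    # Lorentz rule: arithmetic mean of diameters;
    # Berthelot rule: geometric mean of well depths.
    return 0.5 * (sig_a + sig_b), math.sqrt(eps_a * eps_b)

def hudson_mccoubrey_eps(eps_a, eps_b, ion_a, ion_b):
    # One common form of the ionization-potential correction; it reduces
    # to the Berthelot rule when the ionization potentials are equal.
    return (2.0 * math.sqrt(ion_a * ion_b) / (ion_a + ion_b)
            * math.sqrt(eps_a * eps_b))
```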

  5. Advanced Modeling, Simulation and Analysis (AMSA) Capability Roadmap Progress Review

    NASA Technical Reports Server (NTRS)

    Antonsson, Erik; Gombosi, Tamas

    2005-01-01

Contents include the following: NASA capability roadmap activity. Advanced modeling, simulation, and analysis overview. Scientific modeling and simulation. Operations modeling. Multi-spectral sensing (UV-gamma). System integration. M and S environments and infrastructure.

  6. Advanced Fuzzy Potential Field Method for Mobile Robot Obstacle Avoidance

    PubMed Central

    Park, Jong-Wook; Kwak, Hwan-Joo; Kang, Young-Chang; Kim, Dong W.

    2016-01-01

    An advanced fuzzy potential field method for mobile robot obstacle avoidance is proposed. The potential field method primarily deals with the repulsive forces surrounding obstacles, while fuzzy control logic focuses on fuzzy rules that handle linguistic variables and describe the knowledge of experts. The design of a fuzzy controller—advanced fuzzy potential field method (AFPFM)—that models and enhances the conventional potential field method is proposed and discussed. This study also examines the rule-explosion problem of conventional fuzzy logic and assesses the performance of our proposed AFPFM through simulations carried out using a mobile robot. PMID:27123001

  7. Advanced Fuzzy Potential Field Method for Mobile Robot Obstacle Avoidance.

    PubMed

    Park, Jong-Wook; Kwak, Hwan-Joo; Kang, Young-Chang; Kim, Dong W

    2016-01-01

    An advanced fuzzy potential field method for mobile robot obstacle avoidance is proposed. The potential field method primarily deals with the repulsive forces surrounding obstacles, while fuzzy control logic focuses on fuzzy rules that handle linguistic variables and describe the knowledge of experts. The design of a fuzzy controller--advanced fuzzy potential field method (AFPFM)--that models and enhances the conventional potential field method is proposed and discussed. This study also examines the rule-explosion problem of conventional fuzzy logic and assesses the performance of our proposed AFPFM through simulations carried out using a mobile robot.
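The conventional potential field method that the AFPFM builds on combines an attractive force toward the goal with repulsive forces from obstacles inside an influence radius. A minimal 2D sketch follows; the gain values `k_att`, `k_rep`, and `d0` are illustrative defaults, not parameters from the paper.

```python
import math

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    # Classical artificial potential field: attractive force toward the
    # goal plus repulsive forces from obstacles within influence radius d0.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < d0:
            mag = k_rep * (1.0 / d - 1.0 / d0) / d**2
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy
```

Fuzzy enhancements like the AFPFM replace the fixed gains with rule-based adjustments; the force composition itself stays the same.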

  8. Advanced method for oligonucleotide deprotection

    PubMed Central

    Surzhikov, Sergey A.; Timofeev, Edward N.; Chernov, Boris K.; Golova, Julia B.; Mirzabekov, Andrei D.

    2000-01-01

    A new procedure for rapid deprotection of synthetic oligodeoxynucleotides has been developed. While all known deprotection methods require purification to remove the residual protective groups (e.g. benzamide) and insoluble silicates, the new procedure based on the use of an ammonia-free reagent mixture allows one to avoid the additional purification steps. The method can be applied to deprotect the oligodeoxynucleotides synthesized by using the standard protected nucleoside phosphoramidites dGiBu, dCBz and dABz. PMID:10734206

  9. The Consortium for Advanced Simulation of Light Water Reactors

    SciTech Connect

    Ronaldo Szilard; Hongbin Zhang; Doug Kothe; Paul Turinsky

    2011-10-01

    The Consortium for Advanced Simulation of Light Water Reactors (CASL) is a DOE Energy Innovation Hub for modeling and simulation of nuclear reactors. It brings together an exceptionally capable team from national labs, industry and academia that will apply existing modeling and simulation capabilities and develop advanced capabilities to create a usable environment for predictive simulation of light water reactors (LWRs). This environment, designated as the Virtual Environment for Reactor Applications (VERA), will incorporate science-based models, state-of-the-art numerical methods, modern computational science and engineering practices, and uncertainty quantification (UQ) and validation against data from operating pressurized water reactors (PWRs). It will couple state-of-the-art fuel performance, neutronics, thermal-hydraulics (T-H), and structural models with existing tools for systems and safety analysis and will be designed for implementation on both today's leadership-class computers and the advanced architecture platforms now under development by the DOE. CASL focuses on a set of challenge problems such as CRUD induced power shift and localized corrosion, grid-to-rod fretting fuel failures, pellet clad interaction, fuel assembly distortion, etc. that encompass the key phenomena limiting the performance of PWRs. It is expected that much of the capability developed will be applicable to other types of reactors. CASL's mission is to develop and apply modeling and simulation capabilities to address three critical areas of performance for nuclear power plants: (1) reduce capital and operating costs per unit energy by enabling power uprates and plant lifetime extension, (2) reduce nuclear waste volume generated by enabling higher fuel burnup, and (3) enhance nuclear safety by enabling high-fidelity predictive capability for component performance.

  10. Advanced Fine Particulate Characterization Methods

    SciTech Connect

    Steven Benson; Lingbu Kong; Alexander Azenkeng; Jason Laumb; Robert Jensen; Edwin Olson; Jill MacKenzie; A.M. Rokanuzzaman

    2007-01-31

The characterization and control of emissions from combustion sources are of significant importance in improving local and regional air quality. Such emissions include fine particulate matter, organic carbon compounds, and NO{sub x} and SO{sub 2} gases, along with mercury and other toxic metals. This project involved four activities including Further Development of Analytical Techniques for PM{sub 10} and PM{sub 2.5} Characterization and Source Apportionment and Management, Organic Carbonaceous Particulate and Metal Speciation for Source Apportionment Studies, Quantum Modeling, and High-Potassium Carbon Production with Biomass-Coal Blending. The key accomplishments included the development of improved automated methods to characterize the inorganic and organic components of particulate matter. The methods involved the use of scanning electron microscopy and x-ray microanalysis for the inorganic fraction and a combination of extractive methods combined with near-edge x-ray absorption fine structure to characterize the organic fraction. These methods have direct application for source apportionment studies of PM because they provide detailed inorganic analysis along with total organic and elemental carbon (OC/EC) quantification. Quantum modeling using density functional theory (DFT) calculations was used to further elucidate a recently developed mechanistic model for mercury speciation in coal combustion systems and interactions on activated carbon. Reaction energies, enthalpies, free energies, and binding energies of Hg species to the prototype molecules were derived from the data obtained in these calculations. Bimolecular rate constants for the various elementary steps in the mechanism have been estimated using the hard-sphere collision theory approximation, and the results seem to indicate that extremely fast kinetics could be involved in these surface reactions. Activated carbon was produced from a blend of lignite coal from the Center Mine in North Dakota and
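The hard-sphere collision-theory estimate mentioned above is a closed-form expression, k = pi*d^2*sqrt(8*kB*T/(pi*mu)), which serves as an upper bound on a bimolecular rate constant. A sketch with assumed N2-like parameters (the diameter and masses below are typical illustrative values, not from the report):

```python
import math

KB = 1.380649e-23        # Boltzmann constant, J/K
AMU = 1.66053906660e-27  # kg per atomic mass unit

def hard_sphere_rate(d, m1_amu, m2_amu, T):
    # Hard-sphere collision theory: k = pi * d^2 * sqrt(8 kB T / (pi mu)),
    # in m^3 per molecule per second; mu is the reduced mass.
    mu = (m1_amu * m2_amu) / (m1_amu + m2_amu) * AMU
    return math.pi * d**2 * math.sqrt(8.0 * KB * T / (math.pi * mu))

# Two N2-like molecules: d = 3.7 Angstrom, m = 28 u, T = 300 K
k = hard_sphere_rate(3.7e-10, 28.0, 28.0, 300.0)
```

The result lands near the gas-kinetic limit of a few times 1e-10 cm^3/molecule/s, the "extremely fast kinetics" regime the abstract refers to.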

  11. Advanced Virtual Reality Simulations in Aerospace Education and Research

    NASA Astrophysics Data System (ADS)

    Plotnikova, L.; Trivailo, P.

    2002-01-01

Recent research developments at Aerospace Engineering, RMIT University have demonstrated great potential for using Virtual Reality simulations as a very effective tool in advanced structures and dynamics applications. They have also been extremely successful in the teaching of various undergraduate and postgraduate courses for presenting complex concepts in structural and dynamics designs. Characteristic examples are related to classical orbital mechanics, spacecraft attitude, and structural dynamics. Advanced simulations, reflecting current research by the authors, are mainly related to the implementation of various non-linear dynamic techniques, including using Kane's equations to study the dynamics of space tethered satellite systems and the co-rotational finite element method to study reconfigurable robotic systems undergoing large rotations and large translations. The current article describes the numerical implementation of the modern methods of dynamics and concentrates on the post-processing stage of the dynamic simulations. Numerous examples of building Virtual Reality stand-alone animations, designed by the authors, are discussed in detail. The striking feature of the developed technology is the use of standard mathematical packages, like MATLAB, as a post-processing tool to generate Virtual Reality Modelling Language files with brilliant interactive, graphics, and audio effects. These stand-alone demonstration files can be run under Netscape or Microsoft Explorer and do not require MATLAB. Use of this technology enables scientists to easily share their results with colleagues using the Internet, contributing to flexible learning development at schools and universities.

  12. Precision Casting via Advanced Simulation and Manufacturing

    NASA Technical Reports Server (NTRS)

    1997-01-01

A two-year program was conducted to develop and commercially implement selected casting manufacturing technologies to enable significant reductions in the costs of castings, increase the complexity and dimensional accuracy of castings, and reduce the development times for delivery of high quality castings. The industry-led R&D project was cost shared with NASA's Aerospace Industry Technology Program (AITP). The Rocketdyne Division of Boeing North American, Inc. served as the team lead with participation from Lockheed Martin, Ford Motor Company, Howmet Corporation, PCC Airfoils, General Electric, UES, Inc., University of Alabama, Auburn University, Robinson, Inc., Aracor, and NASA-LeRC. The technical effort was organized into four distinct tasks. The accomplishments are reported herein. Task 1.0 developed advanced simulation technology for core molding. Ford headed up this task. On this program, a specialized core machine was designed and built. Task 2.0 focused on intelligent process control for precision core molding. Howmet led this effort. The primary focus of these experimental efforts was to characterize the process parameters that have a strong impact on dimensional control issues of injection molded cores during their fabrication. Task 3.0 developed and applied rapid prototyping to produce near net shape castings. Rocketdyne was responsible for this task. CAD files were generated using reverse engineering, rapid prototype patterns were fabricated using SLS and SLA, and castings were produced and evaluated. Task 4.0 was aimed at developing technology transfer. Rocketdyne coordinated this task. Casting related technology, explored and evaluated in the first three tasks of this program, was implemented into manufacturing processes.

  13. Emulation of an Advanced G-Seat on the Advanced Simulator for Pilot Training.

    DTIC Science & Technology

    1978-04-01

ASPT) which culminated in the emulation of an advanced approach to G-seat simulation. The development of the software, the design of the advanced seat...components, the implementation of the advanced design on the ASPT, and the results of the study are presented. (Author)

  14. New Developments in the Simulation of Advanced Accelerator Concepts

    SciTech Connect

    Bruhwiler, David L.; Cary, John R.; Cowan, Benjamin M.; Paul, Kevin; Mullowney, Paul J.; Messmer, Peter; Geddes, Cameron G. R.; Esarey, Eric; Cormier-Michel, Estelle; Leemans, Wim; Vay, Jean-Luc

    2009-01-22

    Improved computational methods are essential to the diverse and rapidly developing field of advanced accelerator concepts. We present an overview of some computational algorithms for laser-plasma concepts and high-brightness photocathode electron sources. In particular, we discuss algorithms for reduced laser-plasma models that can be orders of magnitude faster than their higher-fidelity counterparts, as well as important on-going efforts to include relevant additional physics that has been previously neglected. As an example of the former, we present 2D laser wakefield accelerator simulations in an optimal Lorentz frame, demonstrating >10 GeV energy gain of externally injected electrons over a 2 m interaction length, showing good agreement with predictions from scaled simulations and theory, with a speedup factor of ~2,000 as compared to standard particle-in-cell.

  15. New Developments in the Simulation of Advanced Accelerator Concepts

    SciTech Connect

    Paul, K.; Cary, J.R.; Cowan, B.; Bruhwiler, D.L.; Geddes, C.G.R.; Mullowney, P.J.; Messmer, P.; Esarey, E.; Cormier-Michel, E.; Leemans, W.P.; Vay, J.-L.

    2008-09-10

    Improved computational methods are essential to the diverse and rapidly developing field of advanced accelerator concepts. We present an overview of some computational algorithms for laser-plasma concepts and high-brightness photocathode electron sources. In particular, we discuss algorithms for reduced laser-plasma models that can be orders of magnitude faster than their higher-fidelity counterparts, as well as important on-going efforts to include relevant additional physics that has been previously neglected. As an example of the former, we present 2D laser wakefield accelerator simulations in an optimal Lorentz frame, demonstrating >10 GeV energy gain of externally injected electrons over a 2 m interaction length, showing good agreement with predictions from scaled simulations and theory, with a speedup factor of ~2,000 as compared to standard particle-in-cell.

  16. Advanced Source Deconvolution Methods for Compton Telescopes

    NASA Astrophysics Data System (ADS)

    Zoglauer, Andreas

    The next generation of space telescopes utilizing Compton scattering for astrophysical observations is destined to one day unravel the mysteries behind Galactic nucleosynthesis, to determine the origin of the positron annihilation excess near the Galactic center, and to uncover the hidden emission mechanisms behind gamma-ray bursts. Besides astrophysics, Compton telescopes are establishing themselves in heliophysics, planetary sciences, medical imaging, accelerator physics, and environmental monitoring. Since the COMPTEL days, great advances in the achievable energy and position resolution have been made, creating an extremely vast, but also extremely sparsely sampled, data space. Unfortunately, the optimum way to analyze the data from the next generation of Compton telescopes has not yet been found: one which retrieves all source parameters (location, spectrum, polarization, flux) and achieves the best possible resolution and sensitivity at the same time. This is especially important for all science objectives looking at the inner Galaxy: the large number of expected sources, the high background (internal and Galactic diffuse emission), and the limited angular resolution make it the most taxing case for data analysis. In general, two key challenges exist: First, what are the best data space representations to answer the specific science questions? Second, what is the best way to deconvolve the data to fully retrieve the source parameters? For modern Compton telescopes, the existing data space representations can either correctly reconstruct the absolute flux (binned mode) or achieve the best possible resolution (list mode), but not both together. Here we propose to develop a two-stage hybrid reconstruction method which combines the best aspects of both. 
Using a proof-of-concept implementation we can for the first time show that it is possible to alternate during each deconvolution step between a binned-mode approach to get the flux right and a
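A standard building block behind this kind of deconvolution is the ML-EM (Richardson-Lucy) iteration commonly used for binned-mode data. The sketch below uses a tiny synthetic response matrix and flux vector; it is only a generic illustration of the iteration, not the proposed two-stage hybrid method:

```python
import numpy as np

# Synthetic imaging response: R[i, j] = probability that a photon from
# sky pixel j is recorded in data-space bin i (columns sum to 1).
R = np.array([[0.7, 0.2],
              [0.2, 0.6],
              [0.1, 0.2]])
f_true = np.array([5.0, 2.0])   # true source fluxes per pixel (made up)
d = R @ f_true                  # noise-free binned data

# ML-EM (Richardson-Lucy): multiplicative update that preserves
# positivity and, for Poisson data, increases the likelihood each step.
f = np.ones_like(f_true)        # flat starting guess
sensitivity = R.sum(axis=0)     # R^T 1
for _ in range(2000):
    f *= (R.T @ (d / (R @ f))) / sensitivity

print(f)  # approaches f_true for this well-conditioned toy response
```

Note the update is purely multiplicative, so a pixel started at zero stays at zero; the flat positive start avoids that.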

  17. Advanced radioactive waste assay methods: Final report

    SciTech Connect

    Cline, J.E.; Robertson, D.E.; DeGroot, S.E.

    1987-11-01

    This report describes an evaluation of advanced methodologies for the radioassay of power-plant low-level radioactive waste for compliance with the 10CFR61 classification rules. The project evaluated current assay practices in ten operating plants and identified areas where advanced methods would apply, studied two direct-assay methodologies, demonstrated these two techniques on radwaste in four operating plants and on irradiated components in two plants, and developed techniques for obtaining small representative aliquots from larger samples and for enhancing the ¹⁴⁴Ce activity analysis in samples of waste. The study demonstrated the accuracy, practicality, and ALARA aspects of advanced methods and indicates that cost savings resulting from the accuracy improvement and reduction in sampling requirements can be significant. 24 refs., 60 figs., 67 tabs.

  18. Advanced analysis methods in particle physics

    SciTech Connect

    Bhat, Pushpalatha C.; /Fermilab

    2010-10-01

    Each generation of high energy physics experiments is grander in scale than the previous - more powerful, more complex and more demanding in terms of data handling and analysis. The spectacular performance of the Tevatron and the beginning of operations of the Large Hadron Collider, have placed us at the threshold of a new era in particle physics. The discovery of the Higgs boson or another agent of electroweak symmetry breaking and evidence of new physics may be just around the corner. The greatest challenge in these pursuits is to extract the extremely rare signals, if any, from huge backgrounds arising from known physics processes. The use of advanced analysis techniques is crucial in achieving this goal. In this review, I discuss the concepts of optimal analysis, some important advanced analysis methods and a few examples. The judicious use of these advanced methods should enable new discoveries and produce results with better precision, robustness and clarity.

  19. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1988-01-01

    The development of parametric cost estimating methods for advanced space systems in the conceptual design phase is discussed. The process of identifying variables which drive cost and the relationship between weight and cost are examined. A theoretical model of cost is developed and tested using a historical data base of research and development projects.
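A common form of the weight-cost relationship mentioned above is a power-law cost estimating relationship (CER), cost = a * weight^b, fitted by least squares in log space. The sketch below uses synthetic data with assumed parameters, not the historical database from the report:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "historical database": subsystem weights (kg) and costs ($M)
# generated from cost = a * weight^b with lognormal scatter (illustrative only).
a_true, b_true = 0.5, 0.8
weight = rng.uniform(100.0, 5000.0, size=40)
cost = a_true * weight**b_true * rng.lognormal(sigma=0.1, size=40)

# Fit the CER in log space: ln(cost) = ln(a) + b * ln(weight)
b_hat, ln_a_hat = np.polyfit(np.log(weight), np.log(cost), 1)
a_hat = np.exp(ln_a_hat)
print(a_hat, b_hat)              # close to the generating parameters

# Point estimate for a new concept at an assumed 800 kg
print(a_hat * 800.0**b_hat)
```

The log transform turns the multiplicative scatter typical of cost data into additive noise, which is why CERs are usually fitted this way.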

  20. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g. neutronics with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors, achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts in adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. The reduced subspace is then used to solve realistic, large scale forward (UQ) and inverse (DA and TAA) problems. Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL
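The subspace idea sketched in this abstract (find the influential degrees of freedom, then work in the reduced space) can be illustrated with a minimal proper orthogonal decomposition (POD/KL) example via the snapshot SVD. This is a generic sketch on made-up data, not the dissertation's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Snapshot matrix: each column is one model response (e.g. a flux solution)
# for a random parameter sample.  Built here so the responses live in a
# 3-dimensional subspace of the 50-dimensional state space.
basis_true = rng.standard_normal((50, 3))
snapshots = basis_true @ rng.standard_normal((3, 200))

# Karhunen-Loeve / POD basis: left singular vectors ranked by singular value
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
rank = int(np.sum(s > 1e-10 * s[0]))   # effective dimension of the data
Ur = U[:, :rank]                       # reduced basis (influential DoF)

# Any new response from the same model is captured by the reduced basis:
y = basis_true @ rng.standard_normal(3)
err = np.linalg.norm(y - Ur @ (Ur.T @ y)) / np.linalg.norm(y)
print(rank, err)   # rank recovers 3; projection error near machine precision
```

Downstream UQ or data assimilation then operates on the `rank` coefficients `Ur.T @ y` instead of the full 50-dimensional state.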

  1. Simulator design for advanced ISDN satellite design and experiments

    NASA Technical Reports Server (NTRS)

    Pepin, Gerald R.

    1992-01-01

    This simulation design task completion report documents the simulation techniques associated with the network models of both the Interim Service ISDN (integrated services digital network) Satellite (ISIS) and the Full Service ISDN Satellite (FSIS) architectures. The ISIS network model design represents satellite systems like the Advanced Communication Technology Satellite (ACTS) orbiting switch. The FSIS architecture, the ultimate aim of this element of the Satellite Communications Applications Research (SCAR) program, moves all control and switching functions on board the next-generation ISDN communication satellite. The technical and operational parameters for the advanced ISDN communications satellite design will be obtained from the simulation of ISIS and FSIS engineering software models for their major subsystems. Discrete-event simulation experiments will be performed with these models using various traffic scenarios, design parameters, and operational procedures. The data from these simulations will be used to determine the engineering parameters for the advanced ISDN communications satellite.

  2. Enhanced Capabilities of Advanced Airborne Radar Simulation.

    DTIC Science & Technology

    1996-01-01

    Results on the UNIX-based machine BAUHAUS are provided to illustrate the enhancements in run time, as compared to the original version of the simulation [1]. Figure 27 presents some CPU run times for executing the enhanced simulation on the RCF UNIX-based machine BAUHAUS. The run times are shown only for

  3. Predicting Performance in Technical Preclinical Dental Courses Using Advanced Simulation.

    PubMed

    Gottlieb, Riki; Baechle, Mary A; Janus, Charles; Lanning, Sharon K

    2017-01-01

    The aim of this study was to investigate whether advanced simulation parameters, such as simulation exam scores, number of student self-evaluations, time to complete the simulation, and time to complete self-evaluations, served as predictors of dental students' preclinical performance. Students from three consecutive classes (n=282) at one U.S. dental school completed advanced simulation training and exams within the first four months of their dental curriculum. The students then completed conventional preclinical instruction and exams in operative dentistry (OD) and fixed prosthodontics (FP) courses, taken during the first and second years of dental school, respectively. Two advanced simulation exam scores (ASES1 and ASES2) were tested as predictors of performance in the two preclinical courses based on final course grades. ASES1 and ASES2 were found to be predictors of OD and FP preclinical course grades. Other advanced simulation parameters were not significantly related to grades in the preclinical courses. These results highlight the value of an early psychomotor skills assessment in dentistry. Advanced simulation scores may allow early intervention in students' learning process and assist in efficient allocation of resources such as faculty coverage and tutor assignment.

  4. Molecular dynamics simulations: advances and applications

    PubMed Central

    Hospital, Adam; Goñi, Josep Ramon; Orozco, Modesto; Gelpí, Josep L

    2015-01-01

    Molecular dynamics simulations have evolved into a mature technique that can be used effectively to understand macromolecular structure-to-function relationships. Present simulation times are close to biologically relevant ones. Information gathered about the dynamic properties of macromolecules is rich enough to shift the usual paradigm of structural bioinformatics from studying single structures to analyzing conformational ensembles. Here, we describe the foundations of molecular dynamics and the improvements made toward obtaining such ensembles. Specific application of the technique to three main issues (allosteric regulation, docking, and structure refinement) is discussed. PMID:26604800
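The foundations mentioned above reduce to integrating Newton's equations of motion in a time-stepping loop. A minimal velocity-Verlet sketch for a 1D harmonic oscillator, the textbook scheme underlying most MD engines (purely illustrative, not tied to any particular package):

```python
# Velocity-Verlet integration of a 1D harmonic oscillator (m = k = 1),
# the standard symplectic integrator used by most MD codes.
def force(x):
    return -x

x, v, dt = 1.0, 0.0, 0.01
f = force(x)
energies = []
for _ in range(10000):
    x += v * dt + 0.5 * f * dt * dt   # position update
    f_new = force(x)                  # force at new position
    v += 0.5 * (f + f_new) * dt       # velocity update with averaged force
    f = f_new
    energies.append(0.5 * v * v + 0.5 * x * x)

# Symplectic schemes show bounded energy oscillation, not secular drift
print(max(energies) - min(energies))  # small relative to E = 0.5
```

The bounded energy error over long runs is the reason this integrator, rather than a generic high-order scheme, dominates MD practice.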

  5. Advanced Simulation and Computing Business Plan

    SciTech Connect

    Rummel, E.

    2015-07-09

    To maintain a credible nuclear weapons program, the National Nuclear Security Administration’s (NNSA’s) Office of Defense Programs (DP) needs to make certain that the capabilities, tools, and expert staff are in place and are able to deliver validated assessments. This requires a complete and robust simulation environment backed by an experimental program to test ASC Program models. This ASC Business Plan document encapsulates a complex set of elements, each of which is essential to the success of the simulation component of the Nuclear Security Enterprise. The ASC Business Plan addresses the hiring, mentoring, and retaining of programmatic technical staff responsible for building the simulation tools of the nuclear security complex. The ASC Business Plan describes how the ASC Program engages with industry partners—partners upon whom the ASC Program relies for today’s and tomorrow’s high performance architectures. Each piece in this chain is essential to assure policymakers, who must make decisions based on the results of simulations, that they are receiving all the actionable information they need.

  6. Simulation Toolkit for Renewable Energy Advanced Materials Modeling

    SciTech Connect

    Sides, Scott; Kemper, Travis; Larsen, Ross; Graf, Peter

    2013-11-13

    STREAMM is a collection of python classes and scripts that enables and eases the setup of input files and configuration files for simulations of advanced energy materials. The core STREAMM python classes provide a general framework for storing, manipulating and analyzing atomic/molecular coordinates to be used in quantum chemistry and classical molecular dynamics simulations of soft materials systems. The design focuses on enabling the interoperability of materials simulation codes such as GROMACS, LAMMPS and Gaussian.
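As a rough illustration of the kind of design the abstract describes (python classes for storing, manipulating, and exporting atomic coordinates), the sketch below defines a small coordinate-container class. The class and method names are hypothetical, invented for this sketch; they are not the actual STREAMM API:

```python
# Hypothetical container class in the spirit the abstract describes;
# names are illustrative only and NOT the real STREAMM API.
class StructureContainer:
    def __init__(self):
        self.symbols = []
        self.positions = []          # list of (x, y, z) tuples, Angstroms

    def add_atom(self, symbol, x, y, z):
        self.symbols.append(symbol)
        self.positions.append((x, y, z))

    def shift(self, dx, dy, dz):
        """Rigid translation, the sort of manipulation done before
        handing coordinates to an MD or quantum-chemistry code."""
        self.positions = [(x + dx, y + dy, z + dz)
                          for (x, y, z) in self.positions]

    def to_xyz(self, comment=""):
        """Export in the plain XYZ format read by many simulation tools."""
        lines = [str(len(self.symbols)), comment]
        lines += ["%-2s %12.6f %12.6f %12.6f" % (s, x, y, z)
                  for s, (x, y, z) in zip(self.symbols, self.positions)]
        return "\n".join(lines)

water = StructureContainer()
water.add_atom("O", 0.000, 0.000, 0.000)
water.add_atom("H", 0.757, 0.586, 0.000)
water.add_atom("H", -0.757, 0.586, 0.000)
water.shift(5.0, 5.0, 5.0)
print(water.to_xyz("water, shifted toward box center"))
```

A single neutral representation like this, exported to each code's native format, is what makes GROMACS/LAMMPS/Gaussian interoperability tractable.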

  7. Advances in NLTE Modeling for Integrated Simulations

    SciTech Connect

    Scott, H A; Hansen, S B

    2009-07-08

    The last few years have seen significant progress in constructing the atomic models required for non-local thermodynamic equilibrium (NLTE) simulations. Along with this has come an increased understanding of the requirements for accurately modeling the ionization balance, energy content and radiative properties of different elements for a wide range of densities and temperatures. Much of this progress is the result of a series of workshops dedicated to comparing the results from different codes and computational approaches applied to a series of test problems. The results of these workshops emphasized the importance of atomic model completeness, especially in doubly excited states and autoionization transitions, to calculating ionization balance, and the importance of accurate, detailed atomic data to producing reliable spectra. We describe a simple screened-hydrogenic model that calculates NLTE ionization balance with surprising accuracy, at a low enough computational cost for routine use in radiation-hydrodynamics codes. The model incorporates term splitting, Δn = 0 transitions, and approximate UTA widths for spectral calculations, with results comparable to those of much more detailed codes. Simulations done with this model have been increasingly successful at matching experimental data for laser-driven systems and hohlraums. Accurate and efficient atomic models are just one requirement for integrated NLTE simulations. Coupling the atomic kinetics to hydrodynamics and radiation transport constrains both discretizations and algorithms to retain energy conservation, accuracy and stability. In particular, the strong coupling between radiation and populations can require either very short timesteps or significantly modified radiation transport algorithms to account for NLTE material response. Considerations such as these continue to provide challenges for NLTE simulations.
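The NLTE ionization balance discussed above comes from solving collisional-radiative rate equations rather than assuming LTE populations. A minimal steady-state sketch for a made-up three-state atom follows; the rates are illustrative placeholders, not real atomic data, and this is not the screened-hydrogenic model itself:

```python
import numpy as np

# Toy collisional-radiative steady state for a 3-state atom:
# dn/dt = A n = 0 with populations normalized to 1.  Off-diagonal
# rates (s^-1) are illustrative placeholders, not real atomic data.
R = np.array([[0.0, 2.0, 0.1],    # R[i, j]: transition rate j -> i
              [5.0, 0.0, 1.0],
              [0.5, 3.0, 0.0]])
A = R - np.diag(R.sum(axis=0))    # diagonal carries total loss from each state

# A is singular (its columns sum to zero), so replace one rate equation
# with the normalization constraint sum(n) = 1
M = A.copy()
M[-1, :] = 1.0
b = np.zeros(3)
b[-1] = 1.0
n = np.linalg.solve(M, b)
print(n)                          # steady-state (NLTE) populations
```

In a radiation-hydrodynamics code this small solve is repeated in every zone at every step, which is why cheap-but-accurate atomic models matter so much.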

  8. Editorial: Latest methods and advances in biotechnology.

    PubMed

    Lee, Sang Yup; Jungbauer, Alois

    2014-01-01

    The latest "Biotech Methods and Advances" special issue of Biotechnology Journal continues the BTJ tradition of featuring the latest breakthroughs in biotechnology. The special issue is edited by our Editors-in-Chief, Prof. Sang Yup Lee and Prof. Alois Jungbauer, and covers a wide array of topics in biotechnology, including the perennial favorite workhorses of the biotech industry, Chinese hamster ovary (CHO) cells and Escherichia coli.

  9. Process simulation for advanced composites production

    SciTech Connect

    Allendorf, M.D.; Ferko, S.M.; Griffiths, S.

    1997-04-01

    The objective of this project is to improve the efficiency and lower the cost of chemical vapor deposition (CVD) processes used to manufacture advanced ceramics by providing the physical and chemical understanding necessary to optimize and control these processes. Project deliverables include: numerical process models; databases of thermodynamic and kinetic information related to the deposition process; and process sensors and software algorithms that can be used for process control. Target manufacturing techniques include CVD fiber coating technologies (used to deposit interfacial coatings on continuous fiber ceramic preforms), chemical vapor infiltration, thin-film deposition processes used in the glass industry, and coating techniques used to deposit wear-, abrasion-, and corrosion-resistant coatings for use in the pulp and paper, metals processing, and aluminum industries.

  10. Interoperable Technologies for Advanced Petascale Simulations

    SciTech Connect

    Li, Xiaolin

    2013-01-14

    Our final report on the accomplishments of ITAPS at Stony Brook during the period covered by the research award includes component service, interface service and applications. On the component service, we have designed and implemented a robust functionality for the Lagrangian tracking of dynamic interface. We have migrated the hyperbolic, parabolic and elliptic solver from stage-wise second order toward global second order schemes. We have implemented high order coupling between interface propagation and interior PDE solvers. On the interface service, we have constructed the FronTier application programmer's interface (API) and its manual page using doxygen. We installed the FronTier functional interface to conform with the ITAPS specifications, especially the iMesh and iMeshP interfaces. On applications, we have implemented deposition and dissolution models with flow and implemented the two-reactant model for a more realistic precipitation at the pore level and its coupling with the Darcy level model. We have continued our support to the study of the fluid mixing problem in inertial confinement fusion. We have continued our support to the MHD model and its application to plasma liner implosion in fusion confinement. We have simulated a step in the reprocessing and separation of spent fuels from nuclear power plant fuel rods. We have implemented the fluid-structure interaction for 3D windmill and parachute simulations. We have continued our collaboration with PNNL, BNL, LANL, ORNL, and other SciDAC institutions.

  11. Advanced Bayesian Method for Planetary Surface Navigation

    NASA Technical Reports Server (NTRS)

    Center, Julian

    2015-01-01

    Autonomous Exploration, Inc., has developed an advanced Bayesian statistical inference method that leverages current computing technology to produce a highly accurate surface navigation system. The method combines dense stereo vision and high-speed optical flow to implement visual odometry (VO) to track faster rover movements. The Bayesian VO technique improves performance by using all image information rather than corner features only. The method determines what can be learned from each image pixel and weighs the information accordingly. This capability improves performance in shadowed areas that yield only low-contrast images. The error characteristics of the visual processing are complementary to those of a low-cost inertial measurement unit (IMU), so the combination of the two capabilities provides highly accurate navigation. The method increases NASA mission productivity by enabling faster rover speed and accuracy. On Earth, the technology will permit operation of robots and autonomous vehicles in areas where the Global Positioning System (GPS) is degraded or unavailable.
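The complementary error characteristics mentioned above can be illustrated with the simplest Gaussian case: precision-weighted (inverse-variance) fusion of two independent position estimates. The numbers are illustrative, and this sketch is far simpler than the full Bayesian VO/IMU system described:

```python
# Minimal Bayesian fusion of two independent Gaussian position estimates,
# e.g. a visual-odometry fix and an IMU dead-reckoning estimate.
# All numbers are illustrative.
def fuse(x1, var1, x2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)   # precision-weighted mean
    var = 1.0 / (w1 + w2)                 # fused variance (always smaller)
    return x, var

x_vo, var_vo = 10.2, 0.04     # visual-odometry position estimate (m)
x_imu, var_imu = 10.8, 0.36   # IMU-propagated estimate (m)
x, var = fuse(x_vo, var_vo, x_imu, var_imu)
print(x, var)  # the fused estimate lies nearer the lower-variance input
```

Because the fused variance is below either input's, combining a noisy-but-unbiased sensor with a drifting-but-smooth one always helps under these Gaussian assumptions; a full system would do this recursively in a Kalman-style filter.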

  12. Advanced electromagnetic methods for aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Balanis, Constantine A.; El-Sharawy, El-Budawy; Hashemi-Yeganeh, Shahrokh; Aberle, James T.; Birtcher, Craig R.

    1991-01-01

    The Advanced Helicopter Electromagnetics program is centered on issues that advance technology related to helicopter electromagnetics. Progress was made on three major topics: composite materials; precipitation static corona discharge; and antenna technology. In composite materials, the research has focused on the measurement of their electrical properties, and the modeling of material discontinuities and their effect on the radiation pattern of antennas mounted on or near material surfaces. The electrical properties were used to model antenna performance when mounted on composite materials. Since helicopter platforms include several antenna systems at VHF and UHF bands, measuring techniques are being explored that can be used to measure the properties at these bands. The effort on corona discharge and precipitation static was directed toward the development of a new two-dimensional Voltage Finite Difference Time Domain computer program. Results indicate the feasibility of using potentials for simulating electromagnetic problems in the cases where potentials become primary sources. In antenna technology the focus was on Polarization Diverse Conformal Microstrip Antennas, Cavity Backed Slot Antennas, and Varactor Tuned Circular Patch Antennas. Numerical codes were developed for the analysis of two probe fed rectangular and circular microstrip patch antennas fed by resistive and reactive power divider networks.

  13. Brush seal numerical simulation: Concepts and advances

    NASA Technical Reports Server (NTRS)

    Braun, M. J.; Kudriavtsev, V. V.

    1994-01-01

    The development of the brush seal is considered to be most promising among the advanced seal types presently in use in high-speed turbomachinery. The brush is usually mounted on the stationary portions of the engine and has direct contact with the rotating element, in the process limiting the 'unwanted' leakage flows between stages or various engine cavities. This type of sealing technology provides high pressure drops (in comparison with conventional seals) due mainly to the high packing density (around 100 bristles/sq mm) and brush compliance with the rotor motions. In the design of modern aerospace turbomachinery, leakage flows between the stages must be minimal, thus contributing to the higher efficiency of the engine. Use of the brush seal instead of the labyrinth seal reduces the leakage flow by one order of magnitude. Brush seals also have been found to enhance dynamic performance, cost less, and weigh less than labyrinth seals. Even though industrial brush seals have been successfully developed through extensive experimentation, there is no comprehensive numerical methodology for the design or prediction of their performance. The existing analytical/numerical approaches are based on bulk flow models and do not allow investigation of the effects of brush morphology (bristle arrangement), or brush arrangement (number of brushes, spacing between them), on the pressure drops and flow leakage. Increasing brush seal efficiency is clearly a complex problem that is closely related to the brush geometry and arrangement, and can most likely be solved only by means of a numerically distributed model.

  14. Enabling Advanced Modeling and Simulations for Fuel-Flexible Combustors

    SciTech Connect

    Pitsch, Heinz

    2010-05-31

    The overall goal of the present project is to enable advanced modeling and simulations for the design and optimization of fuel-flexible turbine combustors. For this purpose we use a high fidelity, extensively-tested large-eddy simulation (LES) code and state-of-the-art models for premixed/partially-premixed turbulent combustion developed in the PI's group. In the frame of the present project, these techniques are applied, assessed, and improved for hydrogen enriched premixed and partially premixed gas-turbine combustion. Our innovative approaches include a completely consistent description of flame propagation; a coupled progress variable/level set method to resolve the detailed flame structure, and incorporation of thermal-diffusion (non-unity Lewis number) effects. In addition, we have developed a general flamelet-type transformation holding in the limits of both non-premixed and premixed burning. As a result, a model for partially premixed combustion has been derived. The coupled progress variable/level method and the general flamelet transformation were validated by LES of a lean-premixed low-swirl burner that has been studied experimentally at Lawrence Berkeley National Laboratory. The model is extended to include the non-unity Lewis number effects, which play a critical role in fuel-flexible combustor with high hydrogen content fuel. More specifically, a two-scalar model for lean hydrogen and hydrogen-enriched combustion is developed and validated against experimental and direct numerical simulation (DNS) data. Results are presented to emphasize the importance of non-unity Lewis number effects in the lean-premixed low-swirl burner of interest in this project. The proposed model gives improved results, which shows that the inclusion of the non-unity Lewis number effects is essential for accurate prediction of the lean-premixed low-swirl flame.

  15. Enabling Advanced Modeling and Simulations for Fuel-Flexible Combustors

    SciTech Connect

    Heinz Pitsch

    2010-05-31

    The overall goal of the present project is to enable advanced modeling and simulations for the design and optimization of fuel-flexible turbine combustors. For this purpose we use a high-fidelity, extensively-tested large-eddy simulation (LES) code and state-of-the-art models for premixed/partially-premixed turbulent combustion developed in the PI's group. In the frame of the present project, these techniques are applied, assessed, and improved for hydrogen enriched premixed and partially premixed gas-turbine combustion. Our innovative approaches include a completely consistent description of flame propagation, a coupled progress variable/level set method to resolve the detailed flame structure, and incorporation of thermal-diffusion (non-unity Lewis number) effects. In addition, we have developed a general flamelet-type transformation holding in the limits of both non-premixed and premixed burning. As a result, a model for partially premixed combustion has been derived. The coupled progress variable/level method and the general flamelet transformation were validated by LES of a lean-premixed low-swirl burner that has been studied experimentally at Lawrence Berkeley National Laboratory. The model is extended to include the non-unity Lewis number effects, which play a critical role in fuel-flexible combustor with high hydrogen content fuel. More specifically, a two-scalar model for lean hydrogen and hydrogen-enriched combustion is developed and validated against experimental and direct numerical simulation (DNS) data. Results are presented to emphasize the importance of non-unity Lewis number effects in the lean-premixed low-swirl burner of interest in this project. The proposed model gives improved results, which shows that the inclusion of the non-unity Lewis number effects is essential for accurate prediction of the lean-premixed low-swirl flame.

  16. Time parallelization of advanced operation scenario simulations of ITER plasma

    SciTech Connect

    Samaddar, D.; Casper, T. A.; Kim, S. H.; Berry, Lee A; Elwasif, Wael R; Batchelor, Donald B; Houlberg, Wayne A

    2013-01-01

    This work demonstrates that simulations of advanced burning plasma operation scenarios can be successfully parallelized in time using the parareal algorithm. CORSICA, an advanced operation scenario code for tokamak plasmas, is used as a test case. This is a unique application, since the parareal algorithm has so far been applied only to much simpler systems, with the exception of turbulence. In the present application, a computational gain of an order of magnitude has been achieved, which is extremely promising. A successful implementation of the parareal algorithm in codes like CORSICA ushers in the possibility of time-efficient simulations of ITER plasmas.
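The parareal idea (a cheap serial coarse sweep, corrected by fine propagations that are independent across time slices and hence parallelizable) can be sketched on a scalar ODE. This toy, which uses explicit Euler for both propagators and illustrative step counts, is of course far simpler than CORSICA:

```python
import numpy as np

def euler(y0, t0, t1, nsteps, lam):
    # simple explicit-Euler propagator for dy/dt = lam * y
    dt = (t1 - t0) / nsteps
    y = y0
    for _ in range(nsteps):
        y += dt * lam * y
    return y

def parareal(y0, lam, T, n_slices, n_iter, coarse_steps=1, fine_steps=256):
    t = np.linspace(0.0, T, n_slices + 1)
    U = np.empty(n_slices + 1)
    U[0] = y0
    # initial serial sweep with the cheap coarse propagator
    for n in range(n_slices):
        U[n + 1] = euler(U[n], t[n], t[n + 1], coarse_steps, lam)
    for _ in range(n_iter):
        # the expensive fine propagations are independent per slice,
        # hence parallel in time on a real machine
        F = [euler(U[n], t[n], t[n + 1], fine_steps, lam) for n in range(n_slices)]
        G = [euler(U[n], t[n], t[n + 1], coarse_steps, lam) for n in range(n_slices)]
        for n in range(n_slices):   # serial coarse correction sweep
            g_new = euler(U[n], t[n], t[n + 1], coarse_steps, lam)
            U[n + 1] = g_new + F[n] - G[n]
    return t, U

t, U = parareal(y0=1.0, lam=-2.0, T=1.0, n_slices=8, n_iter=8)
print(U[-1])   # matches the serial fine solution after n_slices iterations
```

Speedup in practice comes from stopping after far fewer iterations than slices, once the correction has converged to the fine solution's accuracy.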

  17. Advanced Waveform Simulation for Seismic Monitoring

    DTIC Science & Technology

    2008-09-01

    velocity model. The method separates the main arrivals of the regional waveform into 5 windows: Pnl (vertical and radial components), Rayleigh (vertical and...ranges out to 10°, including extensive observations of crustal thinning and thickening and various Pnl complexities. Broadband modeling in 1D, 2D...existing models perform in predicting the various regional phases, Rayleigh waves, Love waves, and Pnl waves. Previous events from this Basin-and-Range

  18. The IDA Advanced Technology Combat Simulation Project

    DTIC Science & Technology

    1990-09-01

    This paper was prepared as part of IDA Project 9000-623 under the IDA Central Research Program. ...benefit from the use of these methods.

  19. An efficient time advancing strategy for energy-preserving simulations

    NASA Astrophysics Data System (ADS)

    Capuano, F.; Coppola, G.; de Luca, L.

    2015-08-01

    Energy-conserving numerical methods are widely employed within the broad area of convection-dominated systems. Semi-discrete conservation of energy is usually obtained by adopting the so-called skew-symmetric splitting of the non-linear convective term, defined as a suitable average of the divergence and advective forms. Although generally allowing global conservation of kinetic energy, it has the drawback of being roughly twice as expensive as standard divergence or advective forms alone. In this paper, a general theoretical framework has been developed to derive an efficient time-advancement strategy in the context of explicit Runge-Kutta schemes. The novel technique retains the conservation properties of skew-symmetric-based discretizations at a reduced computational cost. It is found that optimal energy conservation can be achieved by properly constructed Runge-Kutta methods in which only divergence and advective forms for the convective term are used. As a consequence, a considerable improvement in computational efficiency over existing practices is achieved. The overall procedure has proved to be able to produce new schemes with a specified order of accuracy on both solution and energy. The effectiveness of the method as well as the asymptotic behavior of the schemes is demonstrated by numerical simulation of Burgers' equation.
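The skew-symmetric splitting described above can be sketched directly on Burgers' equation, the paper's own test problem. One caveat on the weighting: for Burgers the energy-conserving average is 2/3 divergence form plus 1/3 advective form (the classical "1/3 rule"). The sketch below uses periodic central differences and classical RK4 in time; it is a generic illustration of the conservation property, not the authors' optimized Runge-Kutta schemes:

```python
import numpy as np

N = 64
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx = x[1] - x[0]

def D(u):
    # periodic second-order central difference (a skew-symmetric operator)
    return (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

def rhs(u, form):
    if form == "divergence":
        return -D(0.5 * u * u)
    if form == "advective":
        return -u * D(u)
    # skew-symmetric splitting: for Burgers the energy-conserving average
    # is 2/3 divergence + 1/3 advective (the classical "1/3 rule")
    return -(D(u * u) + u * D(u)) / 3.0

def rk4_step(u, dt, form):
    k1 = rhs(u, form)
    k2 = rhs(u + 0.5 * dt * k1, form)
    k3 = rhs(u + 0.5 * dt * k2, form)
    k4 = rhs(u + dt * k3, form)
    return u + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

def energy(u):
    return 0.5 * np.sum(u * u) * dx

drift = {}
for form in ("divergence", "advective", "skew"):
    u = np.sin(x)
    E0 = energy(u)
    for _ in range(500):              # integrate to t = 0.5, before shock-up
        u = rk4_step(u, 1.0e-3, form)
    drift[form] = abs(energy(u) - E0) / E0
print(drift)  # the skew-symmetric form shows the smallest energy drift
```

The skew form conserves energy exactly in the semi-discrete sense, so its remaining drift is purely the RK4 time-integration error that the paper's optimized schemes are designed to reduce.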

  20. Interoperable Technologies for Advanced Petascale Simulations (ITAPS)

    SciTech Connect

    Shephard, Mark S

    2010-02-05

    Efforts during the past year have contributed to the continued development of the ITAPS interfaces and services as well as specific efforts to support ITAPS applications. The ITAPS interface efforts have two components. The first is working with the ITAPS team on improving the ITAPS software infrastructure and the level of compliance of our implementations of the ITAPS interfaces (iMesh, iMeshP, iRel and iGeom). The second is involvement in the discussions on the design of the iField fields interface. Efforts to move the ITAPS technologies to petascale computers have identified a number of key technical developments that are required to effectively execute the ITAPS interfaces and services. Research to address these parallel method developments has been a major emphasis of the RPI team's efforts over the past year. The development of parallel unstructured mesh methods has considered the need to scale unstructured mesh solves to massively parallel computers. These efforts, summarized in section 2.1, show that with the addition of the ITAPS procedures described in sections 2.2 and 2.3 we are able to obtain excellent strong scaling with our unstructured mesh CFD code on up to 294,912 cores of IBM Blue Gene/P, the highest-core-count machine available. The ITAPS developments that have contributed to the scaling and performance of PHASTA include an iterative migration algorithm to improve the combined region and vertex balance of the mesh partition, which increases scalability, and mesh data reordering, which improves computational performance. The other developments are associated with the further development of the ITAPS parallel unstructured mesh

  1. Advances in beryllium powder consolidation simulation

    SciTech Connect

    Reardon, B.J.

    1998-12-01

    A fuzzy logic based multiobjective genetic algorithm (GA) is introduced and the algorithm is used to optimize micromechanical densification modeling parameters for warm isopressed beryllium powder, HIPed copper powder and CIPed/sintered and HIPed tantalum powder. In addition to optimizing the main model parameters using the experimental data points as objective functions, the GA provides a quantitative measure of the sensitivity of the model to each parameter, estimates the mean particle size of the powder, and determines the smoothing factors for the transition between stage 1 and stage 2 densification. While the GA does not provide a sensitivity analysis in the strictest sense, and is highly stochastic in nature, this method is reliable and reproducible in optimizing parameters given any size data set and determining the impact on the model of slight variations in each parameter.
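
The abstract's approach, optimizing model parameters against experimental data points with a GA, can be sketched generically. The snippet below is an illustrative toy rather than Reardon's fuzzy-logic multiobjective GA: it fits two parameters of a made-up densification-style curve to synthetic data with a minimal real-coded GA (tournament selection, blend crossover, Gaussian mutation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a densification curve: relative density vs. pressure,
# governed by two parameters (k, n).  The real micromechanical model and
# its parameter set are far richer; this is only a sketch.
def model(p, k, n):
    return 1.0 - 0.4*np.exp(-k * p**n)

p_data = np.linspace(0.1, 2.0, 20)
d_data = model(p_data, 1.5, 1.2)           # synthetic "experimental" points

def fitness(ind):                          # sum-of-squares misfit (lower is better)
    k, n = ind
    return np.sum((model(p_data, k, n) - d_data)**2)

lo, hi = np.array([0.1, 0.5]), np.array([5.0, 3.0])
pop = rng.uniform(lo, hi, size=(40, 2))
for gen in range(60):
    fit = np.array([fitness(ind) for ind in pop])
    new = [pop[np.argmin(fit)]]            # elitism: keep the best individual
    while len(new) < len(pop):
        i, j = rng.integers(0, len(pop), 2)
        a = pop[i] if fit[i] < fit[j] else pop[j]   # tournament pick 1
        i, j = rng.integers(0, len(pop), 2)
        b = pop[i] if fit[i] < fit[j] else pop[j]   # tournament pick 2
        w = rng.uniform(size=2)
        child = w*a + (1 - w)*b            # blend crossover
        child += rng.normal(0, 0.05, 2)    # Gaussian mutation
        new.append(np.clip(child, lo, hi))
    pop = np.array(new)

best = pop[np.argmin([fitness(ind) for ind in pop])]
print(best, fitness(best))
```

The paper's GA additionally weighs several objectives through fuzzy-logic membership functions and uses the spread of good solutions as a sensitivity indicator; this skeleton only shows the parameter-optimization core.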

  2. Use advanced methods to treat wastewater

    SciTech Connect

    Davis, M. )

    1994-08-01

    Common sense guidelines offer plausible, progressive techniques to treat wastewater. Because current and pending local, state and federal regulations are ratcheting effluent discharge limits ever lower, familiar treatment methods, such as biological treatment, no longer meet the new restrictions. Operating facilities must now combine traditional methods with advanced remedial options such as thermal, physical, electro and chemical treatments. These new techniques remove organics, metals, nonhazardous dissolved salts, etc., but carry higher operating and installation costs. Due to tighter effluent restrictions and pending zero-discharge initiatives, managers of operating facilities must know and understand the complexity, composition and contaminant concentration of their wastewaters. No one-size-fits-all solution exists. However, guidelines can simplify decision making and help operators choose the most effective and economical strategy for their waste situation. The paper describes common treatments and the importance of alternatives, then describes biological, electro, physical, thermal, and chemical treatments.

  3. Modeling and simulation challenges pursued by the Consortium for Advanced Simulation of Light Water Reactors (CASL)

    NASA Astrophysics Data System (ADS)

    Turinsky, Paul J.; Kothe, Douglas B.

    2016-05-01

    The Consortium for the Advanced Simulation of Light Water Reactors (CASL), the first Energy Innovation Hub of the Department of Energy, was established in 2010 with the goal of providing modeling and simulation (M&S) capabilities that support and accelerate the improvement of nuclear energy's economic competitiveness and the reduction of spent nuclear fuel volume per unit energy, all while assuring nuclear safety. Accomplishing this requires advances in M&S capabilities in radiation transport, thermal-hydraulics, fuel performance and corrosion chemistry. To focus CASL's R&D, industry challenge problems have been defined, which equate with long-standing issues of the nuclear power industry that M&S can assist in addressing. To date CASL has developed a multi-physics "core simulator" based upon pin-resolved radiation transport and subchannel (within fuel assembly) thermal-hydraulics, capitalizing on the capabilities of high performance computing. CASL's fuel performance M&S capability can also be optionally integrated into the core simulator, yielding a coupled multi-physics capability with untapped predictive potential. Material models have been developed to enhance predictive capabilities of fuel clad creep and growth, along with deeper understanding of zirconium alloy clad oxidation and hydrogen pickup. Understanding of corrosion chemistry (e.g., CRUD formation) has evolved at all scales: micro, meso and macro. CFD R&D has focused on improvement in closure models for subcooled boiling and bubbly flow, and the formulation of robust numerical solution algorithms. For multiphysics integration, several iterative acceleration methods have been assessed, illuminating areas where further research is needed. Finally, uncertainty quantification and data assimilation techniques, based upon sampling approaches, have been made more feasible for practicing nuclear engineers via R&D on dimensional reduction and biased sampling. Industry adoption of CASL's evolving M

  4. Implications of advanced collision operators for gyrokinetic simulation

    NASA Astrophysics Data System (ADS)

    Belli, E. A.; Candy, J.

    2017-04-01

    In this work, we explore both the potential improvements and pitfalls that arise when using advanced collision models in gyrokinetic simulations of plasma microinstabilities. Comparisons are made between the simple-but-standard electron Lorentz operator and specific variations of the advanced Sugama operator. The Sugama operator describes multi-species collisions including energy diffusion, momentum and energy conservation terms, and is valid for arbitrary wavelength. We report scans over collision frequency for both low and high k_θ ρ_s modes, with relevance for multiscale simulations that couple ion and electron scale physics. The influence of the ion–ion collision terms—not retained in the electron Lorentz model—on the damping of zonal flows is also explored. Collision frequency scans for linear and nonlinear simulations of ion-temperature-gradient instabilities including impurity ions are presented. Finally, implications for modeling turbulence in the highly collisional edge are discussed.

  5. Gasification CFD Modeling for Advanced Power Plant Simulations

    SciTech Connect

    Zitney, S.E.; Guenther, C.P.

    2005-09-01

    In this paper we have described recent progress on developing CFD models for two commercial-scale gasifiers, including a two-stage, coal slurry-fed, oxygen-blown, pressurized, entrained-flow gasifier and a scaled-up design of the PSDF transport gasifier. Also highlighted was NETL’s Advanced Process Engineering Co-Simulator for coupling high-fidelity equipment models with process simulation for the design, analysis, and optimization of advanced power plants. Using APECS, we have coupled the entrained-flow gasifier CFD model into a coal-fired, gasification-based FutureGen power and hydrogen production plant. The results for the FutureGen co-simulation illustrate how the APECS technology can help engineers better understand and optimize gasifier fluid dynamics and related phenomena that impact overall power plant performance.

  6. Advanced simulation model for IPM motor drive with considering phase voltage and stator inductance

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Myung; Park, Hyun-Jong; Lee, Ju

    2016-10-01

    This paper proposes an advanced simulation model of the driving system for Interior Permanent Magnet (IPM) BrushLess Direct Current (BLDC) motors driven by the 120-degree conduction method (two-phase conduction method, TPCM), which is widely used for sensorless control of BLDC motors. BLDC motors can be classified as SPM (Surface-mounted Permanent Magnet) and IPM motors. The simulation model of a driving system with SPM motors is simple because the stator inductance is constant regardless of the rotor position, and such models have been proposed in many studies. On the other hand, simulation models for IPM driving systems built with graphic-based simulation tools such as Matlab/Simulink have not been proposed. Simulation of IPM driving systems with TPCM is complex because the stator inductances of an IPM vary with the rotor position, as the permanent magnets are embedded in the rotor. To develop sensorless schemes or improve control performance, development of control algorithms through simulation study is essential, and a simulation model that accurately reflects the characteristics of the IPM is required. Therefore, this paper presents an advanced simulation model of the IPM driving system that takes into account the position-dependent inductances unique to the IPM. The validity of the proposed simulation model is confirmed by comparison with experimental and simulation results using an IPM with the TPCM control scheme.
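
The position-dependent inductance that makes IPM simulation models harder than SPM ones is commonly approximated with a sinusoidal saliency term. A minimal sketch follows; the numerical values are illustrative (not data for any specific machine), and the usual three-phase-to-dq transformation factors are omitted.

```python
import numpy as np

# Common two-term sinusoidal approximation of a salient (IPM) rotor's
# phase-a self-inductance as a function of electrical rotor angle:
#     L_a(theta_e) = L_s0 - L_s2 * cos(2*theta_e)
L_s0 = 3.0e-3   # average self-inductance [H], illustrative
L_s2 = 0.8e-3   # saliency component [H], illustrative; zero for an SPM machine

def L_a(theta_e):
    return L_s0 - L_s2*np.cos(2.0*theta_e)

# The d-axis (theta_e = 0) flux path crosses the low-permeability magnets,
# while the q-axis (theta_e = pi/2) path is mostly iron, hence Ld < Lq
# for an IPM -- the saliency a TPCM simulation model must capture.
Ld = L_a(0.0)
Lq = L_a(np.pi/2)
print(Ld, Lq)   # for an SPM (L_s2 = 0) these would coincide
```

In a Simulink-style model this angle dependence enters the electrical equations both directly and through the dL/dθ term, which is exactly what a constant-inductance SPM model can ignore.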

  7. Advanced Methods in Black-Hole Perturbation Theory

    NASA Astrophysics Data System (ADS)

    Pani, Paolo

    2013-09-01

    Black-hole perturbation theory is a useful tool to investigate issues in astrophysics, high-energy physics, and fundamental problems in gravity. It is often complementary to fully-fledged nonlinear evolutions and instrumental to interpret some results of numerical simulations. Several modern applications require advanced tools to investigate the linear dynamics of generic small perturbations around stationary black holes. Here, we present an overview of these applications and introduce extensions of the standard semianalytical methods to construct and solve the linearized field equations in curved space-time. Current state-of-the-art techniques are pedagogically explained and exciting open problems are presented.

  8. Indentation Methods in Advanced Materials Research Introduction

    SciTech Connect

    Pharr, George Mathews; Cheng, Yang-Tse; Hutchings, Ian; Sakai, Mototsugu; Moody, Neville; Sundararajan, G.; Swain, Michael V.

    2009-01-01

    Since its commercialization early in the 20th century, indentation testing has played a key role in the development of new materials and understanding their mechanical behavior. Progress in the field has relied on a close marriage between research in the mechanical behavior of materials and contact mechanics. The seminal work of Hertz laid the foundations for bringing these two together, with his contributions still widely utilized today in examining elastic behavior and the physics of fracture. Later, the pioneering work of Tabor, as published in his classic text 'The Hardness of Metals', expanded this understanding to address the complexities of plasticity. Enormous progress in the field has been achieved in the last decade, made possible both by advances in instrumentation, for example, load and depth-sensing indentation and scanning electron microscopy (SEM) and transmission electron microscopy (TEM) based in situ testing, as well as by improved modeling capabilities that use computationally intensive techniques such as finite element analysis and molecular dynamics simulation. The purpose of this special focus issue is to present recent state-of-the-art developments in the field.

  9. Genome Reshuffling for Advanced Intercross Permutation (GRAIP): Simulation and permutation for advanced intercross population analysis

    SciTech Connect

    Pierce, Jeremy; Broman, Karl; Lu, Lu; Chesler, Elissa J; Zhou, Guomin; Airey, David; Birmingham, Amanda; Williams, Robert

    2008-04-01

    Background: Advanced intercross lines (AIL) are segregating populations created using a multi-generation breeding protocol for fine mapping complex trait loci (QTL) in mice and other organisms. Applying QTL mapping methods for intercross and backcross populations, often followed by naïve permutation of individuals and phenotypes, does not account for the effect of AIL family structure in which final generations have been expanded, and leads to inappropriately low significance thresholds. The critical problem with naïve mapping approaches in AIL populations is that the individual is not an exchangeable unit. Methodology/Principal Findings: The effect of family structure has immediate implications for optimal AIL creation (many crosses, few animals per cross, and population expansion before the final generation) and we discuss these and the utility of AIL populations for QTL fine mapping. We also describe Genome Reshuffling for Advanced Intercross Permutation (GRAIP), a method for analyzing AIL data that accounts for family structure. GRAIP permutes a more interchangeable unit in the final generation crosses - the parental genome - and simulates regeneration of a permuted AIL population based on exchanged parental identities. GRAIP determines appropriate genome-wide significance thresholds and locus-specific P-values for AILs and other populations with similar family structures. We contrast GRAIP with naïve permutation using a large, densely genotyped mouse AIL population (1333 individuals from 32 crosses). A naïve permutation using coat color as a model phenotype demonstrates high false-positive locus identification and uncertain significance levels, which are corrected using GRAIP. GRAIP also detects an established hippocampus weight locus and a new locus, Hipp9a. Conclusions and Significance: GRAIP determines appropriate genome-wide significance thresholds and locus-specific P-values for AILs and other populations with similar family structures. The effect of
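
The core point, that the individual is not an exchangeable unit when phenotypes cluster by family, can be illustrated with a toy permutation test. This sketch is not GRAIP itself (which permutes parental genomes and regenerates the population); it uses the simpler surrogate of permuting whole families, but it shows why naive individual-level permutation yields an inappropriately low significance threshold. All sizes and effect magnitudes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy AIL-like data: 8 crosses (families) x 20 offspring.  A marker genotype
# is fixed within each family, and the phenotype carries a strong family
# effect -- exactly the situation where naive permutation misleads.
n_fam, n_per = 8, 20
fam = np.repeat(np.arange(n_fam), n_per)
geno = (fam % 2).astype(float)                       # family-determined marker
pheno = rng.normal(0, 1, n_fam)[fam] + rng.normal(0, 0.1, n_fam*n_per)

def stat(y, g):
    """|difference of phenotype means between the two genotype groups|."""
    return abs(y[g == 1].mean() - y[g == 0].mean())

def threshold(perm_stats, alpha=0.05):
    return np.quantile(perm_stats, 1 - alpha)

n_perm = 500
# Naive permutation: shuffle individuals, destroying family structure.
naive = [stat(rng.permutation(pheno), geno) for _ in range(n_perm)]
# Family-aware permutation: shuffle whole-family phenotype blocks instead.
blocks = pheno.reshape(n_fam, n_per)
family = [stat(blocks[rng.permutation(n_fam)].ravel(), geno) for _ in range(n_perm)]

print(threshold(naive), threshold(family))   # naive threshold is far too low
```

Because family effects never get broken up in real data, the family-aware null distribution is much wider, so the naive threshold would declare spurious loci significant, mirroring the coat-color false positives the abstract describes.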

  10. Lessons Learned From Dynamic Simulations of Advanced Fuel Cycles

    SciTech Connect

    Steven J. Piet; Brent W. Dixon; Jacob J. Jacobson; Gretchen E. Matthern; David E. Shropshire

    2009-04-01

    Years of performing dynamic simulations of advanced nuclear fuel cycle options provide insights into how they could work and how one might transition from the current once-through fuel cycle. This paper summarizes those insights from the context of the 2005 objectives and goals of the Advanced Fuel Cycle Initiative (AFCI). Our intent is not to compare options, assess options versus those objectives and goals, nor recommend changes to those objectives and goals. Rather, we organize what we have learned from dynamic simulations in the context of the AFCI objectives for waste management, proliferation resistance, uranium utilization, and economics. Thus, we do not merely describe “lessons learned” from dynamic simulations but attempt to answer the “so what” question by using this context. The analyses have been performed using the Verifiable Fuel Cycle Simulation of Nuclear Fuel Cycle Dynamics (VISION). We observe that the 2005 objectives and goals do not address many of the inherently dynamic discriminators among advanced fuel cycle options and transitions thereof.

  11. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    SciTech Connect

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

    Nonconventional wells, which include horizontal, deviated, multilateral and ''smart'' wells, offer great potential for the efficient management of oil and gas reservoirs. These wells are able to contact larger regions of the reservoir than conventional wells and can also be used to target isolated hydrocarbon accumulations. The use of nonconventional wells instrumented with downhole inflow control devices allows for even greater flexibility in production. Because nonconventional wells can be very expensive to drill, complete and instrument, it is important to be able to optimize their deployment, which requires the accurate prediction of their performance. However, predictions of nonconventional well performance are often inaccurate. This is likely due to inadequacies in some of the reservoir engineering and reservoir simulation tools used to model and optimize nonconventional well performance. A number of new issues arise in the modeling and optimization of nonconventional wells. For example, the optimal use of downhole inflow control devices has not been addressed for practical problems. In addition, the impact of geological and engineering uncertainty (e.g., valve reliability) has not been previously considered. In order to model and optimize nonconventional wells in different settings, it is essential that the tools be implemented into a general reservoir simulator. This simulator must be sufficiently general and robust and must in addition be linked to a sophisticated well model. Our research under this five year project addressed all of the key areas indicated above. The overall project was divided into three main categories: (1) advanced reservoir simulation techniques for modeling nonconventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and for coupling the well to the simulator (which includes the accurate calculation of well index and the modeling of multiphase flow in the wellbore

  12. Integration of Advanced Simulation and Visualization for Manufacturing Process Optimization

    NASA Astrophysics Data System (ADS)

    Zhou, Chenn; Wang, Jichao; Tang, Guangwu; Moreland, John; Fu, Dong; Wu, Bin

    2016-05-01

    The integration of simulation and visualization can provide a cost-effective tool for process optimization, design, scale-up and troubleshooting. The Center for Innovation through Visualization and Simulation (CIVS) at Purdue University Northwest has developed methodologies for such integration with applications in various manufacturing processes. The methodologies have proven to be useful for virtual design and virtual training to provide solutions addressing issues of energy, environment, productivity, safety, and quality in steel and other industries. In collaboration with its industrial partners, CIVS has provided solutions to companies, saving over US$38 million. CIVS is currently working with the steel industry to establish an industry-led Steel Manufacturing Simulation and Visualization Consortium through the support of a National Institute of Standards and Technology AMTech Planning Grant. The consortium focuses on supporting the development and implementation of simulation and visualization technologies to advance steel manufacturing across the value chain.

  13. Advanced Simulation and Computing FY17 Implementation Plan, Version 0

    SciTech Connect

    McCoy, Michel; Archer, Bill; Hendrickson, Bruce; Wade, Doug; Hoang, Thuc

    2016-08-29

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), and quantifying critical margins and uncertainties. Resolving each issue requires increasingly difficult analyses because the aging process has progressively moved the stockpile further away from the original test base. Where possible, the program also enables the use of high performance computing (HPC) and simulation tools to address broader national security needs, such as foreign nuclear weapon assessments and counter nuclear terrorism.

  14. A stochastic model updating strategy-based improved response surface model and advanced Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Zhai, Xue; Fei, Cheng-Wei; Choy, Yat-Sze; Wang, Jian-Jun

    2017-01-01

    To improve the accuracy and efficiency of computational models for complex structures, a stochastic model updating (SMU) strategy was proposed by combining an improved response surface model (IRSM) and an advanced Monte Carlo (MC) method, based on experimental static tests, prior information and uncertainties. Firstly, the IRSM and its mathematical model were developed with emphasis on the moving least-squares method, and the advanced MC simulation method was studied based on the Latin hypercube sampling method as well. Then the SMU procedure was presented, with an experimental static test for a complex structure. SMUs of a simply-supported beam and an aeroengine stator system (casings) were implemented to validate the proposed IRSM and advanced MC simulation method. The results show that (1) the SMU strategy holds high computational precision and efficiency for the SMU of complex structural systems; (2) the IRSM is demonstrated to be an effective model, as its SMU time is far less than that of the traditional response surface method, which is promising for improving the computational speed and accuracy of SMU; (3) the advanced MC method markedly decreases the number of samples from finite element simulations and the elapsed time of SMU. The efforts of this paper provide a promising SMU strategy for complex structures and enrich the theory of model updating.
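
The Latin hypercube sampling underlying the advanced MC method is easy to sketch. Below is a minimal implementation on the unit hypercube (an illustration, not the authors' code), together with a check of the defining stratification property that gives LHS its sample-efficiency over plain random sampling.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Basic Latin hypercube sample on [0,1]^d: in every dimension, exactly
    one point falls in each of the n_samples equal-probability strata."""
    u = rng.uniform(size=(n_samples, n_dims))         # jitter within strata
    strata = np.array([rng.permutation(n_samples) for _ in range(n_dims)]).T
    return (strata + u) / n_samples

rng = np.random.default_rng(42)
X = latin_hypercube(100, 3, rng)

# Stratification check: each of the 100 bins per dimension holds exactly one point.
counts = [np.bincount((X[:, d] * 100).astype(int), minlength=100) for d in range(3)]
print(all((c == 1).all() for c in counts))   # → True
```

Because every one-dimensional stratum is hit exactly once, far fewer expensive finite element evaluations are needed to cover the parameter space than with unstratified Monte Carlo, which is the saving the abstract reports.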

  15. Advanced fault diagnosis methods in molecular networks.

    PubMed

    Habibi, Iman; Emamian, Effat S; Abdi, Ali

    2014-01-01

    Analysis of the failure of cell signaling networks is an important topic in systems biology and has applications in target discovery and drug development. In this paper, some advanced methods for fault diagnosis in signaling networks are developed and then applied to a caspase network and an SHP2 network. The goal is to understand how, and to what extent, the dysfunction of molecules in a network contributes to the failure of the entire network. Network dysfunction (failure) is defined as failure to produce the expected outputs in response to the input signals. Vulnerability level of a molecule is defined as the probability of the network failure, when the molecule is dysfunctional. In this study, a method to calculate the vulnerability level of single molecules for different combinations of input signals is developed. Furthermore, a more complex yet biologically meaningful method for calculating the multi-fault vulnerability levels is suggested, in which two or more molecules are simultaneously dysfunctional. Finally, a method is developed for fault diagnosis of networks based on a ternary logic model, which considers three activity levels for a molecule instead of the previously published binary logic model, and provides equations for the vulnerabilities of molecules in a ternary framework. Multi-fault analysis shows that the pairs of molecules with high vulnerability typically include a highly vulnerable molecule identified by the single fault analysis. The ternary fault analysis for the caspase network shows that predictions obtained using the more complex ternary model are about the same as the predictions of the simpler binary approach. This study suggests that by increasing the number of activity levels the complexity of the model grows; however, the predictive power of the ternary model does not appear to be increased proportionally.
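
The single-fault vulnerability idea can be made concrete on a toy two-gate network. The network topology, the stuck-at-0 fault model, and the molecule names below are illustrative inventions, not taken from the paper (whose framework also covers multi-fault and ternary analyses).

```python
from itertools import product

# Toy signaling network with three inputs and two internal molecules:
#   m1 = x1 AND x2,   output m2 = m1 OR x3
# Vulnerability of a molecule = fraction of input patterns for which a
# stuck-at-0 fault in that molecule flips the network output.
def network(x1, x2, x3, faulty=None):
    m1 = 0 if faulty == "m1" else (x1 and x2)
    m2 = 0 if faulty == "m2" else (m1 or x3)
    return int(m2)

def vulnerability(mol):
    flips = sum(network(*x) != network(*x, faulty=mol)
                for x in product((0, 1), repeat=3))
    return flips / 8.0                 # 8 equally likely input patterns

print(vulnerability("m1"), vulnerability("m2"))   # → 0.125 0.625
```

The output molecule m2 is far more vulnerable than the upstream m1, because the OR gate masks m1's failure whenever x3 is active; this kind of masking is what makes exhaustive fault enumeration informative for target discovery.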

  16. Requirements for advanced simulation of nuclear reactor and chemical separation plants.

    SciTech Connect

    Palmiotti, G.; Cahalan, J.; Pfeiffer, P.; Sofu, T.; Taiwo, T.; Wei,T.; Yacout, A.; Yang, W.; Siegel, A.; Insepov, Z.; Anitescu, M.; Hovland,P.; Pereira, C.; Regalbuto, M.; Copple, J.; Willamson, M.

    2006-12-11

    This report presents requirements for advanced simulation of nuclear reactor and chemical processing plants that are of interest to the Global Nuclear Energy Partnership (GNEP) initiative. Justification for advanced simulation and some examples of grand challenges that will benefit from it are provided. An integrated software tool with its main components based, whenever possible, on first principles is proposed as a possible future approach for dealing with the complex problems linked to the simulation of nuclear reactor and chemical processing plants. The main benefits associated with better integrated simulation have been identified as: a reduction of design margins, a decrease in the number of experiments in support of the design process, a shortening of the developmental design cycle, and a better understanding of the physical phenomena and the related underlying fundamental processes. For each component of the proposed integrated software tool, background information, functional requirements, current tools and approaches, and proposed future approaches are provided. Whenever possible, current uncertainties have been quoted and existing limitations have been presented. Desired target accuracies, with associated benefits to the different aspects of the nuclear reactor and chemical processing plants, are also given. In many cases the possible gains associated with better simulation have been identified, quantified, and translated into economic benefits.

  17. Hybrid and electric advanced vehicle systems (HEAVY) simulation

    NASA Technical Reports Server (NTRS)

    Hammond, R. A.; Mcgehee, R. K.

    1981-01-01

    A computer program to simulate hybrid and electric advanced vehicle systems (HEAVY) is described. It is intended for use early in the design process: concept evaluation, alternative comparison, preliminary design, control and management strategy development, component sizing, and sensitivity studies. It allows the designer to quickly, conveniently, and economically predict the performance of a proposed drive train. The user defines the system to be simulated using a library of predefined component models that may be connected to represent a wide variety of propulsion systems. The development of three example models is discussed.
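
The library-of-components idea can be sketched in a few lines: each drive-train element maps a demanded output power to the required input power, and a vehicle is a chain of such components. The component names and efficiencies below are illustrative, not HEAVY's actual models.

```python
# Each component is a tiny model: given the power it must deliver, how much
# power must be fed into it?  Constant efficiencies keep the sketch short;
# real component models depend on speed, torque, temperature, etc.
def make_component(efficiency):
    def required_input(p_out):
        return p_out / efficiency
    return required_input

battery  = make_component(0.95)
inverter = make_component(0.97)
motor    = make_component(0.90)
gearbox  = make_component(0.98)

def chain(p_wheel, components):
    """Propagate a wheel power demand back through the drive train."""
    p = p_wheel
    for comp in components:
        p = comp(p)
    return p

# 20 kW demanded at the wheels -> power drawn from the battery.
p_batt = chain(20.0, [gearbox, motor, inverter, battery])
print(round(p_batt, 2))   # → 24.61
```

Swapping a component model (say, a map-based motor) changes only one entry in the list, which is the flexibility the abstract attributes to HEAVY's predefined-component library.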

  18. Preface to advances in numerical simulation of plasmas

    NASA Astrophysics Data System (ADS)

    Parker, Scott E.; Chacon, Luis

    2016-10-01

    This Journal of Computational Physics Special Issue, titled "Advances in Numerical Simulation of Plasmas," presents a snapshot of the international state of the art in the field of computational plasma physics. The articles herein are a subset of the topics presented as invited talks at the 24th International Conference on the Numerical Simulation of Plasmas (ICNSP), August 12-14, 2015 in Golden, Colorado. The choice of papers was highly selective. The ICNSP is held every other year and is the premier scientific meeting in the field of computational plasma physics.

  19. Advances in Simulation of Wave Interaction with Extended MHD Phenomena

    SciTech Connect

    Batchelor, Donald B; Abla, Gheni; D'Azevedo, Ed F; Bateman, Glenn; Bernholdt, David E; Berry, Lee A; Bonoli, P.; Bramley, R; Breslau, Joshua; Chance, M.; Chen, J.; Choi, M.; Elwasif, Wael R; Foley, S.; Fu, GuoYong; Harvey, R. W.; Jaeger, Erwin Frederick; Jardin, S. C.; Jenkins, T; Keyes, David E; Klasky, Scott A; Kruger, Scott; Ku, Long-Poe; Lynch, Vickie E; McCune, Douglas; Ramos, J.; Schissel, D.; Schnack, Dalton D; Wright, J.

    2009-01-01

    The Integrated Plasma Simulator (IPS) provides a framework within which some of the most advanced, massively-parallel fusion modeling codes can be interoperated to provide a detailed picture of the multi-physics processes involved in fusion experiments. The presentation will cover four topics: 1) recent improvements to the IPS, 2) application of the IPS for very high resolution simulations of ITER scenarios, 3) studies of resistive and ideal MHD stability in tokamak discharges using IPS facilities, and 4) the application of RF power in the electron cyclotron range of frequencies to control slowly growing MHD modes in tokamaks and initial evaluations of optimized location for RF power deposition.

  20. Advances in Simulation of Wave Interactions with Extended MHD Phenomena

    SciTech Connect

    Batchelor, Donald B; D'Azevedo, Eduardo; Bateman, Glenn; Bernholdt, David E; Bonoli, P.; Bramley, Randall B; Breslau, Joshua; Elwasif, Wael R; Foley, S.; Jaeger, Erwin Frederick; Jardin, S. C.; Klasky, Scott A; Kruger, Scott E; Ku, Long-Poe; McCune, Douglas; Ramos, J.; Schissel, David P; Schnack, Dalton D

    2009-01-01

    The Integrated Plasma Simulator (IPS) provides a framework within which some of the most advanced, massively-parallel fusion modeling codes can be interoperated to provide a detailed picture of the multi-physics processes involved in fusion experiments. The presentation will cover four topics: (1) recent improvements to the IPS, (2) application of the IPS for very high resolution simulations of ITER scenarios, (3) studies of resistive and ideal MHD stability in tokamak discharges using IPS facilities, and (4) the application of RF power in the electron cyclotron range of frequencies to control slowly growing MHD modes in tokamaks and initial evaluations of optimized location for RF power deposition.

  1. PIXE simulation: Models, methods and technologies

    SciTech Connect

    Batic, M.; Pia, M. G.; Saracco, P.; Weidenspointner, G.

    2013-04-19

    The simulation of PIXE (Particle Induced X-ray Emission) is discussed in the context of general-purpose Monte Carlo systems for particle transport. Dedicated PIXE codes are mainly concerned with the application of the technique to elemental analysis, but they lack the capability of dealing with complex experimental configurations. General-purpose Monte Carlo codes provide powerful tools to model the experimental environment in great detail, but so far they have provided limited functionality for PIXE simulation. This paper reviews recent developments that have endowed the Geant4 simulation toolkit with advanced capabilities for PIXE simulation, and related efforts for quantitative validation of cross sections and other physical parameters relevant to PIXE simulation.

  2. Advanced continuous cultivation methods for systems microbiology.

    PubMed

    Adamberg, Kaarel; Valgepea, Kaspar; Vilu, Raivo

    2015-09-01

Increasing the throughput of systems biology-based experimental characterization of in silico-designed strains has great potential for accelerating the development of cell factories. For this, analysis of metabolism in the steady state is essential, as only this enables the unequivocal definition of the physiological state of cells, which is needed for the complete description and in silico reconstruction of their phenotypes. In this review, we show that for a systems microbiology approach, high-resolution characterization of metabolism in the steady state--growth space analysis (GSA)--can be achieved by using advanced continuous cultivation methods termed changestats. In changestats, an environmental parameter is continuously changed at a constant rate within one experiment whilst maintaining cells in the physiological steady state, as in chemostats. This increases the resolution and throughput of GSA compared with chemostats and, moreover, enables the dynamics of metabolism to be followed and metabolic switch-points and optimal growth conditions to be detected. We also describe the concept, challenges, and necessary criteria of the systematic analysis of steady-state metabolism. Finally, we propose that such systematic characterization of the steady-state growth space of cells using changestats has value not only for fundamental studies of metabolism, but also for systems biology-based metabolic engineering of cell factories.

  3. Advanced electromagnetic methods for aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Balanis, Constantine A.; Sun, Weimin; El-Sharawy, El-Budawy; Aberle, James T.; Birtcher, Craig R.; Peng, Jian; Tirkas, Panayiotis A.; Andrew, William V.; Kokotoff, David; Zavosh, Frank

    1993-01-01

The Advanced Helicopter Electromagnetics (AHE) Industrial Associates Program has fruitfully completed its fourth year. With the support of the AHE members and the joint effort of the research team, new and significant progress has been achieved during the year. Following the recommendations of the Advisory Task Force, the research effort has been placed on more practical helicopter electromagnetic problems, such as HF antennas, composite materials, and antenna efficiencies. In this annual report, the main topics addressed include composite materials and antenna technology. The research work on each topic has been driven by the AHE consortium members' interests and needs. The achievements and progress in each subject are reported in individual sections of the report. The work in the area of composite materials includes: modeling of low-conductivity composite materials using a Green's function approach; guidelines for composite material modeling using the Green's function approach in the NEC code; development of a 3-D volume mesh generator for modeling thick and volumetric dielectrics using the FD-TD method; modeling antenna elements mounted on a composite Comanche tail stabilizer; and antenna pattern control and efficiency estimates for a horn antenna loaded with composite dielectric materials.

  4. Advanced electromagnetic methods for aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Balanis, Constantine A.; Sun, Weimin; El-Sharawy, El-Budawy; Aberle, James T.; Birtcher, Craig R.; Peng, Jian; Tirkas, Panayiotis A.; Kokotoff, David; Zavosh, Frank

    1993-01-01

The Advanced Helicopter Electromagnetics (AHE) Industrial Associates Program has continuously progressed with its research effort focused on subjects identified and recommended by the Advisory Task Force of the program. The research activities in this reporting period have been steered toward practical helicopter electromagnetic problems, such as HF antenna problems and antenna efficiencies, recommended by the AHE members at the annual conference held at Arizona State University on 28-29 Oct. 1992 and the last biannual meeting held at Boeing Helicopters on 19-20 May 1993. The main topics addressed include the following: composite materials and antenna technology. The research work on each topic is closely tied to the AHE Consortium members' interests. Significant progress in each subject is reported. Special attention in the area of composite materials has been given to the following: modeling of material discontinuities and their effects on towel-bar antenna patterns; guidelines for composite material modeling using the Green's function approach in the NEC code; measurements of towel-bar antennas grounded with a partially material-coated plate; development of a 3-D volume mesh generator for modeling thick and volumetric dielectrics using the FD-TD method; FDTD modeling of horn antennas with composite E-plane walls; and antenna efficiency analysis for a horn antenna loaded with composite dielectric materials.

  5. Computational and design methods for advanced imaging

    NASA Astrophysics Data System (ADS)

    Birch, Gabriel C.

This dissertation merges the optical design and computational aspects of imaging systems to create novel devices that solve engineering problems in optical science, and attempts to expand the solution space available to the optical designer. This dissertation is divided into two parts: the first discusses a new active illumination depth sensing modality, while the second part discusses a passive illumination system called plenoptic, or lightfield, imaging. The new depth sensing modality introduced in part one is called depth through controlled aberration. This technique illuminates a target with a known, aberrated projected pattern and takes an image using a traditional, unmodified imaging system. Knowing how the added aberration in the projected pattern changes as a function of depth, we are able to quantitatively determine the depth of a series of points from the camera. A major advantage of this method is that the illumination and imaging axes can be coincident. Plenoptic cameras capture both spatial and angular data simultaneously. This dissertation presents a new set of parameters that permit the design and comparison of plenoptic devices outside the traditionally published plenoptic 1.0 and plenoptic 2.0 configurations. Additionally, a series of engineering advancements are presented, including full system raytraces of raw plenoptic images, Zernike compression techniques for raw image files, and non-uniform lenslet arrays to compensate for plenoptic system aberrations. Finally, a new snapshot imaging spectrometer is proposed based on the plenoptic configuration.

  6. Cost-efficiency assessment of Advanced Life Support (ALS) courses based on the comparison of advanced simulators with conventional manikins

    PubMed Central

    Iglesias-Vázquez, José Antonio; Rodríguez-Núñez, Antonio; Penas-Penas, Mónica; Sánchez-Santos, Luís; Cegarra-García, Maria; Barreiro-Díaz, Maria Victoria

    2007-01-01

Background Simulation is an essential tool in modern medical education. The object of this study was to assess, in cost-effectiveness terms, the introduction of new-generation simulators in an advanced life support (ALS) education program. Methods Two hundred fifty primary care physicians and nurses were admitted to ten ALS courses (25 students per course). Students were distributed at random into two groups (125 each). Group A candidates were trained and tested with standard ALS manikins, and Group B candidates with new-generation emergency and life support integrated simulator systems. Results In group A, 98 (78%) candidates passed the course, compared with 110 (88%) in group B (p < 0.01). The total cost of the conventional courses was €7689 per course, and the cost of the advanced simulator courses was €29034 per course (p < 0.001). Cost per passed student was €392 in group A and €1320 in group B (p < 0.001). Conclusion Although ALS advanced simulator systems may slightly increase the rate of students who pass the course, the cost-effectiveness of ALS courses with standard manikins is clearly superior. PMID:17953771
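The reported cost-effectiveness figures follow directly from the abstract's numbers; splitting the ten courses into five per group is an assumption consistent with the stated group sizes (a minimal arithmetic check, not the study's analysis):

```python
# Reproducing the cost-per-passed-student figures from the abstract.
# Assumption: the 10 courses were split 5/5 between the two groups
# (consistent with 125 students per group at 25 students per course).

def cost_per_passed_student(cost_per_course, n_courses, n_passed):
    """Total group cost divided by the number of students who passed."""
    return cost_per_course * n_courses / n_passed

group_a = cost_per_passed_student(7689, 5, 98)    # conventional manikins
group_b = cost_per_passed_student(29034, 5, 110)  # advanced simulators

print(round(group_a))  # 392 (EUR), matching the abstract
print(round(group_b))  # 1320 (EUR), matching the abstract
```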

  7. Modeling and simulation challenges pursued by the Consortium for Advanced Simulation of Light Water Reactors (CASL)

    SciTech Connect

    Turinsky, Paul J.; Kothe, Douglas B.

    2016-05-15

The Consortium for the Advanced Simulation of Light Water Reactors (CASL), the first Energy Innovation Hub of the Department of Energy, was established in 2010 with the goal of providing modeling and simulation (M&S) capabilities that support and accelerate the improvement of nuclear energy's economic competitiveness and the reduction of spent nuclear fuel volume per unit energy, all while assuring nuclear safety. To accomplish this requires advances in M&S capabilities in radiation transport, thermal-hydraulics, fuel performance and corrosion chemistry. To focus CASL's R&D, industry challenge problems have been defined, which correspond to long-standing issues of the nuclear power industry that M&S can assist in addressing. To date CASL has developed a multi-physics “core simulator” based upon pin-resolved radiation transport and subchannel (within fuel assembly) thermal-hydraulics, capitalizing on the capabilities of high performance computing. CASL's fuel performance M&S capability can also be optionally integrated into the core simulator, yielding a coupled multi-physics capability with untapped predictive potential. Material models have been developed to enhance predictive capabilities of fuel clad creep and growth, along with deeper understanding of zirconium alloy clad oxidation and hydrogen pickup. Understanding of corrosion chemistry (e.g., CRUD formation) has evolved at all scales: micro, meso and macro. CFD R&D has focused on improvement in closure models for subcooled boiling and bubbly flow, and the formulation of robust numerical solution algorithms. For multiphysics integration, several iterative acceleration methods have been assessed, illuminating areas where further research is needed. Finally, uncertainty quantification and data assimilation techniques, based upon sampling approaches, have been made more feasible for practicing nuclear engineers via R&D on dimensional reduction and biased sampling. Industry adoption of CASL's evolving M

  8. Advanced simulation study on bunch gap transient effect

    NASA Astrophysics Data System (ADS)

    Kobayashi, Tetsuya; Akai, Kazunori

    2016-06-01

    Bunch phase shift along the train due to a bunch gap transient is a concern in high-current colliders. In KEKB operation, the measured phase shift along the train agreed well with a simulation and a simple analytical form in most part of the train. However, a rapid phase change was observed at the leading part of the train, which was not predicted by the simulation or by the analytical form. In order to understand the cause of this observation, we have developed an advanced simulation, which treats the transient loading in each of the cavities of the three-cavity system of the accelerator resonantly coupled with energy storage (ARES) instead of the equivalent single cavities used in the previous simulation, operating in the accelerating mode. In this paper, we show that the new simulation reproduces the observation, and clarify that the rapid phase change at the leading part of the train is caused by a transient loading in the three-cavity system of ARES. KEKB is being upgraded to SuperKEKB, which is aiming at 40 times higher luminosity than KEKB. The gap transient in SuperKEKB is investigated using the new simulation, and the result shows that the rapid phase change at the leading part of the train is much larger due to higher beam currents. We will also present measures to mitigate possible luminosity reduction or beam performance deterioration due to the rapid phase change caused by the gap transient.

  9. Simulation methods for looping transitions.

    PubMed

    Gaffney, B J; Silverstone, H J

    1998-09-01

    Looping transitions occur in field-swept electron magnetic resonance spectra near avoided crossings and involve a single pair of energy levels that are in resonance at two magnetic field strengths, before and after the avoided crossing. When the distance between the two resonances approaches a linewidth, the usual simulation of the spectra, which results from a linear approximation of the dependence of the transition frequency on magnetic field, breaks down. A cubic approximation to the transition frequency, which can be obtained from the two resonance fields and the field-derivatives of the transition frequencies, along with linear (or better) interpolation of the transition-probability factor, restores accurate simulation. The difference is crucial for accurate line shapes at fixed angles, as in an oriented single crystal, but the difference turns out to be a smaller change in relative intensity for a powder spectrum. Spin-3/2 Cr3+ in ruby and spin-5/2 Fe3+ in transferrin oxalate are treated as examples.
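The cubic approximation described above is fixed by the two resonance fields together with the field-derivatives of the transition frequency there, i.e. a cubic Hermite interpolant. A minimal illustration (our reconstruction with made-up numbers, not the authors' code):

```python
def cubic_hermite(b1, f1, d1, b2, f2, d2):
    """Return the unique cubic through (b1, f1) and (b2, f2) with
    slopes d1 and d2 at those points -- the form of approximation to
    the transition frequency described in the abstract."""
    h = b2 - b1
    def f(b):
        t = (b - b1) / h  # normalised field coordinate in [0, 1]
        h00 = 2*t**3 - 3*t**2 + 1   # standard Hermite basis polynomials
        h10 = t**3 - 2*t**2 + t
        h01 = -2*t**3 + 3*t**2
        h11 = t**3 - t**2
        return h00*f1 + h10*h*d1 + h01*f2 + h11*h*d2
    return f

# Illustrative numbers: two resonances at the same microwave frequency
# (9.5 GHz) with opposite field-derivatives (GHz/T) on either side of
# an avoided crossing.
nu = cubic_hermite(0.30, 9.5, -20.0, 0.34, 9.5, +25.0)
```

By construction the interpolant reproduces both resonance fields and both derivatives, which is what restores accurate line shapes when the two resonances are within a linewidth of each other.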

  10. Advanced Electromagnetic Methods for Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Balanis, Constantine A.; Polycarpou, Anastasis; Birtcher, Craig R.; Georgakopoulos, Stavros; Han, Dong-Ho; Ballas, Gerasimos

    1999-01-01

The destructive threat of lightning to helicopters and other airborne systems has always been a topic of great interest under this research grant. Previously, the lightning-induced currents on the surface of the fuselage and its interior were predicted using the finite-difference time-domain (FDTD) method as well as the NEC code. The limitations of both methods, as applied to lightning, were identified and extensively discussed at the last meeting. After a thorough investigation of the capabilities of the FDTD, it was decided to incorporate into the numerical method a subcell model to accurately represent current diffusion through conducting materials of high conductivity and finite thickness. Because of the complexity of the model, its validity will first be tested for a one-dimensional FDTD problem. Although results are not available yet, the theory and formulation of the subcell model are presented and discussed here to a certain degree. Besides lightning-induced currents in the interior of an aircraft, penetration of electromagnetic fields through apertures (e.g., windows and cracks) could also be devastating for the navigation equipment, electronics, and communications systems in general. The main focus of this study is understanding and quantifying field penetration through apertures. The simulation is done using the FDTD method and the predictions are compared with measurements and moment method solutions obtained from the NASA Langley Research Center. Cavity-backed slot (CBS) antennas, or slot antennas in general, have many applications in aircraft-satellite types of communications. These can be flush-mounted on the surface of the fuselage and, therefore, they retain the aerodynamic shape of the aircraft. In the past, input impedance and radiation patterns of CBS antennas were computed using a hybrid FEM/MoM code. The analysis is now extended to coupling between two identical slot antennas mounted on the same structure. The predictions are performed

  11. Genome Reshuffling for Advanced Intercross Permutation (GRAIP): Simulation and permutation for advanced intercross population analysis

    SciTech Connect

    Pierce, Jeremy; Broman, Karl; Chesler, Elissa J; Zhou, Guomin; Airey, David; Birmingham, Amanda; Williams, Robert

    2008-01-01

Abstract Background Advanced intercross lines (AIL) are segregating populations created using a multigeneration breeding protocol for fine mapping complex traits in mice and other organisms. Applying quantitative trait locus (QTL) mapping methods for intercross and backcross populations, often followed by naïve permutation of individuals and phenotypes, does not account for the effect of family structure in AIL populations in which final generations have been expanded, and leads to inappropriately low significance thresholds. The critical problem with a naïve mapping approach in such AIL populations is that the individual is not an exchangeable unit given the family structure. Methodology/Principal Findings The effect of family structure has immediate implications for optimal AIL creation (many crosses, few animals per cross, and population expansion before the final generation), and we discuss these and the utility of AIL populations for QTL fine mapping. We also describe Genome Reshuffling for Advanced Intercross Permutation (GRAIP), a method for analyzing AIL data that accounts for family structure. GRAIP permutes a more interchangeable unit in the final-generation crosses - the parental genome - and simulates regeneration of a permuted AIL population based on exchanged parental identities. GRAIP determines appropriate genome-wide significance thresholds and locus-specific P-values for AILs and other populations with similar family structures. We contrast GRAIP with naïve permutation using a large, densely genotyped mouse AIL population (1333 individuals from 32 crosses). A naïve permutation using coat color as a model phenotype demonstrates high false-positive locus identification and uncertain significance levels in our AIL population, which are corrected by use of GRAIP. We also show that GRAIP detects an established hippocampus weight locus and a new locus, Hipp9a.
Conclusions and Significance GRAIP determines appropriate genome-wide significance thresholds
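The contrast between naïve permutation and permuting the parental genome as the exchangeable unit can be sketched in a few lines; the data layout and function names below are illustrative assumptions, not the GRAIP implementation:

```python
import random

# Sketch of the two permutation schemes contrasted in the abstract.
# Offspring within a cross share parental genomes, so individuals are
# not exchangeable units; parental genomes across crosses are closer
# to exchangeable.

def naive_permutation(individuals):
    """Shuffle phenotypes across all individuals, ignoring families
    (the approach that yields inappropriately low thresholds)."""
    phenos = [ind["phenotype"] for ind in individuals]
    random.shuffle(phenos)
    return [dict(ind, phenotype=p) for ind, p in zip(individuals, phenos)]

def parental_genome_permutation(crosses):
    """GRAIP-style: shuffle parental-genome identities across the
    final-generation crosses; offspring would then be regenerated
    from the permuted parents."""
    parents = [c["parents"] for c in crosses]
    random.shuffle(parents)  # the parental genome is the exchangeable unit
    return [dict(c, parents=p) for c, p in zip(crosses, parents)]
```

Repeating either permutation many times and re-running the QTL scan on each replicate yields the null distribution from which genome-wide thresholds are read off; only the second scheme preserves the family structure under the null.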

  12. Simulated herbivory advances autumn phenology in Acer rubrum.

    PubMed

    Forkner, Rebecca E

    2014-05-01

    To determine the degree to which herbivory contributes to phenotypic variation in autumn phenology for deciduous trees, red maple (Acer rubrum) branches were subjected to low and high levels of simulated herbivory and surveyed at the end of the season to assess abscission and degree of autumn coloration. Overall, branches with simulated herbivory abscised ∼7 % more leaves at each autumn survey date than did control branches within trees. While branches subjected to high levels of damage showed advanced phenology, abscission rates did not differ from those of undamaged branches within trees because heavy damage induced earlier leaf loss on adjacent branch nodes in this treatment. Damaged branches had greater proportions of leaf area colored than undamaged branches within trees, having twice the amount of leaf area colored at the onset of autumn and having ~16 % greater leaf area colored in late October when nearly all leaves were colored. When senescence was scored as the percent of all leaves abscised and/or colored, branches in both treatments reached peak senescence earlier than did control branches within trees: dates of 50 % senescence occurred 2.5 days earlier for low herbivory branches and 9.7 days earlier for branches with high levels of simulated damage. These advanced rates are of the same time length as reported delays in autumn senescence and advances in spring onset due to climate warming. Thus, results suggest that should insect damage increase as a consequence of climate change, it may offset a lengthening of leaf life spans in some tree species.

  13. The Advanced Gamma-ray Imaging System (AGIS): Simulation studies

    SciTech Connect

    Maier, G.; Buckley, J.; Bugaev, V.; Fegan, S.; Funk, S.; Konopelko, A.; Vassiliev, V.V.; /UCLA

    2011-06-14

    The Advanced Gamma-ray Imaging System (AGIS) is a next-generation ground-based gamma-ray observatory being planned in the U.S. The anticipated sensitivity of AGIS is about one order of magnitude better than the sensitivity of current observatories, allowing it to measure gamma-ray emission from a large number of Galactic and extra-galactic sources. We present here results of simulation studies of various possible designs for AGIS. The primary characteristics of the array performance - collecting area, angular resolution, background rejection, and sensitivity - are discussed.

  14. The Advanced Gamma-ray Imaging System (AGIS) - Simulation Studies

    SciTech Connect

    Maier, G.; Buckley, J.; Bugaev, V.; Fegan, S.; Vassiliev, V. V.; Funk, S.; Konopelko, A.

    2008-12-24

The Advanced Gamma-ray Imaging System (AGIS) is a US-led concept for a next-generation instrument in ground-based very-high-energy gamma-ray astronomy. The most important design requirement for AGIS is a sensitivity about 10 times greater than current observatories like VERITAS, H.E.S.S., or MAGIC. We present results of simulation studies of various possible designs for AGIS. The primary characteristics of the array performance - collecting area, angular resolution, background rejection, and sensitivity - are discussed.

  15. EGR Distribution in Engine Cylinders Using Advanced Virtual Simulation

    SciTech Connect

    Fan, Xuetong

    2000-08-20

    Exhaust Gas Recirculation (EGR) is a well-known technology for reduction of NOx in diesel engines. With the demand for extremely low engine out NOx emissions, it is important to have a consistently balanced EGR flow to individual engine cylinders. Otherwise, the variation in the cylinders' NOx contribution to the overall engine emissions will produce unacceptable variability. This presentation will demonstrate the effective use of advanced virtual simulation in the development of a balanced EGR distribution in engine cylinders. An initial design is analyzed reflecting the variance in the EGR distribution, quantitatively and visually. Iterative virtual lab tests result in an optimized system.

  16. Analysis of PV Advanced Inverter Functions and Setpoints under Time Series Simulation.

    SciTech Connect

    Seuss, John; Reno, Matthew J.; Broderick, Robert Joseph; Grijalva, Santiago

    2016-05-01

Utilities are increasingly concerned about the potential negative impacts distributed PV may have on the operational integrity of their distribution feeders. Some have proposed novel methods for controlling a PV system's grid-tie inverter to mitigate potential PV-induced problems. This report investigates the effectiveness of several of these PV advanced inverter controls on improving distribution feeder operational metrics. The controls are simulated on a large PV system interconnected at several locations within two realistic distribution feeder models. Due to the time-domain nature of the advanced inverter controls, quasi-static time series simulations are performed under one week of representative variable irradiance and load data for each feeder. A parametric study is performed on each control type to determine how well certain measurable network metrics improve as a function of the control parameters. This methodology is used to determine appropriate advanced inverter settings for each location on the feeder and overall for any interconnection location on the feeder.

  17. Methods of sound simulation and applications in flight simulators

    NASA Technical Reports Server (NTRS)

    Gaertner, K. P.

    1980-01-01

    An overview of methods for electronically synthesizing sounds is presented. A given amount of hardware and computer capacity places an upper limit on the degree and fidelity of realism of sound simulation which is attainable. Good sound realism for aircraft simulators can be especially expensive because of the complexity of flight sounds and their changing patterns through time. Nevertheless, the flight simulator developed at the Research Institute for Human Engineering, West Germany, shows that it is possible to design an inexpensive sound simulator with the required acoustic properties using analog computer elements. The characteristics of the sub-sound elements produced by this sound simulator for take-off, cruise and approach are discussed.

  18. Benchmarking of Advanced Control Strategies for a Simulated Hydroelectric System

    NASA Astrophysics Data System (ADS)

    Finotti, S.; Simani, S.; Alvisi, S.; Venturini, M.

    2017-01-01

    This paper analyses and develops the design of advanced control strategies for a typical hydroelectric plant during unsteady conditions, performed in the Matlab and Simulink environments. The hydraulic system consists of a high water head and a long penstock with upstream and downstream surge tanks, and is equipped with a Francis turbine. The nonlinear characteristics of hydraulic turbine and the inelastic water hammer effects were considered to calculate and simulate the hydraulic transients. With reference to the control solutions addressed in this work, the proposed methodologies rely on data-driven and model-based approaches applied to the system under monitoring. Extensive simulations and comparisons serve to determine the best solution for the development of the most effective, robust and reliable control tool when applied to the considered hydraulic system.

  19. Methods to Determine Recommended Feeder-Wide Advanced Inverter Settings for Improving Distribution System Performance

    SciTech Connect

    Rylander, Matthew; Reno, Matthew J.; Quiroz, Jimmy E.; Ding, Fei; Li, Huijuan; Broderick, Robert J.; Mather, Barry; Smith, Jeff

    2016-11-21

    This paper describes methods that a distribution engineer could use to determine advanced inverter settings to improve distribution system performance. These settings are for fixed power factor, volt-var, and volt-watt functionality. Depending on the level of detail that is desired, different methods are proposed to determine single settings applicable for all advanced inverters on a feeder or unique settings for each individual inverter. Seven distinctly different utility distribution feeders are analyzed to simulate the potential benefit in terms of hosting capacity, system losses, and reactive power attained with each method to determine the advanced inverter settings.
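One of the functions the settings parameterise, volt-var control, is a piecewise-linear droop of reactive power against terminal voltage. A minimal sketch follows; the breakpoints and var limit below are illustrative assumptions, not values recommended by the study:

```python
def volt_var(v_pu, v_low=0.95, v_high=1.05, q_max=0.44):
    """Reactive power command (per unit of rated VA) as a piecewise-linear
    function of terminal voltage: inject vars when voltage is low, absorb
    vars when it is high, with a linear droop in between. Breakpoints and
    q_max here are illustrative placeholders."""
    if v_pu <= v_low:
        return q_max          # full var injection at low voltage
    if v_pu >= v_high:
        return -q_max         # full var absorption at high voltage
    # linear interpolation between (v_low, +q_max) and (v_high, -q_max)
    return q_max - 2 * q_max * (v_pu - v_low) / (v_high - v_low)
```

Tuning such settings per inverter versus feeder-wide is exactly the trade-off the paper's methods quantify in terms of hosting capacity, losses, and reactive power.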

  20. Monte Carlo Simulations in Statistical Physics -- From Basic Principles to Advanced Applications

    NASA Astrophysics Data System (ADS)

    Janke, Wolfhard

    2013-08-01

    This chapter starts with an overview of Monte Carlo computer simulation methodologies which are illustrated for the simple case of the Ising model. After reviewing importance sampling schemes based on Markov chains and standard local update rules (Metropolis, Glauber, heat-bath), nonlocal cluster-update algorithms are explained which drastically reduce the problem of critical slowing down at second-order phase transitions and thus improve the performance of simulations. How this can be quantified is explained in the section on statistical error analyses of simulation data including the effect of temporal correlations and autocorrelation times. Histogram reweighting methods are explained in the next section. Eventually, more advanced generalized ensemble methods (simulated and parallel tempering, multicanonical ensemble, Wang-Landau method) are discussed which are particularly important for simulations of first-order phase transitions and, in general, of systems with rare-event states. The setup of scaling and finite-size scaling analyses is the content of the following section. The chapter concludes with two advanced applications to complex physical systems. The first example deals with a quenched, diluted ferromagnet, and in the second application we consider the adsorption properties of macromolecules such as polymers and proteins to solid substrates. Such systems often require especially tailored algorithms for their efficient and successful simulation.
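As a concrete instance of the local update rules the chapter reviews, a single-spin-flip Metropolis sweep for the 2D Ising model might look like the following (an illustrative sketch, not the chapter's own code):

```python
import math
import random

def metropolis_sweep(spins, L, beta):
    """One sweep of local Metropolis updates for the 2D Ising model
    (coupling J = 1, no field) on an L x L lattice with periodic
    boundaries; spins is a flat list of +1/-1 values."""
    for _ in range(L * L):
        i = random.randrange(L * L)
        x, y = i % L, i // L
        # sum of the four nearest-neighbour spins (periodic wrap-around)
        nn = (spins[(x + 1) % L + y * L] + spins[(x - 1) % L + y * L] +
              spins[x + ((y + 1) % L) * L] + spins[x + ((y - 1) % L) * L])
        dE = 2 * spins[i] * nn  # energy change if spin i were flipped
        # Metropolis acceptance: always accept downhill, else with exp(-beta*dE)
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spins[i] = -spins[i]
    return spins
```

Near the critical temperature this local dynamics suffers exactly the critical slowing down mentioned above, which is what the cluster-update algorithms (Swendsen-Wang, Wolff) are designed to cure.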

  1. [The Study of Advanced Fundamental Parameter Method in EDXRFA].

    PubMed

    Cheng, Feng; Zhang, Qing-xian; Ge, Liang-quan; Gu, Yi; Zeng, Guo-qiang; Luo, Yao-yao; Chen, Shuang; Wang, Lei; Zhao, Jian-kun

    2015-07-01

X-ray fluorescence analysis (XRFA) is an important and efficient method for elemental analysis and is used in geology, industry, and environmental protection. A drawback of XRFA is that the detection limit and accuracy are affected by the matrix of the sample. The fundamental parameter method is usually used to calculate the element contents in XRFA, and it is effective when the matrix and the net area of the characteristic X-ray peak can be obtained, but this is not possible for in-situ XRFA. The determination of net peak areas and the treatment of the "black material" of the sample are also the key points of the fundamental parameter method when energy-dispersive X-ray fluorescence analysis (EDXRFA) is applied to low-content samples. This paper discusses an advanced fundamental parameter method that combines spectrum analysis with the fundamental parameter method, inserting an overlapping-peak separation step into the iteration process of the fundamental parameter method. The advanced method resolves both the net peak areas and the quantitative analysis. It was used to analyze a standard sample; compared with the contents obtained from the coefficient method, the precision for Cu, Ni, and Zn is better. The results show that the advanced method improves the precision of EDXRFA and is therefore superior to the coefficient method.
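The iterative core of a fundamental-parameter-style calculation, solving for the concentrations that reproduce the measured intensities under a matrix correction, can be illustrated with a deliberately simplified toy model (the intensity model, coefficients, and numbers below are our assumptions for illustration, not the paper's physics):

```python
import numpy as np

# Toy fundamental-parameter-style fixed-point iteration: measured
# intensities depend on concentrations through a matrix (absorption)
# factor, so concentrations must be solved for iteratively.

k = np.array([1.0, 2.0, 1.5])             # per-element sensitivity factors
A = np.array([[0.0, 0.3, 0.2],
              [0.1, 0.0, 0.25],
              [0.2, 0.15, 0.0]])           # toy matrix-effect coefficients

def intensity(c):
    """Toy model I_i = k_i * c_i / (1 + (A c)_i): the characteristic
    intensity of element i is damped by the rest of the matrix."""
    return k * c / (1.0 + A @ c)

c_true = np.array([0.5, 0.3, 0.2])         # "unknown" sample composition
I_meas = intensity(c_true)                 # synthetic measured intensities

c = np.full(3, 1 / 3)                      # start from a uniform guess
for _ in range(200):
    c = I_meas * (1.0 + A @ c) / k         # invert the model at fixed matrix
    c /= c.sum()                           # renormalise: contents sum to 1
```

In the paper's method, each pass of such an iteration would additionally re-run the overlapping-peak separation to update the net peak areas before the concentrations are updated.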

  2. Large-Eddy Simulation and Multigrid Methods

    SciTech Connect

    Falgout,R D; Naegle,S; Wittum,G

    2001-06-18

A method to simulate turbulent flows with Large-Eddy Simulation on unstructured grids is presented. Two kinds of dynamic models are used to represent the unresolved scales of motion and are compared with each other on different grids, illustrating the behavior of the models; the benefit of adaptive grid refinement is also investigated. Furthermore, parallelization aspects are addressed.

3. Advanced radiometric and interferometric millimeter-wave scene simulations

    NASA Technical Reports Server (NTRS)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-01-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.
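The idea of recovering a scene from a relatively small number of spatial-frequency components can be illustrated with a toy zero-filled Fourier inversion (a naive stand-in for the reconstruction algorithms described above, which perform substantially better than zero-filling):

```python
import numpy as np

# Toy synthetic-aperture-style reconstruction: sample a subset of a
# scene's spatial-frequency components and invert with zero-filling.
# All sizes and the sampling fraction are illustrative choices.

rng = np.random.default_rng(0)
scene = np.zeros((32, 32))
scene[12:20, 12:20] = 1.0               # a simple extended "target"

spectrum = np.fft.fft2(scene)           # full spatial-frequency content
mask = rng.random(scene.shape) < 0.4    # keep ~40% of frequency samples
sparse = np.where(mask, spectrum, 0)    # sparsely filled aperture

# Naive inverse: zero-fill the missing components and transform back.
# Real interferometric algorithms (e.g. CLEAN-like deconvolution from
# radio astronomy) would deconvolve the sampling pattern instead.
recon = np.real(np.fft.ifft2(sparse))
err = np.abs(recon - scene).max()       # residual of the naive inverse
```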

4. Advanced radiometric and interferometric millimeter-wave scene simulations

    NASA Astrophysics Data System (ADS)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-12-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.

  5. Graphics simulation and training aids for advanced teleoperation

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Schenker, Paul S.; Bejczy, Antal K.

    1993-01-01

    Graphics displays can be of significant aid in accomplishing a teleoperation task throughout all three phases of off-line task analysis and planning, operator training, and online operation. In the first phase, graphics displays provide substantial aid to investigate work cell layout, motion planning with collision detection and with possible redundancy resolution, and planning for camera views. In the second phase, graphics displays can serve as very useful tools for introductory training of operators before training them on actual hardware. In the third phase, graphics displays can be used for previewing planned motions and monitoring actual motions in any desired viewing angle, or, when communication time delay prevails, for providing predictive graphics overlay on the actual camera view of the remote site to show the non-time-delayed consequences of commanded motions in real time. This paper addresses potential space applications of graphics displays in all three operational phases of advanced teleoperation. Possible applications are illustrated with techniques developed and demonstrated in the Advanced Teleoperation Laboratory at JPL. The examples described include task analysis and planning of a simulated Solar Maximum Satellite Repair task, a novel force-reflecting teleoperation simulator for operator training, and preview and predictive displays for on-line operations.

  6. Advanced electromagnetic methods for aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Balanis, Constantine A.; Sun, Weimin; El-Sharawy, El-Budawy; Aberle, James T.; Birtcher, Craig R.; Peng, Jian; Tirkas, Panayiotis A.

    1992-01-01

    The Advanced Helicopter Electromagnetics (AHE) Industrial Associates Program continues its research on a variety of main topics identified and recommended by the program's Advisory Task Force. The research activities center on issues that advance technology related to helicopter electromagnetics. While most of the topics are continuations of previous work, special effort has been focused on some areas in response to recommendations from the last annual conference. The main topics addressed in this report are composite materials and antenna technology. The area of composite materials continued to receive special attention in this period. The research focused on: (1) measurements of the electrical properties of low-conductivity materials; (2) modeling of material discontinuities and their effects on scattering patterns; (3) preliminary analysis of the interaction of electromagnetic fields with multi-layered graphite-fiberglass plates; and (4) finite-difference time-domain (FDTD) modeling of field penetration through composite panels of a helicopter.

  7. Controlling template erosion with advanced cleaning methods

    NASA Astrophysics Data System (ADS)

    Singh, SherJang; Yu, Zhaoning; Wähler, Tobias; Kurataka, Nobuo; Gauzner, Gene; Wang, Hongying; Yang, Henry; Hsu, Yautzong; Lee, Kim; Kuo, David; Dress, Peter

    2012-03-01

    We studied the erosion and feature stability of fused silica patterns under different template cleaning conditions. Conventional SPM cleaning is compared with an advanced non-acid process. Spectroscopic ellipsometry optical critical dimension (SE-OCD) measurements were used to characterize changes in pattern profile with good sensitivity. This study confirmed erosion of the silica patterns in the traditional acid-based SPM cleaning mixture (H2SO4 + H2O2) at a rate of ~0.1 nm per cleaning cycle. The advanced non-acid cleaning process, however, showed a CD shift of only ~0.01 nm per clean. Contamination removal and pattern integrity of sensitive 20 nm features under megasonic-assisted cleaning are also demonstrated.

  8. Accelerated simulation methods for plasma kinetics

    NASA Astrophysics Data System (ADS)

    Caflisch, Russel

    2016-11-01

    Collisional kinetics is a multiscale phenomenon due to the disparity between the continuum (fluid) and collisional (particle) length scales. This paper describes a class of simulation methods for gases and plasmas, and acceleration techniques for improving their speed and accuracy. Starting from the Landau-Fokker-Planck equation for plasmas, the focus is on a binary collision model that is solved using a Direct Simulation Monte Carlo (DSMC) method. Acceleration of this method is achieved by coupling the particle method to a continuum fluid description. The velocity distribution function f is represented as a combination of a Maxwellian M (the thermal component) and a set of discrete particles fp (the kinetic component). For systems that are close to (local) equilibrium, this reduces the number N of simulated particles required to represent f at a given level of accuracy. We present two methods for exploiting this representation. In the first method, equilibration of particles in fp, as well as disequilibration of particles from M, due to the collision process, is represented by a thermalization/dethermalization step that employs an entropy criterion. Efficiency of the representation is greatly increased by the inclusion of particles with negative weights. This significantly complicates the simulation, but the second method provides a tractable approach for negatively weighted particles. The accelerated simulation method is compared with the standard PIC-DSMC method for both spatially homogeneous problems, such as a bump-on-tail distribution, and inhomogeneous problems, such as nonlinear Landau damping.
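    The hybrid representation f ≈ M + fp described in the abstract can be sketched in a few lines. This is a minimal Python illustration with hypothetical names; a simple speed cutoff stands in for the entropy-based thermalization criterion, which the abstract does not spell out.

```python
import numpy as np

rng = np.random.default_rng(0)

def thermalize(v_p, w_p, n_M, T, v_cut=2.0):
    """Absorb kinetic particles whose speed lies within v_cut thermal
    widths of the bulk into the Maxwellian component (a stand-in for
    the entropy-based thermalization step described in the abstract)."""
    near_eq = np.abs(v_p) < v_cut * np.sqrt(T)
    n_M += w_p[near_eq].sum()          # their weight moves into M
    return v_p[~near_eq], w_p[~near_eq], n_M

# hybrid representation: f ≈ Maxwellian(n_M, T) + weighted discrete particles
T = 1.0
n_M = 0.8                              # density carried by the Maxwellian
v_p = rng.normal(3.0, 0.5, size=1000)  # a "bump-on-tail" kinetic component
w_p = np.full(v_p.size, 0.2 / v_p.size)

v_p, w_p, n_M = thermalize(v_p, w_p, n_M, T)
print(n_M + w_p.sum())                 # total density is conserved (~1.0)
print(v_p.size)                        # fewer particles now represent f
```

    Only particles far from equilibrium remain in fp, which is why the number of simulated particles needed for a given accuracy drops for near-equilibrium systems.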

  9. A Virtual Engineering Framework for Simulating Advanced Power System

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison; Stanislav Borodai

    2008-06-18

    This report describes the work performed to provide NETL with VE-Suite-based virtual engineering software and enhanced equipment models to support NETL's Advanced Process Engineering Co-simulation (APECS) framework for advanced power generation systems. Enhancements to the software framework created an important link between APECS and the virtual engineering capabilities provided by VE-Suite (e.g., equipment and process visualization, information assimilation). Model enhancements focused on improving predictions of the performance of entrained-flow coal gasifiers and important auxiliary equipment (e.g., Air Separation Units) used in coal gasification systems. In addition, a Reduced Order Model generation tool and software to couple APECS/AspenPlus with the GE GateCycle simulation system were developed. CAPE-Open model interfaces were employed where needed. The improved simulation capability is demonstrated on selected test problems. As part of the project, an Advisory Panel was formed to provide guidance on the issues on which to focus the work effort. The Advisory Panel included experts from industry and academia in gasification, CO2 capture, and process simulation, as well as representatives from technology developers and the electric utility industry. To optimize the benefit to NETL, REI coordinated its efforts with NETL and NETL-funded projects at Iowa State University, Carnegie Mellon University, and ANSYS/Fluent, Inc. The improved simulation capabilities incorporated into APECS will enable researchers and engineers to better understand the interactions of different equipment components, identify weaknesses and processes needing improvement, and thereby allow more efficient, less expensive plants to be developed and brought on-line faster and more cost-effectively. These enhancements to APECS represent an important step toward a fully integrated environment for performing plant simulation and engineering.

  10. Advanced methods of structural and trajectory analysis for transport aircraft

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.

    1995-01-01

    This report summarizes the efforts in two areas: (1) development of advanced methods of structural weight estimation, and (2) development of advanced methods of trajectory optimization. The majority of the effort was spent in the structural weight area. A draft of 'Analytical Fuselage and Wing Weight Estimation of Transport Aircraft', resulting from this research, is included as an appendix.

  11. Investigations and advanced concepts on gyrotron interaction modeling and simulations

    SciTech Connect

    Avramidis, K. A.

    2015-12-15

    In gyrotron theory, the interaction between the electron beam and the high frequency electromagnetic field is commonly modeled using the slow variables approach. The slow variables are quantities that vary slowly in time in comparison to the electron cyclotron frequency. They represent the electron momentum and the high frequency field of the resonant TE modes in the gyrotron cavity. For their definition, some reference frequencies need to be introduced. These include the so-called averaging frequency, used to define the slow variable corresponding to the electron momentum, and the carrier frequencies, used to define the slow variables corresponding to the field envelopes of the modes. From the mathematical point of view, the choice of the reference frequencies is, to some extent, arbitrary. However, from the numerical point of view, there are arguments that point toward specific choices, in the sense that these choices are advantageous in terms of simulation speed and accuracy. In this paper, the typical monochromatic gyrotron operation is considered, and the numerical integration of the interaction equations is performed by the trajectory approach, since it is the fastest, and therefore it is the one that is most commonly used. The influence of the choice of the reference frequencies on the interaction simulations is studied using theoretical arguments, as well as numerical simulations. From these investigations, appropriate choices for the values of the reference frequencies are identified. In addition, novel, advanced concepts for the definitions of these frequencies are addressed, and their benefits are demonstrated numerically.

  12. Advanced modeling and simulation to design and manufacture high performance and reliable advanced microelectronics and microsystems.

    SciTech Connect

    Nettleship, Ian (University of Pittsburgh, Pittsburgh, PA); Hinklin, Thomas; Holcomb, David Joseph; Tandon, Rajan; Arguello, Jose Guadalupe, Jr.; Dempsey, James Franklin; Ewsuk, Kevin Gregory; Neilsen, Michael K.; Lanagan, Michael (Pennsylvania State University, University Park, PA)

    2007-07-01

    An interdisciplinary team of scientists and engineers having broad expertise in materials processing and properties, materials characterization, and computational mechanics was assembled to develop science-based modeling/simulation technology to design and reproducibly manufacture high performance and reliable, complex microelectronics and microsystems. The team's efforts focused on defining and developing a science-based infrastructure to enable predictive compaction, sintering, stress, and thermomechanical modeling in "real systems", including: (1) developing techniques to determine the materials properties and constitutive behavior required for modeling; (2) developing new and improved/updated models and modeling capabilities; (3) ensuring that models are representative of the physical phenomena being simulated; and (4) assessing existing modeling capabilities to identify advances necessary to facilitate the practical application of Sandia's predictive modeling technology.

  13. Advanced particle and Doppler measurement methods

    NASA Technical Reports Server (NTRS)

    Busch, C.

    1985-01-01

    Particle environments, i.e., rain, ice, and snow particles, are discussed. Two types of particle environments are addressed: (1) the natural environment in which airplanes fly and conduct test flights; and (2) simulation environments encountered in ground-test facilities such as wind tunnels, ranges, etc. There are characteristics of the natural environment that one wishes to measure. Liquid water content (LWC) seems to be of most importance; size distribution may also matter in some applications. For particles like snow, shape may be an important parameter to measure. Moving on to simulated test environments, additional parameters may be required, such as the velocity distribution, the velocity lag of the particle relative to the aerodynamic flow, and the trajectory of the particle as it passes through the aerodynamic flow and impacts the test object.

  14. Bootstrapping Methods Applied for Simulating Laboratory Works

    ERIC Educational Resources Information Center

    Prodan, Augustin; Campean, Remus

    2005-01-01

    Purpose: The aim of this work is to implement bootstrapping methods into software tools, based on Java. Design/methodology/approach: This paper presents a category of software e-tools aimed at simulating laboratory works and experiments. Findings: Both students and teaching staff use traditional statistical methods to infer the truth from sample…

  15. Advanced Simulation Capability for Environmental Management (ASCEM): Early Site Demonstration

    SciTech Connect

    Meza, Juan; Hubbard, Susan; Freshley, Mark D.; Gorton, Ian; Moulton, David; Denham, Miles E.

    2011-03-07

    The U.S. Department of Energy Office of Environmental Management, Technology Innovation and Development (EM-32), is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The modular and open source high performance computing tool will facilitate integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. As part of the initial development process, a series of demonstrations were defined to test ASCEM components and provide feedback to developers, engage end users in applications, and lead to an outcome that would benefit the sites. The demonstration was implemented for a sub-region of the Savannah River Site General Separations Area that includes the F-Area Seepage Basins. The physical domain included the unsaturated and saturated zones in the vicinity of the seepage basins and Fourmile Branch, using an unstructured mesh fit to the hydrostratigraphy and topography of the site. The calculations modeled variably saturated flow and the resulting flow field was used in simulations of the advection of non-reactive species and the reactive-transport of uranium. As part of the demonstrations, a new set of data management, visualization, and uncertainty quantification tools were developed to analyze simulation results and existing site data. These new tools can be used to provide summary statistics, including information on which simulation parameters were most important in the prediction of uncertainty and to visualize the relationships between model input and output.

  16. TID Simulation of Advanced CMOS Devices for Space Applications

    NASA Astrophysics Data System (ADS)

    Sajid, Muhammad

    2016-07-01

    This paper focuses on Total Ionizing Dose (TID) effects caused by the accumulation of charge in silicon dioxide, at the substrate/silicon dioxide interface, and in Shallow Trench Isolation (STI) for scaled CMOS bulk devices, as well as in the Buried Oxide (BOX) layer of devices based on Silicon-On-Insulator (SOI) technology, to be operated in the space radiation environment. The radiation-induced leakage current and the corresponding electron density/concentration in the leakage-current path are presented for 180 nm, 130 nm, and 65 nm NMOS and PMOS transistors based on CMOS bulk and SOI process technologies on board LEO and GEO satellites. On the basis of the simulation results, a TID robustness analysis for advanced deep-submicron technologies was carried out up to 500 krad. The correlation between the impact of technology scaling and the magnitude of leakage current at a given total dose was established using the Visual TCAD Genius program.

  17. Simulated Interactive Research Experiments as Educational Tools for Advanced Science.

    PubMed

    Tomandl, Mathias; Mieling, Thomas; Losert-Valiente Kroon, Christiane M; Hopf, Martin; Arndt, Markus

    2015-09-15

    Experimental research has become complex and thus a challenge to science education. Only very few students can typically be trained on advanced scientific equipment. It is therefore important to find new tools that allow all students to acquire laboratory skills individually and independent of where they are located. In a design-based research process we have investigated the feasibility of using a virtual laboratory as a photo-realistic and scientifically valid representation of advanced scientific infrastructure to teach modern experimental science, here, molecular quantum optics. We found a concept based on three educational principles that allows undergraduate students to become acquainted with procedures and concepts of a modern research field. We find a significant increase in student understanding using our Simulated Interactive Research Experiment (SiReX), by evaluating the learning outcomes with semi-structured interviews in a pre/post design. This suggests that this concept of an educational tool can be generalized to disseminate findings in other fields.

  18. Simulated Interactive Research Experiments as Educational Tools for Advanced Science

    PubMed Central

    Tomandl, Mathias; Mieling, Thomas; Losert-Valiente Kroon, Christiane M.; Hopf, Martin; Arndt, Markus

    2015-01-01

    Experimental research has become complex and thus a challenge to science education. Only very few students can typically be trained on advanced scientific equipment. It is therefore important to find new tools that allow all students to acquire laboratory skills individually and independent of where they are located. In a design-based research process we have investigated the feasibility of using a virtual laboratory as a photo-realistic and scientifically valid representation of advanced scientific infrastructure to teach modern experimental science, here, molecular quantum optics. We found a concept based on three educational principles that allows undergraduate students to become acquainted with procedures and concepts of a modern research field. We find a significant increase in student understanding using our Simulated Interactive Research Experiment (SiReX), by evaluating the learning outcomes with semi-structured interviews in a pre/post design. This suggests that this concept of an educational tool can be generalized to disseminate findings in other fields. PMID:26370627

  19. Advanced particulate matter control apparatus and methods

    DOEpatents

    Miller, Stanley J [Grand Forks, ND; Zhuang, Ye [Grand Forks, ND; Almlie, Jay C [East Grand Forks, MN

    2012-01-10

    Apparatus and methods for collection and removal of particulate matter, including fine particulate matter, from a gas stream, comprising a unique combination of high collection efficiency and ultralow pressure drop across the filter. The apparatus and method utilize simultaneous electrostatic precipitation and membrane filtration of a particular pore size, wherein electrostatic collection and filtration occur on the same surface.

  20. Spectral Methods in General Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Garrison, David

    2012-03-01

    In this talk I discuss the use of spectral methods in improving the accuracy of a General Relativistic Magnetohydrodynamic (GRMHD) computer code. I introduce SpecCosmo, a GRMHD code developed as a Cactus arrangement at UHCL, and show simulation results using both Fourier spectral methods and finite differencing. This work demonstrates the use of spectral methods with the FFTW 3.3 Fast Fourier Transform package integrated with the Cactus Framework to perform spectral differencing using MPI.
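    The contrast the abstract draws between Fourier spectral differencing and finite differencing can be illustrated in a few lines (a generic sketch, not code from SpecCosmo): for a smooth periodic field, multiplying Fourier modes by ik gives a derivative accurate to near machine precision, while a second-order stencil converges only algebraically.

```python
import numpy as np

N = 64
L = 2 * np.pi
x = np.arange(N) * L / N
u = np.sin(3 * x)                       # smooth periodic test field

# spectral derivative: multiply each Fourier mode by i*k
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi   # angular wavenumbers
du_spec = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

# second-order centered finite difference for comparison
h = L / N
du_fd = (np.roll(u, -1) - np.roll(u, 1)) / (2 * h)

exact = 3 * np.cos(3 * x)
print(np.max(np.abs(du_spec - exact)))  # near machine precision
print(np.max(np.abs(du_fd - exact)))    # O(h^2), orders of magnitude larger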

  1. Advanced reliability methods for structural evaluation

    NASA Technical Reports Server (NTRS)

    Wirsching, P. H.; Wu, Y.-T.

    1985-01-01

    Fast probability integration (FPI) methods, which can yield approximate solutions to such general structural reliability problems as the computation of the probabilities of complicated functions of random variables, are known to require one-tenth the computer time of Monte Carlo methods for a probability level of 0.001; lower probabilities yield even more dramatic differences. A strategy is presented in which a computer routine is run k times with selected perturbed values of the variables to obtain k solutions for a response variable Y. An approximating polynomial is fit to the k 'data' sets, and FPI methods are employed for this explicit form.
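    The strategy in the final sentences — run the expensive code k times at perturbed inputs, fit an explicit polynomial to the k responses, then evaluate failure probabilities on the cheap surrogate — can be sketched as follows. The model, threshold, and sample counts here are hypothetical, and plain Monte Carlo on the surrogate stands in for the FPI step, which the abstract does not detail.

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_model(x1, x2):
    # stand-in for a costly structural response code
    return x1**2 + 2 * x2 - 1

# k perturbed runs around the mean point
k = 9
X = rng.normal(0.0, 1.0, size=(k, 2))
Y = expensive_model(X[:, 0], X[:, 1])

# fit an explicit surrogate Y ~ a + b1*x1 + b2*x2 + c1*x1^2 + c2*x2^2
A = np.column_stack([np.ones(k), X[:, 0], X[:, 1], X[:, 0]**2, X[:, 1]**2])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)

# the cheap surrogate replaces the code inside the probability integral
xs = rng.normal(0.0, 1.0, size=(200_000, 2))
As = np.column_stack([np.ones(len(xs)), xs[:, 0], xs[:, 1],
                      xs[:, 0]**2, xs[:, 1]**2])
p_fail = np.mean(As @ coef > 4.0)       # estimate of P(Y > 4)
print(p_fail)
```

    Because the fitted polynomial is explicit, each evaluation costs almost nothing, which is what makes the k expensive runs sufficient.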

  2. Rainfall Simulation: methods, research questions and challenges

    NASA Astrophysics Data System (ADS)

    Ries, J. B.; Iserloh, T.

    2012-04-01

    In erosion research, rainfall simulations are used to improve process knowledge as well as in the field to assess overland flow generation, infiltration, and erosion rates. In all these fields of research, rainfall experiments have become an indispensable part of the research methods. In this context, small portable rainfall simulators with small test-plot sizes of one square meter or less, low weight, and low water consumption are in demand. Accordingly, devices with manageable technical effort, such as nozzle-type simulators, seem to prevail over larger simulators. The reasons are obvious: lower costs and the shorter time needed for mounting enable a higher repetition rate. Given the large number of research questions and fields of application, and not least the great technical creativity of research staff, a large number of different experimental setups is available. Each device produces a different rainfall, leading to different kinetic energy amounts acting on the soil surface and, accordingly, different erosion results. Hence, important questions concern the definition, comparability, measurement, and simulation of natural rainfall, and the problem of comparability in general. Another important discussion topic is reaching agreement on an appropriate calibration method for simulated rainfall, in order to enable comparison of results from different rainfall simulator setups. In most publications, only the following "nice" sentence can be read: "Our rainfall simulator generates a rainfall spectrum that is similar to natural rainfall!". The most substantial and critical properties of a simulated rainfall are the drop-size distribution, the fall velocities of the drops, and the spatial distribution of the rainfall over the plot area. In a comparison of the most important methods, the Laser Distrometer turned out to be the most up

  3. Advancing MODFLOW Applying the Derived Vector Space Method

    NASA Astrophysics Data System (ADS)

    Herrera, G. S.; Herrera, I.; Lemus-García, M.; Hernandez-Garcia, G. D.

    2015-12-01

    The most effective domain decomposition methods (DDM) are non-overlapping DDMs. Recently a new approach, the DVS-framework, based on an innovative discretization method that uses a non-overlapping system of nodes (the derived nodes), was introduced and developed by I. Herrera et al. [1, 2]. Using the DVS-approach, a group of four algorithms, referred to as the 'DVS-algorithms', which fulfill the DDM-paradigm (i.e., the solution of global problems is obtained by solving local problems exclusively), has been derived. Such procedures are applicable to any boundary-value problem, or system of such equations, for which a standard discretization method is available, and software with a high degree of parallelization can then be constructed. In a parallel talk at this AGU Fall Meeting, Ismael Herrera will introduce the general DVS methodology. The application of the DVS-algorithms has been demonstrated in the solution of several boundary-value problems of interest in geophysics. Numerical examples for a single equation, for symmetric, non-symmetric, and indefinite problems, were demonstrated previously [1, 2]. For these problems the DVS-algorithms exhibited significantly improved numerical performance with respect to standard versions of DDM algorithms. In view of these results, our research group is applying the DVS method to a widely used simulator for the first time; here we present the progress of applying this method to the parallelization of MODFLOW. Efficiency results for a group of tests will be presented. References [1] I. Herrera, L. M. de la Cruz and A. Rosas-Medina. Non-overlapping discretization methods for partial differential equations, Numer Meth Part D E, (2013). [2] Herrera, I., & Contreras, I. "An Innovative Tool for Effectively Applying Highly Parallelized Software To Problems of Elasticity". Geofísica Internacional, 2015 (In press)

  4. Advanced spectral methods for climatic time series

    USGS Publications Warehouse

    Ghil, M.; Allen, M.R.; Dettinger, M.D.; Ide, K.; Kondrashov, D.; Mann, M.E.; Robertson, A.W.; Saunders, A.; Tian, Y.; Varadi, F.; Yiou, P.

    2002-01-01

    The analysis of univariate or multivariate time series provides crucial information to describe, understand, and predict climatic variability. The discovery and implementation of a number of novel methods for extracting useful information from time series has recently revitalized this classical field of study. Considerable progress has also been made in interpreting the information so obtained in terms of dynamical systems theory. In this review we describe the connections between time series analysis and nonlinear dynamics, discuss signal-to-noise enhancement, and present some of the novel methods for spectral analysis. The various steps, as well as the advantages and disadvantages of these methods, are illustrated by their application to an important climatic time series, the Southern Oscillation Index. This index captures major features of interannual climate variability and is used extensively in its prediction. Regional and global sea surface temperature data sets are used to illustrate multivariate spectral methods. Open questions and further prospects conclude the review.
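    As a minimal illustration of extracting an interannual cycle from a noisy index (a synthetic stand-in, not the actual Southern Oscillation Index), even a raw periodogram can recover the oscillation period once the red-noise background is weaker than the signal:

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic monthly index: an interannual oscillation buried in red noise
n = 600                                # 50 years of monthly values
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 50)    # hypothetical ~4-year cycle
noise = np.zeros(n)
for i in range(1, n):                  # AR(1) "red" background
    noise[i] = 0.6 * noise[i - 1] + rng.normal(scale=0.5)
x = signal + noise

# raw periodogram: squared Fourier amplitudes at each frequency
freqs = np.fft.rfftfreq(n, d=1.0)      # cycles per month
power = np.abs(np.fft.rfft(x - x.mean()))**2 / n
peak = freqs[np.argmax(power[1:]) + 1] # skip the zero-frequency bin
print(1 / peak)                        # recovered period in months (~50)
```

    The more advanced methods the review surveys (multitaper, singular-spectrum analysis, wavelets) improve on this raw estimate, particularly for short, noisy records.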

  5. Advancements in Afterbody Radiative Heating Simulations for Earth Entry

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.; Panesi, Marco; Brandis, Aaron M.

    2016-01-01

    Four advancements to the simulation of backshell radiative heating for Earth entry are presented. The first of these is the development of a flow field model that treats electronic levels of the dominant backshell radiator, N, as individual species. This is shown to allow improvements in the modeling of electron-ion recombination and two-temperature modeling, which are shown to increase backshell radiative heating by 10 to 40%. By computing the electronic state populations of N within the flow field solver, instead of through the quasi-steady state approximation in the radiation code, the coupling of radiative transition rates to the species continuity equations for the levels of N, including the impact of non-local absorption, becomes feasible. Implementation of this additional level of coupling between the flow field and radiation codes represents the second advancement presented in this work, which is shown to increase the backshell radiation by another 10 to 50%. The impact of radiative transition rates due to non-local absorption indicates the importance of accurate radiation transport in the relatively complex flow geometry of the backshell. This motivates the third advancement, which is the development of a ray-tracing radiation transport approach to compute the radiative transition rates and divergence of the radiative flux at every point for coupling to the flow field, therefore allowing the accuracy of the commonly applied tangent-slab approximation to be assessed for radiative source terms. For the sphere considered at lunar-return conditions, the tangent-slab approximation is shown to provide a sufficient level of accuracy for the radiative source terms, even for backshell cases. This is in contrast to the agreement between the two approaches for computing the radiative flux to the surface, which differ by up to 40%. The final advancement presented is the development of a nonequilibrium model for NO radiation, which provides significant backshell

  6. Reduced Basis Method for Nanodevices Simulation

    SciTech Connect

    Pau, George Shu Heng

    2008-05-23

    Ballistic transport simulation in nanodevices, which involves self-consistently solving a coupled Schrodinger-Poisson system of equations, is usually computationally intensive. Here, we propose coupling the reduced basis method with the subband decomposition method to improve the overall efficiency of the simulation. By exploiting a posteriori error estimation procedure and greedy sampling algorithm, we are able to design an algorithm where the computational cost is reduced significantly. In addition, the computational cost only grows marginally with the number of grid points in the confined direction.
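    The greedy sampling idea — repeatedly enrich the reduced basis with the snapshot that the current basis approximates worst — can be sketched as follows. A generic parametric 1-D boundary-value problem stands in for the coupled Schrodinger-Poisson system, and projection onto the basis stands in for the reduced Galerkin solve; the true error replaces the a posteriori estimator for simplicity.

```python
import numpy as np

def solve_full(mu, n=200):
    # stand-in full-order solve: -u'' + mu*u = 1 on a uniform grid (Dirichlet)
    h = 1.0 / (n + 1)
    A = (np.diag(np.full(n, 2.0 / h**2 + mu))
         - np.diag(np.full(n - 1, 1.0 / h**2), 1)
         - np.diag(np.full(n - 1, 1.0 / h**2), -1))
    return np.linalg.solve(A, np.ones(n))

train = np.linspace(0.1, 100.0, 40)
snapshots = {mu: solve_full(mu) for mu in train}
basis = []                             # orthonormal reduced basis vectors

for _ in range(4):                     # greedy sampling loop
    def err(mu):
        u = snapshots[mu].copy()
        for q in basis:                # residual after projection onto basis
            u = u - (q @ u) * q
        return np.linalg.norm(u)
    worst = max(train, key=err)        # parameter approximated worst
    u = snapshots[worst].copy()
    for q in basis:                    # Gram-Schmidt against current basis
        u = u - (q @ u) * q
    basis.append(u / np.linalg.norm(u))

Q = np.array(basis).T                  # n x 4 reduced basis
u_new = solve_full(7.3)                # unseen parameter
u_rb = Q @ (Q.T @ u_new)               # reduced approximation (projection)
print(np.linalg.norm(u_new - u_rb) / np.linalg.norm(u_new))
```

    A handful of basis vectors captures the whole parametric family, which is the source of the cost reduction the abstract reports.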

  7. The advanced computational testing and simulation toolkit (ACTS)

    SciTech Connect

    Drummond, L.A.; Marques, O.

    2002-05-21

    During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Distinctively, a number of these are important scientific problems ranging in scale from the atomic to the cosmic. For example, ionization is a phenomenon as ubiquitous in modern society as the glow of fluorescent lights and the etching on silicon computer chips; but it was not until 1999 that researchers finally achieved a complete numerical solution to the simplest example of ionization, the collision of a hydrogen atom with an electron. On the opposite scale, cosmologists have long wondered whether the expansion of the Universe, which began with the Big Bang, would ever reverse itself, ending the Universe in a Big Crunch. In 2000, analysis of new measurements of the cosmic microwave background radiation showed that the geometry of the Universe is flat, and thus the Universe will continue expanding forever. Both of these discoveries depended on high performance computer simulations that utilized computational tools included in the Advanced Computational Testing and Simulation (ACTS) Toolkit. The ACTS Toolkit is an umbrella project that brought together a number of general purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools, which have been developed independently, mainly at DOE laboratories, make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS Toolkit Project enables the use of these tools by a much wider community of computational scientists, and promotes code portability, reusability, reduction of duplicate efforts

  8. A simulation method for the fruitage body

    NASA Astrophysics Data System (ADS)

    Lu, Ling; Song, Weng-lin; Wang, Lei

    2009-07-01

    An effective visual model for creating the fruitage body is presented. Based on the geometric shape characteristics of fruitage, we build its face model from an ellipsoid deformation. The face model is parameterized by radius: each radius defines a face of the fruitage, and the same method is used to simulate the shape of the fruitage interior. The body model is formed by combining the face model with the radius direction. Our method can simulate the virtual interior and exterior structure of a fruitage body. The method reduces the amount of data and increases display speed. In addition, the texture model of the fruitage is defined as a sum of different basis functions; this approach is simple and fast. We show the feasibility of our method by creating a winter jujube and an apricot, each including exocarp, mesocarp, and endocarp. This is useful for developing virtual plants.
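    The radius-dependent ellipsoid deformation described above can be sketched as a parametric surface. The function name, parameters, and deformation law here are illustrative assumptions, not the authors' actual model.

```python
import numpy as np

def fruit_surface(a, b, c, bulge=0.15, n=24):
    """Hypothetical ellipsoid-based face model: a parametric ellipsoid
    whose radius is perturbed as a function of latitude, mimicking the
    radius-dependent deformation the abstract describes."""
    th = np.linspace(0, np.pi, n)          # latitude
    ph = np.linspace(0, 2 * np.pi, n)      # longitude
    th, ph = np.meshgrid(th, ph)
    r = 1.0 + bulge * np.cos(2 * th)       # simple radial deformation law
    x = a * r * np.sin(th) * np.cos(ph)
    y = b * r * np.sin(th) * np.sin(ph)
    z = c * r * np.cos(th)
    return x, y, z

x, y, z = fruit_surface(1.0, 1.0, 1.3)     # a slightly elongated fruit
print(x.shape)                              # a grid of surface points
```

    Nested calls with smaller semi-axes would give the interior layers (mesocarp, endocarp) in the same way, which is how a layered body model can reuse one face model.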

  9. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    SciTech Connect

    Xiu, Dongbin

    2016-06-21

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations at extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high-dimensional spaces, resolve stochastic problems with limited smoothness, even those containing discontinuities.

  10. Advances in Geometric Acoustic Propagation Modeling Methods

    NASA Astrophysics Data System (ADS)

    Blom, P. S.; Arrowsmith, S.

    2013-12-01

    Geometric acoustics provides an efficient numerical method to model propagation effects. At leading order, one can identify ensonified regions and calculate celerities of the predicted arrivals. Beyond leading order, the solution of the transport equation provides a means to estimate the amplitude of individual acoustic phases. The auxiliary parameters introduced in solving the transport equation have been found to provide a means of identifying ray paths connecting source and receiver, or eigenrays, for non-planar propagation. A detailed explanation of the eigenray method will be presented as well as an application to predicting azimuth deviations for infrasonic data recorded during the Humming Roadrunner experiment of 2012.

  11. Simulation Neurotechnologies for Advancing Brain Research: Parallelizing Large Networks in NEURON.

    PubMed

    Lytton, William W; Seidenstein, Alexandra H; Dura-Bernal, Salvador; McDougal, Robert A; Schürmann, Felix; Hines, Michael L

    2016-10-01

    Large multiscale neuronal network simulations are of increasing value as more big data are gathered about brain wiring and organization under the auspices of a current major research initiative, such as Brain Research through Advancing Innovative Neurotechnologies. The development of these models requires new simulation technologies. We describe here the current use of the NEURON simulator with message passing interface (MPI) for simulation in the domain of moderately large networks on commonly available high-performance computers (HPCs). We discuss the basic layout of such simulations, including the methods of simulation setup, the run-time spike-passing paradigm, and postsimulation data storage and data management approaches. Using the Neuroscience Gateway, a portal for computational neuroscience that provides access to large HPCs, we benchmark simulations of neuronal networks of different sizes (500-100,000 cells), and using different numbers of nodes (1-256). We compare three types of networks, composed of either Izhikevich integrate-and-fire neurons (I&F), single-compartment Hodgkin-Huxley (HH) cells, or a hybrid network with half of each. Results show simulation run time increased approximately linearly with network size and decreased almost linearly with the number of nodes. Networks with I&F neurons were faster than HH networks, although differences were small since all tested cells were point neurons with a single compartment.
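
    The parallel setup described above typically distributes cells across MPI ranks round-robin; in parallel NEURON this is commonly expressed as `gid % nhost`. A minimal sketch of that assignment, with hypothetical names and sizes:

```python
def distribute_gids(n_cells, n_ranks):
    """Round-robin assignment of global cell IDs (gids) to MPI ranks:
    rank r owns every gid with gid % n_ranks == r.  For networks of
    similar point neurons this keeps the per-rank load nearly equal."""
    return {r: [gid for gid in range(n_cells) if gid % n_ranks == r]
            for r in range(n_ranks)}

owned = distribute_gids(10_000, 16)
counts = [len(v) for v in owned.values()]
```

    With loads balanced to within one cell per rank, run time should fall almost linearly with the number of ranks until spike exchange and per-rank overheads dominate, consistent with the benchmark trend reported above.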

  12. Advanced method for making vitreous waste forms

    SciTech Connect

    Pope, J.M.; Harrison, D.E.

    1980-01-01

    A process is described for making waste glass that circumvents the problems of dissolving nuclear waste in molten glass at high temperatures. Because the reactive mixing process is independent of the inherent viscosity of the melt, any glass composition can be prepared with equal facility. Separation of the mixing and melting operations permits novel glass fabrication methods to be employed.

  13. Advanced methods in synthetic aperture radar imaging

    NASA Astrophysics Data System (ADS)

    Kragh, Thomas

    2012-02-01

    For over 50 years our world has been mapped and measured with synthetic aperture radar (SAR). A SAR system operates by transmitting a series of wideband radio-frequency pulses towards the ground and recording the resulting backscattered electromagnetic waves as the system travels along some one-dimensional trajectory. By coherently processing the recorded backscatter over this extended aperture, one can form a high-resolution 2D intensity map of the ground reflectivity, which we call a SAR image. The trajectory, or synthetic aperture, is achieved by mounting the radar on an aircraft, spacecraft, or even on the roof of a car traveling down the road, and allows for a diverse set of applications and measurement techniques for remote sensing applications. It is quite remarkable that the sub-centimeter positioning precision and sub-nanosecond timing precision required to make this work properly can in fact be achieved under such real-world, often turbulent, vibrationally intensive conditions. Although the basic principles behind SAR imaging and interferometry have been known for decades, in recent years an explosion of data exploitation techniques enabled by ever-faster computational horsepower have enabled some remarkable advances. Although SAR images are often viewed as simple intensity maps of ground reflectivity, SAR is also an exquisitely sensitive coherent imaging modality with a wealth of information buried within the phase information in the image. Some of the examples featured in this presentation will include: (1) Interferometric SAR, where by comparing the difference in phase between two SAR images one can measure subtle changes in ground topography at the wavelength scale. (2) Change detection, in which carefully geolocated images formed from two different passes are compared. (3) Multi-pass 3D SAR tomography, where multiple trajectories can be used to form 3D images. (4) Moving Target Indication (MTI), in which Doppler effects allow one to detect and

  14. Advancing-layers method for generation of unstructured viscous grids

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    1993-01-01

    A novel approach for generating highly stretched grids which is based on a modified advancing-front technique and benefits from the generality, flexibility, and grid quality of the conventional advancing-front-based Euler grid generators is presented. The method is self-sufficient for the insertion of grid points in the boundary layer and beyond. Since it is based on a totally unstructured grid strategy, the method alleviates the difficulties stemming from the structural limitations of the prismatic techniques.

  15. A Simulation Method Measuring Psychomotor Nursing Skills.

    ERIC Educational Resources Information Center

    McBride, Helena; And Others

    1981-01-01

    The development of a simulation technique to evaluate performance of psychomotor skills in an undergraduate nursing program is described. This method is used as one admission requirement to an alternate route nursing program. With modifications, any health profession could use this technique where psychomotor skills performance is important.…

  16. Method for Constructing Standardized Simulated Root Canals.

    ERIC Educational Resources Information Center

    Schulz-Bongert, Udo; Weine, Franklin S.

    1990-01-01

    The construction of visual and manipulative aids, clear resin blocks with root-canal-like spaces, for simulation of root canals is explained. Time, materials, and techniques are discussed. The method allows for comparison of canals, creation of any configuration of canals, and easy presentation during instruction. (MSE)

  17. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1988-01-01

    Parametric cost estimating methods for space systems in the conceptual design phase are developed. The approach is to identify variables that drive cost such as weight, quantity, development culture, design inheritance, and time. The relationship between weight and cost is examined in detail. A theoretical model of cost is developed and tested statistically against a historical data base of major research and development programs. It is concluded that the technique presented is sound, but that it must be refined in order to produce acceptable cost estimates.
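
    The weight-cost relationship examined above is commonly modeled in parametric cost estimating as a power law, cost = a * weight^b, fit by least squares in log-log space. A sketch of that standard form with entirely hypothetical data (not the paper's historical database):

```python
import math

def fit_power_law(weights, costs):
    """Fit cost = a * weight**b by ordinary least squares in log-log
    space, the usual form of a parametric cost-estimating relationship."""
    xs = [math.log(w) for w in weights]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical historical data points: (dry weight in kg, cost in $M)
hist = [(100, 52), (300, 130), (1000, 340), (3000, 820)]
a, b = fit_power_law([w for w, _ in hist], [c for _, c in hist])
estimate = a * 500 ** b   # predicted cost for a hypothetical 500 kg concept
```

    An exponent b < 1 reflects the economy of scale often observed in such data; the statistical testing against real programs described in the abstract is what would validate any particular fit.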

  18. Advancements in Research Synthesis Methods: From a Methodologically Inclusive Perspective

    ERIC Educational Resources Information Center

    Suri, Harsh; Clarke, David

    2009-01-01

    The dominant literature on research synthesis methods has positivist and neo-positivist origins. In recent years, the landscape of research synthesis methods has changed rapidly to become inclusive. This article highlights methodologically inclusive advancements in research synthesis methods. Attention is drawn to insights from interpretive,…

  19. Current methods and advances in bone densitometry

    NASA Technical Reports Server (NTRS)

    Guglielmi, G.; Gluer, C. C.; Majumdar, S.; Blunt, B. A.; Genant, H. K.

    1995-01-01

    Bone mass is the primary, although not the only, determinant of fracture. Over the past few years a number of noninvasive techniques have been developed to more sensitively quantitate bone mass. These include single and dual photon absorptiometry (SPA and DPA), single and dual X-ray absorptiometry (SXA and DXA) and quantitative computed tomography (QCT). While differing in anatomic sites measured and in their estimates of precision, accuracy, and fracture discrimination, all of these methods provide clinically useful measurements of skeletal status. It is the intent of this review to discuss the pros and cons of these techniques and to present the new applications of ultrasound (US) and magnetic resonance (MRI) in the detection and management of osteoporosis.

  20. Fast Particle Methods for Multiscale Phenomena Simulations

    NASA Technical Reports Server (NTRS)

    Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew

    2000-01-01

    We are developing particle methods oriented at improving computational modeling capabilities of multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle-laden and interfacial flows, and (iii) molecular dynamics studies of nanoscale droplets and studies of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented on parallel computer architectures. The inherent adaptivity, robustness, and efficiency of particle methods make them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are: (i) the proper formulation of particle methods at the molecular and continuum levels for the discretization of the governing equations; (ii) the resolution of the wide range of time and length scales governing the phenomena under investigation; (iii) the minimization of numerical artifacts that may interfere with the physics of the systems under consideration; and (iv) the parallelization of processes such as tree traversal and grid-particle interpolation. We are conducting simulations using vortex methods, molecular dynamics, and smoothed particle hydrodynamics, exploiting their unifying concepts such as the solution of the N-body problem on parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs, and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to transcend seemingly unrelated areas of research.
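
    One of the building blocks mentioned above, particle-grid interpolation, can be illustrated with the classic cloud-in-cell (linear) deposition scheme. This 1-D periodic sketch, with hypothetical names, illustrates the general technique rather than the authors' implementation:

```python
import math

def deposit_cic(positions, weights, n_cells, length):
    """Cloud-in-cell (linear) deposition of particle quantities onto a
    periodic 1-D grid: each particle splits its weight between the two
    nearest cell centers, so total gridded mass equals total particle mass."""
    h = length / n_cells
    grid = [0.0] * n_cells
    for x, w in zip(positions, weights):
        s = x / h - 0.5                  # position in cell-center units
        i = math.floor(s)                # index of the left cell center
        f = s - i                        # fractional distance to the next center
        grid[i % n_cells] += w * (1.0 - f)
        grid[(i + 1) % n_cells] += w * f
    return grid

grid = deposit_cic([0.1, 2.5, 7.9], [1.0, 2.0, 0.5], n_cells=8, length=8.0)
```

    The linear weights make deposition exactly conservative, which is one reason schemes of this family are standard in particle-grid solvers.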

  1. Overview of the Consortium for the Advanced Simulation of Light Water Reactors (CASL)

    NASA Astrophysics Data System (ADS)

    Kulesza, Joel A.; Franceschini, Fausto; Evans, Thomas M.; Gehin, Jess C.

    2016-02-01

    The Consortium for Advanced Simulation of Light Water Reactors (CASL) was established in July 2010 for the purpose of providing advanced modeling and simulation solutions for commercial nuclear reactors. The primary goal is to provide coupled, higher-fidelity, usable modeling and simulation capabilities than are currently available. These are needed to address light water reactor (LWR) operational and safety performance-defining phenomena that are not yet able to be fully modeled taking a first-principles approach. In order to pursue these goals, CASL has participation from laboratory, academic, and industry partners. These partners are pursuing the solution of ten major "Challenge Problems" in order to advance the state-of-the-art in reactor design and analysis to permit power uprates, higher burnup, life extension, and increased safety. At present, the problems being addressed by CASL are primarily reactor physics-oriented; however, this paper is intended to introduce CASL to the reactor dosimetry community because of the importance of reactor physics modelling and nuclear data to define the source term for that community and the applicability and extensibility of the transport methods being developed.

  2. Mesoscopic Simulation Methods for Polymer Dynamics

    NASA Astrophysics Data System (ADS)

    Larson, Ronald

    2015-03-01

    We assess the accuracy and efficiency of mesoscopic simulation methods, namely Brownian Dynamics (BD), Stochastic Rotation Dynamics (SRD) and Dissipative Particle Dynamics (DPD), for polymers in solution at equilibrium and in flows in microfluidic geometries. Both SRD and DPD use solvent "particles" to carry momentum, and so account automatically for hydrodynamic interactions both within isolated polymer coils, and with other polymer molecules and with nearby solid boundaries. We assess quantitatively the effects of artificial particle inertia and fluid compressibility and show that they can be made small with appropriate choice of simulation parameters. We then use these methods to study flow-induced migration of polymer chains produced by: 1) hydrodynamic interactions, 2) streamline curvature or stress-gradients, and 3) convection of wall depletion zones. We show that huge concentration gradients can be produced by these mechanisms in microfluidic geometries that can be exploited for separation of polymers by size in periodic contraction-expansion geometries. We also assess the range of conditions for which BD, SRD or DPD is preferable for mesoscopic simulations. Finally, we show how such methods can be used to simulate quantitatively the swimming of micro-organisms such as E. coli. In collaboration with Lei Jiang and Tongyang Zhao, University of Michigan, Ann Arbor, MI.
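
    As a baseline for the mesoscopic methods compared above, free-draining Brownian dynamics (no hydrodynamic interactions, unlike SRD or DPD) reduces, for a single 1-D harmonic connector, to an Euler-Maruyama update whose equilibrium variance obeys equipartition. A minimal sketch under those assumptions (all names and parameter values are illustrative):

```python
import math
import random

def bd_dumbbell(n_steps, dt, k=1.0, kT=1.0, zeta=1.0, seed=1):
    """Euler-Maruyama Brownian dynamics of a 1-D harmonic dumbbell
    connector q = x2 - x1 with bead friction zeta:
        dq = -(2k/zeta) q dt + sqrt(4 kT dt / zeta) dW.
    At equilibrium equipartition gives <q^2> = kT / k."""
    rng = random.Random(seed)
    q, acc, n = 1.0, 0.0, 0
    for step in range(n_steps):
        q += -(2 * k / zeta) * q * dt \
             + math.sqrt(4 * kT * dt / zeta) * rng.gauss(0.0, 1.0)
        if step > n_steps // 10:         # discard equilibration transient
            acc += q * q
            n += 1
    return acc / n                       # time-averaged <q^2>

msq = bd_dumbbell(200_000, 0.01)         # should approach kT/k = 1
```

    The small systematic offset of the Euler-Maruyama estimate from kT/k shrinks with the time step, the same kind of discretization artifact the abstract quantifies for particle inertia and compressibility in SRD/DPD.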

  3. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1994-01-01

    NASA is responsible for developing much of the nation's future space technology. Cost estimates for new programs are required early in the planning process so that decisions can be made accurately. Because of the long lead times required to develop space hardware, the cost estimates are frequently required 10 to 15 years before the program delivers hardware. The system design in the conceptual phases of a program is usually only vaguely defined, and the technology used is often state-of-the-art or beyond. These factors combine to make cost estimating for conceptual programs very challenging. This paper describes an effort to develop parametric cost estimating methods for space systems in the conceptual design phase. The approach is to identify variables that drive cost such as weight, quantity, development culture, design inheritance and time. The nature of the relationships between the driver variables and cost will be discussed. In particular, the relationship between weight and cost will be examined in detail. A theoretical model of cost will be developed and tested statistically against a historical database of major research and development projects.

  4. Discontinuous Galerkin Methods for Turbulence Simulation

    NASA Technical Reports Server (NTRS)

    Collis, S. Scott

    2002-01-01

    A discontinuous Galerkin (DG) method is formulated, implemented, and tested for simulation of compressible turbulent flows. The method is applied to turbulent channel flow at low Reynolds number, where it is found to successfully predict low-order statistics with fewer degrees of freedom than traditional numerical methods. This reduction is achieved by utilizing local hp-refinement such that the computational grid is refined simultaneously in all three spatial coordinates with decreasing distance from the wall. Another advantage of DG is that Dirichlet boundary conditions can be enforced weakly through integrals of the numerical fluxes. Both for a model advection-diffusion problem and for turbulent channel flow, weak enforcement of wall boundaries is found to improve results at low resolution. Such weak boundary conditions may play a pivotal role in wall modeling for large-eddy simulation.
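
    The weak, flux-based boundary enforcement described above is visible already at lowest order: for polynomial degree p = 0, DG for 1-D linear advection reduces to first-order upwind finite volumes, and the inflow Dirichlet value enters only through the upwind numerical flux at the boundary. A sketch of that reduced case (hypothetical names, not the paper's compressible solver):

```python
def advect_dg0(u, a, dt, dx, n_steps, u_in=0.0):
    """Lowest-order (p = 0) discontinuous Galerkin update for
    u_t + a u_x = 0 with a > 0, which coincides with first-order
    upwind finite volumes.  The inflow value u_in is imposed weakly:
    it appears only inside the numerical flux at the left boundary."""
    u = list(u)
    c = a * dt / dx                       # CFL number, must be <= 1
    for _ in range(n_steps):
        new = u[:]
        for i in range(len(u)):
            left = u[i - 1] if i > 0 else u_in   # upwind flux state
            new[i] = u[i] - c * (u[i] - left)
        u = new
    return u

u0 = [1.0] * 4 + [0.0] * 12              # square pulse near the inflow
u1 = advect_dg0(u0, a=1.0, dt=0.5, dx=1.0, n_steps=8)
```

    Because the boundary value acts only through the flux integral, nothing is overwritten strongly at the boundary cell, which is the mechanism behind the improved low-resolution behavior the abstract reports.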

  5. An Advanced Leakage Scheme for Neutrino Treatment in Astrophysical Simulations

    NASA Astrophysics Data System (ADS)

    Perego, A.; Cabezón, R. M.; Käppeli, R.

    2016-04-01

    We present an Advanced Spectral Leakage (ASL) scheme to model neutrinos in the context of core-collapse supernovae (CCSNe) and compact binary mergers. Based on previous gray leakage schemes, the ASL scheme computes the neutrino cooling rates by interpolating local production and diffusion rates (relevant in optically thin and thick regimes, respectively) separately for discretized values of the neutrino energy. Neutrino trapped components are also modeled, based on equilibrium and timescale arguments. The better accuracy achieved by the spectral treatment allows a more reliable computation of neutrino heating rates in optically thin conditions. The scheme has been calibrated and tested against Boltzmann transport in the context of Newtonian spherically symmetric models of CCSNe. ASL shows a very good qualitative and a partial quantitative agreement for key quantities from collapse to a few hundreds of milliseconds after core bounce. We have proved the adaptability and flexibility of our ASL scheme, coupling it to an axisymmetric Eulerian and to a three-dimensional smoothed particle hydrodynamics code to simulate core collapse. Therefore, the neutrino treatment presented here is ideal for large parameter-space explorations, parametric studies, high-resolution tests, code developments, and long-term modeling of asymmetric configurations, where more detailed neutrino treatments are not available or are currently computationally too expensive.

  6. AN ADVANCED LEAKAGE SCHEME FOR NEUTRINO TREATMENT IN ASTROPHYSICAL SIMULATIONS

    SciTech Connect

    Perego, A.; Cabezón, R. M.; Käppeli, R.

    2016-04-15

    We present an Advanced Spectral Leakage (ASL) scheme to model neutrinos in the context of core-collapse supernovae (CCSNe) and compact binary mergers. Based on previous gray leakage schemes, the ASL scheme computes the neutrino cooling rates by interpolating local production and diffusion rates (relevant in optically thin and thick regimes, respectively) separately for discretized values of the neutrino energy. Neutrino trapped components are also modeled, based on equilibrium and timescale arguments. The better accuracy achieved by the spectral treatment allows a more reliable computation of neutrino heating rates in optically thin conditions. The scheme has been calibrated and tested against Boltzmann transport in the context of Newtonian spherically symmetric models of CCSNe. ASL shows a very good qualitative and a partial quantitative agreement for key quantities from collapse to a few hundreds of milliseconds after core bounce. We have proved the adaptability and flexibility of our ASL scheme, coupling it to an axisymmetric Eulerian and to a three-dimensional smoothed particle hydrodynamics code to simulate core collapse. Therefore, the neutrino treatment presented here is ideal for large parameter-space explorations, parametric studies, high-resolution tests, code developments, and long-term modeling of asymmetric configurations, where more detailed neutrino treatments are not available or are currently computationally too expensive.
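
    Leakage schemes of the kind generalized here blend a production rate (controlling the optically thin limit) with a diffusion rate (controlling the optically thick limit); one common smooth interpolation is a harmonic-mean blend applied per energy bin. The following is an illustrative sketch of that generic idea only, not the ASL scheme's actual formula:

```python
def effective_rates(prod_rates, diff_rates):
    """Per-energy-bin interpolation between the optically thin limit
    (rate set by production) and the optically thick limit (rate set by
    diffusion) using a harmonic mean, so the slower process dominates."""
    return [p * d / (p + d) if p + d > 0.0 else 0.0
            for p, d in zip(prod_rates, diff_rates)]

# Hypothetical rates for four neutrino energy bins (arbitrary units):
# thin regime, intermediate, thick regime, and no production at all.
eff = effective_rates([10.0, 10.0, 10.0, 0.0], [1e6, 10.0, 0.1, 5.0])
```

    Doing this separately for each discretized neutrino energy, rather than once for a gray average, is what makes a spectral treatment more accurate for the heating rates discussed above.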

  7. [Objective surgery -- advanced robotic devices and simulators used for surgical skill assessment].

    PubMed

    Suhánszki, Norbert; Haidegger, Tamás

    2014-12-01

    Robotic assistance has become a leading trend in minimally invasive surgery (MIS), building on the global success of laparoscopic surgery. Manual laparoscopy requires advanced skills and capabilities acquired through a tedious learning process, while da Vinci-type surgical systems offer intuitive control and advanced ergonomics. Nevertheless, in either case, the key issue is the ability to assess surgeons' skills and capabilities objectively. Robotic devices offer a radically new way to collect data during surgical procedures, opening the space for new forms of skill parameterization. This may be revolutionary in MIS training, enabling new and objective surgical curricula and examination methods. The article reviews currently developed skill assessment techniques for robotic surgery and simulators, thoroughly inspecting their validation procedures and utility. In the coming years, these methods will become the mainstream of Western surgical education.

  8. Advanced flight deck/crew station simulator functional requirements

    NASA Technical Reports Server (NTRS)

    Wall, R. L.; Tate, J. L.; Moss, M. J.

    1980-01-01

    This report documents a study of flight deck/crew system research facility requirements for investigating issues involved in developing systems and procedures for interfacing transport aircraft with the air traffic control systems planned for 1985 to 2000. Crew system needs of NASA, the U.S. Air Force, and industry were investigated and reported; a matrix of these is included, as are recommended functional requirements and design criteria for simulation facilities in which to conduct this research. Methods of exploiting the commonality and similarity among facilities are identified, and plans for exploiting these to reduce implementation costs and allow efficient transfer of experiments from one facility to another are presented.

  9. Large eddy simulation of unsteady wind farm behavior using advanced actuator disk models

    NASA Astrophysics Data System (ADS)

    Moens, Maud; Duponcheel, Matthieu; Winckelmans, Gregoire; Chatelain, Philippe

    2014-11-01

    The present project aims at improving the level of fidelity of unsteady wind-farm-scale simulations through an effort on the representation and modeling of the rotors. The chosen tool for the simulations is a fourth-order finite difference code, developed at Universite catholique de Louvain; this solver implements Large Eddy Simulation (LES) approaches. The wind turbines are modeled as advanced actuator disks: these disks are coupled with the Blade Element Momentum (BEM) method and also take into account the turbine dynamics and controller. A special effort is made here to reproduce specific wake behaviors: wake decay and expansion are initially governed by vortex instabilities, information that cannot be obtained from BEM calculations. We therefore aim to capture it by matching the large scales of the actuator-disk flow to high-fidelity wake simulations produced using a Vortex Particle-Mesh method, which is achieved by adding a controlled excitation at the disk. We apply this tool to the investigation of atmospheric turbulence effects on power production and wake behavior at the wind-farm level; a turbulent velocity field is used as the inflow boundary condition for the simulations. We gratefully acknowledge the support of GDF Suez for the fellowship of Mrs Maud Moens.
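
    The momentum-theory relation underlying actuator-disk models links the thrust coefficient to the axial induction factor via CT = 4a(1 - a). A minimal sketch of that standalone textbook relation (not the solver described above; the rotor radius and wind speed below are hypothetical):

```python
import math

def induction_from_ct(ct):
    """Axial induction factor a from the thrust coefficient via 1-D
    momentum theory, CT = 4 a (1 - a), taking the physical root a <= 1/2."""
    if ct > 1.0:
        raise ValueError("1-D momentum theory is invalid for CT > 1")
    return 0.5 * (1.0 - math.sqrt(1.0 - ct))

def disk_thrust(ct, rho, area, u_inf):
    """Thrust force (N) the actuator disk applies to the flow."""
    return 0.5 * rho * area * u_inf ** 2 * ct

a = induction_from_ct(8.0 / 9.0)   # Betz-optimal point: a = 1/3
thrust = disk_thrust(8.0 / 9.0, rho=1.225,
                     area=math.pi * 63.0 ** 2, u_inf=10.0)
```

    In an LES coupling like the one described above, a force of this kind is distributed over the disk cells; the BEM refinement replaces the single CT with blade-resolved loads.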

  10. Simulation of an advanced techniques of ion propulsion Rocket system

    NASA Astrophysics Data System (ADS)

    Bakkiyaraj, R.

    2016-07-01

    The ion propulsion rocket (IPR) system is expected to become popular with the development of deuterium and argon gas techniques and a hexagonal magnetohydrodynamic (MHD) generator, because power is generated indirectly from the ionization chamber; the design thrust is 1.2 N at 40 kW of electric power with high efficiency. The proposed work studies MHD power generation through the ionization of deuterium gas and the combination of two gaseous ion species (deuterium ions and argon ions) at the acceleration stage. The IPR consists of three parts: (1) a hexagonal MHD-based power generator fed by the ionization chamber, (2) an ion accelerator, and (3) an exhaust nozzle. An energy of about 1312 kJ/mol is required to ionize the deuterium gas. The ionized deuterium flows from the RF ionization chamber through the MHD generator to the nozzle with enhanced velocity, generating a voltage across the two pairs of MHD electrodes; thrust is then produced by mixing the deuterium and argon ions at the acceleration stage. The IPR system has been simulated in MATLAB. Comparison of the simulation results with theoretical and previous results indicates that the proposed method achieves the design thrust value at 40 kW for the simulated IPR system.

  11. Advanced wellbore thermal simulator GEOTEMP2 research report

    SciTech Connect

    Mitchell, R.F.

    1982-02-01

    The development of the GEOTEMP2 wellbore thermal simulator is described. The major technical features include a general purpose air and mist drilling simulator and a two-phase steam flow simulator that can model either injection or production.

  12. Unstructured viscous grid generation by advancing-front method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    1993-01-01

    A new method of generating unstructured triangular/tetrahedral grids with high-aspect-ratio cells is proposed. The method is based on a new grid-marching strategy, referred to as 'advancing layers', for construction of highly stretched cells in the boundary layer, and on the conventional advancing-front technique for generation of regular, equilateral cells in the inviscid-flow region. Unlike existing semi-structured viscous grid generation techniques, the new procedure relies on a totally unstructured advancing-front grid strategy, resulting in substantially enhanced grid flexibility and efficiency. The method is conceptually simple but powerful, capable of producing high-quality viscous grids for complex configurations with ease. A number of two-dimensional triangular grids are presented to demonstrate the methodology. The basic elements of the method, however, have been designed primarily with three-dimensional problems in mind, making it extensible to tetrahedral viscous grid generation.
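
    The marching idea can be sketched in miniature: offset boundary nodes along local normals with geometrically stretched spacing to build highly stretched layers. This toy version (hypothetical names, a flat wall, fixed normals) omits the front management and cell-quality checks a real generator needs:

```python
def advancing_layers(boundary, normals, h0, ratio, n_layers):
    """March node layers off a boundary curve: layer k is offset along
    the local unit normal by the cumulative sum of geometrically
    stretched steps h0 * ratio**k, giving the high-aspect-ratio cells
    used in boundary-layer grids."""
    layers = [list(boundary)]
    dist, step = 0.0, h0
    for _ in range(n_layers):
        dist += step
        layers.append([(x + dist * nx, y + dist * ny)
                       for (x, y), (nx, ny) in zip(boundary, normals)])
        step *= ratio
    return layers

# Flat wall along y = 0 with normals pointing in +y
wall = [(float(i), 0.0) for i in range(5)]
norms = [(0.0, 1.0)] * 5
grid = advancing_layers(wall, norms, h0=0.01, ratio=1.3, n_layers=4)
```

    Connecting consecutive layers yields stretched quadrilaterals (or, split diagonally, triangles); beyond the last layer a conventional advancing-front generator would take over with near-equilateral cells.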

  13. Application of Advanced Methods to Predict Grid to Rod Fretting in PWRs

    SciTech Connect

    Karoutas, Zeses; Roger, Lu Y.; Yan, J.; Krammen, M.A.; Sham, Sam

    2012-01-01

    Advanced modeling and simulation methods are being developed as part of the US Department of Energy sponsored Nuclear Energy Modeling and Simulation Hub called CASL (Consortium for Advanced Simulation of LWRs). The key participants of the CASL team include Oak Ridge National Laboratory (lead), Idaho National Laboratory, Sandia National Laboratories, Los Alamos National Laboratory, Massachusetts Institute of Technology, North Carolina State University, University of Michigan, Electric Power Research Institute, Tennessee Valley Authority and Westinghouse Electric Corporation. One of the key objectives of the CASL program is to develop multi-physics methods and tools which evaluate neutronic, thermal-hydraulic, structural mechanics and nuclear fuel rod performance in rod bundles to support power uprates, increased burnup/cycle length and life extension for US nuclear plants.

  14. Advanced Ablative Insulators and Methods of Making Them

    NASA Technical Reports Server (NTRS)

    Congdon, William M.

    2005-01-01

    Advanced ablative (more specifically, charring) materials that provide temporary protection against high temperatures, and advanced methods of designing and manufacturing insulators based on these materials, are undergoing development. These materials and methods were conceived in an effort to replace the traditional thermal-protection systems (TPSs) of re-entry spacecraft with robust, lightweight, better-performing TPSs that can be designed and manufactured more rapidly and at lower cost. These materials and methods could also be used to make improved TPSs for general aerospace, military, and industrial applications.

  15. Physalis: a New Method for Particle Simulations

    NASA Astrophysics Data System (ADS)

    Takagi, Shu; Oguz, Hasan; Prosperetti, Andrea

    2000-11-01

    A new computational method for full Navier-Stokes viscous flow past cylinders and spheres is described and illustrated with preliminary results. Since, in the rest frame, the velocity vanishes on the particle, the Stokes equations apply in the immediate neighborhood of its surface. The analytic solutions of these equations, available for both spheres and cylinders, make it possible to effectively remove the particle, whose effect is replaced by a consistency condition on the nodes of the computational grid that surround it. This condition is satisfied iteratively by a method that solves the field equations over the entire computational domain disregarding the presence of the particles, so that fast solvers can be used. The procedure eliminates the geometrical complexity of multi-particle simulations and permits the simulation of disperse flows containing a large number of particles at moderate computational cost. Supported by DOE and the Japanese MESSC.

  16. Multigrid methods with applications to reservoir simulation

    SciTech Connect

    Xiao, Shengyou

    1994-05-01

    Multigrid methods are studied for solving elliptic partial differential equations. Focus is on parallel multigrid methods and their use for reservoir simulation. Multicolor Fourier analysis is used to analyze the behavior of standard multigrid methods for problems in one and two dimensions, and the relation between multicolor and standard Fourier analysis is established. Multiple-coarse-grid methods for solving model problems in one and two dimensions are considered; at each coarse-grid level, more than one coarse grid is used to improve convergence. For a given Dirichlet problem, a related extended problem is first constructed; a purification procedure can be used to obtain Moore-Penrose solutions of the singular systems encountered. For solving anisotropic equations, semicoarsening and line-smoothing techniques are used with multiple-coarse-grid methods to improve convergence. Two-level convergence factors are estimated using multicolor Fourier analysis. In the case where each operator has the same stencil at each grid point on one level, exact multilevel convergence factors can be obtained. For solving partial differential equations with discontinuous coefficients, interpolation and restriction operators should include information about the equation coefficients; matrix-dependent interpolation and restriction operators based on the Schur complement can be used in nonsymmetric cases. A semicoarsening multigrid solver with these operators is used in UTCOMP, a 3-D, multiphase, multicomponent, compositional reservoir simulator. The numerical experiments are carried out on different computing systems. Results indicate that the multigrid methods are promising.
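
    The basic two-grid correction cycle underlying all of these variants can be sketched for the 1-D Poisson problem. This is a textbook illustration (weighted-Jacobi smoothing, full-weighting restriction, linear interpolation, direct coarse solve), not the multiple-coarse-grid or matrix-dependent operators studied in the report:

```python
def twogrid_poisson(f, n_sweeps=3, n_cycles=10):
    """Two-grid correction scheme for -u'' = f on (0,1), u(0)=u(1)=0,
    discretized on n interior points (n odd so the coarse grid nests):
    pre-smooth, restrict the residual, solve the coarse problem
    directly, interpolate and add the correction, post-smooth."""
    n = len(f)
    h = 1.0 / (n + 1)
    u = [0.0] * n

    def residual(u, f, h):
        r = []
        for i in range(len(u)):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < len(u) - 1 else 0.0
            r.append(f[i] - (2 * u[i] - left - right) / h ** 2)
        return r

    def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
        for _ in range(sweeps):
            r = residual(u, f, h)
            u = [ui + w * (h ** 2 / 2.0) * ri for ui, ri in zip(u, r)]
        return u

    def solve_direct(f, h):          # Thomas algorithm for the 1-D stencil
        m = len(f)
        a, b = [0.0] * m, [0.0] * m
        denom = 2.0
        a[0], b[0] = 1.0 / denom, f[0] * h ** 2 / denom
        for i in range(1, m):
            denom = 2.0 - a[i - 1]
            a[i] = 1.0 / denom
            b[i] = (f[i] * h ** 2 + b[i - 1]) / denom
        x = [0.0] * m
        x[-1] = b[-1]
        for i in range(m - 2, -1, -1):
            x[i] = b[i] + a[i] * x[i + 1]
        return x

    for _ in range(n_cycles):
        u = jacobi(u, f, h, n_sweeps)                  # pre-smoothing
        r = residual(u, f, h)
        rc = [0.25 * r[2 * i] + 0.5 * r[2 * i + 1] + 0.25 * r[2 * i + 2]
              for i in range((n - 1) // 2)]            # full weighting
        ec = solve_direct(rc, 2 * h)                   # coarse-grid solve
        e = [0.0] * n                                  # linear interpolation
        for i, eci in enumerate(ec):
            e[2 * i + 1] += eci
            e[2 * i] += 0.5 * eci
            e[2 * i + 2] += 0.5 * eci
        u = [ui + ei for ui, ei in zip(u, e)]
        u = jacobi(u, f, h, n_sweeps)                  # post-smoothing
    return u

n = 31                    # fine grid; coarse grid then has 15 points
f = [1.0] * n             # -u'' = 1  has exact solution u = x(1 - x)/2
u = twogrid_poisson(f)
```

    Recursing on the coarse solve instead of inverting it directly turns this two-grid cycle into a V-cycle; the parallel and multiple-coarse-grid variants in the report modify the coarse-level work while keeping this same correction structure.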

  17. Twitter's tweet method modelling and simulation

    NASA Astrophysics Data System (ADS)

    Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.

    2015-02-01

    This paper proposes a set of Twitter marketing methods. The tools that Twitter provides are modelled and simulated using iThink in the context of a Twitter media-marketing agency. The paper leverages the system dynamics paradigm to model Twitter marketing tools and methods, using the iThink™ system to implement them, and uses the design science research methodology for the proof of concept of the models and modelling processes. The models were developed for a Twitter marketing agent/company and tested in real circumstances with real numbers, and were finalized through a number of revisions and iterations of the design, develop, simulate, test and evaluate cycle. The paper also addresses the methods best suited to organized promotion through targeting on the Twitter social media service. The validity and usefulness of these Twitter marketing method models for day-to-day decision making are authenticated by the management of the company. The work implements system dynamics concepts of Twitter marketing methods modelling and produces models of various Twitter marketing situations. The Tweet method that Twitter provides can be adjusted, depending on the situation, to maximize the profit of the company/agent.

  18. Decision-Theoretic Methods in Simulation Optimization

    DTIC Science & Technology

    2014-09-24

    …Los Alamos National Lab: Frazier visited LANL, hosted by Frank Alexander, in January 2013, where he discussed the use of simulation optimization methods for…Alexander, Turab Lookman, and others from LANL, at the Materials Informatics Workshop at the Santa Fe Institute in April 2013. In February 2014, Frazier…

  19. Strategy to Promote Active Learning of an Advanced Research Method

    ERIC Educational Resources Information Center

    McDermott, Hilary J.; Dovey, Terence M.

    2013-01-01

    Research methods courses aim to equip students with the knowledge and skills required for research yet seldom include practical aspects of assessment. This reflective practitioner report describes and evaluates an innovative approach to teaching and assessing advanced qualitative research methods to final-year psychology undergraduate students. An…

  20. Advanced Simulation Capability for Environmental Management (ASCEM) Phase II Demonstration

    SciTech Connect

    Freshley, M.; Hubbard, S.; Flach, G.; Freedman, V.; Agarwal, D.; Andre, B.; Bott, Y.; Chen, X.; Davis, J.; Faybishenko, B.; Gorton, I.; Murray, C.; Moulton, D.; Meyer, J.; Rockhold, M.; Shoshani, A.; Steefel, C.; Wainwright, H.; Waichler, S.

    2012-09-28

    In 2009, the National Academies of Science (NAS) reviewed and validated the U.S. Department of Energy Office of Environmental Management (EM) Technology Program in its publication, Advice on the Department of Energy’s Cleanup Technology Roadmap: Gaps and Bridges. The NAS report outlined prioritization needs for the Groundwater and Soil Remediation Roadmap, concluded that contaminant behavior in the subsurface is poorly understood, and recommended further research in this area as a high priority. To address this NAS concern, the EM Office of Site Restoration began supporting the development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific approach that uses an integration of toolsets for understanding and predicting contaminant fate and transport in natural and engineered systems. The ASCEM modeling toolset is modular and open source. It is divided into three thrust areas: Multi-Process High Performance Computing (HPC), Platform and Integrated Toolsets, and Site Applications. The ASCEM toolsets will facilitate integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. During fiscal year 2012, the ASCEM project continued to make significant progress in capabilities development. Capability development occurred in both the Platform and Integrated Toolsets and Multi-Process HPC Simulator areas. The new Platform and Integrated Toolsets capabilities provide the user an interface and the tools necessary for end-to-end model development that includes conceptual model definition, data management for model input, model calibration and uncertainty analysis, and model output processing including visualization. The new HPC Simulator capabilities target increased functionality of process model representations, toolsets for interaction with the Platform, and model confidence testing and verification for

  1. A Primer In Advanced Fatigue Life Prediction Methods

    NASA Technical Reports Server (NTRS)

    Halford, Gary R.

    2000-01-01

    Metal fatigue has plagued structural components for centuries, and it remains a critical durability issue in today's aerospace hardware. This is true despite vastly improved and advanced materials, increased mechanistic understanding, and development of accurate structural analysis and advanced fatigue life prediction tools. Each advance is quickly taken advantage of to produce safer, more reliable, more cost-effective, and better-performing products. In other words, as the envelope is expanded, components are then designed to operate just as close to the newly expanded envelope as they were to the initial one. The problem is perennial. The economic importance of addressing structural durability issues early in the design process is emphasized. Tradeoffs with performance, cost, and legislated restrictions are pointed out. Several aspects of structural durability of advanced systems, advanced materials and advanced fatigue life prediction methods are presented. Specific items include the basic elements of durability analysis, conventional designs, barriers to be overcome for advanced systems, high-temperature life prediction for both creep-fatigue and thermomechanical fatigue, mean stress effects, multiaxial stress-strain states, and cumulative fatigue damage accumulation assessment.

  2. Electromagnetic metamaterial simulations using a GPU-accelerated FDTD method

    NASA Astrophysics Data System (ADS)

    Seok, Myung-Su; Lee, Min-Gon; Yoo, SeokJae; Park, Q.-Han

    2015-12-01

    Metamaterials composed of artificial subwavelength structures exhibit extraordinary properties that cannot be found in nature. Designing artificial structures having exceptional properties plays a pivotal role in current metamaterial research. We present a new numerical simulation scheme for metamaterial research. The scheme is based on a graphic processing unit (GPU)-accelerated finite-difference time-domain (FDTD) method. The FDTD computation can be significantly accelerated when GPUs are used instead of only central processing units (CPUs). We explain how the fast FDTD simulation of large-scale metamaterials can be achieved through communication optimization in a heterogeneous CPU/GPU-based computer cluster. Our method also includes various advanced FDTD techniques: the non-uniform grid technique, the total-field/scattered-field (TFSF) technique, the auxiliary field technique for dispersive materials, the running discrete Fourier transform, and the complex structure setting. We demonstrate the power of our new FDTD simulation scheme by simulating the negative refraction of light in a coaxial waveguide metamaterial.
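    The core leapfrog update that such GPU schemes accelerate can be illustrated by a minimal 1-D Yee FDTD scheme in plain NumPy (a serial sketch in normalized units; the cited cluster implementation, non-uniform grids, TFSF, and dispersive-material techniques are not reproduced here, and the function name is hypothetical):

    ```python
    import numpy as np

    def fdtd_1d(steps, n=201, src=100):
        # Minimal 1-D Yee FDTD in normalized units (c = dx = 1).
        # ez lives on integer nodes, hy on half-integer nodes between them.
        ez = np.zeros(n)
        hy = np.zeros(n - 1)
        S = 0.5  # Courant number dt/dx; 1-D stability requires S <= 1
        for t in range(steps):
            hy += S * (ez[1:] - ez[:-1])                 # update H from curl E
            ez[1:-1] += S * (hy[1:] - hy[:-1])           # update E from curl H
            ez[src] += np.exp(-((t - 30) / 10.0) ** 2)   # soft Gaussian source
        return ez
    ```

    On a GPU the same two update lines become element-wise kernels; the communication optimization described in the abstract concerns exchanging the halo values at subdomain boundaries between devices each step.
    
    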

  3. Pantograph catenary dynamic optimisation based on advanced multibody and finite element co-simulation tools

    NASA Astrophysics Data System (ADS)

    Massat, Jean-Pierre; Laurent, Christophe; Bianchi, Jean-Philippe; Balmès, Etienne

    2014-05-01

    This paper presents recent developments undertaken by SNCF Innovation & Research Department on numerical modelling of pantograph catenary interaction. It aims at describing an efficient co-simulation process between finite element (FE) and multibody (MB) modelling methods. FE catenary models are coupled with a full flexible MB representation with pneumatic actuation of pantograph. These advanced functionalities allow new kind of numerical analyses such as dynamic improvements based on innovative pneumatic suspensions or assessment of crash risks crossing areas that demonstrate the powerful capabilities of this computing approach.

  4. Simulating marine propellers with vortex particle method

    NASA Astrophysics Data System (ADS)

    Wang, Youjiang; Abdel-Maksoud, Moustafa; Song, Baowei

    2017-01-01

    The vortex particle method is applied to compute the open water characteristics of marine propellers. It is based on the large-eddy simulation technique, and the Smagorinsky-Lilly sub-grid scale model is implemented for the eddy viscosity. The vortex particle method is combined with the boundary element method, in the sense that the body is modelled with boundary elements and the slipstream is modelled with vortex particles. Rotational periodic boundaries are adopted, which leads to a cylindrical sector domain for the slipstream. The particle redistribution scheme and the fast multipole method are modified to consider the rotational periodic boundaries. Open water characteristics of three propellers with different skew angles are calculated with the proposed method. The results are compared with the ones obtained with boundary element method and experiments. It is found that the proposed method predicts the open water characteristics more accurately than the boundary element method, especially for high loading condition and high skew propeller. The influence of the Smagorinsky constant is also studied, which shows the results have a low sensitivity to it.
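    The velocity field that vortex particle methods evaluate (and accelerate with the fast multipole method) can be sketched in 2-D by direct regularized Biot-Savart summation (a generic illustration with hypothetical names, not the cited propeller implementation, which is three-dimensional and coupled to a boundary element method):

    ```python
    import numpy as np

    def biot_savart_2d(targets, positions, strengths, eps=1e-6):
        # Direct O(N*M) summation of the velocity induced at each target point
        # (rows of an (N, 2) array) by 2-D vortex particles:
        #   u = Gamma / (2*pi*r^2) * (-(y - yp), (x - xp)),
        # with a small regularization eps to avoid the point-vortex singularity.
        vel = np.zeros_like(np.asarray(targets, dtype=float))
        for (xp, yp), g in zip(positions, strengths):
            dx = targets[:, 0] - xp
            dy = targets[:, 1] - yp
            r2 = dx**2 + dy**2 + eps**2
            vel[:, 0] += -g * dy / (2 * np.pi * r2)
            vel[:, 1] += g * dx / (2 * np.pi * r2)
        return vel
    ```

    The rotational periodic boundaries described in the abstract would let such a summation run over a cylindrical sector only, with image particles supplying the remaining sectors.
    
    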

  5. Development of Advanced Electrochemical Emission Spectroscopy for Monitoring Corrosion in Simulated DOE Liquid Waste

    SciTech Connect

    Macdonald, Digby; Liu, Jun; Liu, Sue; Al-Rifaie, Mohammed; Sikora, Elzbieta

    2000-06-01

    The principal goals of this project are to develop advanced electrochemical emission spectroscopic (EES) methods for monitoring the corrosion of carbon steel in simulated DOE liquid waste and to develop a better understanding of the mechanisms of the corrosion of metals (e.g. iron, nickel, and chromium) and alloys (carbon steel, low alloy steels, stainless steels) in these environments. During the first two years of this project, significant advances have been made in developing a better understanding of the corrosion of iron in aqueous solutions as a function of pH, of the growth of passive films on metal surfaces, and in developing EES techniques for corrosion monitoring. This report summarizes work at the beginning of the third year of the 3-year project.

  6. Advanced stress analysis methods applicable to turbine engine structures

    NASA Technical Reports Server (NTRS)

    Pian, T. H. H.

    1985-01-01

    Advanced stress analysis methods applicable to turbine engine structures are investigated. The construction of special elements containing traction-free circular boundaries is investigated. New versions of the mixed variational principle and of hybrid stress elements are formulated. A method is established for the suppression of kinematic deformation modes. SemiLoof plate and shell elements are constructed by the assumed-stress hybrid method. An elastic-plastic analysis is conducted using viscoplasticity theory with the mechanical sub-element model.

  7. Advanced surface paneling method for subsonic and supersonic flow

    NASA Technical Reports Server (NTRS)

    Erickson, L. L.; Johnson, F. T.; Ehlers, F. E.

    1976-01-01

    Numerical results illustrating the capabilities of an advanced aerodynamic surface paneling method are presented. The method is applicable to both subsonic and supersonic flow, as represented by linearized potential flow theory. The method is based on linearly varying sources and quadratically varying doublets which are distributed over flat or curved panels. These panels are applied to the true surface geometry of arbitrarily shaped three dimensional aerodynamic configurations.

  8. Angioplasty simulation using ChainMail method

    NASA Astrophysics Data System (ADS)

    Le Fol, Tanguy; Acosta-Tamayo, Oscar; Lucas, Antoine; Haigron, Pascal

    2007-03-01

    Tackling transluminal angioplasty planning, the aim of our work is to provide patient-specific solutions to clinical problems. This work focuses on the realization of simple simulation scenarios that take into account the macroscopic behavior of stenosis: simulating geometrical and physical data from the inflation of a balloon while integrating data from tissue analysis and parameters from virtual tool-tissue interactions. In this context, three main behaviors have been identified: soft tissues crush completely under the effect of the balloon; calcified plaques do not admit any deformation but can move within deformable structures; and the blood vessel wall undergoes compression and tries to recover its original form. We investigated the use of ChainMail, which is based on elements linked to their neighbors through geometric constraints. Compared with time-consuming methods or low-realism ones, ChainMail methods provide a good compromise between physical and geometrical approaches. In this study, constraints are defined from pixel density in angio-CT images. The 2D method proposed in this paper first initializes the balloon in the blood vessel lumen. The balloon then inflates, and the propagation of the resulting motion gives an approximate reaction of the tissues. Finally, a minimal energy level is calculated to locally adjust element positions during an elastic relaxation stage. Preliminary experimental results obtained on 2D computed tomography (CT) images (100×100 pixels) show that the method is fast enough to handle a great number of linked elements. The simulation is able to provide real-time and realistic interactions, particularly for hard and soft plaques.
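    The constraint-propagation idea behind ChainMail can be sketched in one dimension: displacing one element triggers a sweep that pulls or pushes neighbors until every inter-element gap lies within fixed bounds (a simplified illustration with hypothetical spacing limits, not the authors' 2-D, density-driven implementation):

    ```python
    def chainmail_1d(x, moved, new_pos, dmin=0.5, dmax=1.5):
        # 1-D ChainMail: move one element, then propagate geometric constraints
        # so each neighbor stays within [dmin, dmax] of its predecessor.
        x = list(x)
        x[moved] = new_pos
        for i in range(moved + 1, len(x)):        # propagate to the right
            gap = x[i] - x[i - 1]
            if gap > dmax:
                x[i] = x[i - 1] + dmax
            elif gap < dmin:
                x[i] = x[i - 1] + dmin
        for i in range(moved - 1, -1, -1):        # propagate to the left
            gap = x[i + 1] - x[i]
            if gap > dmax:
                x[i] = x[i + 1] - dmax
            elif gap < dmin:
                x[i] = x[i + 1] - dmin
        return x
    ```

    The elastic relaxation stage mentioned in the abstract would follow this sweep, nudging elements toward a minimal-energy configuration within the same bounds.
    
    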

  9. Advances in the simulation and automated measurement of well-sorted granular material: 1. Simulation

    NASA Astrophysics Data System (ADS)

    Buscombe, D.; Rubin, D. M.

    2012-06-01

    In this, the first of a pair of papers which address the simulation and automated measurement of well-sorted natural granular material, a method is presented for simulation of two-phase (solid, void) assemblages of discrete non-cohesive particles. The purpose is to have a flexible, yet computationally and theoretically simple, suite of tools with well constrained and well known statistical properties, in order to simulate realistic granular material as a discrete element model with realistic size and shape distributions, for a variety of purposes. The stochastic modeling framework is based on three-dimensional tessellations with variable degrees of order in particle-packing arrangement. Examples of sediments with a variety of particle size distributions and spatial variability in grain size are presented. The relationship between particle shape and porosity conforms to published data. The immediate application is testing new algorithms for automated measurements of particle properties (mean and standard deviation of particle sizes, and apparent porosity) from images of natural sediment, as detailed in the second of this pair of papers. The model could also prove useful for simulating specific depositional structures found in natural sediments, the result of physical alterations to packing and grain fabric, using discrete particle flow models. While the principal focus here is on naturally occurring sediment and sedimentary rock, the methods presented might also be useful for simulations of similar granular or cellular material encountered in engineering, industrial and life sciences.
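    A far simpler relative of the simulation described here is a two-phase (solid, void) assemblage built by rejection sampling, from which an apparent porosity follows directly (a toy 2-D sketch with hypothetical names; the cited framework uses 3-D tessellations with controllable packing order, not this naive scheme):

    ```python
    import numpy as np

    def pack_disks(n_target, radius, box=1.0, max_tries=20000, seed=0):
        # Naive rejection-sampling packing of equal, non-overlapping disks
        # in a unit box: sample a center, keep it if it overlaps nothing.
        rng = np.random.default_rng(seed)
        centers = []
        tries = 0
        while len(centers) < n_target and tries < max_tries:
            tries += 1
            c = rng.uniform(radius, box - radius, size=2)
            if all(np.hypot(*(c - p)) >= 2 * radius for p in centers):
                centers.append(c)
        return np.array(centers)

    def porosity(centers, radius, box=1.0):
        # Apparent porosity = void fraction of the solid/void assemblage.
        return 1.0 - len(centers) * np.pi * radius**2 / box**2
    ```
    
    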

  10. Advances in the simulation and automated measurement of well-sorted granular material: 1. Simulation

    USGS Publications Warehouse

    Daniel Buscombe,; Rubin, David M.

    2012-01-01

    In this, the first of a pair of papers which address the simulation and automated measurement of well-sorted natural granular material, a method is presented for simulation of two-phase (solid, void) assemblages of discrete non-cohesive particles. The purpose is to have a flexible, yet computationally and theoretically simple, suite of tools with well constrained and well known statistical properties, in order to simulate realistic granular material as a discrete element model with realistic size and shape distributions, for a variety of purposes. The stochastic modeling framework is based on three-dimensional tessellations with variable degrees of order in particle-packing arrangement. Examples of sediments with a variety of particle size distributions and spatial variability in grain size are presented. The relationship between particle shape and porosity conforms to published data. The immediate application is testing new algorithms for automated measurements of particle properties (mean and standard deviation of particle sizes, and apparent porosity) from images of natural sediment, as detailed in the second of this pair of papers. The model could also prove useful for simulating specific depositional structures found in natural sediments, the result of physical alterations to packing and grain fabric, using discrete particle flow models. While the principal focus here is on naturally occurring sediment and sedimentary rock, the methods presented might also be useful for simulations of similar granular or cellular material encountered in engineering, industrial and life sciences.

  11. Advanced digital methods for solid propellant burning rate determination

    NASA Astrophysics Data System (ADS)

    Jones, Daniel A.

    The work presented here is a study of a digital method for determining the combustion-bomb burning rate of a fuel-rich gas generator propellant sample using the ultrasonic pulse-echo technique. The advanced digital method, which places user-defined limits on the search for the ultrasonic echo from the burning surface, is computationally faster than the previous cross-correlation method and is able to analyze data for this class of propellant that the previous method could not. For the conditions investigated, the best-fit burning rate law at 800 psi from the ultrasonic technique and advanced cross-correlation method is within 3 percent of an independent analysis of the same data, and within 5 percent of the best-fit burning rate law found from parallel research on the same propellant in a motor configuration.

  12. ADVANCED SEISMIC BASE ISOLATION METHODS FOR MODULAR REACTORS

    SciTech Connect

    E. Blanford; E. Keldrauk; M. Laufer; M. Mieler; J. Wei; B. Stojadinovic; P.F. Peterson

    2010-09-20

    Advanced technologies for structural design and construction have the potential for major impact not only on nuclear power plant construction time and cost, but also on the design process and on the safety, security and reliability of next generation of nuclear power plants. In future Generation IV (Gen IV) reactors, structural and seismic design should be much more closely integrated with the design of nuclear and industrial safety systems, physical security systems, and international safeguards systems. Overall reliability will be increased, through the use of replaceable and modular equipment, and through design to facilitate on-line monitoring, in-service inspection, maintenance, replacement, and decommissioning. Economics will also receive high design priority, through integrated engineering efforts to optimize building arrangements to minimize building heights and footprints. Finally, the licensing approach will be transformed by becoming increasingly performance based and technology neutral, using best-estimate simulation methods with uncertainty and margin quantification. In this context, two structural engineering technologies, seismic base isolation and modular steel-plate/concrete composite structural walls, are investigated. These technologies have major potential to (1) enable standardized reactor designs to be deployed across a wider range of sites, (2) reduce the impact of uncertainties related to site-specific seismic conditions, and (3) alleviate reactor equipment qualification requirements. For Gen IV reactors the potential for deliberate crashes of large aircraft must also be considered in design. This report concludes that base-isolated structures should be decoupled from the reactor external event exclusion system. As an example, a scoping analysis is performed for a rectangular, decoupled external event shell designed as a grillage. 
This report also reviews modular construction technology, particularly steel-plate/concrete construction using

  13. A discrete event method for wave simulation

    SciTech Connect

    Nutaro, James J

    2006-01-01

    This article describes a discrete event interpretation of the finite difference time domain (FDTD) and digital wave guide network (DWN) wave simulation schemes. The discrete event method is formalized using the discrete event system specification (DEVS). The scheme is shown to have errors that are proportional to the resolution of the spatial grid. A numerical example demonstrates the relative efficiency of the scheme with respect to FDTD and DWN schemes. The potential for the discrete event scheme to reduce numerical dispersion and attenuation errors is discussed.

  14. Advancement of DOE's EnergyPlus Building Energy Simulation Program

    SciTech Connect

    Gu, Lixing; Shirey, Don; Raustad, Richard; Nigusse, Bereket; Sharma, Chandan; Lawrie, Linda; Strand, Rick; Pedersen, Curt; Fisher, Dan; Lee, Edwin; Witte, Mike; Glazer, Jason; Barnaby, Chip

    2011-09-30

    EnergyPlus™ is a new-generation computer software analysis tool that has been developed, tested, and commercialized to support DOE's Building Technologies (BT) Program in terms of whole-building, component, and systems R&D (http://www.energyplus.gov). It is also being used to support evaluation and decision making of zero energy building (ZEB) energy efficiency and supply technologies during new building design and existing building retrofits. The 5-year project was managed by the National Energy Technology Laboratory and was divided into 5 budget periods between 2006 and 2011. During the project period, 11 versions of EnergyPlus were released. This report summarizes work performed by an EnergyPlus development team led by the University of Central Florida's Florida Solar Energy Center (UCF/FSEC). The team members consist of DHL Consulting, C. O. Pedersen Associates, University of Illinois at Urbana-Champaign, Oklahoma State University, GARD Analytics, Inc., and WrightSoft Corporation. The project tasks involved new feature development, testing and validation, user support and training, and general EnergyPlus support. The team developed 146 new features during the 5-year period to advance the EnergyPlus capabilities. Annual contributions of new features were 7 in budget period 1, 19 in period 2, 36 in period 3, 41 in period 4, and 43 in period 5. The testing and validation task focused on running the test suites and publishing reports, developing new IEA test suite cases, testing and validating new source code, addressing change requests, and creating and testing installation packages. The user support and training task provided support for users and interface developers, and organized and taught workshops. The general support task involved upgrading StarTeam (team sharing) software and updating existing utility software. The project met the DOE objectives and completed all tasks successfully. 
Although the EnergyPlus software was enhanced significantly

  15. Advanced Simulation in Undergraduate Pilot Training: Systems Integration. Final Report (February 1972-March 1975).

    ERIC Educational Resources Information Center

    Larson, D. F.; Terry, C.

    The Advanced Simulator for Undergraduate Pilot Training (ASUPT) was designed to investigate the role of simulation in the future Undergraduate Pilot Training (UPT) program. The problem addressed in this report was one of integrating two unlike components into one synchronized system. These two components were the Basic T-37 Simulators and their…

  16. Characterization and Simulation of the Thermoacoustic Instability Behavior of an Advanced, Low Emissions Combustor Prototype

    NASA Technical Reports Server (NTRS)

    DeLaat, John C.; Paxson, Daniel E.

    2008-01-01

    Extensive research is being done toward the development of ultra-low-emissions combustors for aircraft gas turbine engines. However, these combustors have an increased susceptibility to thermoacoustic instabilities. This type of instability was recently observed in an advanced, low emissions combustor prototype installed in a NASA Glenn Research Center test stand. The instability produces pressure oscillations that grow with increasing fuel/air ratio, preventing full power operation. The instability behavior makes the combustor a potentially useful test bed for research into active control methods for combustion instability suppression. The instability behavior was characterized by operating the combustor at various pressures, temperatures, and fuel and air flows representative of operation within an aircraft gas turbine engine. Trends in instability behavior versus operating condition have been identified and documented, and possible explanations for the trends provided. A simulation developed at NASA Glenn captures the observed instability behavior. The physics-based simulation includes the relevant physical features of the combustor and test rig, employs a Sectored 1-D approach, includes simplified reaction equations, and provides time-accurate results. A computationally efficient method is used for area transitions, which decreases run times and allows the simulation to be used for parametric studies, including control method investigations. Simulation results show that the simulation exhibits a self-starting, self-sustained combustion instability and also replicates the experimentally observed instability trends versus operating condition. Future plans are to use the simulation to investigate active control strategies to suppress combustion instabilities and then to experimentally demonstrate active instability suppression with the low emissions combustor prototype, enabling full power, stable operation.

  17. Simulation for Supporting Scale-Up of a Fluidized Bed Reactor for Advanced Water Oxidation

    PubMed Central

    Abdul Raman, Abdul Aziz; Daud, Wan Mohd Ashri Wan

    2014-01-01

    Simulation of fluidized bed reactor (FBR) was accomplished for treating wastewater using Fenton reaction, which is an advanced oxidation process (AOP). The simulation was performed to determine characteristics of FBR performance, concentration profile of the contaminants, and various prominent hydrodynamic properties (e.g., Reynolds number, velocity, and pressure) in the reactor. Simulation was implemented for 2.8 L working volume using hydrodynamic correlations, continuous equation, and simplified kinetic information for phenols degradation as a model. The simulation shows that, by using Fe3+ and Fe2+ mixtures as catalyst, TOC degradation up to 45% was achieved for contaminant range of 40–90 mg/L within 60 min. The concentration profiles and hydrodynamic characteristics were also generated. A subsequent scale-up study was also conducted using similitude method. The analysis shows that up to 10 L working volume, the models developed are applicable. The study proves that, using appropriate modeling and simulation, data can be predicted for designing and operating FBR for wastewater treatment. PMID:25309949

  18. Advanced propulsion for LEO-Moon transport. 1: A method for evaluating advanced propulsion performance

    NASA Technical Reports Server (NTRS)

    Stern, Martin O.

    1992-01-01

    This report describes a study to evaluate the benefits of advanced propulsion technologies for transporting materials between low Earth orbit and the Moon. A relatively conventional reference transportation system, and several other systems, each of which includes one advanced technology component, are compared in terms of how well they perform a chosen mission objective. The evaluation method is based on a pairwise life-cycle cost comparison of each of the advanced systems with the reference system. Somewhat novel and economically important features of the procedure are the inclusion not only of mass payback ratios based on Earth launch costs, but also of repair and capital acquisition costs, and of adjustments in the latter to reflect the technological maturity of the advanced technologies. The required input information is developed by panels of experts. The overall scope and approach of the study are presented in the introduction. The bulk of the paper describes the evaluation method; the reference system and an advanced transportation system, including a spinning tether in an eccentric Earth orbit, are used to illustrate it.

  19. METHODS ADVANCEMENT FOR MILK ANALYSIS: THE MAMA STUDY

    EPA Science Inventory

    The Methods Advancement for Milk Analysis (MAMA) study was designed by US EPA and CDC investigators to provide data to support the technological and study design needs of the proposed National Children's Study (NCS). The NCS is a multi-Agency-sponsored study, authorized under the...

  20. Advanced boundary layer transition measurement methods for flight applications

    NASA Technical Reports Server (NTRS)

    Holmes, B. J.; Croom, C. C.; Gail, P. D.; Manuel, G. S.; Carraway, D. L.

    1986-01-01

    In modern laminar flow flight research, it is important to understand the specific cause(s) of laminar to turbulent boundary-layer transition. Such information is crucial to the exploration of the limits of practical application of laminar flow for drag reduction on aircraft. The transition modes of interest in current flight investigations include the viscous Tollmien-Schlichting instability, the inflectional instability at laminar separation, and the crossflow inflectional instability, as well as others. This paper presents the results to date of research on advanced devices and methods used for the study of laminar boundary-layer transition phenomena in the flight environment. Recent advancements in the development of arrayed hot-film devices and of a new flow visualization method are discussed. Arrayed hot-film devices have been designed to detect the presence of laminar separation, and of crossflow vorticity. The advanced flow visualization method utilizes color changes in liquid-crystal coatings to detect boundary-layer transition at high altitude flight conditions. Flight and wind tunnel data are presented to illustrate the design and operation of these advanced methods. These new research tools provide information on disturbance growth and transition mode which is essential to furthering our understanding of practical design limits for applications of laminar flow technology.

  1. Development of an advanced actuator disk model for Large-Eddy Simulation of wind farms

    NASA Astrophysics Data System (ADS)

    Moens, Maud; Duponcheel, Matthieu; Winckelmans, Gregoire; Chatelain, Philippe

    2015-11-01

    This work aims at improving the fidelity of the wind turbine modelling for Large-Eddy Simulation (LES) of wind farms, in order to accurately predict the loads, the production, and the wake dynamics. In those simulations, the wind turbines are accounted for through actuator disks, i.e., a body-force term acting over the regularised disk swept by the rotor. These forces are computed using the Blade Element theory to estimate the normal and tangential components (based on the local simulated flow and the blade characteristics). The local velocities are modified using the Glauert tip-loss factor in order to account for the finite number of blades; the computation of this correction is here improved thanks to a local estimation of the effective upstream velocity at every point of the disk. These advanced actuator disks are implemented in a 4th order finite difference LES solver and are compared to a classical Blade Element Momentum method and to high fidelity wake simulations performed using a Vortex Particle-Mesh method in uniform and turbulent flows.
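    The tip-loss correction mentioned in this record is, in its standard Prandtl/Glauert form, F = (2/π)·arccos(e^(−f)) with f = B(R − r)/(2r sin φ), where B is the blade count, R the rotor radius, r the local radius, and φ the local inflow angle. A minimal sketch of this generic blade-element form (not the paper's locally corrected variant; the function name is hypothetical):

    ```python
    import math

    def prandtl_tip_loss(r, R, B, phi):
        # Prandtl tip-loss factor for blade-element / actuator-disk models:
        #   F = (2/pi) * arccos(exp(-f)),  f = B*(R - r) / (2*r*sin(phi))
        # F -> 1 inboard (no correction) and F -> 0 at the blade tip r = R.
        f = B * (R - r) / (2.0 * r * math.sin(phi))
        return (2.0 / math.pi) * math.acos(math.exp(-f))
    ```
    
    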

  2. Domain Decomposition By the Advancing-Partition Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of the domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  3. Advances and future directions of research on spectral methods

    NASA Technical Reports Server (NTRS)

    Patera, A. T.

    1986-01-01

    Recent advances in spectral methods are briefly reviewed and characterized with respect to their convergence and computational complexity. Classical finite element and spectral approaches are then compared, and spectral element (or p-type finite element) approximations are introduced. The method is applied to the full Navier-Stokes equations, and examples are given of the application of the technique to several transitional flows. Future directions of research in the field are outlined.

  4. Current Advances in the Computational Simulation of the Formation of Low-Mass Stars

    SciTech Connect

    Klein, R I; Inutsuka, S; Padoan, P; Tomisaka, K

    2005-10-24

    Developing a theory of low-mass star formation (≈0.1 to 3 M⊙) remains one of the most elusive and important goals of theoretical astrophysics. The star-formation process is the outcome of the complex dynamics of interstellar gas involving non-linear interactions of turbulence, gravity, magnetic field and radiation. The evolution of protostellar condensations, from the moment they are assembled by turbulent flows to the time they reach stellar densities, spans an enormous range of scales, resulting in a major computational challenge for simulations. Since the previous Protostars and Planets conference, dramatic advances in the development of new numerical algorithmic techniques have been successfully implemented on large scale parallel supercomputers. Among such techniques, Adaptive Mesh Refinement and Smoothed Particle Hydrodynamics have provided frameworks to simulate the process of low-mass star formation with a very large dynamic range. It is now feasible to explore the turbulent fragmentation of molecular clouds and the gravitational collapse of cores into stars self-consistently within the same calculation. The increased sophistication of these powerful methods comes with substantial caveats associated with the use of the techniques and the interpretation of the numerical results. In this review, we examine what has been accomplished in the field and present a critique of both numerical methods and scientific results. We stress that computational simulations should obey the available observational constraints and demonstrate numerical convergence. Failing this, results of large scale simulations do not advance our understanding of low-mass star formation.

  5. Simulations of Failure via Three-Dimensional Cracking in Fuel Cladding for Advanced Nuclear Fuels

    SciTech Connect

    Lu, Hongbing; Bukkapatnam, Satish; Harimkar, Sandip; Singh, Raman; Bardenhagen, Scott

    2014-01-09

    Enhancing performance of fuel cladding and duct alloys is a key means of increasing fuel burnup. This project will address the failure of fuel cladding via three-dimensional cracking models. Researchers will develop a simulation code for the failure of the fuel cladding and validate the code through experiments. The objective is to develop an algorithm to determine the failure of fuel cladding in the form of three-dimensional cracking due to prolonged exposure under varying conditions of pressure, temperature, chemical environment, and irradiation. This project encompasses the following tasks: 1. Simulate 3D crack initiation and growth under instantaneous and/or fatigue loads using a new variant of the material point method (MPM); 2. Simulate debonding of the materials in the crack path using cohesive elements, considering normal and shear traction separation laws; 3. Determine the crack propagation path, considering damage of the materials incorporated in the cohesive elements to allow the energy release rate to be minimized; 4. Simulate the three-dimensional fatigue crack growth as a function of loading histories; 5. Verify the simulation code by comparing results to theoretical and numerical studies available in the literature; 6. Conduct experiments to observe the crack path and surface profile in unused fuel cladding and validate against simulation results; and 7. Expand the adaptive mesh refinement infrastructure parallel processing environment to allow adaptive mesh refinement at the 3D crack fronts and adaptive mesh merging in the wake of cracks. Fuel cladding is made of materials such as stainless steels and ferritic steels with added alloying elements, which increase stability and durability under irradiation. As fuel cladding is subjected to water, chemicals, fission gas, pressure, high temperatures, and irradiation while in service, understanding performance is essential. In the fast fuel used in advanced burner reactors, simulations of the nuclear

  6. Advances in nucleic acid-based detection methods.

    PubMed Central

    Wolcott, M J

    1992-01-01

    Laboratory techniques based on nucleic acid methods have increased in popularity over the last decade with clinical microbiologists and other laboratory scientists who are concerned with the diagnosis of infectious agents. This increase in popularity is a result primarily of advances made in nucleic acid amplification and detection techniques. Polymerase chain reaction, the original nucleic acid amplification technique, changed the way many people viewed and used nucleic acid techniques in clinical settings. After the potential of polymerase chain reaction became apparent, other methods of nucleic acid amplification and detection were developed. These alternative nucleic acid amplification methods may become serious contenders for application to routine laboratory analyses. This review presents some background information on nucleic acid analyses that might be used in clinical and anatomical laboratories and describes some recent advances in the amplification and detection of nucleic acids. PMID:1423216

  7. Etch Profile Simulation Using Level Set Methods

    NASA Technical Reports Server (NTRS)

    Hwang, Helen H.; Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1997-01-01

    Etching and deposition of materials are critical steps in semiconductor processing for device manufacturing. Both etching and deposition may have isotropic and anisotropic components, due to directional sputtering and redeposition of materials, for example. Previous attempts at modeling profile evolution have used so-called "string theory" to simulate the moving solid-gas interface between the semiconductor and the plasma. One complication of this method is that extensive de-looping schemes are required at the profile corners. We will present a 2D profile evolution simulation using level set theory to model the surface. By embedding the location of the interface in a field variable, the need for de-looping schemes is eliminated and profile corners are more accurately modeled. This level set profile evolution model will calculate both isotropic and anisotropic etch and deposition rates of a substrate in low-pressure (tens of mTorr) plasmas, considering the incident ion energy angular distribution functions and neutral fluxes. We will present etching profiles of Si substrates in Ar/Cl2 discharges for various incident ion energies and trench geometries.
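    The core of a level-set profile model is advancing the embedded field so its zero contour moves at the local etch or deposition speed. A minimal one-dimensional sketch of phi_t + V |phi_x| = 0 with a first-order Godunov upwind scheme is shown below (the simulation described above is 2D with spatially varying, flux-dependent rates; the uniform positive speed here is an illustrative assumption):

```python
def evolve_level_set(phi, speed, dx, dt, steps):
    """Advance phi_t + speed * |phi_x| = 0 for speed >= 0 (etching).

    The zero contour of phi moves in its normal direction at `speed`;
    first-order Godunov upwind differencing, one-sided at the boundaries.
    """
    n = len(phi)
    for _ in range(steps):
        new = list(phi)
        for i in range(n):
            dm = (phi[i] - phi[i - 1]) / dx if i > 0 else 0.0
            dp = (phi[i + 1] - phi[i]) / dx if i < n - 1 else 0.0
            grad = max(max(dm, 0.0), -min(dp, 0.0))   # upwind |phi_x|
            new[i] = phi[i] - dt * speed * grad
        phi = new
    return phi
```

    Because the interface is implicit in phi, merging or pinching fronts need no de-looping logic, which is the advantage claimed over string-based schemes.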

  8. Computational Simulations and the Scientific Method

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.
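    As an illustration of the kind of component test the authors advocate, a model developer could publish assertions that pin a model to its calibration point and to qualitative physical behavior. Sutherland's law for air viscosity is used here as a stand-in model (the constants are the standard air values; the test names and structure are hypothetical, in a pytest-style layout):

```python
def sutherland_viscosity(T, mu_ref=1.716e-5, T_ref=273.15, S=110.4):
    """Sutherland's law for dynamic viscosity of air (SI units, T in kelvin)."""
    return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

def test_reference_point():
    # The model must reproduce its calibration point exactly.
    assert abs(sutherland_viscosity(273.15) - 1.716e-5) < 1e-12

def test_monotone_in_temperature():
    # Gas viscosity increases with temperature.
    assert sutherland_viscosity(400.0) > sutherland_viscosity(300.0)
```

    Published alongside the model, such fixtures let an independent implementor verify their port before assessing the model's cost-to-benefit ratio.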

  9. Advanced Doubling Adding Method for Radiative Transfer in Planetary Atmospheres

    NASA Astrophysics Data System (ADS)

    Liu, Quanhua; Weng, Fuzhong

    2006-12-01

    The doubling adding method (DA) is one of the most accurate tools for detailed multiple-scattering calculations. The principle of the method goes back to the nineteenth century in a problem dealing with reflection and transmission by glass plates. Since then the doubling adding method has been widely used as a reference tool for other radiative transfer models. The method has never been used in operational applications owing to its tremendous demand on computational resources. This study derives an analytical expression replacing the most complicated thermal source terms in the doubling adding method. The new development is called the advanced doubling adding (ADA) method. Thanks also to the efficiency of matrix and vector manipulations in FORTRAN 90/95, the advanced doubling adding method is about 60 times faster than the doubling adding method. The radiance (i.e., forward) computation code of ADA is easily translated into tangent linear and adjoint codes for radiance gradient calculations. The simplicity in forward and Jacobian computation codes is very useful for operational applications and for the consistency between the forward and adjoint calculations in satellite data assimilation.
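    The doubling recursion itself is compact: the reflection and transmission of a layer are built from those of two identical half-layers, summing the inter-reflections as a geometric series. A toy scalar two-stream version is sketched below (the operational DA method works with matrices over polar angles and includes the thermal source terms that ADA replaces analytically; `alpha` and `beta` are hypothetical absorption and backscatter coefficients per unit optical depth):

```python
def doubling(beta, alpha, tau, n_doublings=30):
    """Scalar doubling: reflection/transmission of a layer of optical depth tau.

    Starts from a very thin layer (where single scattering is exact to first
    order) and doubles its thickness n_doublings times.
    """
    delta = tau / (2 ** n_doublings)        # starting thin-layer thickness
    r = beta * delta                        # thin-layer reflection
    t = 1.0 - (alpha + beta) * delta        # thin-layer transmission
    for _ in range(n_doublings):
        denom = 1.0 - r * r                 # geometric series of inter-reflections
        r, t = r + t * r * t / denom, t * t / denom
    return r, t
```

    A useful check is energy conservation: with no absorption (alpha = 0) the recursion preserves r + t = 1 exactly, while with absorption r + t < 1.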

  10. General advancing front packing algorithm for the discrete element method

    NASA Astrophysics Data System (ADS)

    Morfa, Carlos A. Recarey; Pérez Morales, Irvin Pablo; de Farias, Márcio Muniz; de Navarra, Eugenio Oñate Ibañez; Valera, Roberto Roselló; Casañas, Harold Díaz-Guzmán

    2016-11-01

    A generic formulation of a new method for packing particles is presented. It is based on a constructive advancing front method, and uses Monte Carlo techniques for the generation of particle dimensions. The method can be used to obtain virtual dense packings of particles with several geometrical shapes. It employs continuous, discrete, and empirical statistical distributions in order to generate the dimensions of particles. The packing algorithm is very flexible and allows alternatives for: 1—the direction of the advancing front (inwards or outwards), 2—the selection of the local advancing front, 3—the method for placing a mobile particle in contact with others, and 4—the overlap checks. With slight modification, the algorithm can also generate highly porous media. The use of the algorithm to generate real particle packings from grain size distribution curves, in order to carry out engineering applications, is illustrated. Finally, basic applications of the algorithm, which prove its effectiveness in the generation of a large number of particles, are carried out.
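    The geometric kernel of such a constructive advancing-front packer, for the disk case, is placing a new particle in contact with two particles of the local front (alternative 3 above). A 2D sketch is given below; in the full algorithm the new radius would be drawn from one of the statistical distributions mentioned, and the candidate would still be subject to overlap checks against neighbors, both omitted here:

```python
import math

def tangent_center(c1, r1, c2, r2, r3):
    """Center of a disk of radius r3 externally tangent to disks (c1, r1) and
    (c2, r2), or None if no such placement exists.

    Reduces to intersecting two circles of radii r1 + r3 and r2 + r3; one of
    the two mirror-image solutions is returned.
    """
    d1, d2 = r1 + r3, r2 + r3
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    if d == 0.0 or d > d1 + d2 or d < abs(d1 - d2):
        return None
    a = (d1 * d1 - d2 * d2 + d * d) / (2.0 * d)   # distance from c1 along c1->c2
    h = math.sqrt(max(d1 * d1 - a * a, 0.0))      # perpendicular offset
    mx, my = c1[0] + a * dx / d, c1[1] + a * dy / d
    return (mx - h * dy / d, my + h * dx / d)
```

    Advancing the front then amounts to repeatedly picking a front pair, sampling a radius, and accepting the placement if the overlap check passes.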

  11. Advanced stress analysis methods applicable to turbine engine structures

    NASA Technical Reports Server (NTRS)

    Pian, Theodore H. H.

    1991-01-01

    The following tasks on the study of advanced stress analysis methods applicable to turbine engine structures are described: (1) construction of special elements which contain traction-free circular boundaries; (2) formulation of a new version of mixed variational principles and a new version of hybrid stress elements; (3) establishment of methods for suppression of kinematic deformation modes; (4) construction of semiLoof plate and shell elements by the assumed stress hybrid method; and (5) elastic-plastic analysis by viscoplasticity theory using the mechanical subelement model.

  12. Development of Kinetic Mechanisms for Next-Generation Fuels and CFD Simulation of Advanced Combustion Engines

    SciTech Connect

    Pitz, William J.; McNenly, Matt J.; Whitesides, Russell; Mehl, Marco; Killingsworth, Nick J.; Westbrook, Charles K.

    2015-12-17

    Predictive chemical kinetic models are needed to represent next-generation fuel components and their mixtures with conventional gasoline and diesel fuels. These kinetic models will allow the prediction of the effect of alternative fuel blends in CFD simulations of advanced spark-ignition and compression-ignition engines. Enabled by kinetic models, CFD simulations can be used to optimize fuel formulations for advanced combustion engines so that maximum engine efficiency, fossil fuel displacement goals, and low pollutant emission goals can be achieved.

  13. Advanced beam-dynamics simulation tools for RIA.

    SciTech Connect

    Garnett, R. W.; Wangler, T. P.; Billen, J. H.; Qiang, J.; Ryne, R.; Crandall, K. R.; Ostroumov, P.; York, R.; Zhao, Q.; Physics; LANL; LBNL; Tech Source; Michigan State Univ.

    2005-01-01

    We are developing multi-particle beam-dynamics simulation codes for RIA driver-linac simulations extending from the low-energy beam transport (LEBT) line to the end of the linac. These codes run on the NERSC parallel supercomputing platforms at LBNL, which allow us to run simulations with large numbers of macroparticles. The codes have the physics capabilities needed for RIA, including transport and acceleration of multiple-charge-state beams, beam-line elements such as high-voltage platforms within the linac, interdigital accelerating structures, charge-stripper foils, and capabilities for handling the effects of machine errors and other off-normal conditions. This year will mark the end of our project. In this paper we present the status of the work, describe some recent additions to the codes, and show some preliminary simulation results.

  14. Strategic Plan for Nuclear Energy -- Knowledge Base for Advanced Modeling and Simulation (NE-KAMS)

    SciTech Connect

    Kimberlyn C. Mousseau

    2011-10-01

    The Nuclear Energy Computational Fluid Dynamics Advanced Modeling and Simulation (NE-KAMS) system is being developed at the Idaho National Laboratory (INL) in collaboration with Bettis Laboratory, Sandia National Laboratory (SNL), Argonne National Laboratory (ANL), Utah State University (USU), and other interested parties with the objective of developing and implementing a comprehensive and readily accessible data and information management system for computational fluid dynamics (CFD) verification and validation (V&V) in support of nuclear energy systems design and safety analysis. The two key objectives of the NE-KAMS effort are to identify, collect, assess, store and maintain high resolution and high quality experimental data and related expert knowledge (metadata) for use in CFD V&V assessments specific to the nuclear energy field and to establish a working relationship with the U.S. Nuclear Regulatory Commission (NRC) to develop a CFD V&V database, including benchmark cases, that addresses and supports the associated NRC regulations and policies on the use of CFD analysis. In particular, the NE-KAMS system will support the Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Program, which aims to develop and deploy advanced modeling and simulation methods and computational tools for reliable numerical simulation of nuclear reactor systems for design and safety analysis. Primary NE-KAMS Elements: There are four primary elements of the NE-KAMS knowledge base designed to support computer modeling and simulation in the nuclear energy arena as listed below. Element 1. The database will contain experimental data that can be used for CFD validation that is relevant to nuclear reactor and plant processes, particularly those important to the nuclear industry and the NRC. Element 2. Qualification standards for data evaluation and classification will be incorporated and applied such that validation data sets will result in well

  15. Recent advances in computational methodology for simulation of mechanical circulatory assist devices

    PubMed Central

    Marsden, Alison L.; Bazilevs, Yuri; Long, Christopher C.; Behr, Marek

    2014-01-01

    Ventricular assist devices (VADs) provide mechanical circulatory support to offload the work of one or both ventricles during heart failure. They are used in the clinical setting as destination therapy, as bridge to transplant, or more recently as bridge to recovery to allow for myocardial remodeling. Recent developments in computational simulation allow for detailed assessment of VAD hemodynamics for device design and optimization for both children and adults. Here, we provide a focused review of the recent literature on finite element methods and optimization for VAD simulations. As VAD designs typically fall into two categories, pulsatile and continuous flow devices, we separately address computational challenges of both types of designs, and the interaction with the circulatory system with three representative case studies. In particular, we focus on recent advancements in finite element methodology that have increased the fidelity of VAD simulations. We outline key challenges, which extend to the incorporation of biological response such as thrombosis and hemolysis, as well as shape optimization methods and challenges in computational methodology. PMID:24449607

  16. Advanced three-dimensional dynamic analysis by boundary element methods

    NASA Technical Reports Server (NTRS)

    Banerjee, P. K.; Ahma, S.

    1985-01-01

    Advanced formulations of boundary element method for periodic, transient transform domain and transient time domain solution of three-dimensional solids have been implemented using a family of isoparametric boundary elements. The necessary numerical integration techniques as well as the various solution algorithms are described. The developed analysis has been incorporated in a fully general purpose computer program BEST3D which can handle up to 10 subregions. A number of numerical examples are presented to demonstrate the accuracy of the dynamic analyses.

  17. Advanced Simulator Development for Power Flow and Sources

    DTIC Science & Technology

    2006-02-01

    Specifications for sub-system design (primary energy store, water pulse compression/transmission lines, vacuum power flow) are presented, drawing on experience with pulsed power systems; the work may also enable beneficial upgrades to existing simulator facilities. The goal was to minimize cost for large dose X area products; based upon simple scaling from existing pulsed power simulators, yields were assumed achievable. Subject terms: Marx generator; plasma radiation source; pulsed power.

  18. [Research advances in soil nitrogen cycling models and their simulation].

    PubMed

    Tang, Guoyong; Huang, Daoyou; Tong, Chengli; Zhang, Wenju; Wu, Jinshui

    2005-11-01

    Nitrogen is one of the essential nutrients for plants, and also a primary element leading to environmental pollution. Much research has addressed the contribution of agricultural activities to environmental pollution by nitrogenous compounds, and the central question is how to simulate soil nitrogen cycling processes correctly. In this paper, the primary soil nitrogen cycling processes are briefly reviewed, 13 cycling models and 6 simulated cycling processes are introduced, and the parameterization of the models is discussed.

  19. Comparison of Advanced Distillation Control Methods, Final Technical Report

    SciTech Connect

    Dr. James B. Riggs

    2000-11-30

    Detailed dynamic simulations of three industrial distillation columns (a propylene/propane splitter, a xylene/toluene column, and a depropanizer) have been used to evaluate configuration selections for single-ended and dual-composition control, as well as to compare conventional and advanced control approaches. In addition, a simulator of a main fractionator was used to compare the control performance of conventional and advanced control. For each case considered, the controllers were tuned by using setpoint changes and tested using feed composition upsets. Proportional Integral (PI) control performance was used to evaluate the configuration selection problem. For single ended control, the energy balance configuration was found to yield the best performance. For dual composition control, nine configurations were considered. It was determined that the use of dynamic simulations is required in order to identify the optimum configuration from among the nine possible choices. The optimum configurations were used to evaluate the relative control performance of conventional PI controllers, MPC (Model Predictive Control), PMBC (Process Model-Based Control), and ANN (Artificial Neural Networks) control. It was determined that MPC works best when one product is much more important than the other, while PI was superior when both products were equally important. PMBC and ANN were not found to offer significant advantages over PI and MPC. MPC was found to outperform conventional PI control for the main fractionator. MPC was applied to three industrial columns: one at Phillips Petroleum and two at Union Carbide. In each case, MPC was found to significantly outperform PI controls. The major advantage of the MPC controller is its ability to effectively handle a complex set of constraints and control objectives.
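    A minimal sketch of the PI baseline used in such comparisons is a PI loop closed around a first-order process, driven to a setpoint and relied on for zero steady-state offset via integral action. The gains and process parameters below are illustrative only, not those of the simulated columns:

```python
def simulate_pi(kc, tau_i, setpoint=1.0, kp=2.0, tau_p=5.0, dt=0.05, steps=1200):
    """Explicit-Euler simulation of a PI controller around a first-order
    process dy/dt = (kp * u - y) / tau_p; returns the final output.

    kc: controller gain, tau_i: integral (reset) time.
    """
    y, integral = 0.0, 0.0
    for _ in range(steps):
        e = setpoint - y
        integral += e * dt
        u = kc * (e + integral / tau_i)     # PI control law
        y += dt * (kp * u - y) / tau_p      # first-order process response
    return y
```

    MPC replaces this loop with a constrained optimization over a process model, which is what gives it the constraint-handling advantage the study reports for the main fractionator.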

  20. Advanced Thermal Simulator Testing: Thermal Analysis and Test Results

    NASA Technical Reports Server (NTRS)

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David; Reid, Robert; Adams, Mike; Davis, Joe

    2008-01-01

    Work at the NASA Marshall Space Flight Center seeks to develop high fidelity, electrically heated thermal simulators that represent fuel elements in a nuclear reactor design to support non-nuclear testing applicable to the development of a space nuclear power or propulsion system. Comparison between the fuel pins and thermal simulators is made at the outer fuel clad surface, which corresponds to the outer sheath surface in the thermal simulator. The thermal simulators that are currently being tested correspond to a SNAP derivative reactor design that could be applied for Lunar surface power. These simulators are designed to meet the geometric and power requirements of a proposed surface power reactor design, accommodate testing of various axial power profiles, and incorporate imbedded instrumentation. This paper reports the results of thermal simulator analysis and testing in a bare element configuration, which does not incorporate active heat removal, and testing in a water-cooled calorimeter designed to mimic the heat removal that would be experienced in a reactor core.

  1. Advanced Thermal Simulator Testing: Thermal Analysis and Test Results

    SciTech Connect

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David; Reid, Robert; Adams, Mike; Davis, Joe

    2008-01-21

    Work at the NASA Marshall Space Flight Center seeks to develop high fidelity, electrically heated thermal simulators that represent fuel elements in a nuclear reactor design to support non-nuclear testing applicable to the potential development of a space nuclear power or propulsion system. Comparison between the fuel pins and thermal simulators is made at the outer fuel clad surface, which corresponds to the outer sheath surface in the thermal simulator. The thermal simulators that are currently being tested correspond to a liquid metal cooled reactor design that could be applied for Lunar surface power. These simulators are designed to meet the geometric and power requirements of a proposed surface power reactor design, accommodate testing of various axial power profiles, and incorporate imbedded instrumentation. This paper reports the results of thermal simulator analysis and testing in a bare element configuration, which does not incorporate active heat removal, and testing in a water-cooled calorimeter designed to mimic the heat removal that would be experienced in a reactor core.

  2. Advanced boundary element methods in aeroacoustics and elastodynamics

    NASA Astrophysics Data System (ADS)

    Lee, Li

    In the first part of this dissertation, advanced boundary element methods (BEM) are developed for acoustic radiation in the presence of subsonic flows. A direct boundary integral formulation is first introduced for acoustic radiation in a uniform flow. This new formulation uses the Green's function derived from the adjoint operator of the governing differential equation. Therefore, it requires no coordinate transformation. This direct BEM formulation is then extended to acoustic radiation in a nonuniform-flow field. All the terms due to the nonuniform-flow effect are taken to the right-hand side and treated as source terms. The source terms result in a domain integral in the standard boundary integral formulation. The dual reciprocity method is then used to convert the domain integral into a number of boundary integrals. The second part of this dissertation is devoted to the development of advanced BEM algorithms to overcome the multi-frequency and nonuniqueness difficulties in steady-state elastodynamics. For the multi-frequency difficulty, two different interpolation schemes, borrowed from recent developments in acoustics, are first extended to elastodynamics to accelerate the process of matrix re-formation. Then, a hybrid scheme that retains only the merits of the two different interpolation schemes is suggested. To overcome the nonuniqueness difficulty, an enhanced CHIEF (Combined Helmholtz Integral Equation Formulation) method using a linear combination of the displacement and the traction boundary integral equations on the surface of a small interior volume is proposed. Numerical examples are given to demonstrate all the advanced BEM formulations.

  3. A low-cost RK time advancing strategy for energy-preserving turbulent simulations

    NASA Astrophysics Data System (ADS)

    Capuano, Francesco; Coppola, Gennaro; de Luca, Luigi; Balarac, Guillaume

    2014-11-01

    Energy-conserving numerical methods are widely employed in direct and large eddy simulation of turbulent flows. Semi-discrete conservation of energy is usually obtained by adopting the so-called skew-symmetric splitting of the non-linear term, defined as a suitable average of the divergence and advective forms. Although generally allowing global conservation of kinetic energy by convection, it has the drawback of being roughly twice as expensive as standard divergence or advective forms alone. A novel time-advancement strategy that retains the conservation properties of skew-symmetric-based schemes at a reduced computational cost has been developed in the framework of explicit Runge-Kutta schemes. It is found that optimal energy-conservation can be achieved by properly constructed Runge-Kutta methods in which only divergence and advective forms for the convective term are adopted. The new schemes can be considerably faster than skew-symmetric-based techniques. A general framework for the construction of optimized Runge-Kutta coefficients is developed, which has proven to be able to produce new methods with a specified order of accuracy on both solution and energy. The effectiveness of the method is demonstrated by numerical simulation of homogeneous isotropic turbulence.
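    The spatial energy argument can be checked discretely. For a 1D Burgers analogue (a stand-in here; the paper's context is 3D Navier-Stokes, where the skew-symmetric form is the 1/2 average of divergence and advective forms), the energy-conserving split of the quadratic nonlinearity is the classical 1/3-rule, and with a periodic central-difference operator the discrete energy production vanishes identically, while the divergence form alone does not:

```python
def central_diff(u, dx):
    """Second-order central difference on a periodic grid; as a matrix this
    operator is skew-symmetric, which the cancellation below relies on."""
    n = len(u)
    return [(u[(i + 1) % n] - u[(i - 1) % n]) / (2.0 * dx) for i in range(n)]

def energy_production(u, dx, form):
    """Discrete kinetic-energy input rate -sum(u * C(u)) * dx of the convective
    term C(u) for the Burgers nonlinearity u * u_x."""
    du = central_diff(u, dx)
    du2 = central_diff([v * v for v in u], dx)
    if form == "divergence":
        c = [0.5 * d2 for d2 in du2]
    elif form == "skew":
        # 1/3-rule split: exactly energy-conserving in space.
        c = [(du2[i] + u[i] * du[i]) / 3.0 for i in range(len(u))]
    else:
        raise ValueError(form)
    return -sum(ui * ci for ui, ci in zip(u, c)) * dx
```

    The work summarized above shifts this cancellation into the Runge-Kutta stages, so that only the cheaper divergence and advective forms need be evaluated while the conservation property is retained.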

  4. Advanced techniques and painless procedures for nonlinear contact analysis and forming simulation via implicit FEM

    NASA Astrophysics Data System (ADS)

    Zhuang, Shoubing

    2013-05-01

    Nonlinear contact analysis including forming simulation via finite element methods has a crucial and practical application in many engineering fields. However, because of high nonlinearity, nonlinear contact analysis still remains an extremely challenging obstacle for many industrial applications. The implicit finite element scheme is generally more accurate than the explicit finite element scheme, but it has a known challenge of convergence because of complex geometries, large relative motion and rapid contact state change. Diagnosing convergence issues in nonlinear contact analysis can be a painful process. Complicated contact models often have a great many contact surfaces, and defining the contact pairs well using common contact-definition methods is difficult: those methods either produce hundreds of contact pairs or are time-consuming. This paper presents advanced techniques for nonlinear contact analysis and forming simulation via the implicit finite element scheme and the penalty method. The calculation of the default automatic contact stiffness is addressed. Furthermore, this paper presents the idea of selection groups to help easily and efficiently define contact pairs for complicated contact analysis, and the corresponding implementation and usage are discussed. Lastly, typical nonlinear contact models and forming models with nonlinear material models are shown in the paper to demonstrate the key presented methods and technologies.
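    The penalty method mentioned above regularizes contact by resisting penetration with a stiff one-sided spring, and the implicit scheme then iterates on the resulting nonlinear residual. A 1D sketch with Newton iteration is given below; the spring and penalty stiffnesses are illustrative, whereas real implementations compute an automatic contact stiffness from element properties, as the paper discusses:

```python
def solve_contact(f_ext, k_spring=10.0, wall=1.0, k_pen=1e6, tol=1e-12):
    """Newton iteration for a point on a spring (anchored at x = 0) pushed by
    f_ext toward a rigid wall at x = wall, with penalty contact enforcement.

    Residual: k_spring * x + k_pen * max(0, x - wall) - f_ext = 0.
    """
    x = 0.0
    for _ in range(100):
        pen = max(0.0, x - wall)                       # penetration depth
        r = k_spring * x + k_pen * pen - f_ext         # out-of-balance force
        if abs(r) < tol:
            break
        kt = k_spring + (k_pen if x > wall else 0.0)   # tangent stiffness
        x -= r / kt                                    # Newton update
    return x
```

    The abrupt change in tangent stiffness when contact opens or closes is exactly the kind of rapid contact state change that makes implicit convergence difficult in large models.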

  5. Daylighting simulation: methods, algorithms, and resources

    SciTech Connect

    Carroll, William L.

    1999-12-01

    This document presents work conducted as part of Subtask C, ''Daylighting Design Tools'', Subgroup C2, ''New Daylight Algorithms'', of the IEA SHC Task 21 and the ECBCS Program Annex 29 ''Daylight in Buildings''. The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: Geometry; Light modeling; Characterization of the natural illumination resource; Materials and components properties, representations; and Usability issues (interfaces, interoperability, representation of analysis results, etc). Second, very advantageously, there have been rapid advances in many basic methods in these areas, due to other forces. They are in part driven by: The commercial computer graphics community (commerce, entertainment); The lighting industry; Architectural rendering and visualization for projects; and Academia: Course materials, research. This has led to a very rich set of information resources that have direct applicability to the small daylighting analysis community. Furthermore, much of this information is in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, and used to produce a printed form only secondarily. The electronic forms include both online WWW pages and a downloadable .PDF file with the same appearance and content. Both electronic forms include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations that are provided. This in turn allows the report to be very concise. The links are expected to speak for themselves. The report consists of only about 10 pages, with over 100 primary links, but with potentially thousands of indirect links.

  6. Advanced SAR simulator with multi-beam interferometric capabilities

    NASA Astrophysics Data System (ADS)

    Reppucci, Antonio; Márquez, José; Cazcarra, Victor; Ruffini, Giulio

    2014-10-01

    State-of-the-art simulations are of great interest when designing a new instrument, studying the imaging mechanisms of a given scenario, or designing inversion algorithms, as they allow the effects of different instrument configurations and target compositions to be analyzed and understood. In the framework of studies of a new instrument devoted to the estimation of ocean surface movements using Synthetic Aperture Radar along-track interferometry (SAR-ATI), an end-to-end simulator has been developed. The simulator, built in a highly modular way to allow easy integration of different processing features, handles all the basic operations involved in an end-to-end scenario. This includes the computation of the position and velocity of the platform (airborne/spaceborne) and the geometric parameters defining the SAR scene, the surface definition, the backscattering computation, the atmospheric attenuation, the instrument configuration, and the simulation of the transmission/reception chains and the raw data. In addition, the simulator provides an InSAR processing suite and a sea surface movement retrieval module. Up to four beams (each composed of a monostatic and a bistatic channel) can be activated. Each channel provides raw data and SLC images, with the possibility of choosing between Stripmap and ScanSAR modes. Moreover, the software offers the possibility of radiometric sensitivity analysis and of error analysis due to atmospheric disturbances, instrument noise, interferogram phase noise, and platform velocity and attitude variations. In this paper, the architecture and the capabilities of this simulator are presented, together with meaningful simulation examples.
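
    The along-track interferometric (ATI) retrieval at the heart of such an instrument reduces, in the simplest monostatic case, to converting interferometric phase into a line-of-sight surface velocity. The sketch below illustrates that standard relation with hypothetical instrument numbers; all values are illustrative and not taken from the paper, and real processors add bistatic and viewing-geometry corrections.

```python
import math

def ati_los_velocity(phase_rad, wavelength_m, baseline_m, platform_speed_mps):
    """Invert an along-track interferometric (ATI) phase to a line-of-sight
    surface velocity using the standard monostatic relation
    u_los = lambda * phi / (4 * pi * tau), with lag tau = B_ati / v_platform."""
    tau = baseline_m / platform_speed_mps   # time lag between the two phase centres
    return wavelength_m * phase_rad / (4.0 * math.pi * tau)

# Hypothetical X-band spaceborne configuration (values NOT from the paper):
u = ati_los_velocity(phase_rad=0.5, wavelength_m=0.031,
                     baseline_m=10.0, platform_speed_mps=7500.0)
print("line-of-sight velocity [m/s]:", round(u, 3))
```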

  7. Advances in Constitutive and Failure Models for Sheet Forming Simulation

    NASA Astrophysics Data System (ADS)

    Yoon, Jeong Whan; Stoughton, Thomas B.

    2016-08-01

    The Non-Associated Flow Rule (Non-AFR) is a convenient way to account for anisotropic material response in metal deformation processes. It makes it possible, for example, to eliminate the problem of anomalous yielding in equibiaxial tension, which is mistakenly attributed to limitations of the quadratic yield function but may instead be attributed to the Associated Flow Rule (AFR). Because Non-AFR-based models adopt two separate functions for the yield surface and the plastic potential, there is no constraint on which model describes each of them. In this work, a flexible combination of two different yield criteria, one serving as the yield function and the other as the plastic potential under Non-AFR, is proposed and evaluated. FE simulations were carried out to verify the accuracy of the material directionalities predicted by these constitutive models. The stability conditions for non-associated flow connected with the prediction of yield point elongation are also reviewed. Anisotropic distortional hardening is further incorporated under non-associated flow; it has been found that anisotropic hardening yields noticeable improvements in both earing and spring-back predictions. This presentation is followed by a discussion of forming limits and necking, the evidence in favor of stress analysis, and the motivation for the development of a new type of forming limit diagram based on the polar effective plastic strain (PEPS) diagram. In order to connect necking to fracture in metals, the stress-based necking limit is combined with a stress-based fracture criterion in principal stress space, which provides an efficient method for the analysis of necking and fracture limits. The concept of the PEPS diagram is further developed to cover path-independent PEPS fracture, which is compatible with the stress-based fracture approach; this fracture criterion can therefore be used to describe post-necking behavior and to cover nonlinear strain paths.
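
    The core idea of Non-AFR, evaluating one function for yielding and a different one for the plastic flow direction, can be sketched numerically. In the toy example below, von Mises serves as the yield function and a Hill48-type potential with normal anisotropy as the plastic potential; these specific functions, the r-value, and the stress state are illustrative choices, not the paper's particular models.

```python
import math

def von_mises(s):
    """Yield function f under plane stress: s = (sx, sy, txy)."""
    sx, sy, t = s
    return math.sqrt(sx*sx - sx*sy + sy*sy + 3.0*t*t)

def hill48_normal(s, r=1.6):
    """Plastic potential g: Hill48 with normal anisotropy r (reduces to
    von Mises when r = 1)."""
    sx, sy, t = s
    return math.sqrt(sx*sx + sy*sy - (2.0*r/(1.0+r))*sx*sy
                     + (2.0*(2.0*r+1.0)/(1.0+r))*t*t)

def grad(func, s, h=1e-6):
    """Numerical gradient; the plastic strain increment follows grad(g),
    not grad(f), under a non-associated flow rule."""
    g0 = func(s)
    out = []
    for i in range(3):
        sp = list(s)
        sp[i] += h
        out.append((func(sp) - g0) / h)
    return out

stress = (200.0, 50.0, 10.0)       # MPa, illustrative stress state
df = grad(von_mises, stress)       # associated (AFR) flow direction
dg = grad(hill48_normal, stress)   # non-associated (Non-AFR) flow direction
print("AFR direction:    ", [round(v, 3) for v in df])
print("Non-AFR direction:", [round(v, 3) for v in dg])
```

The two gradients differ whenever r deviates from 1, which is exactly the extra modeling freedom the abstract exploits.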

  8. Numerical simulation of turbomachinery flows with advanced turbulence models

    NASA Technical Reports Server (NTRS)

    Lakshminarayana, B.; Kunz, R.; Luo, J.; Fan, S.

    1992-01-01

    A three-dimensional full Navier-Stokes (FNS) code is used to simulate complex turbomachinery flows. The code incorporates an explicit multistep scheme and solves a conservative form of the density-averaged continuity, momentum, and energy equations. A compressible low-Reynolds-number form of the k-epsilon turbulence model, a q-omega model, and an algebraic Reynolds stress model have been incorporated in a fully coupled manner to approximate the Reynolds stresses. The code is used to predict the viscous flow field in a backswept transonic centrifugal compressor for which laser two-focus data are available. The code is also used to simulate the tip clearance flow in a cascade. The code has been extended to include unsteady Euler solutions for predicting the unsteady flow through a cascade due to incoming wakes, simulating rotor-stator interactions.

  9. Advances in Discrete-Event Simulation for MSL Command Validation

    NASA Technical Reports Server (NTRS)

    Patrikalakis, Alexander; O'Reilly, Taifun

    2013-01-01

    In the last five years, the discrete-event simulator SEQuence GENerator (SEQGEN), developed at the Jet Propulsion Laboratory to plan deep-space missions, has greatly increased uplink operations capacity to deal with increasingly complicated missions. In this paper, we describe how the Mars Science Laboratory (MSL) project makes full use of an interpreted environment to simulate changes in more than fifty thousand flight software parameters and conditional command sequences, to predict the result of executing a conditional branch in a command sequence, and to warn users whenever one or more simulated spacecraft states change in an unexpected manner. Using these new SEQGEN features, operators plan more activities in one sol than ever before.
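
    The combination described above, discrete-event scheduling, conditional branching on simulated state, and warnings on unexpected state changes, can be illustrated with a toy simulator. This sketch is not SEQGEN's design; all state and event names are invented.

```python
import heapq

class SeqSim:
    """Toy discrete-event simulator: events mutate named spacecraft states at
    simulated times, conditional events branch on current state, and any
    transition not on an 'expected' list is recorded as a warning."""
    def __init__(self, states, expected):
        self.t, self.states, self.expected = 0.0, dict(states), set(expected)
        self.queue, self.warnings, self._n = [], [], 0

    def at(self, time, action):
        self._n += 1                 # tie-breaker keeps heap ordering total
        heapq.heappush(self.queue, (time, self._n, action))

    def set_state(self, name, value):
        if (name, value) not in self.expected:
            self.warnings.append((self.t, name, value))
        self.states[name] = value

    def run(self):
        while self.queue:
            self.t, _, action = heapq.heappop(self.queue)
            action(self)

sim = SeqSim({"heater": "off", "mode": "cruise"},
             expected={("heater", "on"), ("mode", "science")})
sim.at(10.0, lambda s: s.set_state("heater", "on"))
# conditional branch: enter science mode only if the heater came on first
sim.at(20.0, lambda s: s.set_state("mode",
                                   "science" if s.states["heater"] == "on" else "safe"))
sim.at(30.0, lambda s: s.set_state("mode", "safe"))   # unexpected -> warning
sim.run()
print(sim.states, sim.warnings)
```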

  10. Advances in simulation study on organic small molecular solar cells

    NASA Astrophysics Data System (ADS)

    Zhang, Xuan; Guo, Wenge; Li, Ming; Ma, Wentao; Meng, Sen

    2015-02-01

    Recently, increasing attention has been paid to organic semiconductors because of their advantages, such as flexibility, ease of fabrication, and potentially low cost. Small-molecule photovoltaic materials are of particular interest because they are easy to purify, their structures are easy to adjust and determine, they can be assembled from a range of units, and they can achieve high carrier mobility. Simulation of organic small-molecule solar cells ahead of experiment can help researchers find relationships between efficiency, structural parameters, and material properties, estimate the performance of the device, and provide guidance for optimization. The applicability of the models used in simulation can also be assessed by comparison with experimental data. This paper summarizes the principles, structures, and progress of numerical simulation of organic small-molecule solar cells.

  11. Design and simulation of advanced fault tolerant flight control schemes

    NASA Astrophysics Data System (ADS)

    Gururajan, Srikanth

    This research effort describes the design and simulation of a distributed Neural Network (NN) based fault tolerant flight control scheme and the interface of the scheme within a simulation/visualization environment. The goal of the fault tolerant flight control scheme is to recover an aircraft from failures to its sensors or actuators. A commercially available simulation package, Aviator Visual Design Simulator (AVDS), was used for the purpose of simulation and visualization of the aircraft dynamics and the performance of the control schemes. For the purpose of the sensor failure detection, identification and accommodation (SFDIA) task, it is assumed that the pitch, roll and yaw rate gyros onboard are without physical redundancy. The task is accomplished through the use of a Main Neural Network (MNN) and a set of three De-Centralized Neural Networks (DNNs), providing analytical redundancy for the pitch, roll and yaw gyros. The purpose of the MNN is to detect a sensor failure while the purpose of the DNNs is to identify the failed sensor and then to provide failure accommodation. The actuator failure detection, identification and accommodation (AFDIA) scheme also features the MNN, for detection of actuator failures, along with three Neural Network Controllers (NNCs) for providing the compensating control surface deflections to neutralize the failure induced pitching, rolling and yawing moments. All NNs continue to train on-line, in addition to an offline trained baseline network structure, using the Extended Back-Propagation Algorithm (EBPA), with the flight data provided by the AVDS simulation package. The above mentioned adaptive flight control schemes have been traditionally implemented sequentially on a single computer. This research addresses the implementation of these fault tolerant flight control schemes on parallel and distributed computer architectures, using Berkeley Software Distribution (BSD) sockets and Message Passing Interface (MPI) for inter
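
    The SFDIA logic described above (detect a failure via a global residual check, identify it via per-sensor residuals, accommodate it by substituting the analytical estimate) can be sketched without the neural networks themselves. In the toy below, fixed predictions stand in for the NN outputs; all numbers and the threshold are invented for illustration.

```python
def detect_failed_sensor(predicted, measured, threshold=0.5):
    """Residual-based sensor FDIA sketch: flag a failure when any
    |prediction - measurement| exceeds the threshold, identify the failed
    axis from the per-axis residuals, and accommodate by replacing the bad
    measurement with the analytical (model-based) estimate."""
    residuals = {axis: abs(predicted[axis] - measured[axis]) for axis in predicted}
    failed = [axis for axis, r in residuals.items() if r > threshold]
    accommodated = {axis: (predicted[axis] if axis in failed else measured[axis])
                    for axis in measured}
    return failed, accommodated

pred = {"pitch": 0.10, "roll": -0.02, "yaw": 0.05}   # model-based estimates (rad/s)
meas = {"pitch": 0.12, "roll": -0.03, "yaw": 1.90}   # yaw gyro is stuck high
failed, out = detect_failed_sensor(pred, meas)
print("failed sensors:", failed, "| accommodated:", out)
```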

  12. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    SciTech Connect

    Durlofsky, Louis J.

    2000-08-28

    This project targets the development of (1) advanced reservoir simulation techniques for modeling non-conventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and well index (for use in simulation models), including the effects of wellbore flow; and (3) accurate approaches to account for heterogeneity in the near-well region.

  13. Advances in Statistical Methods for Substance Abuse Prevention Research

    PubMed Central

    MacKinnon, David P.; Lockwood, Chondra M.

    2010-01-01

    The paper describes advances in statistical methods for prevention research with a particular focus on substance abuse prevention. Standard analysis methods are extended to the typical research designs and characteristics of the data collected in prevention research. Prevention research often includes longitudinal measurement, clustering of data in units such as schools or clinics, missing data, and categorical as well as continuous outcome variables. Statistical methods to handle these features of prevention data are outlined. Developments in mediation, moderation, and implementation analysis allow for the extraction of more detailed information from a prevention study. Advancements in the interpretation of prevention research results include more widespread calculation of effect size and statistical power, the use of confidence intervals as well as hypothesis testing, detailed causal analysis of research findings, and meta-analysis. The increased availability of statistical software has contributed greatly to the use of new methods in prevention research. It is likely that the Internet will continue to stimulate the development and application of new methods. PMID:12940467
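
    One of the advances highlighted, routine reporting of effect sizes with confidence intervals rather than hypothesis tests alone, can be illustrated with a minimal Cohen's d computation (pooled standard deviation, normal-approximation 95% CI). The data below are hypothetical program-outcome scores.

```python
import math
import statistics

def cohens_d(treatment, control):
    """Standardized mean difference with pooled SD, plus a normal-approximation
    95% CI for the raw mean difference."""
    n1, n2 = len(treatment), len(control)
    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    se = sp * math.sqrt(1.0 / n1 + 1.0 / n2)   # SE of the raw mean difference
    diff = m1 - m2
    return d, (diff - 1.96 * se, diff + 1.96 * se)

treat = [4.1, 3.8, 4.5, 4.0, 4.3, 3.9]   # hypothetical post-program scores
ctrl  = [3.2, 3.6, 3.1, 3.4, 3.0, 3.5]
d, ci = cohens_d(treat, ctrl)
print("d = %.2f, 95%% CI for mean difference = (%.2f, %.2f)" % (d, ci[0], ci[1]))
```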

  14. Parallel Performance Optimization of the Direct Simulation Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Gao, Da; Zhang, Chonglin; Schwartzentruber, Thomas

    2009-11-01

    Although the direct simulation Monte Carlo (DSMC) particle method is more computationally intensive than continuum methods, it is accurate for conditions ranging from continuum to free-molecular, is accurate in highly non-equilibrium flow regions, and holds potential for incorporating advanced molecular-based models for gas-phase and gas-surface interactions. As available computer resources continue their rapid growth, the DSMC method is continually being applied to increasingly complex flow problems. Although processor clock speed continues to increase, a trend toward multi-core-per-node parallel architectures is emerging. To effectively utilize such current and future parallel computing systems, a combined shared/distributed-memory parallel implementation (using both Open Multi-Processing (OpenMP) and the Message Passing Interface (MPI)) of the DSMC method is under development. The parallel implementation of a new state-of-the-art 3D DSMC code employing an embedded 3-level Cartesian mesh will be outlined. The presentation will focus on performance optimization strategies for DSMC, which include, but are not limited to, modified algorithm designs, practical code-tuning techniques, and parallel performance optimization. Specifically, key issues important to DSMC shared-memory (OpenMP) parallel performance are identified as (1) granularity, (2) load balancing, (3) locality, and (4) synchronization. Challenges and solutions associated with these issues as they pertain to the DSMC method will be discussed.
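
    The collision kernel that dominates such performance tuning can be sketched with the standard no-time-counter (NTC) pair-selection scheme for a single cell. This is a generic textbook sketch, not the authors' code; all parameter values are illustrative, and equal-mass hard spheres are assumed so that isotropic center-of-mass scattering conserves momentum and kinetic energy exactly.

```python
import math
import random

def dsmc_collision_step(vels, sigma=1e-19, f_num=1e12, dt=1e-5,
                        cell_vol=1e-7, cr_max=2000.0, rng=None):
    """One NTC collision step in a single cell: choose candidate pairs from
    N_cand = 0.5 * N * (N-1) * F_N * sigma * cr_max * dt / V_cell, accept each
    with probability cr / cr_max, and scatter accepted pairs isotropically in
    the centre-of-mass frame (illustrative parameters, SI-like units)."""
    rng = rng or random.Random(1)
    n = len(vels)
    n_cand = int(0.5 * n * (n - 1) * f_num * sigma * cr_max * dt / cell_vol)
    for _ in range(n_cand):
        i, j = rng.sample(range(n), 2)
        rel = [a - b for a, b in zip(vels[i], vels[j])]
        cr = math.sqrt(sum(c * c for c in rel))
        if rng.random() * cr_max < cr:          # NTC acceptance test
            cm = [(a + b) / 2.0 for a, b in zip(vels[i], vels[j])]
            # isotropic post-collision relative velocity, same magnitude
            cth = 2.0 * rng.random() - 1.0
            sth = math.sqrt(1.0 - cth * cth)
            phi = 2.0 * math.pi * rng.random()
            rel = [cr * sth * math.cos(phi), cr * sth * math.sin(phi), cr * cth]
            vels[i] = [c + r / 2.0 for c, r in zip(cm, rel)]
            vels[j] = [c - r / 2.0 for c, r in zip(cm, rel)]
    return vels

rng = random.Random(0)
vels = [[rng.gauss(0.0, 300.0) for _ in range(3)] for _ in range(200)]
e0 = sum(sum(c * c for c in v) for v in vels)   # twice KE per unit mass
dsmc_collision_step(vels)
e1 = sum(sum(c * c for c in v) for v in vels)
print("kinetic energy before/after: %.1f / %.1f" % (e0, e1))
```

In the parallel setting the abstract targets, cells like this one are the natural unit of OpenMP work-sharing, which is why granularity and load balancing across cells dominate performance.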

  15. A Distributed Simulation Facility to Support Human Factors Research in Advanced Air Transportation Technology

    NASA Technical Reports Server (NTRS)

    Amonlirdviman, Keith; Farley, Todd C.; Hansman, R. John, Jr.; Ladik, John F.; Sherer, Dana Z.

    1998-01-01

    A distributed real-time simulation of the civil air traffic environment, developed to support human factors research in advanced air transportation technology, is presented. The distributed environment is based on a custom simulation architecture designed for simplicity and flexibility in human experiments. Standard Internet protocols are used to create the distributed environment, linking an advanced cockpit simulator, an Air Traffic Control simulator, and a pseudo-aircraft control and simulation management station. The pseudo-aircraft control station also functions as a scenario design tool for coordinating human factors experiments. This station incorporates a pseudo-pilot interface designed to reduce the workload of human operators piloting multiple aircraft simultaneously in real time. The application of this distributed simulation facility to a study of the effect of shared information (via air-ground datalink) on pilot/controller shared situation awareness and re-route negotiation is also presented.

  16. Advanced Simulation and Computing Co-Design Strategy

    SciTech Connect

    Ang, James A.; Hoang, Thuc T.; Kelly, Suzanne M.; McPherson, Allen; Neely, Rob

    2015-11-01

    This ASC Co-design Strategy lays out the full continuum and components of the co-design process, based on what we have experienced thus far and what we wish to do more in the future to meet the program’s mission of providing high performance computing (HPC) and simulation capabilities for NNSA to carry out its stockpile stewardship responsibility.

  17. Advanced Computation Dynamics Simulation of Protective Structures Research

    DTIC Science & Technology

    2013-02-01

    between the steel and CMU, grout, a flowable concrete mixture, is placed into the reinforced cells. If grout is placed into every cell (including...multi-wythe walls that were fully grouted and had a brick veneer filled with a foam insulated cavity. He simulated the grout and CMU with a single

  18. Technical advances in molecular simulation since the 1980s.

    PubMed

    Field, Martin J

    2015-09-15

    This review describes how the theory and practice of molecular simulation have evolved since the beginning of the 1980s when the author started his career in this field. The account is of necessity brief and subjective and highlights the changes that the author considers have had significant impact on his research and mode of working.

  19. Advanced Shuttle Simulation Turbulence Tapes (SSTT) users guide

    NASA Technical Reports Server (NTRS)

    Tatom, F. B.; Smith, S. R.

    1981-01-01

    A nonrecursive model (based on von Karman spectra) for atmospheric turbulence along the flight path of the shuttle orbiter was developed. It provides for simulation of instantaneous vertical and horizontal gusts at the vehicle center of gravity, and also for simulation of instantaneous gust gradients. Based on this model, time series for both gusts and gust gradients were generated and stored on a series of magnetic tapes, entitled the shuttle simulation turbulence tapes (SSTT). The time series are designed to represent atmospheric turbulence from ground level to an altitude of 120,000 meters. A description of the characteristics of the simulated turbulence stored on the tapes is provided, together with instructions regarding their proper use. The characteristics of the turbulence series, including the spectral shape, cutoff frequencies, and variation of turbulence parameters with altitude, are discussed. Appendices provide results of spectral and statistical analyses of the SSTT and examples of how the SSTT should be used.
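
    Nonrecursive synthesis means the gust record is built directly from a prescribed spectrum rather than by recursively filtering white noise; one standard realization is a sum of cosines with spectrum-derived amplitudes and independent random phases. The sketch below uses the MIL-handbook form of the von Karman longitudinal spectrum; the scale length, sample spacing, and discretization are illustrative, not the report's values.

```python
import math
import random

def von_karman_psd(omega, sigma=1.0, L=200.0):
    """One-sided von Karman longitudinal gust PSD (MIL-handbook form):
    sigma = gust intensity (m/s), L = scale length (m), omega in rad/m."""
    return sigma**2 * (2.0 * L / math.pi) / (1.0 + (1.339 * L * omega) ** 2) ** (5.0 / 6.0)

def synthesize_gusts(n=2048, d_omega=1e-3, n_bins=200, rng=None):
    """Nonrecursive spectral synthesis: amplitudes sqrt(2*S*d_omega) at bin
    centres, uniform random phases, sampled every 10 m along the path."""
    rng = rng or random.Random(42)
    omegas = [(k + 0.5) * d_omega for k in range(n_bins)]
    amps = [math.sqrt(2.0 * von_karman_psd(w) * d_omega) for w in omegas]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in omegas]
    return [sum(a * math.cos(w * x + p) for a, w, p in zip(amps, omegas, phases))
            for x in (i * 10.0 for i in range(n))]

gust = synthesize_gusts()
var = sum(g * g for g in gust) / len(gust)
print("sample variance of synthesized gusts:", round(var, 3))
```

The sample variance of a single finite record fluctuates around the target sigma^2 = 1 because the lowest-frequency components complete only a few cycles over the record.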

  20. Changing the Paradigm: Simulation, a Method of First Resort

    DTIC Science & Technology

    2011-09-01

    Master's thesis by Ben L. Anderson, September 2011 (thesis advisor: Thomas W. Lucas). The thesis argues that although today's computing power is over 1,000,000,000 times greater than what the first simulation pioneers had sixty years ago, simulation is still not treated as a method of first resort, and that this paradigm should change.

  1. Interim Service ISDN Satellite (ISIS) simulator development for advanced satellite designs and experiments

    NASA Technical Reports Server (NTRS)

    Pepin, Gerard R.

    1992-01-01

    The simulation development associated with the network models of both the Interim Service Integrated Services Digital Network (ISDN) Satellite (ISIS) and the Full Service ISDN Satellite (FSIS) architectures is documented. The ISIS Network Model design represents satellite systems like the Advanced Communications Technology Satellite (ACTS) orbiting switch. The FSIS architecture, the ultimate aim of this element of the Satellite Communications Applications Research (SCAR) Program, moves all control and switching functions on-board the next generation ISDN communications satellite. The technical and operational parameters for the advanced ISDN communications satellite design will be obtained from the simulation of ISIS and FSIS engineering software models for their major subsystems. Discrete event simulation experiments will be performed with these models using various traffic scenarios, design parameters, and operational procedures. The data from these simulations will be used to determine the engineering parameters for the advanced ISDN communications satellite.

  2. Coarse-grained computer simulation of dynamics in thylakoid membranes: methods and opportunities

    PubMed Central

    Schneider, Anna R.; Geissler, Phillip L.

    2013-01-01

    Coarse-grained simulation is a powerful and well-established suite of computational methods for studying structure and dynamics in nanoscale biophysical systems. As our understanding of the plant photosynthetic apparatus has become increasingly nuanced, opportunities have arisen for coarse-grained simulation to complement experiment by testing hypotheses and making predictions. Here, we give an overview of best practices in coarse-grained simulation, with a focus on techniques and results that are applicable to the plant thylakoid membrane–protein system. We also discuss current research topics for which coarse-grained simulation has the potential to play a key role in advancing the field. PMID:24478781

  3. An educational training simulator for advanced perfusion techniques using a high-fidelity virtual patient model.

    PubMed

    Tokaji, Megumi; Ninomiya, Shinji; Kurosaki, Tatsuya; Orihashi, Kazumasa; Sueda, Taijiro

    2012-12-01

    Operating a cardiopulmonary bypass requires advanced skill grounded in both physiological and mechanical knowledge. We developed a virtual patient simulator system using a numerical cardiovascular regulation model for managing perfusion crises; this article evaluates the ability of the new simulator to help prevent such crises. The simulator combines short-term baroreflex regulation of venous capacity, vascular resistance, heart rate, and time-varying elastance of the heart, together with plasma refilling, with a simple lumped-parameter model of the cardiovascular system. The combination of parameters related to baroreflex regulation was calculated from clinical hemodynamic data. We examined the effect of differences in autonomic-nerve control parameter settings on changes in blood volume and hemodynamic parameters, and determined the influence of the model on control of the arterial line flow and blood volume during initiation of, and weaning from, cardiopulmonary bypass. Typical blood pressure (BP) changes (hypertension, stable, and hypotension) were reproducible using a combination of four control parameters that can be estimated from changes in patient physiology, BP, and blood volume. This simulation model is a useful educational tool for learning the recognition and management skills of extracorporeal circulation, and the identification method for the control parameters can also be applied to the diagnosis of heart failure.
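
    The flavor of a lumped-parameter circulatory model can be conveyed by its simplest member, the two-element Windkessel. The paper's model with baroreflex regulation and plasma refilling is far richer; treat this only as a structural sketch, with arbitrary units and made-up parameter values, of how arterial pressure relaxes toward Q*R under constant pump flow (as during stable bypass).

```python
def simulate_windkessel(q_in, r=1.0, c=1.5, p0=50.0, dt=0.01, steps=3000):
    """Two-element Windkessel: C * dP/dt = Q_in - P/R, integrated with
    forward Euler.  With constant inflow the pressure approaches Q_in * R
    with time constant R*C."""
    p = p0
    history = []
    for _ in range(steps):
        p += dt * (q_in - p / r) / c
        history.append(p)
    return history

pressures = simulate_windkessel(q_in=90.0)   # constant pump flow
print("final pressure: %.1f (steady state = Q*R = 90.0)" % pressures[-1])
```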

  4. Simulating data processing for an Advanced Ion Mobility Mass Spectrometer

    SciTech Connect

    Chavarría-Miranda, Daniel; Clowers, Brian H.; Anderson, Gordon A.; Belov, Mikhail E.

    2007-11-03

    We have designed and implemented a Cray XD-1-based simulation of data capture and signal processing for an advanced Ion Mobility mass spectrometer (Hadamard transform Ion Mobility). Our simulation is a hybrid application that uses both an FPGA component and a CPU-based software component to simulate Ion Mobility mass spectrometry data processing. The FPGA component includes data capture and accumulation, as well as a more sophisticated deconvolution algorithm based on a PNNL-developed enhancement to standard Hadamard transform Ion Mobility spectrometry. The software portion is in charge of streaming data to the FPGA and collecting results. We expect the computational and memory addressing logic of the FPGA component to be portable to an instrument-attached FPGA board that can be interfaced with a Hadamard transform Ion Mobility mass spectrometer.
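
    The Hadamard-transform multiplexing that such a deconvolution stage inverts can be demonstrated end to end at toy scale: encode a spectrum with an S-matrix, then recover it exactly with the closed-form inverse. Order 3 is used here only so the arithmetic stays visible; real instruments use much longer pseudo-random sequences, and this sketch omits the PNNL enhancement entirely.

```python
def s_matrix_order3():
    """Smallest Hadamard S-matrix (order 3): rows are cyclic shifts of the
    length-3 sequence (1, 1, 0)."""
    return [[1, 1, 0], [0, 1, 1], [1, 0, 1]]

def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def s_inverse(s):
    """Closed-form inverse of an order-N S-matrix:
    S^-1 = 2/(N+1) * (2*S^T - J), with J the all-ones matrix."""
    n = len(s)
    return [[2.0 * (2.0 * s[j][i] - 1.0) / (n + 1) for j in range(n)]
            for i in range(n)]

spectrum = [5.0, 0.0, 2.0]                  # 'true' arrival-time distribution
s = s_matrix_order3()
measured = matvec(s, spectrum)              # multiplexed (encoded) measurement
recovered = matvec(s_inverse(s), measured)  # Hadamard deconvolution
print("measured:", measured, "| recovered:", [round(x, 6) for x in recovered])
```

The multiplexing advantage comes from each detector sample mixing roughly half of the spectral channels, which improves the duty cycle relative to a single-pulse measurement.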

  5. Virtual charge state separator as an advanced tool coupling measurements and simulations

    NASA Astrophysics Data System (ADS)

    Yaramyshev, S.; Vormann, H.; Adonin, A.; Barth, W.; Dahl, L.; Gerhard, P.; Groening, L.; Hollinger, R.; Maier, M.; Mickat, S.; Orzhekhovskaya, A.

    2015-05-01

    A new low energy beam transport for a multicharge uranium beam will be built at the GSI High Current Injector (HSI). All uranium charge states coming from the new ion source will be injected into the GSI heavy-ion high-current HSI Radio Frequency Quadrupole (RFQ), but only the design ions, U4+, will be accelerated to the final RFQ energy. Detailed knowledge of the injected beam current and emittance for the pure design U4+ ions is necessary for proper beam line design, commissioning, and operation, while measurements are possible only for the full beam including all charge states. Detailed measurements of the beam current and emittance are performed behind the first quadrupole triplet of the beam line. A dedicated algorithm, based on a combination of measurements and the results of advanced beam dynamics simulations, provides for the extraction of beam current and emittance values for only the U4+ component of the beam. The proposed methods and obtained results are presented.

  6. Method for simulating discontinuous physical systems

    DOEpatents

    Baty, Roy S.; Vaughn, Mark R.

    2001-01-01

    The mathematical foundations of conventional numerical simulation of physical systems provide no consistent description of the behavior of such systems when subjected to discontinuous physical influences. As a result, the numerical simulation of such problems requires ad hoc encoding of specific experimental results in order to address the behavior of such discontinuous physical systems. In the present invention, these foundations are replaced by a new combination of generalized function theory and nonstandard analysis. The result is a class of new approaches to the numerical simulation of physical systems which allows the accurate and well-behaved simulation of discontinuous and other difficult physical systems, as well as simpler physical systems. Applications of this new class of numerical simulation techniques to process control, robotics, and apparatus design are outlined.

  7. Advanced Simulation in Undergraduate Pilot Training: Motion System Development

    DTIC Science & Technology

    1975-10-01

    Interim report AFHRL-TR-75-59(II), Air Force Human Resources Laboratory (AFSC), Wright-Patterson Air Force Base, Ohio, October 1975. Approved for public release, distribution unlimited; distributed by the National Technical Information Service, U.S. Department of Commerce.

  8. ADVANCES IN COMPREHENSIVE GYROKINETIC SIMULATIONS OF TRANSPORT IN TOKAMAKS

    SciTech Connect

    WALTZ RE; CANDY J; HINTON FL; ESTRADA-MILA C; KINSEY JE

    2004-10-01

    A continuum global gyrokinetic code, GYRO, has been developed to comprehensively simulate core turbulent transport in actual experimental profiles and enable direct quantitative comparisons to the experimental transport flows. GYRO not only treats the now-standard ion temperature gradient (ITG) mode turbulence, but also treats trapped and passing electrons with collisions and finite {beta}, and equilibrium ExB shear stabilization, all in real tokamak geometry. Most importantly, the code operates at finite relative gyroradius ({rho}{sub *}) so as to treat the profile shear stabilization and nonlocal effects which can break gyroBohm scaling. The code operates either in a cyclic flux-tube limit (which allows only gyroBohm scaling) or globally with physical profile variation. Bohm scaling of DIII-D L-mode has been simulated, with power flows matching experiment within error bars on the ion temperature gradient. Mechanisms for broken gyroBohm scaling, neoclassical ion flows embedded in turbulence, turbulent dynamos and profile corrugations, plasma pinches and impurity flow, and simulations at fixed flow rather than fixed gradient are illustrated and discussed.
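
    For readers unfamiliar with the terminology, the Bohm versus gyroBohm distinction invoked above can be summarized with the standard textbook definitions (not spelled out in the abstract):

```latex
\chi_{\mathrm{Bohm}} \sim \frac{T}{eB}, \qquad
\chi_{\mathrm{gyroBohm}} \sim \rho_{*}\,\chi_{\mathrm{Bohm}}, \qquad
\rho_{*} = \frac{\rho_s}{a}, \qquad
\rho_s = \frac{c_s}{\Omega_i}, \qquad
c_s = \sqrt{T_e/m_i}
```

In this standard picture, strictly local (flux-tube) turbulence yields transport at the gyroBohm level; scaling that degrades toward Bohm as {rho}{sub *} grows is the signature of the finite-{rho}{sub *} profile-shear and nonlocal effects the code is designed to capture.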

  9. Advanced reactor physics methods for heterogeneous reactor cores

    NASA Astrophysics Data System (ADS)

    Thompson, Steven A.

    To maintain the economic viability of nuclear power, the industry has begun to emphasize maximizing the efficiency and output of existing nuclear power plants by using longer fuel cycles, stretch power uprates, shorter outage lengths, mixed-oxide (MOX) fuel, and more aggressive operating strategies. To accommodate these changes while still satisfying the peaking factor and power envelope requirements necessary to maintain safe operation, more complexity has been introduced into commercial core designs, such as an increase in the number of sub-batches and increased use of both discrete and integral burnable poisons. A consequence of this increased complexity, as well as of the use of MOX fuel, is an increase in the neutronic heterogeneity of the core. Such heterogeneous cores introduce challenges for the current methods used for reactor analysis, and new methods must be developed to address these deficiencies while maintaining the computational efficiency of existing reactor analysis methods. In this thesis, advanced core design methodologies are developed to adequately analyze the highly heterogeneous core designs currently in use in commercial power reactors. These methodological improvements are pursued with the goal of not sacrificing the computational efficiency that core designers require. More specifically, the PSU nodal code NEM is updated to include an SP3 solution option, an advanced transverse leakage option, and a semi-analytical NEM solution option.
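
    For context on the eigenvalue problem that all such reactor-analysis methods solve, here is a generic one-group, finite-difference slab model with power iteration for k-effective. This is a textbook sketch with illustrative cross sections, far simpler than the nodal NEM/SP3 methods the thesis develops, but it has the same structure: a diffusion operator inverted repeatedly against a fission source.

```python
def keff_slab(D=1.0, sig_a=0.07, nu_sig_f=0.08, length=50.0, n=100, iters=200):
    """Power iteration for -D*phi'' + Sig_a*phi = (1/k)*nu*Sig_f*phi on a slab
    with zero-flux boundaries, discretized by central differences
    (illustrative one-group cross sections, cm units)."""
    h = length / (n + 1)          # zero flux at both outer boundaries
    a = -D / h**2                 # tridiagonal off-diagonal entry
    b = 2.0 * D / h**2 + sig_a    # tridiagonal diagonal entry
    phi, k = [1.0] * n, 1.0
    for _ in range(iters):
        fission_old = sum(nu_sig_f * p for p in phi)
        src = [nu_sig_f * p / k for p in phi]
        # Thomas algorithm: solve the tridiagonal diffusion system
        cp, dp = [0.0] * n, [0.0] * n
        cp[0], dp[0] = a / b, src[0] / b
        for i in range(1, n):
            den = b - a * cp[i - 1]
            cp[i], dp[i] = a / den, (src[i] - a * dp[i - 1]) / den
        phi[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            phi[i] = dp[i] - cp[i] * phi[i + 1]
        k *= sum(nu_sig_f * p for p in phi) / fission_old
        peak = max(phi)
        phi = [p / peak for p in phi]   # renormalize the flux shape
    return k

k_eff = keff_slab()
print("k_eff = %.4f" % k_eff)
```

For these numbers the analytic bare-slab value is nu*Sig_f / (Sig_a + D*B^2) with buckling B = pi/length, about 1.082, which the iteration reproduces.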

  10. Left ventricular flow analysis: recent advances in numerical methods and applications in cardiac ultrasound.

    PubMed

    Borazjani, Iman; Westerdale, John; McMahon, Eileen M; Rajaraman, Prathish K; Heys, Jeffrey J; Belohlavek, Marek

    2013-01-01

    The left ventricle (LV) pumps oxygenated blood from the lungs to the rest of the body through systemic circulation. The efficiency of such a pumping function is dependent on blood flow within the LV chamber. It is therefore crucial to accurately characterize LV hemodynamics. Improved understanding of LV hemodynamics is expected to provide important clinical diagnostic and prognostic information. We review the recent advances in numerical and experimental methods for characterizing LV flows and focus on analysis of intraventricular flow fields by echocardiographic particle image velocimetry (echo-PIV), due to its potential for broad and practical utility. Future research directions to advance patient-specific LV simulations include development of methods capable of resolving heart valves, higher temporal resolution, automated generation of three-dimensional (3D) geometry, and incorporating actual flow measurements into the numerical solution of the 3D cardiovascular fluid dynamics.

  11. Left Ventricular Flow Analysis: Recent Advances in Numerical Methods and Applications in Cardiac Ultrasound

    PubMed Central

    Borazjani, Iman; Westerdale, John; McMahon, Eileen M.; Rajaraman, Prathish K.; Heys, Jeffrey J.

    2013-01-01

    The left ventricle (LV) pumps oxygenated blood from the lungs to the rest of the body through systemic circulation. The efficiency of such a pumping function is dependent on blood flow within the LV chamber. It is therefore crucial to accurately characterize LV hemodynamics. Improved understanding of LV hemodynamics is expected to provide important clinical diagnostic and prognostic information. We review the recent advances in numerical and experimental methods for characterizing LV flows and focus on analysis of intraventricular flow fields by echocardiographic particle image velocimetry (echo-PIV), due to its potential for broad and practical utility. Future research directions to advance patient-specific LV simulations include development of methods capable of resolving heart valves, higher temporal resolution, automated generation of three-dimensional (3D) geometry, and incorporating actual flow measurements into the numerical solution of the 3D cardiovascular fluid dynamics. PMID:23690874

  12. Advancements in frequency-domain methods for rotorcraft system identification

    NASA Technical Reports Server (NTRS)

    Tischler, Mark B.

    1989-01-01

    A new method for frequency-domain identification of rotorcraft dynamics is presented. Nonparametric frequency-response identification and parametric transfer-function modeling methods are extended to allow the extraction of state-space (stability and control derivative) representations. An interactive computer program DERIVID is described for the iterative solution of the multi-input/multi-output frequency-response matching approach used in the identification. Theoretical accuracy methods are used to determine the appropriate model structure and degree-of-confidence in the identified parameters. The method is applied to XV-15 tilt-rotor aircraft data in hover. Bare-airframe stability and control derivatives for the lateral/directional dynamics are shown to compare favorably with models previously obtained using time-domain identification methods and the XV-15 simulation program.
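
    The nonparametric step, estimating a frequency response directly from input/output data before any transfer-function fitting, can be sketched on a known first-order discrete system. The system, excitation frequency, and correlation method below are illustrative and unrelated to DERIVID's internals; a full identification would repeat this at many frequencies and then fit a parametric model.

```python
import cmath
import math

def identify_frequency_response(a=0.9, b=0.1, omega=0.3, n=4000, skip=500):
    """Drive y[k] = a*y[k-1] + b*u[k] with a sinusoid, correlate input and
    steady-state output against e^{-j*omega*k}, and compare the identified
    complex gain with the analytic H(e^{jw}) = b / (1 - a*e^{-jw})."""
    u = [math.cos(omega * k) for k in range(n)]
    y, prev = [], 0.0
    for uk in u:
        prev = a * prev + b * uk
        y.append(prev)
    # correlation after discarding the start-up transient
    num = sum(y[k] * cmath.exp(-1j * omega * k) for k in range(skip, n))
    den = sum(u[k] * cmath.exp(-1j * omega * k) for k in range(skip, n))
    h_est = num / den
    h_true = b / (1.0 - a * cmath.exp(-1j * omega))
    return h_est, h_true

h_est, h_true = identify_frequency_response()
print("|H| estimated: %.4f, analytic: %.4f" % (abs(h_est), abs(h_true)))
```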

  13. Advancements in frequency-domain methods for rotorcraft system identification

    NASA Technical Reports Server (NTRS)

    Tischler, Mark B.

    1988-01-01

    A new method for frequency-domain identification of rotorcraft dynamics is presented. Nonparametric frequency-response identification and parametric transfer-function modeling methods are extended to allow the extraction of state-space (stability and control derivative) representations. An interactive computer program DERIVID is described for the iterative solution of the multi-input/multi-output frequency-response matching approach used in the identification. Theoretical accuracy methods are used to determine the appropriate model structure and degree-of-confidence in the identified parameters. The method is applied to XV-15 tilt-rotor aircraft data in hover. Bare-airframe stability and control derivatives for the lateral/directional dynamics are shown to compare favorably with models previously obtained using time-domain identification methods and the XV-15 simulation program.

  14. Advanced Simulation Technology to Design Etching Process on CMOS Devices

    NASA Astrophysics Data System (ADS)

    Kuboi, Nobuyuki

    2015-09-01

    Prediction and control of plasma-induced damage is needed to mass-produce high performance CMOS devices. In particular, side-wall (SW) etching with low damage is a key process for the next generation of MOSFETs and FinFETs. To predict and control the damage, we have developed a SiN etching simulation technique for CHxFy/Ar/O2 plasma processes using a three-dimensional (3D) voxel model. This model includes new concepts for the gas transportation in the pattern, detailed surface reactions on the SiN reactive layer divided into several thin slabs and a C-F polymer layer dependent on the H/N ratio, and the use of ``smart voxels''. We successfully predicted etching properties such as the etch rate, polymer layer thickness, and selectivity for Si, SiO2, and SiN films along with process variations, and demonstrated the time-dependent 3D damage distribution during SW etching on MOSFETs and FinFETs. We confirmed that, in the over-etch step, a large amount of Si damage accumulated over time in the source/drain region in spite of the existing 15 nm SiO2 layer, and that the Si fin was directly damaged by a large amount of high energy H during removal of the parasitic fin spacer, leading to Si fin damage to a depth of 14 to 18 nm. By analyzing the results of these simulations and our previous simulations, we found that it is important to carefully control the dose of high energy H, the incident energy of H, the polymer layer thickness, and the over-etch time, considering the effects of the pattern structure, chamber-wall condition, and wafer open area ratio. In collaboration with Masanaga Fukasawa and Tetsuya Tatsumi, Sony Corporation. We thank Mr. T. Shigetoshi and Mr. T. Kinoshita of Sony Corporation for their assistance with the experiments.
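
    The voxel bookkeeping in such a model can be illustrated with a toy 2-D anisotropic etch. This is a deliberately minimal sketch (unit etch rate, no fluxes or surface chemistry), not the paper's CHxFy/Ar/O2 reaction model; the grid layout and function names are hypothetical.

```python
def exposed(grid):
    """Etchable voxels (value 1) whose neighbour directly above is
    empty (0); value 2 marks a non-etchable mask."""
    return [(j, i) for j in range(len(grid)) for i in range(len(grid[0]))
            if grid[j][i] == 1 and (j == 0 or grid[j - 1][i] == 0)]

def etch_step(grid):
    """Remove every exposed voxel: a purely anisotropic, unit-rate etch."""
    for j, i in exposed(grid):
        grid[j][i] = 0

# Row 0 is the mask with a two-voxel opening; rows 1-3 are the film.
grid = [[2, 0, 0, 2],
        [1, 1, 1, 1],
        [1, 1, 1, 1],
        [1, 1, 1, 1]]
for _ in range(3):
    etch_step(grid)
```

    After three steps the trench under the mask opening reaches the bottom row while the masked columns survive; the production model replaces the unit-rate rule with slab-resolved surface reactions and ion/radical fluxes per voxel.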

  15. Strategic Plan for Nuclear Energy -- Knowledge Base for Advanced Modeling and Simulation (NE-KAMS)

    SciTech Connect

    Rich Johnson; Kimberlyn C. Mousseau; Hyung Lee

    2011-09-01

    The NE-KAMS knowledge base will assist computational analysts, physics model developers, experimentalists, nuclear reactor designers, and federal regulators by: (1) Establishing accepted standards, requirements and best practices for V&V and UQ of computational models and simulations, (2) Establishing accepted standards and procedures for qualifying and classifying experimental and numerical benchmark data, (3) Providing readily accessible databases for nuclear energy related experimental and numerical benchmark data that can be used in V&V assessments and computational methods development, (4) Providing a searchable knowledge base of information, documents and data on V&V and UQ, and (5) Providing web-enabled applications, tools and utilities for V&V and UQ activities, data assessment and processing, and information and data searches. From its inception, NE-KAMS will directly support nuclear energy research, development and demonstration programs within the U.S. Department of Energy (DOE), including the Consortium for Advanced Simulation of Light Water Reactors (CASL), the Nuclear Energy Advanced Modeling and Simulation (NEAMS), the Light Water Reactor Sustainability (LWRS), the Small Modular Reactors (SMR), and the Next Generation Nuclear Power Plant (NGNP) programs. These programs all involve computational modeling and simulation (M&S) of nuclear reactor systems, components and processes, and it is envisioned that NE-KAMS will help to coordinate and facilitate collaboration and sharing of resources and expertise for V&V and UQ across these programs. In addition, from the outset, NE-KAMS will support the use of computational M&S in the nuclear industry by developing guidelines and recommended practices aimed at quantifying the uncertainty and assessing the applicability of existing analysis models and methods. The NE-KAMS effort will initially focus on supporting the use of computational fluid dynamics (CFD) and thermal hydraulics (T/H) analysis for M&S of nuclear

  16. Microwave Processing of Simulated Advanced Nuclear Fuel Pellets

    SciTech Connect

    D.E. Clark; D.C. Folz

    2010-08-29

    Throughout the three-year project funded by the Department of Energy (DOE) and led by Virginia Tech (VT), project tasks were modified by consensus to fit the changing needs of the DOE with respect to developing new inert matrix fuel processing techniques. The focus throughout the project was on the use of microwave energy to sinter fully stabilized zirconia pellets and on evaluating the effectiveness of the techniques that were developed. Additionally, the research team was to propose fundamental concepts for processing radioactive fuels based on the effectiveness of the microwave process in sintering the simulated matrix material.

  17. Full f gyrokinetic method for particle simulation of tokamak transport

    SciTech Connect

    Heikkinen, J.A. Janhunen, S.J.; Kiviniemi, T.P.; Ogando, F.

    2008-05-10

    A gyrokinetic particle-in-cell approach with direct implicit construction of the coefficient matrix of the Poisson equation from ion polarization and electron parallel nonlinearity is described and applied in global electrostatic toroidal plasma transport simulations. The method is applicable for calculation of the evolution of the particle distribution function f, including as special cases strong plasma pressure profile evolution by transport and formation of neoclassical flows. This is made feasible by the full f formulation and by recording the charge density changes due to the ion polarization drift and electron acceleration along the local magnetic field while particles are advanced. The code has been validated against the linear predictions of the unstable ion temperature gradient mode growth rates and frequencies. Convergence and saturation of the ion heat conductivity in both the turbulent and neoclassical limits are obtained with numerical noise well suppressed by a sufficiently large number of simulation particles. A first global full f validation of the neoclassical radial electric field in the presence of turbulence for a heated collisional tokamak plasma is obtained. At high Mach number (M{sub p}{approx}1) of the poloidal flow, the radial electric field is significantly enhanced over the standard neoclassical prediction. The neoclassical radial electric field together with the related GAM oscillations is found to regulate the turbulent heat and particle diffusion levels particularly strongly in a large aspect ratio tokamak at low plasma current.
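
    The particle-to-grid bookkeeping underlying any such particle-in-cell scheme can be illustrated with a minimal cloud-in-cell charge deposition in one dimension. This is a generic sketch, not the authors' gyrokinetic code; `deposit_charge` and its arguments are hypothetical, and a full f code carries the whole distribution in the particle weights.

```python
def deposit_charge(positions, weights, nx, L):
    """Cloud-in-cell (linear) charge deposition onto a periodic 1-D grid,
    the basic building block of a particle-in-cell scheme.  Each particle
    splits its weight between the two nearest grid points."""
    rho = [0.0] * nx
    dx = L / nx
    for x, w in zip(positions, weights):
        s = (x % L) / dx          # position in units of cells
        i = int(s)
        frac = s - i              # fractional distance to next node
        rho[i] += w * (1.0 - frac) / dx
        rho[(i + 1) % nx] += w * frac / dx
    return rho

rho = deposit_charge([0.25], [1.0], 4, 1.0)
```

    The scheme conserves total charge exactly (the two fractions sum to one per particle), which is why sum(rho)*dx reproduces the total particle weight.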

  18. Some Specific CASL Requirements for Advanced Multiphase Flow Simulation of Light Water Reactors

    SciTech Connect

    R. A. Berry

    2010-11-01

    Because of the diversity of physical phenomena occurring in boiling, flashing, and bubble collapse, and of the length and time scales of LWR systems, it is imperative that the models have the following features: • Both vapor and liquid phases (and noncondensible phases, if present) must be treated as compressible. • Models must be mathematically and numerically well-posed. • The modeling methodology must be multi-scale. A fundamental derivation of the multiphase governing equation system, which should be used as a basis for advanced multiphase modeling in LWR coolant systems, is given in the Appendix using the ensemble averaging method. The remainder of this work focuses specifically on the compressible, well-posed, and multi-scale requirements of advanced simulation methods for these LWR coolant systems, because these are the most fundamental aspects, without which widespread advancement cannot be claimed. Because of the expense of developing multiple special-purpose codes and the inherent inability to couple information from the multiple, separate length- and time-scales, efforts within CASL should be focused toward development of multi-scale approaches to solve those multiphase flow problems relevant to LWR design and safety analysis. Efforts should be aimed at developing well-designed unified physical/mathematical and high-resolution numerical models for compressible, all-speed multiphase flows spanning: (1) Well-posed general mixture level (true multiphase) models for fast transient situations and safety analysis, (2) DNS (Direct Numerical Simulation)-like models to resolve interface-level phenomena like flashing and boiling flows, and critical heat flux determination (necessarily including conjugate heat transfer), and (3) Multi-scale methods to resolve both (1) and (2) automatically, depending upon specified mesh resolution, and to couple different flow models (single-phase, multiphase with several velocities and pressures, multiphase with single
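
    Ensemble averaging of this kind typically yields a single-pressure two-fluid system of the following schematic form; this is the standard textbook shape, not necessarily the Appendix's exact closure.

```latex
% Phase-k mass and momentum balances (k = liquid, vapor, ...):
\frac{\partial}{\partial t}(\alpha_k \rho_k)
  + \nabla\cdot(\alpha_k \rho_k \mathbf{u}_k) = \Gamma_k,
\qquad \sum_k \alpha_k = 1,
\\[4pt]
\frac{\partial}{\partial t}(\alpha_k \rho_k \mathbf{u}_k)
  + \nabla\cdot(\alpha_k \rho_k \mathbf{u}_k \otimes \mathbf{u}_k)
  = -\alpha_k \nabla p
  + \nabla\cdot(\alpha_k \boldsymbol{\tau}_k)
  + \alpha_k \rho_k \mathbf{g}
  + \mathbf{M}_k
```

    Here Γ_k is interphase mass transfer and M_k the interfacial momentum exchange; the well-posedness requirement above hinges precisely on how the interfacial pressure and M_k are modeled.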

  19. Advanced solid elements for sheet metal forming simulation

    NASA Astrophysics Data System (ADS)

    Mataix, Vicente; Rossi, Riccardo; Oñate, Eugenio; Flores, Fernando G.

    2016-08-01

    Solid-shells are an attractive kind of element for the simulation of forming processes, because any kind of generic 3D constitutive law can be employed without additional hypotheses. The present work improves a triangular prism solid-shell originally developed by Flores [2, 3]. The solid-shell can be used in the analysis of thin/thick shells undergoing large deformations. The element is formulated in a total Lagrangian framework and employs the neighbour (adjacent) elements to build a local patch that enriches the displacement field. In the original formulation a modified right Cauchy-Green deformation tensor (C) is obtained; in the present work a modified deformation gradient (F) is obtained, which generalises the methodology and permits the use of pull-back and push-forward operations. The element is based on three modifications: (a) a classical assumed strain approach for transverse shear strains, (b) an assumed strain approach for the in-plane components using information from neighbour elements, and (c) an averaging of the volumetric strain over the element. The objective is to use this type of element for the simulation of shells while avoiding transverse shear locking, improving the membrane behaviour of the in-plane triangle, and handling quasi-incompressible materials or materials with isochoric plastic flow.

  20. Simulation models and designs for advanced Fischer-Tropsch technology

    SciTech Connect

    Choi, G.N.; Kramer, S.J.; Tam, S.S.

    1995-12-31

    Process designs and economics were developed for three grass-roots indirect Fischer-Tropsch coal liquefaction facilities. A baseline and an alternate upgrading design were developed for a mine-mouth plant located in southern Illinois using Illinois No. 6 coal, and one for a mine-mouth plant located in Wyoming using Powder River Basin coal. The alternate design used close-coupled ZSM-5 reactors to upgrade the vapor stream leaving the Fischer-Tropsch reactor. ASPEN process simulation models were developed for all three designs. These results have been reported previously. In this study, the ASPEN process simulation model was enhanced to improve the vapor/liquid equilibrium calculations for the products leaving the slurry bed Fischer-Tropsch reactors. This significantly improved the predictions for the alternate ZSM-5 upgrading design. Another model was developed for the Wyoming coal case using ZSM-5 upgrading of the Fischer-Tropsch reactor vapors. To date, this is the best indirect coal liquefaction case. Sensitivity studies showed that additional cost reductions are possible.
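
    The vapor/liquid equilibrium split at the heart of such flash calculations can be illustrated with the classical Rachford-Rice equation. This is a textbook sketch, not the ASPEN model's method; `rachford_rice`, the feed composition and the K-values are hypothetical.

```python
def rachford_rice(z, K, iters=100):
    """Bisection solve of the Rachford-Rice equation for the vapour
    fraction V of an isothermal flash:
        f(V) = sum_i z_i (K_i - 1) / (1 + V (K_i - 1)) = 0.
    Assumes a two-phase solution exists, i.e. f(0) > 0 > f(1)."""
    def f(V):
        return sum(zi * (Ki - 1.0) / (1.0 + V * (Ki - 1.0))
                   for zi, Ki in zip(z, K))
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid      # root lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Equimolar binary feed with K-values 2.0 and 0.5 splits 50/50 by symmetry.
V = rachford_rice([0.5, 0.5], [2.0, 0.5])
```

    A process simulator wraps this kernel in an outer loop that updates the K-values from a thermodynamic model until composition and V are self-consistent.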

  1. ADVANCES IN COMPREHENSIVE GYROKINETIC SIMULATIONS OF TRANSPORT IN TOKAMAKS

    SciTech Connect

    WALTZ,R.E; CANDY,J; HINTON,F.L; ESTRADA-MILA,C; KINSEY,J.E

    2004-10-01

    A continuum global gyrokinetic code GYRO has been developed to comprehensively simulate core turbulent transport in actual experimental profiles and enable direct quantitative comparisons to the experimental transport flows. GYRO not only treats the now standard ion temperature gradient (ITG) mode turbulence, but also treats trapped and passing electrons with collisions and finite {beta}, equilibrium ExB shear stabilization, and all in real tokamak geometry. Most importantly, the code operates at finite relative gyroradius ({rho}{sub *}) so as to treat the profile shear stabilization and nonlocal effects which can break gyroBohm scaling. The code operates in either a cyclic flux-tube limit (which allows only gyroBohm scaling) or globally with physical profile variation. Bohm scaling of DIII-D L-mode has been simulated with power flows matching experiment within error bars on the ion temperature gradient. Mechanisms for broken gyroBohm scaling, including neoclassical ion flows embedded in turbulence, turbulent dynamos, and profile corrugations, are illustrated.

  2. Computer Simulation Methods for Defect Configurations and Nanoscale Structures

    SciTech Connect

    Gao, Fei

    2010-01-01

    This chapter will describe general computer simulation methods, including ab initio calculations, molecular dynamics and the kinetic Monte Carlo method, and their applications to the calculation of defect configurations in various materials (metals, ceramics and oxides) and the simulation of nanoscale structures due to ion-solid interactions. The multiscale theory, modeling, and simulation techniques (both time scale and space scale) will be emphasized, and comparisons between computer simulation results and experimental observations will be made.
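
    Of the methods listed, kinetic Monte Carlo is compact enough to sketch. Below is a minimal residence-time (Gillespie/BKL) step, a textbook illustration rather than the chapter's implementation; the rate list and `kmc_step` are hypothetical.

```python
import math
import random

def kmc_step(rates, rng):
    """One residence-time kinetic Monte Carlo step: choose event i with
    probability rates[i]/R and draw the waiting time from an exponential
    distribution with total rate R = sum(rates)."""
    R = sum(rates)
    u = rng.random() * R
    acc = 0.0
    chosen = len(rates) - 1
    for i, r in enumerate(rates):
        acc += r
        if u < acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / R   # exponential waiting time
    return chosen, dt

# With rates 1:3, event 1 should be selected about 75% of the time.
rng = random.Random(0)
picks = [kmc_step([1.0, 3.0], rng)[0] for _ in range(10000)]
frac = picks.count(1) / len(picks)
```

    The exponential clock is what lets KMC reach diffusion time scales far beyond molecular dynamics: the simulation jumps directly from event to event instead of resolving lattice vibrations.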

  3. Bluff Body Flow Simulation Using a Vortex Element Method

    SciTech Connect

    Anthony Leonard; Phillippe Chatelain; Michael Rebel

    2004-09-30

    Heavy ground vehicles, especially those involved in long-haul freight transportation, consume a significant part of our nation's energy supply. It is therefore of utmost importance to improve their efficiency, both to reduce emissions and to decrease reliance on imported oil. At highway speeds, more than half of the power consumed by a typical semi truck goes into overcoming aerodynamic drag, a fraction which increases with speed and crosswind. Thanks to better tools and increased awareness, recent years have seen substantial aerodynamic improvements by the truck industry, such as tractor/trailer height matching, radiator area reduction, and swept fairings. However, there remains substantial room for improvement as understanding of turbulent fluid dynamics grows. The group's research effort focused on vortex particle methods, a novel approach for computational fluid dynamics (CFD). Where common CFD methods solve or model the Navier-Stokes equations on a grid which stretches from the truck surface outward, vortex particle methods solve the vorticity equation on a Lagrangian basis of smooth particles and do not require a grid. They worked to advance the state of the art in vortex particle methods, improving their ability to handle the complicated, high Reynolds number flow around heavy vehicles. Specific challenges that they have addressed include finding strategies to accurately capture vorticity generation and the resultant forces at the truck wall, handling the aerodynamics of spinning bodies such as tires, application of the method to the GTS model, computation time reduction through improved integration methods, a closest point transform for particle methods in complex geometries, and work on large eddy simulation (LES) turbulence modeling.
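
    The grid-free evaluation step at the heart of a vortex particle method can be sketched as a direct 2-D Biot-Savart summation. This is a minimal illustration, not the group's code (which adds fast summation, diffusion, and wall treatment); `vortex_velocities` and the delta parameter follow the common Krasny smoothing convention.

```python
import math

def vortex_velocities(pos, gamma, delta=0.0):
    """Velocity induced at each particle by all the others via the 2-D
    Biot-Savart kernel; delta > 0 regularizes close interactions.
    Positive circulation gamma gives counterclockwise rotation."""
    vel = []
    for i, (xi, yi) in enumerate(pos):
        u = v = 0.0
        for j, (xj, yj) in enumerate(pos):
            if j == i:
                continue
            dx, dy = xi - xj, yi - yj
            r2 = dx * dx + dy * dy + delta * delta
            u -= gamma[j] * dy / (2.0 * math.pi * r2)
            v += gamma[j] * dx / (2.0 * math.pi * r2)
        vel.append((u, v))
    return vel

# Two like-signed vortices: each induces rotation about their midpoint.
vel = vortex_velocities([(-0.5, 0.0), (0.5, 0.0)], [1.0, 1.0])
```

    The direct sum costs O(N^2) per step, which is why production codes replace it with tree or fast-multipole summation while keeping this same kernel.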

  4. Simulation and ground testing with the Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Johnston, Albert S.; Bryan, Thomas C.; Book, Michael L.

    2005-01-01

    The Advanced Video Guidance Sensor (AVGS), an active sensor system that provides near-range 6-degree-of-freedom sensor data, has been developed as part of an automatic rendezvous and docking system for the Demonstration of Autonomous Rendezvous Technology (DART). The sensor determines the relative positions and attitudes between the active sensor and the passive target at ranges up to 300 meters. The AVGS uses laser diodes to illuminate retro-reflectors in the target, a solid-state imager to detect the light returned from the target, and image capture electronics and a digital signal processor to convert the video information into the relative positions and attitudes. The development of the sensor, through initial prototypes, final prototypes, and three flight units, has required a great deal of testing at every phase, and the different types of testing, their effectiveness, and their results, are presented in this paper, focusing on the testing of the flight units. Testing has improved the sensor's performance.

  5. Exploring the use of standardized patients for simulation-based learning in preparing advanced practice nurses.

    PubMed

    Kowitlawakul, Yanika; Chow, Yeow Leng; Salam, Zakir Hussian Abdul; Ignacio, Jeanette

    2015-07-01

    The use of standardized patients for simulation-based learning was integrated into the Master of Nursing curriculum in the 2012-2013 academic year. The study aimed to explore the Master of Nursing students' experiences with and perceptions of using standardized patients in simulations, and to identify the students' learning needs in preparing to become advanced practice nurses. The study adopted an exploratory descriptive qualitative design, using a focus group interview. The study was conducted at a university in Singapore. Seven Master of Nursing students who were enrolled in the Acute Care Track of Master of Nursing program in the 2012-2013 academic year participated in the study. The data were gathered at the end of the first semester. Content analysis was used to analyze the data. Three main categories - usefulness, clinical limitations, and realism - were identified in the study. The results revealed that the students felt using standardized patients was useful and realistic for developing skills in history taking, communication, and responding to an emergency situation. On the other hand, they found that the standardized patients were limited in providing critical signs and symptoms of case scenarios. To meet the learning objectives, future development and integration of standardized patients in the Master of Nursing curriculum might need to be considered along with the use of a high-fidelity simulator. This can be an alternative strategy to fill the gaps in each method. Obviously, using standardized patients for simulation-based learning has added value to the students' learning experiences. It is highly recommended that future studies explore the impact of using standardized patients on students' performance in clinical settings.

  6. Methods and Systems for Advanced Spaceport Information Management

    NASA Technical Reports Server (NTRS)

    Fussell, Ronald M. (Inventor); Ely, Donald W. (Inventor); Meier, Gary M. (Inventor); Halpin, Paul C. (Inventor); Meade, Phillip T. (Inventor); Jacobson, Craig A. (Inventor); Blackwell-Thompson, Charlie (Inventor)

    2007-01-01

    Advanced spaceport information management methods and systems are disclosed. In one embodiment, a method includes coupling a test system to the payload and transmitting one or more test signals that emulate an anticipated condition from the test system to the payload. One or more responsive signals are received from the payload into the test system and are analyzed to determine whether one or more of the responsive signals comprises an anomalous signal. At least one of the steps of transmitting, receiving, analyzing and determining includes transmitting at least one of the test signals and the responsive signals via a communications link from a payload processing facility to a remotely located facility. In one particular embodiment, the communications link is an Internet link from a payload processing facility to a remotely located facility (e.g. a launch facility, university, etc.).

  8. Using CONFIG for Simulation of Operation of Water Recovery Subsystems for Advanced Control Software Evaluation

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Flores, Luis; Fleming, Land; Throop, Daiv

    2002-01-01

    A hybrid discrete/continuous simulation tool, CONFIG, has been developed to support evaluation of the operability of life support systems. CONFIG simulates operations scenarios in which flows and pressures change continuously while system reconfigurations occur as discrete events. In simulations, intelligent control software can interact dynamically with hardware system models. CONFIG simulations have been used to evaluate control software and intelligent agents for automating life support systems operations. A CONFIG model of an advanced biological water recovery system has been developed to interact with intelligent control software that is being used in a water system test at NASA Johnson Space Center.

  9. Design and Test of Advanced Thermal Simulators for an Alkali Metal-Cooled Reactor Simulator

    NASA Technical Reports Server (NTRS)

    Garber, Anne E.; Dickens, Ricky E.

    2011-01-01

    The Early Flight Fission Test Facility (EFF-TF) at NASA Marshall Space Flight Center (MSFC) has as one of its primary missions the development and testing of fission reactor simulators for space applications. A key component in these simulated reactors is the thermal simulator, designed to closely mimic the form and function of a nuclear fuel pin using electric heating. Continuing effort has been made to design simple, robust, inexpensive thermal simulators that closely match the steady-state and transient performance of a nuclear fuel pin. A series of these simulators have been designed, developed, fabricated and tested individually and in a number of simulated reactor systems at the EFF-TF. The purpose of the thermal simulators developed under the Fission Surface Power (FSP) task is to ensure that non-nuclear testing can be performed at sufficiently high fidelity to allow a cost-effective qualification and acceptance strategy to be used. Prototype thermal simulator design is founded on the baseline Fission Surface Power reactor design. Recent efforts have been focused on the design, fabrication and test of a prototype thermal simulator appropriate for use in the Technology Demonstration Unit (TDU). While designing the thermal simulators described in this paper, efforts were made to improve the axial power profile matching of the thermal simulators. Simultaneously, a search was conducted for graphite materials with higher resistivities than had been employed in the past. The combination of these two efforts resulted in the creation of thermal simulators with power capacities of 2300-3300 W per unit. Six of these elements were installed in a simulated core and tested in the alkali metal-cooled Fission Surface Power Primary Test Circuit (FSP-PTC) at a variety of liquid metal flow rates and temperatures. This paper documents the design of the thermal simulators, test program, and test results.

  10. Collaborative simulation method with spatiotemporal synchronization process control

    NASA Astrophysics Data System (ADS)

    Zou, Yisheng; Ding, Guofu; Zhang, Weihua; Zhang, Jian; Qin, Shengfeng; Tan, John Kian

    2016-10-01

    When designing a complex mechatronics system, such as a high speed train, it is relatively difficult to effectively simulate the entire system's dynamic behaviors because it involves multi-disciplinary subsystems. Currently, the most practical approach for multi-disciplinary simulation is the interface-based coupling simulation method, but it faces a twofold challenge: spatial and temporal desynchronization among the multi-directional coupling simulations of subsystems. A new collaborative simulation method with spatiotemporal synchronization process control is proposed for the coupled simulation of a complex mechatronics system across multiple subsystems on different platforms. The method consists of 1) a coupler-based coupling mechanism to define the interfacing and interaction mechanisms among subsystems, and 2) a simulation process control algorithm to realize the coupling simulation in a spatiotemporally synchronized manner. The test results from a case study show that the proposed method 1) can be used to simulate subsystem interactions under different simulation conditions in an engineering system, and 2) effectively supports multi-directional coupling simulation among multi-disciplinary subsystems. This method has been successfully applied in China's high speed train design and development process, demonstrating that it can be applied to a wide range of engineering system design and simulation tasks with improved efficiency and effectiveness.
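
    The synchronized interface exchange that such a process control algorithm enforces can be illustrated with a minimal Jacobi-type co-simulation master. This is a generic sketch under the assumption of fixed macro steps, not the authors' coupler; `cosimulate` and the toy subsystems are hypothetical.

```python
def cosimulate(f1, f2, x1, x2, dt, n_steps):
    """Jacobi-type co-simulation master: at each synchronization point both
    subsystems read the other's interface value frozen at the same instant,
    then advance one macro step, so neither sees data from a different time."""
    for _ in range(n_steps):
        y1, y2 = x1, x2                            # frozen interface exchange
        x1, x2 = f1(x1, y2, dt), f2(x2, y1, dt)    # advance in parallel
    return x1, x2

# Two mutually coupled first-order subsystems relaxing toward each other
# (explicit Euler macro steps); their states converge to the shared mean.
f = lambda x, y, dt: x + dt * (y - x)
x1, x2 = cosimulate(f, f, 1.0, 0.0, 0.1, 50)
```

    Freezing both interface values at the synchronization point is what removes the time skew; a Gauss-Seidel variant would instead feed one subsystem's fresh output to the other within the same macro step, trading symmetry for accuracy.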

  11. Advances in direct and diffraction methods for surface structural determination

    NASA Astrophysics Data System (ADS)

    Tong, S. Y.

    1999-08-01

    I describe recent advances in low-energy electron diffraction holography and photoelectron diffraction holography. These are direct methods for determining the surface structure. I show that for LEED and PD spectra taken in an energy and angular mesh, the relative phase between the reference wave and the scattered wave has a known geometric form if the spectra are always taken from within a small angular cone in the near backscattering direction. By using data in the backscattering small cone at each direction of interest, a simple algorithm is developed to invert the spectra and extract object atomic positions with no input of calculated dynamic factors. I also describe the use of a convergent iterative method for PD and LEED. The computation time of this method scales as N{sup 2}, where N is the dimension of the propagator matrix, rather than N{sup 3} as in conventional Gaussian substitutional methods. Both the Rehr-Albers separable-propagator cluster approach and the slab-type non-separable approach can be cast in the new iterative form. With substantial savings in computational time and no loss in numerical accuracy, this method is very useful in applications of multiple scattering theory, particularly for systems involving either very large unit cells (>300 atoms) or where no long-range order is present.
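
    The N^2-per-sweep versus N^3 trade cited here can be illustrated with a generic fixed-point (Neumann series) linear solve. This is a minimal sketch, not the Rehr-Albers or slab formalism itself; `neumann_solve` and the 2x2 example are hypothetical.

```python
def neumann_solve(A, b, sweeps=100):
    """Solve (I - A) x = b by the fixed-point sweep x <- b + A x
    (a Neumann series).  Each sweep costs one O(N^2) matrix-vector
    product, versus the O(N^3) of direct elimination; convergence
    requires the propagator A to be a contraction."""
    n = len(b)
    x = list(b)
    for _ in range(sweeps):
        x = [b[i] + sum(A[i][j] * x[j] for j in range(n))
             for i in range(n)]
    return x

# Diagonal test propagator: exact answer is b_i / (1 - A_ii).
x = neumann_solve([[0.1, 0.0], [0.0, 0.2]], [1.0, 1.0])
```

    In multiple-scattering language each sweep adds one more scattering order, which is why the iteration converges quickly whenever the scattering series itself does.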

  12. Preliminary simulation of an advanced, hingless rotor XV-15 tilt-rotor aircraft

    NASA Technical Reports Server (NTRS)

    Mcveigh, M. A.

    1976-01-01

    The feasibility of the tilt-rotor concept was verified through investigation of the performance, stability and handling qualities of the XV-15 tilt rotor. The rotors were replaced by advanced-technology fiberglass/composite hingless rotors of larger diameter, combined with an advanced integrated fly-by-wire control system. A parametric simulation model of the HRXV-15 was developed; the model was used to define acceptable preliminary ranges of primary and secondary control schedules as functions of the flight parameters, to evaluate performance, flying qualities and structural loads, and to have a Boeing-Vertol pilot conduct a simulated flight test evaluation of the aircraft.

  13. State of the Art Assessment of Simulation in Advanced Materials Development

    NASA Technical Reports Server (NTRS)

    Wise, Kristopher E.

    2008-01-01

    Advances in both the underlying theory and in the practical implementation of molecular modeling techniques have increased their value in the advanced materials development process. The objective is to accelerate the maturation of emerging materials by tightly integrating modeling with the other critical processes: synthesis, processing, and characterization. The aims of this report are to summarize the state of the art of existing modeling tools and to highlight a number of areas in which additional development is required. In an effort to maintain focus and limit length, this survey is restricted to classical simulation techniques including molecular dynamics and Monte Carlo simulations.

  14. Methodological advances: using greenhouses to simulate climate change scenarios.

    PubMed

    Morales, F; Pascual, I; Sánchez-Díaz, M; Aguirreolea, J; Irigoyen, J J; Goicoechea, N; Antolín, M C; Oyarzun, M; Urdiain, A

    2014-09-01

    Human activities are increasing atmospheric CO2 concentration and temperature. Related to this global warming, periods of low water availability are also expected to increase. Thus, CO2 concentration, temperature and water availability are three of the main factors related to climate change that potentially may influence crops and ecosystems. In this report, we describe the use of growth chamber - greenhouses (GCG) and temperature gradient greenhouses (TGG) to simulate climate change scenarios and to investigate possible plant responses. In the GCG, CO2 concentration, temperature and water availability are set to act simultaneously, enabling comparison of a current situation with a future one. Other characteristics of the GCG are a relatively large working space, fine control of the relative humidity, plant fertirrigation and the possibility of light supplementation, within the photosynthetic active radiation (PAR) region and/or with ultraviolet-B (UV-B) light. In the TGG, the three above-mentioned factors can act independently or in interaction, enabling more mechanistic studies aimed at elucidating the limiting factor(s) responsible for a given plant response. Examples of experiments using the GCG and TGG are reported, including some aimed at studying photosynthetic acclimation, a phenomenon that leads to decreased photosynthetic capacity under long-term exposure to elevated CO2.

  15. Advanced Numerical Methods for Simulating Nonlinear Multirate Lumped Parameter Models

    DTIC Science & Technology

    1991-05-01

power from one frequency to another. Even the ship service system onboard mechanical drive ships can experience frequency fluctuations lasting up to 2... none can be found and the circuitgroup_singular flag is zero, warn the user that a singular system may exist with the group nodes. If none can be found then... In fact, the flow variable should normally equal zero if the rest of the circuit is indeed linearly dependent. As a convenience to the user, WAVESIM

  16. CHARMM-GUI PDB manipulator for advanced modeling and simulations of proteins containing nonstandard residues.

    PubMed

    Jo, Sunhwan; Cheng, Xi; Islam, Shahidul M; Huang, Lei; Rui, Huan; Zhu, Allen; Lee, Hui Sun; Qi, Yifei; Han, Wei; Vanommeslaeghe, Kenno; MacKerell, Alexander D; Roux, Benoît; Im, Wonpil

    2014-01-01

CHARMM-GUI, http://www.charmm-gui.org, is a web-based graphical user interface used to prepare molecular simulation systems and input files in order to facilitate the use of common and advanced simulation techniques. Since it was originally developed in 2006, CHARMM-GUI has been widely adopted for various purposes and now contains a number of different modules designed to set up a broad range of simulations, including free energy calculations and large-scale coarse-grained representations. Here, we describe functionalities that have recently been integrated into CHARMM-GUI PDB Manipulator, such as ligand force field generation, incorporation of methanethiosulfonate spin labels and chemical modifiers, and substitution of amino acids with unnatural amino acids. These new features are expected to be useful in advanced biomolecular modeling and simulation of proteins.

  17. Advances in the Surface Renewal Flux Measurement Method

    NASA Astrophysics Data System (ADS)

    Shapland, T. M.; McElrone, A.; Paw U, K. T.; Snyder, R. L.

    2011-12-01

    The measurement of ecosystem-scale energy and mass fluxes between the planetary surface and the atmosphere is crucial for understanding geophysical processes. Surface renewal is a flux measurement technique based on analyzing the turbulent coherent structures that interact with the surface. It is a less expensive technique because it does not require fast-response velocity measurements, but only a fast-response scalar measurement. It is therefore also a useful tool for the study of the global cycling of trace gases. Currently, surface renewal requires calibration against another flux measurement technique, such as eddy covariance, to account for the linear bias of its measurements. We present two advances in the surface renewal theory and methodology that bring the technique closer to becoming a fully independent flux measurement method. The first advance develops the theory of turbulent coherent structure transport associated with the different scales of coherent structures. A novel method was developed for identifying the scalar change rate within structures at different scales. Our results suggest that for canopies less than one meter in height, the second smallest coherent structure scale dominates the energy and mass flux process. Using the method for resolving the scalar exchange rate of the second smallest coherent structure scale, calibration is unnecessary for surface renewal measurements over short canopies. This study forms the foundation for analysis over more complex surfaces. The second advance is a sensor frequency response correction for measuring the sensible heat flux via surface renewal. Inexpensive fine-wire thermocouples are frequently used to record high frequency temperature data in the surface renewal technique. The sensible heat flux is used in conjunction with net radiation and ground heat flux measurements to determine the latent heat flux as the energy balance residual. The robust thermocouples commonly used in field experiments

  18. Advanced superposition methods for high speed turbopump vibration analysis

    NASA Technical Reports Server (NTRS)

    Nielson, C. E.; Campany, A. D.

    1981-01-01

    The small, high pressure Mark 48 liquid hydrogen turbopump was analyzed and dynamically tested to determine the cause of high speed vibration at an operating speed of 92,400 rpm. This approaches the design point operating speed of 95,000 rpm. The initial dynamic analysis in the design stage and subsequent further analysis of the rotor only dynamics failed to predict the vibration characteristics found during testing. An advanced procedure for dynamics analysis was used in this investigation. The procedure involves developing accurate dynamic models of the rotor assembly and casing assembly by finite element analysis. The dynamically instrumented assemblies are independently rap tested to verify the analytical models. The verified models are then combined by modal superposition techniques to develop a completed turbopump model where dynamic characteristics are determined. The results of the dynamic testing and analysis obtained are presented and methods of moving the high speed vibration characteristics to speeds above the operating range are recommended. Recommendations for use of these advanced dynamic analysis procedures during initial design phases are given.

  19. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achieving this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known, and it usually results in overly conservative designs because of compounding conservatisms. Furthermore, the problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single-mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
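As a minimal illustration of the probabilistic methods surveyed here (the distributions and limit state below are invented for the example, not taken from the paper), crude Monte Carlo sampling estimates a single-mode failure probability, which for a linear limit state with normal variables can be checked against the closed-form result:

```python
import math
import random

def mc_failure_probability(n=200_000, seed=42):
    """Crude Monte Carlo estimate of P(failure) for the single-mode
    limit state g = R - S, with resistance R ~ N(10, 1.5) and load
    S ~ N(6, 1.0); failure occurs when g < 0."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n)
                   if rng.gauss(10.0, 1.5) - rng.gauss(6.0, 1.0) < 0.0)
    return failures / n

def exact_failure_probability():
    """Closed-form check: g is normal, so P(g < 0) = Phi(-beta) with
    reliability index beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2)."""
    beta = (10.0 - 6.0) / math.sqrt(1.5**2 + 1.0**2)
    return 0.5 * math.erfc(beta / math.sqrt(2.0))
```

Advanced reliability methods (FORM/SORM, importance sampling) replace the brute-force loop when failure probabilities are very small.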

  20. Review: Advances in delta-subsidence research using satellite methods

    NASA Astrophysics Data System (ADS)

    Higgins, Stephanie A.

    2016-05-01

    Most of the world's major river deltas are sinking relative to local sea level. The effects of subsidence can include aquifer salinization, infrastructure damage, increased vulnerability to flooding and storm surges, and permanent inundation of low-lying land. Consequently, determining the relative importance of natural vs. anthropogenic pressures in driving delta subsidence is a topic of ongoing research. This article presents a review of knowledge with respect to delta surface-elevation loss. The field is rapidly advancing due to applications of space-based techniques: InSAR (interferometric synthetic aperture radar), GPS (global positioning system), and satellite ocean altimetry. These techniques have shed new light on a variety of subsidence processes, including tectonics, isostatic adjustment, and the spatial and temporal variability of sediment compaction. They also confirm that subsidence associated with fluid extraction can outpace sea-level rise by up to two orders of magnitude, resulting in effective sea-level rise that is one-hundred times faster than the global average rate. In coming years, space-based and airborne instruments will be critical in providing near-real-time monitoring to facilitate management decisions in sinking deltas. However, ground-based observations continue to be necessary for generating complete measurements of surface-elevation change. Numerical modeling should seek to simulate couplings between subsidence processes for greater predictive power.

  1. Integration of isothermal amplification methods in microfluidic devices: Recent advances.

    PubMed

    Giuffrida, Maria Chiara; Spoto, Giuseppe

    2017-04-15

The integration of nucleic acid detection assays in microfluidic devices represents a highly promising approach for the development of convenient, cheap and efficient diagnostic tools for clinical, food safety and environmental monitoring applications. Such tools are expected to operate at the point-of-care and in resource-limited settings. The amplification of the target nucleic acid sequence represents a key step in the development of sensitive detection protocols. The integration in microfluidic devices of the most popular technology for nucleic acid amplification, the polymerase chain reaction (PCR), is significantly limited by the thermal cycling needed to amplify the target sequence. This review provides an overview of recent advances in the integration of isothermal amplification methods in microfluidic devices. Isothermal methods, which operate at constant temperature, have emerged as a promising alternative to PCR and greatly simplify the implementation of amplification methods in point-of-care diagnostic devices and in devices to be used in resource-limited settings. The possibilities offered by isothermal methods for digital droplet amplification are also discussed.

  2. Advances in nondestructive evaluation methods for inspection of refractory concretes

    SciTech Connect

    Ellingson, W. A.

    1980-01-01

Refractory concrete linings are essential to protect steel pressure boundaries from high-temperature, aggressive erosive/corrosive environments. Castable refractory concretes have been gaining acceptance as information about their performance increases. Economic factors, however, have begun to impose high demands on the reliability of refractory materials. Advanced nondestructive evaluation methods are being developed to assist the refractory user. Radiographic techniques, thermography, acoustic-emission detection, and interferometry have been shown to yield information on the structural status of refractory concrete. Methods using 60Co radiation sources are capable of yielding measurements of refractory wear rate as well as images of cracks and/or voids in pre- and post-fired refractory linings up to 60 cm thick. Thermographic (infrared) images serve as a qualitative indicator of refractory spalling, but quantitative measurements are difficult to obtain from surface-temperature mapping. Acoustic emission has been shown to be a qualitative indicator of thermomechanical degradation of thick panels of 50 and 95% Al2O3 during initial heating and cooling at rates of 100 to 220°C/h. Laser interferometry methods have been shown to be capable of complete mapping of refractory lining thickness. This paper presents results obtained from laboratory and field applications of these methods in petrochemical, steel, and coal-conversion plants.

  3. Advanced numerical methods in mesh generation and mesh adaptation

    SciTech Connect

    Lipnikov, Konstantine; Danilov, A; Vassilevski, Y; Agonzal, A

    2010-01-01

Numerical solution of partial differential equations requires appropriate meshes, efficient solvers, and robust and reliable error estimates. Generation of high-quality meshes for complex engineering models is a non-trivial task. This task is made more difficult when the mesh has to be adapted to a problem solution. This article is focused on a synergistic approach to mesh generation and mesh adaptation, in which the best properties of various mesh generation methods are combined to efficiently build simplicial meshes. First, the advancing front technique (AFT) is combined with incremental Delaunay triangulation (DT) to build an initial mesh. Second, the metric-based mesh adaptation (MBA) method is employed to improve the quality of the generated mesh and/or to adapt it to a problem solution. We demonstrate with numerical experiments that the combination of all three methods is required for robust meshing of complex engineering models. The key to successful mesh generation is the high quality of the triangles in the initial front. We use a black-box technique to improve surface meshes exported from an unattainable CAD system. The initial surface mesh is refined into a shape-regular triangulation which approximates the boundary with the same accuracy as the CAD mesh. The DT method adds robustness to the AFT. The resulting mesh is topologically correct but may contain a few slivers. The MBA uses seven local operations to modify the mesh topology and significantly improves the mesh quality. The MBA method is also used to adapt the mesh to a problem solution in order to minimize the computational resources required for solving the problem. The MBA has a solid theoretical background. In the first two experiments, we consider the convection-diffusion and elasticity problems. We demonstrate the optimal reduction rate of the discretization error on a sequence of adaptive, strongly anisotropic meshes. The key element of the MBA method is construction of a tensor metric from hierarchical edge

  4. Exploration of Advanced Probabilistic and Stochastic Design Methods

    NASA Technical Reports Server (NTRS)

    Mavris, Dimitri N.

    2003-01-01

The primary objective of the three-year research effort was to explore advanced, non-deterministic aerospace system design methods that may have relevance to designers and analysts. The research pursued emerging areas in design methodology and leveraged current fundamental research in the areas of design decision-making, probabilistic modeling, and optimization. The specific focus of the three-year investigation was oriented toward methods to identify and analyze emerging aircraft technologies in a consistent and complete manner, and to explore means to make optimal decisions based on this knowledge in a probabilistic environment. The research efforts were classified into two main areas. First, Task A of the grant had the objective of conducting research into the relative merits of possible approaches that account for both multiple criteria and uncertainty in design decision-making. In particular, in the final year of research, the focus was on comparing and contrasting the three methods researched: the Joint Probabilistic Decision-Making (JPDM) technique, Physical Programming, and Dempster-Shafer (D-S) theory. The next element of the research, contained in Task B, was focused on exploration of the Technology Identification, Evaluation, and Selection (TIES) methodology developed at ASDL, especially with regard to identification of research needs in the baseline method through implementation exercises. The end result of Task B was documentation of the evolution of the method over time and a technology transfer to the sponsor, such that an initial capability for execution could be obtained by the sponsor. Specifically, the result of year 3 efforts was the creation of a detailed tutorial for implementing the TIES method. Within the tutorial package, templates and detailed examples were created for learning and understanding the details of each step. For both research tasks, sample files and

  5. Advanced Flight Simulator: Utilization in A-10 Conversion and Air-to-Surface Attack Training.

    DTIC Science & Technology

    1981-01-01

blocks of instruction on the Advanced Simulator for Pilot Training (ASPT). The first... training, the transfer of training from the ASPT to the A-10 is nearly 100 percent. Therefore, in the early phases of A/S training, one simulator... ASPT) could be suitably modified, an alternative to initially dangerous and expensive aircraft training would exist which also offered considerable

  6. Advanced Simulator for Pilot Training: Design of Automated Performance Measurement System

    DTIC Science & Technology

    1980-08-01

pilot performance measurement; Advanced Simulator for Pilot Training (ASPT); aircrew performance... Simulator for Pilot Training (ASPT). This report documents that development effort and describes the current status of the measurement system. It was... To date, the following scenarios have been implemented on the ASPT: (a) Transition Tasks - Straight and Level, Airspeed Changes, Turns

  7. Review of wind simulation methods for horizontal-axis wind turbine analysis

    NASA Astrophysics Data System (ADS)

    Powell, D. C.; Connell, J. R.

    1986-06-01

    This report reviews three reports on simulation of winds for use in wind turbine fatigue analysis. The three reports are presumed to represent the state of the art. The Purdue and Sandia methods simulate correlated wind data at two points rotating as on the rotor of a horizontal-axis wind turbine. The PNL method at present simulates only one point, which rotates either as on a horizontal-axis wind turbine blade or as on a vertical-axis wind turbine blade. The spectra of simulated data are presented from the Sandia and PNL models under comparable input conditions, and the energy calculated in the rotational spikes in the spectra by the two models is compared. Although agreement between the two methods is not impressive at this time, improvement of the Sandia and PNL methods is recommended as the best way to advance the state of the art. Physical deficiencies of the models are cited in the report and technical recommendations are made for improvement. The report also reviews two general methods for simulating single-point data, called the harmonic method and the white noise method. The harmonic method, which is the basis of all three specific methods reviewed, is recommended over the white noise method in simulating winds for wind turbine analysis.
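Since all three specific methods build on the harmonic method, a minimal single-point sketch may help (the spectrum and parameters here are illustrative assumptions, not drawn from the Purdue, Sandia, or PNL models): each frequency bin contributes a cosine whose amplitude carries the bin's share of the target spectral variance and whose phase is drawn at random:

```python
import math
import random

def harmonic_wind(spectrum, dt=0.1, n=512, seed=7):
    """Harmonic-method synthesis of a zero-mean wind-speed fluctuation
    series. `spectrum(f)` is a one-sided power spectral density in
    (m/s)^2/Hz; each bin of width df contributes a cosine of variance
    spectrum(f)*df with a uniformly random phase."""
    rng = random.Random(seed)
    df = 1.0 / (n * dt)  # frequency resolution of the synthesized record
    terms = []
    for k in range(1, n // 2):
        f = k * df
        amp = math.sqrt(2.0 * spectrum(f) * df)  # bin variance -> amplitude
        phase = rng.uniform(0.0, 2.0 * math.pi)
        terms.append((2.0 * math.pi * f, amp, phase))
    return [sum(a * math.cos(w * (i * dt) + p) for w, a, p in terms)
            for i in range(n)]
```

Rotational sampling, as in the two-point methods, would evaluate such a field along the moving blade coordinate rather than at a fixed point.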

  8. Interactive methods for exploring particle simulation data

    SciTech Connect

    Co, Christopher S.; Friedman, Alex; Grote, David P.; Vay, Jean-Luc; Bethel, E. Wes; Joy, Kenneth I.

    2004-05-01

    In this work, we visualize high-dimensional particle simulation data using a suite of scatter plot-based visualizations coupled with interactive selection tools. We use traditional 2D and 3D projection scatter plots as well as a novel oriented disk rendering style to convey various information about the data. Interactive selection tools allow physicists to manually classify ''interesting'' sets of particles that are highlighted across multiple, linked views of the data. The power of our application is the ability to correspond new visual representations of the simulation data with traditional, well understood visualizations. This approach supports the interactive exploration of the high-dimensional space while promoting discovery of new particle behavior.
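The brushing-and-linking mechanism described above can be sketched compactly. The code below is an illustrative reconstruction (names and data are invented, not from the authors' application): each view contributes a boolean brush mask over one coordinate of the high-dimensional particles, and a particle stays highlighted only if every linked view selects it:

```python
def brush(points, axis, lo, hi):
    """Return a selection mask: True where the given coordinate of a
    high-dimensional particle lies inside the brushed interval."""
    return [lo <= p[axis] <= hi for p in points]

def linked_and(*masks):
    """Combine brushes from several linked views: a particle remains
    highlighted only if it is selected in every view."""
    return [all(flags) for flags in zip(*masks)]
```

For example, brushing x in a spatial scatter plot and vx in a phase-space plot yields one combined highlight set shown consistently across all views.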

  9. Novel Methods for Electromagnetic Simulation and Design

    DTIC Science & Technology

    2016-08-03

perfectly conducting half-space, for the simulation of layered and microstructured metamaterials, and for the analysis of time-domain integral... Lorenz-Mie-Debye formalism for the Maxwell equations to the time domain. We showed that the problem of scattering from a perfectly conducting

  10. Advanced Motion Compensation Methods for Intravital Optical Microscopy

    PubMed Central

    Vinegoni, Claudio; Lee, Sungon; Feruglio, Paolo Fumene; Weissleder, Ralph

    2013-01-01

Intravital microscopy has emerged in the recent decade as an indispensable imaging modality for the study of the micro-dynamics of biological processes in live animals. Technical advancements in imaging techniques and hardware components, combined with the development of novel targeted probes and new mouse models, have enabled us to address long-standing questions in several areas of biology such as oncology, cell biology, immunology and neuroscience. As instrument resolution has increased, physiological motion has become a major obstacle that prevents imaging live animals at resolutions analogous to the ones obtained in vitro. Motion compensation techniques aim at reducing this gap and can effectively increase the in vivo resolution. This paper provides a technical review of some of the latest developments in motion compensation methods, providing organ-specific solutions. PMID:24273405

  11. Computational methods of the Advanced Fluid Dynamics Model

    SciTech Connect

    Bohl, W.R.; Wilhelm, D.; Parker, F.R.; Berthier, J.; Maudlin, P.J.; Schmuck, P.; Goutagny, L.; Ichikawa, S.; Ninokata, H.; Luck, L.B.

    1987-01-01

    To more accurately treat severe accidents in fast reactors, a program has been set up to investigate new computational models and approaches. The product of this effort is a computer code, the Advanced Fluid Dynamics Model (AFDM). This paper describes some of the basic features of the numerical algorithm used in AFDM. Aspects receiving particular emphasis are the fractional-step method of time integration, the semi-implicit pressure iteration, the virtual mass inertial terms, the use of three velocity fields, higher order differencing, convection of interfacial area with source and sink terms, multicomponent diffusion processes in heat and mass transfer, the SESAME equation of state, and vectorized programming. A calculated comparison with an isothermal tetralin/ammonia experiment is performed. We conclude that significant improvements are possible in reliably calculating the progression of severe accidents with further development.

  12. Numerical Evaluation of Fluid Mixing Phenomena in Boiling Water Reactor Using Advanced Interface Tracking Method

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Takase, Kazuyuki

Thermal-hydraulic design of the current boiling water reactor (BWR) is performed with subchannel analysis codes that incorporate correlations based on empirical results, including actual-size tests. For the core of the Innovative Water Reactor for Flexible Fuel Cycle (FLWR), an actual-size test of an embodiment of its design would therefore be required to confirm or modify such correlations. In this situation, development of a method that enables the thermal-hydraulic design of nuclear reactors without these actual-size tests is desired, because such tests take a long time and entail great cost. For this reason, we developed an advanced thermal-hydraulic design method for FLWRs using innovative two-phase flow simulation technology. In this study, a detailed two-phase flow simulation code using an advanced interface tracking method, TPFIT, was developed to calculate the detailed information of the two-phase flow. In this paper, we first verify the TPFIT code by comparing it with existing two-channel air-water mixing experimental results. Second, the TPFIT code is applied to simulation of steam-water two-phase flow in a model of two subchannels of current BWR and FLWR rod bundles. Fluid mixing was observed at the gap between the subchannels. The existing two-phase flow correlation for fluid mixing is evaluated using detailed numerical simulation data. The data indicate that the pressure difference between fluid channels is responsible for the fluid mixing, and thus the effects of the time-averaged pressure difference and its fluctuations must be incorporated in the two-phase flow correlation for fluid mixing. When the inlet quality ratio of the subchannels is relatively large, the evaluation precision of the existing two-phase flow correlations for fluid mixing is found to be relatively low.

  13. [Research advances in simulating land water-carbon coupling].

    PubMed

    Liu, Ning; Sun, Peng-Sen; Liu, Shi-Rong

    2012-11-01

The increasing demand for adaptive management of land, forest, and water resources under the background of global change and the water resources crisis has promoted the comprehensive study of coupled ecosystem water and carbon cycles and their restrictive relations. Constructing water-carbon coupling models and approaching the ecosystem water-carbon balance and its interactive response mechanisms under climate change at multiple spatiotemporal scales is nowadays a major concern. After reviewing the coupling relationships of water and carbon at various scales, this paper explores the implications and estimation methods of the key processes and related parameters of water-carbon coupling, the construction of large-scale evapotranspiration models based on remote sensing (RS), and the importance of such models in water-carbon coupling research. The application of assimilated multivariate data in water-carbon coupling research under future climate change scenarios is also discussed.

  14. Technology, Pedagogy, and Epistemology: Opportunities and Challenges of Using Computer Modeling and Simulation Tools in Elementary Science Methods

    ERIC Educational Resources Information Center

    Schwarz, Christina V.; Meyer, Jason; Sharma, Ajay

    2007-01-01

    This study infused computer modeling and simulation tools in a 1-semester undergraduate elementary science methods course to advance preservice teachers' understandings of computer software use in science teaching and to help them learn important aspects of pedagogy and epistemology. Preservice teachers used computer modeling and simulation tools…

  15. Numerical modeling of spray combustion with an advanced VOF method

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Shang, Huan-Min; Shih, Ming-Hsin; Liaw, Paul

    1995-01-01

    This paper summarizes the technical development and validation of a multiphase computational fluid dynamics (CFD) numerical method using the volume-of-fluid (VOF) model and a Lagrangian tracking model which can be employed to analyze general multiphase flow problems with free surface mechanism. The gas-liquid interface mass, momentum and energy conservation relationships are modeled by continuum surface mechanisms. A new solution method is developed such that the present VOF model can be applied for all-speed flow regimes. The objectives of the present study are to develop and verify the fractional volume-of-fluid cell partitioning approach into a predictor-corrector algorithm and to demonstrate the effectiveness of the present approach by simulating benchmark problems including laminar impinging jets, shear coaxial jet atomization and shear coaxial spray combustion flows.

  16. WinSRFR: Current Advances in Software for Surface Irrigation Simulation and Analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Significant advances have been made over the last decade in the development of software for surface irrigation analysis. WinSRFR is an integrated tool that combines unsteady flow simulation with tools for system evaluation/parameter estimation, system design, and for operational optimization. Ongoi...

  17. Battery Performance of ADEOS (Advanced Earth Observing Satellite) and Ground Simulation Test Results

    NASA Technical Reports Server (NTRS)

    Koga, K.; Suzuki, Y.; Kuwajima, S.; Kusawake, H.

    1997-01-01

The Advanced Earth Observing Satellite (ADEOS) was developed with the aim of establishing platform technology for future spacecraft and inter-orbit communication technology for the transmission of earth observation data. ADEOS uses 5 batteries, each consisting of two packs. This paper describes, using graphs and tables, the ground simulation tests that were carried out to determine the performance of the ADEOS batteries, and their results.

  18. ADVANCED UTILITY SIMULATION MODEL, DESCRIPTION OF THE NATIONAL LOOP (VERSION 3.0)

    EPA Science Inventory

    The report is one of 11 in a series describing the initial development of the Advanced Utility Simulation Model (AUSM) by the Universities Research Group on Energy (URGE) and its continued development by the Science Applications International Corporation (SAIC) research team. The...

  19. Advances in Time Estimation Methods for Molecular Data.

    PubMed

    Kumar, Sudhir; Hedges, S Blair

    2016-04-01

Molecular dating has become central to placing a temporal dimension on the tree of life. Methods for estimating divergence times have been developed for over 50 years, beginning with the proposal of the molecular clock in 1962. We categorize the chronological development of these methods into four generations based on the timing of their origin. In the first-generation approaches (1960s-1980s), a strict molecular clock was assumed to date divergences. In the second-generation approaches (1990s), the equality of evolutionary rates between species was first tested and then a strict molecular clock was applied to estimate divergence times. The third-generation approaches (since ∼2000) account for differences in evolutionary rates across the tree by using a statistical model, obviating the need to assume a clock or to test the equality of evolutionary rates among species. Bayesian methods in the third generation require a specific or uniform prior on the speciation process and enable the inclusion of uncertainty in clock calibrations. The fourth-generation approaches (since 2012) allow rates to vary from branch to branch, but do not need prior selection of a statistical model to describe the rate variation or specification of a speciation model. With high accuracy, comparable to Bayesian approaches, and speeds that are orders of magnitude faster, fourth-generation methods are able to produce reliable timetrees of thousands of species using genome-scale data. We found that early time estimates from second-generation studies are similar to those of third- and fourth-generation studies, indicating that methodological advances have not fundamentally altered the timetree of life, but rather have facilitated time estimation by enabling the inclusion of more species.
Nonetheless, we feel an urgent need for testing the accuracy and precision of third and fourth generation methods, including their robustness to misspecification of priors in the analysis of large phylogenies and data
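The first-generation strict-clock approach described above reduces to elementary arithmetic. The sketch below uses invented numbers purely for illustration: a split dated independently (say 0.12 substitutions/site between taxa that diverged 60 Ma) calibrates the rate, which then dates other splits:

```python
def clock_rate(distance, calibration_time):
    """Strict-clock calibration: the pairwise genetic distance between
    two taxa whose divergence time is known (e.g., from fossils) is
    split over the two lineages, giving a substitution rate per lineage."""
    return distance / (2.0 * calibration_time)

def divergence_time(distance, rate):
    """Date an uncalibrated split under the same strict clock."""
    return distance / (2.0 * rate)
```

Third- and fourth-generation methods replace the single shared rate with branch-specific rates, which is why they need either an explicit rate-variation model or, in the fourth generation, a model-free relative-rate framework.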

  20. A graphical workstation based part-task flight simulator for preliminary rapid evaluation of advanced displays

    NASA Technical Reports Server (NTRS)

    Wanke, Craig; Kuchar, James; Hahn, Edward; Pritchett, Amy; Hansman, R. J.

    1992-01-01

    Advances in avionics and display technology are significantly changing the cockpit environment in current transport aircraft. The MIT Aeronautical Systems Lab (ASL) has developed a part-task flight simulator specifically to study the effects of these new technologies on flight crew situational awareness and performance. The simulator is based on a commercially-available graphics workstation, and can be rapidly reconfigured to meet the varying demands of experimental studies. The simulator has been successfully used to evaluate graphical microburst alerting displays, electronic instrument approach plates, terrain awareness and alerting displays, and ATC routing amendment delivery through digital datalinks.

  1. A graphical workstation based part-task flight simulator for preliminary rapid evaluation of advanced displays

    NASA Technical Reports Server (NTRS)

    Wanke, Craig; Kuchar, James; Hahn, Edward; Pritchett, A.; Hansman, R. John

    1994-01-01

    Advances in avionics and display technology are significantly changing the cockpit environment in current transport aircraft. The MIT Aeronautical Systems Lab (ASL) developed a part-task flight simulator specifically to study the effects of these new technologies on flight crew situational awareness and performance. The simulator is based on a commercially-available graphics workstation, and can be rapidly reconfigured to meet the varying demands of experimental studies. The simulator was successfully used to evaluate graphical microburst alerting displays, electronic instrument approach plates, terrain awareness and alerting displays, and ATC routing amendment delivery through digital datalinks.

  2. Interval sampling methods and measurement error: a computer simulation.

    PubMed

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
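    The three methods compared in this study are straightforward to reproduce. The sketch below (plain Python with invented parameters, not the authors' program) places random events on a timeline and scores each observation interval with momentary time sampling, partial-interval recording, and whole-interval recording:

```python
import random

def simulate(obs_len=600, n_events=20, event_dur=5, interval=10, seed=1):
    """Place events on a timeline, then score each interval three ways."""
    rng = random.Random(seed)
    # Mark each second of the observation period as event (True) or not.
    occupied = [False] * obs_len
    for _ in range(n_events):
        start = rng.randrange(obs_len - event_dur)
        for t in range(start, start + event_dur):
            occupied[t] = True

    true_prev = sum(occupied) / obs_len
    mts = pir = wir = 0
    n_intervals = obs_len // interval
    for i in range(n_intervals):
        chunk = occupied[i * interval:(i + 1) * interval]
        mts += chunk[-1]   # momentary: only the final instant of the interval counts
        pir += any(chunk)  # partial: any occurrence scores the interval
        wir += all(chunk)  # whole: the event must span the entire interval
    return true_prev, mts / n_intervals, pir / n_intervals, wir / n_intervals
```

    By construction, partial-interval recording can only overestimate true prevalence and whole-interval recording can only underestimate it, while momentary time sampling falls between the two, consistent with the inherent strengths and weaknesses noted above.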

  3. The frontal method in hydrodynamics simulations

    USGS Publications Warehouse

    Walters, R.A.

    1980-01-01

    The frontal solution method has proven to be an effective means of solving the matrix equations that result from applying the finite element method to a variety of problems. In this study, several versions of the frontal method were compared for efficiency on several hydrodynamics problems. Three basic modifications were shown to be of value: (1) eliminating equations with boundary conditions beforehand; (2) modifying the pivoting procedures to allow dynamic management of the equation size; and (3) storing the eliminated equations in a vector. These modifications are sufficiently general to be applied to other classes of problems.
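    The first modification, eliminating boundary-condition equations beforehand, amounts to folding the known nodal values into the right-hand side so the solver never sees those equations. The dense-matrix sketch below is illustrative only (a real frontal solver works element by element on a moving front):

```python
def eliminate_dirichlet(A, b, bc):
    """Fold known (Dirichlet) unknowns into the right-hand side before
    elimination, shrinking the system the solver must factor.
    A: dense matrix as a list of rows; b: RHS vector; bc: {index: value}."""
    keep = [i for i in range(len(b)) if i not in bc]
    A2 = [[A[i][j] for j in keep] for i in keep]
    b2 = [b[i] - sum(A[i][j] * v for j, v in bc.items()) for i in keep]
    return A2, b2, keep
```

    For a 3x3 system with the first unknown prescribed, this returns the reduced 2x2 system in the remaining unknowns, with the prescribed value moved to the right-hand side.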

  4. [Contemporary methods of treatment in local advanced prostate cancer].

    PubMed

    Brzozowska, Anna; Mazurkiewicz, Maria; Starosławska, Elzbieta; Stasiewicz, Dominika; Mocarska, Agnieszka; Burdan, Franciszek

    2012-10-01

    Prostate cancer is one of the most common cancers among men, and its frequency increases with age. Thanks to widespread screening by determination of prostate-specific antigen (PSA), ultrasonography including transrectal ultrasound (TRUS), computed tomography, magnetic resonance imaging, and especially greater public awareness, the number of patients diagnosed with disease of low local advancement is increasing. The basic treatment in such cases is still surgical removal of the prostate with the seminal vesicles, or radiotherapy, delivered as external-beam therapy (IMRT, VMAT) or brachytherapy (I-125, Ir-192, Pd-103). In patients at higher risk of progression, radiotherapy may be combined with hormone therapy (total androgen blockade: an LH-RH analogue plus an antiandrogen). Despite numerous clinical studies, the optimal sequence of these methods has not been established, nor has their relative effectiveness been clearly determined. The general rule in treating patients with prostate cancer remains individual selection of therapy according to the patient's age, general condition, and, especially, personal preferences. In elderly patients and patients at low risk of progression, direct observation may be considered, including serial PSA determination, clinical rectal examination, TRUS, MRI of the pelvis, or whole-skeleton scintigraphy.

  5. Simulation reduction using the Taguchi method

    NASA Technical Reports Server (NTRS)

    Mistree, Farrokh; Lautenschlager, Ume; Erikstad, Stein Owe; Allen, Janet K.

    1993-01-01

    A large amount of engineering effort is consumed in conducting experiments to obtain information needed for making design decisions. Efficiency in generating such information is the key to meeting market windows, keeping development and manufacturing costs low, and producing high-quality products. The principal focus of this project is to develop and implement applications of Taguchi's quality engineering techniques. In particular, we show how these techniques are applied to reduce the number of experiments for trajectory simulation of the LifeSat space vehicle. Orthogonal arrays are used to study many parameters simultaneously with a minimum of time and resources. Taguchi's signal-to-noise ratio is employed to measure quality. A compromise Decision Support Problem and Robust Design are applied to demonstrate how quality is designed into a product in the early stages of design.
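    The orthogonal-array idea can be sketched in a few lines. The factors, levels, response function, and noise values below are invented for illustration (they are not the LifeSat parameters): an L4(2^3) array covers three two-level factors in four balanced runs, and Taguchi's smaller-is-better signal-to-noise ratio scores each run across noise replicates:

```python
import math

# L4 (2^3) orthogonal array: 4 runs cover 3 two-level factors,
# with every pair of levels appearing equally often.
L4 = [(0, 0, 0),
      (0, 1, 1),
      (1, 0, 1),
      (1, 1, 0)]

def sn_smaller_is_better(ys):
    """Taguchi signal-to-noise ratio for a smaller-is-better response."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

def landing_error(a, b, c, noise):
    """Hypothetical response: dispersion of a simulated trajectory."""
    return 10.0 - 3.0 * a - 2.0 * b + 1.0 * c + noise

# Two noise replicates per run instead of a full factorial sweep.
results = []
for a, b, c in L4:
    ys = [landing_error(a, b, c, n) for n in (-0.5, 0.5)]
    results.append(sn_smaller_is_better(ys))
```

    The main effect of a factor is the difference between its average S/N at the two levels, e.g. (results[2] + results[3]) / 2 versus (results[0] + results[1]) / 2 for the first factor; four runs thus rank three factors that a full factorial would need eight runs (times replicates) to cover.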

  6. An innovative exercise method to simulate orbital EVA work - Applications to PLSS automatic controls

    NASA Technical Reports Server (NTRS)

    Lantz, Renee; Vykukal, H.; Webbon, Bruce

    1987-01-01

    An exercise method has been proposed which may satisfy the current need for a laboratory simulation representative of muscular, cardiovascular, respiratory, and thermoregulatory responses to work during orbital extravehicular activity (EVA). The simulation incorporates arm crank ergometry with a unique body support mechanism that allows all body position stabilization forces to be reacted at the feet. By instituting this exercise method in laboratory experimentation, an advanced portable life support system (PLSS) thermoregulatory control system can be designed to more accurately reflect the specific work requirements of orbital EVA.

  7. Development of a VOR/DME model for an advanced concepts simulator

    NASA Technical Reports Server (NTRS)

    Steinmetz, G. G.; Bowles, R. L.

    1984-01-01

    The report presents the definition of a VOR/DME airborne and ground-system simulation model. This description was drafted in response to a need in the creation of an advanced concepts simulation in which flight station designs for the 1980s era can be postulated and examined. The simulation model described herein provides a reasonable representation of VOR/DME stations in the continental United States, including area coverage by type and noise errors. The level of detail in which the model has been cast provides the interested researcher with a moderate-fidelity simulation tool for conducting research and evaluation of navigation algorithms. Assumptions made within the development are listed and place certain responsibilities (data bases, communication with other simulation modules, uniform round earth, etc.) upon the researcher.

  8. Advanced manned space flight simulation and training: An investigation of simulation host computer system concepts

    NASA Technical Reports Server (NTRS)

    Montag, Bruce C.; Bishop, Alfred M.; Redfield, Joe B.

    1989-01-01

    The findings of a preliminary investigation by Southwest Research Institute (SwRI) in simulation host computer concepts is presented. It is designed to aid NASA in evaluating simulation technologies for use in spaceflight training. The focus of the investigation is on the next generation of space simulation systems that will be utilized in training personnel for Space Station Freedom operations. SwRI concludes that NASA should pursue a distributed simulation host computer system architecture for the Space Station Training Facility (SSTF) rather than a centralized mainframe based arrangement. A distributed system offers many advantages and is seen by SwRI as the only architecture that will allow NASA to achieve established functional goals and operational objectives over the life of the Space Station Freedom program. Several distributed, parallel computing systems are available today that offer real-time capabilities for time critical, man-in-the-loop simulation. These systems are flexible in terms of connectivity and configurability, and are easily scaled to meet increasing demands for more computing power.

  9. Grid generation and adaptation for the Direct Simulation Monte Carlo Method. [for complex flows past wedges and cones

    NASA Technical Reports Server (NTRS)

    Olynick, David P.; Hassan, H. A.; Moss, James N.

    1988-01-01

    A grid generation and adaptation procedure based on the method of transfinite interpolation is incorporated into the Direct Simulation Monte Carlo Method of Bird. In addition, time is advanced based on a local criterion. The resulting procedure is used to calculate steady flows past wedges and cones. Five chemical species are considered. In general, the modifications result in a reduced computational effort. Moreover, preliminary results suggest that the simulation method is time step dependent if requirements on cell sizes are not met.
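    The transfinite interpolation underlying such a grid generator can be sketched with the standard Coons-patch formula: the four boundary curves are blended linearly and the doubly counted corner contributions subtracted. The minimal version below is illustrative, not the procedure of the paper:

```python
def tfi_grid(bottom, top, left, right):
    """Transfinite (Coons) interpolation of interior grid points from four
    boundary curves, each a list of (x, y) points. bottom/top run in the
    u direction, left/right in the v direction; corners must coincide
    (left[0] == bottom[0], right[-1] == top[-1], etc.)."""
    ni, nj = len(bottom), len(left)
    grid = [[None] * nj for _ in range(ni)]
    for i in range(ni):
        u = i / (ni - 1)
        for j in range(nj):
            v = j / (nj - 1)
            pt = []
            for k in range(2):  # x and y components
                val = ((1 - v) * bottom[i][k] + v * top[i][k]
                       + (1 - u) * left[j][k] + u * right[j][k]
                       - ((1 - u) * (1 - v) * bottom[0][k]
                          + u * (1 - v) * bottom[-1][k]
                          + (1 - u) * v * top[0][k]
                          + u * v * top[-1][k]))
                pt.append(val)
            grid[i][j] = tuple(pt)
    return grid
```

    For straight unit-square boundaries the interior points reduce to the uniform Cartesian grid, which is a convenient sanity check; curved boundaries (a wedge or cone surface) produce a body-fitted mesh by the same formula.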

  10. Comparative Assessment of Advanced Gas Hydrate Production Methods

    SciTech Connect

    M. D. White; B. P. McGrail; S. K. Wurstner

    2009-06-30

    Displacing natural gas and petroleum with carbon dioxide is a proven technology for producing conventional geologic hydrocarbon reservoirs, and producing additional yields from abandoned or partially produced petroleum reservoirs. Extending this concept to natural gas hydrate production offers the potential to enhance gas hydrate recovery with concomitant permanent geologic sequestration. Numerical simulation was used to assess a suite of carbon dioxide injection techniques for producing gas hydrates from a variety of geologic deposit types. Secondary hydrate formation was found to inhibit contact of the injected CO{sub 2} regardless of injectate phase state, thus diminishing the exchange rate due to pore clogging and hydrate zone bypass of the injected fluids. Additional work is needed to develop methods of artificially introducing high-permeability pathways in gas hydrate zones if injection of CO{sub 2} in either gas, liquid, or micro-emulsion form is to be more effective in enhancing gas hydrate production rates.

  11. The Osseus platform: a prototype for advanced web-based distributed simulation

    NASA Astrophysics Data System (ADS)

    Franceschini, Derrick; Riecken, Mark

    2016-05-01

    Recent technological advances in web-based distributed computing and database technology have made possible a deeper and more transparent integration of some modeling and simulation applications. Despite these advances towards true integration of capabilities, disparate systems, architectures, and protocols will remain in the inventory for some time to come. These disparities present interoperability challenges for distributed modeling and simulation whether the application is training, experimentation, or analysis. Traditional approaches call for building gateways to bridge between disparate protocols and retaining interoperability specialists. Challenges in reconciling data models also persist. These challenges and their traditional mitigation approaches directly contribute to higher costs, schedule delays, and frustration for the end users. Osseus is a prototype software platform originally funded as a research project by the Defense Modeling & Simulation Coordination Office (DMSCO) to examine interoperability alternatives using modern, web-based technology and taking inspiration from the commercial sector. Osseus provides tools and services for nonexpert users to connect simulations, targeting the time and skillset needed to successfully connect disparate systems. The Osseus platform presents a web services interface to allow simulation applications to exchange data using modern techniques efficiently over Local or Wide Area Networks. Further, it provides Service Oriented Architecture capabilities such that finer granularity components such as individual models can contribute to simulation with minimal effort.

  12. The role of numerical simulation for the development of an advanced HIFU system

    NASA Astrophysics Data System (ADS)

    Okita, Kohei; Narumi, Ryuta; Azuma, Takashi; Takagi, Shu; Matumoto, Yoichiro

    2014-10-01

    High-intensity focused ultrasound (HIFU) has been used clinically and is undergoing clinical trials to treat various diseases. An advanced HIFU system employs ultrasound techniques for guidance during HIFU treatment in place of the magnetic resonance imaging used in current HIFU systems. HIFU beam imaging for monitoring the beam and localized motion imaging for validating tissue treatment are briefly introduced as the real-time ultrasound monitoring techniques. Numerical simulations have a great impact on the development of real-time ultrasound monitoring, as well as on the improvement of the safety and efficacy of treatment in advanced HIFU systems. A HIFU simulator was developed to reproduce ultrasound propagation through the body, taking the elasticity of tissue into account, and was validated by comparison with in vitro experiments in which the ultrasound emitted from the phased-array transducer propagates through an acrylic plate acting as a bone phantom. The defocusing and distortion of the ultrasound propagating through the acrylic plate in the simulation agree quantitatively with the experimental results. The HIFU simulator therefore accurately reproduces ultrasound propagation through a medium whose shape and physical properties are well known. In addition, it was experimentally confirmed that simulation-assisted focus control of the phased-array transducer enables efficient assignment of the focus to the target. Simulation-assisted focus control can thus contribute to transducer design and treatment planning.

  13. Methods for simulation-based analysis of fluid-structure interaction.

    SciTech Connect

    Barone, Matthew Franklin; Payne, Jeffrey L.

    2005-10-01

    Methods for analysis of fluid-structure interaction using high fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations points to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.
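    The POD step of the ROM approach can be illustrated with the method of snapshots. The sketch below (plain Python, illustrative data only) forms the small snapshot correlation matrix and extracts the leading POD mode by power iteration; a Galerkin ROM would then project the governing equations onto a few such modes:

```python
import math

def dominant_pod_mode(snapshots, iters=200):
    """Method of snapshots: power iteration on the (small) snapshot
    correlation matrix C[i][j] = <x_i, x_j> yields the weights of the
    leading POD mode, which is then assembled from the snapshots."""
    m = len(snapshots)
    C = [[sum(a * b for a, b in zip(snapshots[i], snapshots[j]))
          for j in range(m)] for i in range(m)]
    w = [1.0] * m
    for _ in range(iters):
        w = [sum(C[i][j] * w[j] for j in range(m)) for i in range(m)]
        norm = math.sqrt(sum(v * v for v in w))
        w = [v / norm for v in w]
    # Leading mode = weighted combination of snapshots, normalized.
    mode = [sum(w[k] * snapshots[k][i] for k in range(m))
            for i in range(len(snapshots[0]))]
    norm = math.sqrt(sum(v * v for v in mode))
    return [v / norm for v in mode]
```

    The correlation matrix is only m x m for m snapshots, which is what makes POD affordable even when each snapshot is a large flow field; production codes would use a full SVD rather than power iteration.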

  14. Striking against bioterrorism with advanced proteomics and reference methods.

    PubMed

    Armengaud, Jean

    2017-01-01

    The intentional use by terrorists of biological toxins as weapons has been of great concern for many years. Among the numerous toxins produced by plants, animals, algae, fungi, and bacteria, ricin is one of the most scrutinized by the media because it has already been used in biocrimes and acts of bioterrorism. Improving the analytical toolbox that national authorities use to monitor these potential bioweapons simultaneously is of the utmost interest. MS/MS allows their absolute quantitation and offers advantageous sensitivity, discriminative power, multiplexing possibilities, and speed. In this issue of Proteomics, Gilquin et al. (Proteomics 2017, 17, 1600357) present a robust multiplex assay to quantify a set of eight toxins in the presence of a complex food matrix. This MS/MS reference method is based on scheduled SRM and high-quality standards consisting of isotopically labeled versions of these toxins. Their results demonstrate robust reliability, based on rather loose scheduling of SRM transitions, and good sensitivity for the eight toxins, below their oral median lethal doses. In the face of an increased threat from terrorism, reference assays based on advanced proteomics and high-quality companion toxin standards provide a reliable and firm response.

  15. Underwater Photosynthesis of Submerged Plants – Recent Advances and Methods

    PubMed Central

    Pedersen, Ole; Colmer, Timothy D.; Sand-Jensen, Kaj

    2013-01-01

    We describe the general background and recent advances in research on underwater photosynthesis of leaf segments, whole communities, and plant-dominated aquatic ecosystems, and present contemporary methods tailor-made to quantify photosynthesis and carbon fixation under water. The majority of studies of aquatic photosynthesis have been carried out with detached leaves or thalli, and this selectiveness influences the perception of the regulation of aquatic photosynthesis. We thus recommend assessing the influence of inorganic carbon and temperature on natural aquatic communities of variable density, in addition to studying detached leaves, in the scenarios of rising CO2 and temperature. Moreover, a growing number of researchers are interested in the tolerance of terrestrial plants during flooding, as torrential rains sometimes result in overland floods that inundate terrestrial plants. We propose studies to elucidate the importance of leaf acclimation of terrestrial plants in facilitating gas exchange and light utilization under water, as these acclimations influence underwater photosynthesis as well as internal aeration of plant tissues during submergence. PMID:23734154

  16. Advances in the analysis of iminocyclitols: Methods, sources and bioavailability.

    PubMed

    Amézqueta, Susana; Torres, Josep Lluís

    2016-05-01

    Iminocyclitols are chemically and metabolically stable, naturally occurring sugar mimetics. Their biological activities make them interesting and extremely promising as both drug leads and functional food ingredients. The first iminocyclitols were discovered using preparative isolation and purification methods followed by chemical characterization using nuclear magnetic resonance spectroscopy. In addition to this classical approach, gas and liquid chromatography coupled to mass spectrometry are increasingly used; they are highly sensitive techniques capable of detecting minute amounts of analytes in a broad spectrum of sources after only minimal sample preparation. These techniques have been applied to identify new iminocyclitols in plants, microorganisms and synthetic mixtures. The separation of iminocyclitol mixtures by chromatography is particularly difficult however, as the most commonly used matrices have very low selectivity for these highly hydrophilic structurally similar molecules. This review critically summarizes recent advances in the analysis of iminocyclitols from plant sources and findings regarding their quantification in dietary supplements and foodstuffs, as well as in biological fluids and organs, from bioavailability studies.

  17. Regenerative medicine: advances in new methods and technologies.

    PubMed

    Park, Dong-Hyuk; Eve, David J

    2009-11-01

    The articles published in the journal Cell Transplantation - The Regenerative Medicine Journal over the last two years reveal the recent and future cutting-edge research in the fields of regenerative and transplantation medicine. 437 articles were published from 2007 to 2008, a 17% increase compared to the 373 articles in 2006-2007. Neuroscience was still the most common section in both the number of articles and the percentage of all manuscripts published. The increasing interest and rapid advance in bioengineering technology is highlighted by tissue engineering and bioartificial organs being ranked second again. For a similar reason, the methods and new technologies section increased significantly compared to the last period. Articles focusing on the transplantation of stem cell lineages encompassed almost 20% of all articles published. By contrast, the non-stem cell transplantation group which is made up primarily of islet cells, followed by biomaterials and fetal neural tissue, etc. comprised less than 15%. Transplantation of cells pre-treated with medicine or gene transfection to prolong graft survival or promote differentiation into the needed phenotype, was prevalent in the transplantation articles regardless of the kind of cells used. Meanwhile, the majority of non-transplantation-based articles were related to new devices for various purposes, characterization of unknown cells, medicines, cell preparation and/or optimization for transplantation (e.g. isolation and culture), and disease pathology.

  18. Constraint methods that accelerate free-energy simulations of biomolecules

    PubMed Central

    MacCallum, Justin L.; Dill, Ken A.

    2015-01-01

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann’s law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions. PMID:26723628
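    A common concrete form of the spring-like restraints discussed here is a flat-bottomed harmonic penalty, and one principled way to handle noisy or combinatorially uncertain knowledge is to enforce only the best-satisfied fraction of the restraints at each step. The sketch below is a simplified, hypothetical rendering of both ideas, not the authors' exact scheme:

```python
def flat_bottom_restraint(r, r0, tol, k):
    """Flat-bottomed harmonic restraint on a distance r: zero penalty
    within tol of the reference r0, quadratic outside.
    Returns (energy, force along r)."""
    excess = abs(r - r0) - tol
    if excess <= 0.0:
        return 0.0, 0.0
    energy = 0.5 * k * excess * excess
    force = -k * excess * (1.0 if r > r0 else -1.0)
    return energy, force

def select_active_restraints(energies, fraction):
    """With uncertain restraints, enforce only the best-satisfied
    fraction each step; the rest contribute nothing for this step."""
    n_active = max(1, int(len(energies) * fraction))
    order = sorted(range(len(energies)), key=lambda i: energies[i])
    return set(order[:n_active])
```

    The flat bottom keeps correct restraints from distorting the physics, while selecting a subset lets the simulation discard the restraints that are currently contradicted by the structure rather than fighting them.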

  19. Constraint methods that accelerate free-energy simulations of biomolecules

    SciTech Connect

    Perez, Alberto; MacCallum, Justin L.; Coutsias, Evangelos A.; Dill, Ken A.

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann’s law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.

  20. Constraint methods that accelerate free-energy simulations of biomolecules.

    PubMed

    Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.

  1. Constraint methods that accelerate free-energy simulations of biomolecules

    NASA Astrophysics Data System (ADS)

    Perez, Alberto; MacCallum, Justin L.; Coutsias, Evangelos A.; Dill, Ken A.

    2015-12-01

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.

  2. FY05-FY06 Advanced Simulation and Computing Implementation Plan, Volume 2

    SciTech Connect

    Baron, A L

    2004-07-19

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program will require the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapon design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile life extension programs and the resolution of significant finding investigations (SFIs). This requires a balanced system of technical staff, hardware, simulation software, and computer science solutions.

  3. In-silico simulations of advanced drug delivery systems: what will the future offer?

    PubMed

    Siepmann, Juergen

    2013-09-15

    This commentary enlarges on some of the topics addressed in the Position Paper "Towards more effective advanced drug delivery systems" by Crommelin and Florence (2013). Inter alia, the role of mathematical modeling and computer-assisted device design is briefly addressed in the Position Paper. This emerging and particularly promising field is considered in more depth in this commentary. In fact, in-silico simulations have become of fundamental importance in numerous scientific and related domains, allowing for a better understanding of various phenomena and for facilitated device design. The development of novel prototypes of space shuttles, nuclear power plants and automobiles are just a few examples. In-silico simulations are nowadays also well established in the field of pharmacokinetics/pharmacodynamics (PK/PD) and have become an integral part of the discovery and development process of novel drug products. Since Takeru Higuchi published his seminal equation in 1961 the use of mathematical models for the analysis and optimization of drug delivery systems in vitro has also become more and more popular. However, applying in-silico simulations for facilitated optimization of advanced drug delivery systems is not yet common practice. One of the reasons is the gap between in vitro and in vivo (PK/PD) simulations. In the future it can be expected that this gap will be closed and that computer assisted device design will play a central role in the research on, and development of advanced drug delivery systems.
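    Higuchi's 1961 square-root-of-time law mentioned above illustrates how compact the earliest in-silico delivery models are. The sketch below implements the classical planar-matrix form Q(t) = sqrt(D * Cs * (2*C0 - Cs) * t); all parameter values used with it are illustrative:

```python
import math

def higuchi_release(D, c0, cs, t):
    """Higuchi (1961) square-root-of-time law: cumulative drug amount
    released per unit area from a planar matrix (valid while the initial
    loading c0 exceeds the solubility cs and undissolved drug remains).
    D: diffusivity, c0: initial concentration, cs: solubility, t: time."""
    return math.sqrt(D * cs * (2.0 * c0 - cs) * t)
```

    The signature behavior is that quadrupling the time doubles the cumulative release, a square-root profile that is routinely fitted to early-time release data.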

  4. Multi-pass Monte Carlo simulation method in nuclear transmutations.

    PubMed

    Mateescu, Liviu; Kadambi, N Prasad; Ravindra, Nuggehalli M

    2016-12-01

    Monte Carlo methods, in their direct brute-force simulation incarnation, give realistic results provided that the probabilities involved, be they geometrical or otherwise, remain constant for the duration of the simulation. However, there are physical setups in which the evolution of the simulation modifies the simulated system itself. Chief among such evolving simulated systems are activation/transmutation setups. That is, the simulation starts with a given set of probabilities, determined by the geometry of the system, its components, and the microscopic interaction cross-sections, but the relative weights of the components change along with the steps of the simulation. A natural remedy would be to adjust the probabilities after every step of the simulation. On the other hand, the physical system typically has a number of components of the order of Avogadro's number, usually 10^25 or 10^26 members. A simulation step changes the characteristics of just a few of these members, so a probability shifts by a quantity of order 1/10^25. Such a change cannot be accounted for within a simulation, because the simulation would then need at least 10^28 steps to have any significance, which is not feasible. On current computing devices a simulation of one million steps is comfortable, but a further order of magnitude becomes too big a stretch for the computing resources. We propose here a method of dealing with the changing probabilities that increases the precision. The method is intended as a fast approximating approach, and also as a simple introduction (for the benefit of students) to the highly branched subject of Monte Carlo simulations vis-à-vis nuclear reactors.
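    The multi-pass idea can be sketched as follows: within each pass the reaction probability is frozen, and only between passes is the inventory (and hence the probability) updated. The toy model below tracks a single first-order transmutation with invented parameters, not the paper's actual setup:

```python
import random

def multipass_transmutation(sigma_phi=0.01, passes=10,
                            samples_per_pass=10000, seed=7):
    """Multi-pass Monte Carlo sketch: the transmutation probability is
    frozen within each pass; between passes the remaining parent
    inventory is updated, approximating continuous depletion without
    adjusting probabilities after every single sample."""
    rng = random.Random(seed)
    remaining = 1.0  # fraction of the parent nuclide still present
    dt = 1.0         # arbitrary time units per pass
    for _ in range(passes):
        p = sigma_phi * dt  # reaction probability, frozen for this pass
        hits = sum(rng.random() < p for _ in range(samples_per_pass))
        remaining *= 1.0 - hits / samples_per_pass
    return remaining
```

    After ten passes the remaining fraction comes out close to the continuous-depletion value exp(-0.1), at a tiny fraction of the cost of updating probabilities after every sample.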

  5. A Flight Dynamic Simulation Program in Air-Path Axes Using ACSL (Advanced Continuous Simulation Language).

    DTIC Science & Technology

    1986-06-01

    A flight dynamic simulation program in air-path axes using ACSL (Advanced Continuous Simulation Language). Aeronautical Research Laboratories, P.O. Box 4331, Melbourne, Victoria 3001, Australia. (The remainder of this scanned abstract is illegible OCR; the legible fragments indicate that the DERIVATIVE section contains the six-degree-of-freedom flight model and the trimming of the aircraft, and refer to time-dependent results.)

  6. Patient Simulation to Demonstrate Students’ Competency in Core Domain Abilities Prior to Beginning Advanced Pharmacy Practice Experiences

    PubMed Central

    Bhutada, Nilesh S.; Feng, Xiaodong

    2012-01-01

    Objective. To implement a simulation-based introductory pharmacy practice experience (IPPE) and determine its effectiveness in assessing pharmacy students’ core domain abilities prior to beginning advanced pharmacy practice experience (APPE). Design. A 60-hour IPPE that used simulation-based techniques to provide clinical experiences was implemented. Twenty-eight students were enrolled in this simulation IPPE, while 60 were enrolled in hospital and specialty IPPEs within the region. Assessment. The IPPE assessed 10 out of 11 of the pre-APPE core domain abilities, and on the practical examination, 67% of students passed compared to 52% of students in the control group. Students performed better on all 6 knowledge quizzes after completing the simulation IPPE. Based on scores on the Perception of Preparedness to Perform (PREP) survey, students felt more prepared regarding “technical” aspects after completing the simulation experience (p<0.001). Ninety-six percent of the respondents agreed with the statement “I am more aware of medication errors after this IPPE.” Conclusion. Simulation is an effective method for assessing the pre-APPE abilities of pharmacy students, preparing them for real clinical encounters, and for making them more aware of medication errors and other patient safety issues. PMID:23193340

  7. Kinetic Method for Hydrogen-Deuterium-Tritium Mixture Distillation Simulation

    SciTech Connect

    Sazonov, A.B.; Kagramanov, Z.G.; Magomedbekov, E.P.

    2005-07-15

    Simulation of hydrogen distillation plants requires mathematical procedures suitable for multicomponent systems. In most present-day simulation methods, a distillation column is assumed to be composed of theoretical stages, or plates; in the case of a multicomponent mixture, however, the theoretical plate does not exist. An alternative kinetic method of simulation is presented in this work. According to this method, a system of mass-transfer differential equations is solved numerically, with mass-transfer coefficients estimated from experimental results and empirical equations. The developed method allows calculating the steady state of a distillation column, as well as any non-steady state when initial conditions are given. The results for steady states are compared with those obtained via the Thiele-Geddes theoretical-stage technique, and the necessity of the kinetic method is demonstrated. Examples of column startup and periodic distillation simulations are shown as well.
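The kinetic idea — integrating mass-transfer rate equations along the column rather than stacking theoretical plates — can be sketched in one dimension for a toy binary mixture. The relative volatility, mass-transfer coefficient, and column height below are illustrative, not the paper's hydrogen-isotope data.

```python
def column_profile(n_steps=1000, height=1.0, k=5.0, alpha=1.5, x_bottom=0.5):
    """Integrate a toy steady-state mass-transfer equation along a column.

    dx/dz = k * (x_eq(x) - x), where x_eq is an ideal binary equilibrium
    with relative volatility alpha.  Forward Euler in the column height z.
    """
    dz = height / n_steps
    x = x_bottom
    profile = [x]
    for _ in range(n_steps):
        x_eq = alpha * x / (1.0 + (alpha - 1.0) * x)  # equilibrium composition
        x += k * (x_eq - x) * dz
        profile.append(x)
    return profile

profile = column_profile()
```

The light component is enriched continuously along the height, with no plate ever reaching full equilibrium — exactly the behavior a theoretical-stage model cannot represent for such mixtures.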

  8. Simulations of 6-DOF Motion with a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)

    2003-01-01

    Coupled 6-DOF/CFD trajectory predictions using an automated Cartesian method are demonstrated by simulating a GBU-32/JDAM store separating from an F-18C aircraft. Numerical simulations are performed at two Mach numbers near the sonic speed, and compared with flight-test telemetry and photographically derived data. Simulation results obtained with a sequential-static series of flow solutions are contrasted with results using a time-dependent flow solver. Both numerical methods show good agreement with the flight-test data through the first half of the simulations. The sequential-static and time-dependent methods diverge over the last half of the trajectory prediction, after the store produces peak angular rates. A cost comparison for the Cartesian method is included, in terms of absolute cost and relative to computing uncoupled 6-DOF trajectories. A detailed description of the 6-DOF method, as well as a verification of its accuracy, is provided in an appendix.
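At its core, the coupled simulation is a loop that alternates a flow solution with rigid-body integration. A minimal sketch of the integration side, with the CFD solve replaced by an illustrative stub force (none of the numbers below come from the GBU-32/JDAM case):

```python
import math

def integrate_trajectory(n_steps=200, dt=0.01):
    """Toy coupled-loop skeleton for a separating store.

    Each step, the "flow solution" is a stub returning an illustrative
    interference force that decays as the store drops away; the
    rigid-body states are then advanced with forward Euler.
    """
    pos = [0.0, 0.0, 0.0]        # x, y, z (z measured downward)
    vel = [0.0, 0.0, 0.0]
    ang = [0.0, 0.0, 0.0]        # roll, pitch, yaw
    ang_rate = [0.0, 0.02, 0.0]  # illustrative initial pitch rate (rad/s)
    mass, g = 500.0, 9.81
    for _ in range(n_steps):
        # Stand-in for the CFD solve at the current (frozen) position.
        aero_z = -2000.0 * math.exp(-pos[2])   # lift opposing the drop
        acc = [0.0, 0.0, g + aero_z / mass]
        for i in range(3):
            vel[i] += acc[i] * dt
            pos[i] += vel[i] * dt
            ang[i] += ang_rate[i] * dt
    return pos, ang

pos, ang = integrate_trajectory()
```

A sequential-static variant would converge the stub (the flow solution) fully at each frozen position; the time-dependent variant advances both together, which is what matters once angular rates peak.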

  9. Recent advances in large-eddy simulation of spray and coal combustion

    NASA Astrophysics Data System (ADS)

    Zhou, L. X.

    2013-07-01

    Large-eddy simulation (LES) is developing rapidly and is recognized as a possible second generation of CFD methods for engineering use. Spray and coal combustion is widely used in power, transportation, chemical, metallurgical, iron- and steel-making, aeronautical, and astronautical engineering, so LES of spray and coal two-phase combustion is particularly important for engineering applications. LES of two-phase combustion attracts more and more attention, since it can give detailed instantaneous flow and flame structures and more exact statistical results than Reynolds-averaged (RANS) modeling. One of the key problems in LES is to develop sub-grid scale (SGS) models, including SGS stress models and combustion models, and different investigators have proposed or adopted various SGS models. In this paper the author attempts to review the advances in studies on LES of spray and coal combustion, including studies done by the author and his colleagues. The SGS models adopted by different investigators are described, some of their main results are summarized, and finally some research needs are discussed.
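Among the SGS stress closures referred to above, the classic Smagorinsky model computes an eddy viscosity from the resolved strain rate, nu_t = (Cs * Delta)^2 |S|. A one-dimensional finite-difference sketch of that closure only (constants illustrative; a real spray or coal LES applies this on a 3D filtered field with a combustion model on top):

```python
import math

def smagorinsky_nu_t(u, dx, cs=0.17):
    """Smagorinsky SGS eddy viscosity on a 1D resolved velocity field.

    nu_t = (cs * dx)**2 * |du/dx|, with a central difference for the
    resolved strain rate.  Boundary points are left at zero.
    """
    n = len(u)
    nu_t = [0.0] * n
    for i in range(1, n - 1):
        strain = (u[i + 1] - u[i - 1]) / (2.0 * dx)
        nu_t[i] = (cs * dx) ** 2 * abs(strain)
    return nu_t

# A resolved sine-wave velocity field as a stand-in for filtered data.
dx = 0.1
u = [math.sin(2 * math.pi * i * dx) for i in range(32)]
nu = smagorinsky_nu_t(u, dx)
```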

  10. Advanced discretizations and multigrid methods for liquid crystal configurations

    NASA Astrophysics Data System (ADS)

    Emerson, David B.

    Liquid crystals are substances that possess mesophases with properties intermediate between liquids and crystals. Here, we consider nematic liquid crystals, which consist of rod-like molecules whose average pointwise orientation is represented by a unit-length vector, n(x, y, z) = (n1, n2, n3)^T. In addition to their self-structuring properties, nematics are dielectrically active and birefringent. These traits continue to lead to many important applications and discoveries. Numerical simulations of liquid crystal configurations are used to suggest the presence of new physical phenomena, analyze experiments, and optimize devices. This thesis develops a constrained energy-minimization finite-element method for the efficient computation of nematic liquid crystal equilibrium configurations based on a Lagrange multiplier formulation and the Frank-Oseen free-elastic energy model. First-order optimality conditions are derived and linearized via a Newton approach, yielding a linear system of equations. Due to the nonlinear unit-length constraint, novel well-posedness theory for the variational systems, as well as error analysis, is conducted. The approach is shown to be convergent and well-posed, absent typical simplifying assumptions. Moreover, the energy-minimization method and well-posedness theory developed for the free-elastic case are extended to include the effects of applied electric fields and flexoelectricity. In the computational algorithm, nested iteration is applied and proves highly effective at reducing computational costs. Additionally, an alternative technique is studied, where the unit-length constraint is imposed by a penalty method. The performance of the penalty and Lagrange multiplier methods is compared. Furthermore, tailored trust-region strategies are introduced to improve robustness and efficiency. While both approaches yield effective algorithms, the Lagrange multiplier method demonstrates superior accuracy per unit cost.
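The penalty alternative mentioned above can be illustrated on a zero-dimensional toy problem: aligning a single director with a field direction while softly enforcing the unit-length constraint |n| = 1. The energy, penalty weight, and step size below are invented for the illustration; this is not the thesis's Frank-Oseen finite-element discretization.

```python
def minimize_director(e=(0.6, 0.8), mu=50.0, lr=0.002, iters=5000):
    """Penalty-method sketch: align a director n with a unit field
    direction e while softly enforcing |n| = 1.

    Minimizes E(n) = -(n . e)^2 + mu * (|n|^2 - 1)^2 by gradient descent.
    """
    n = [0.3, 0.1]  # arbitrary starting director
    for _ in range(iters):
        dot = n[0] * e[0] + n[1] * e[1]
        norm2 = n[0] * n[0] + n[1] * n[1]
        grad = [-2.0 * dot * e[0] + 4.0 * mu * (norm2 - 1.0) * n[0],
                -2.0 * dot * e[1] + 4.0 * mu * (norm2 - 1.0) * n[1]]
        n = [n[0] - lr * grad[0], n[1] - lr * grad[1]]
    return n

n = minimize_director()
```

Note the characteristic penalty-method artifact: the minimizer satisfies |n|^2 = 1 + 1/(2*mu), so the constraint is met only approximately unless mu grows large — one motivation for the Lagrange multiplier formulation, which enforces it exactly.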

  11. Estimating School Efficiency: A Comparison of Methods Using Simulated Data.

    ERIC Educational Resources Information Center

    Bifulco, Robert; Bretschneider, Stuart

    2001-01-01

    Uses simulated data to assess the adequacy of two econometric and linear-programming techniques (data-envelopment analysis and corrected ordinary least squares) for measuring performance-based school reform. In complex data sets (simulated to contain measurement error and endogeneity), these methods are inadequate efficiency measures. (Contains 40…
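Of the two techniques compared, corrected ordinary least squares (COLS) is easy to sketch: fit an OLS production function, then shift the intercept so the fitted line envelops the data from above, and score each unit against that frontier. A minimal single-input illustration on made-up data (not the article's simulation design, which includes measurement error and endogeneity):

```python
def cols_efficiency(x, y):
    """Corrected OLS: fit y = a + b*x, shift the intercept so the line
    envelops all points from above, then score efficiency as y / frontier.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    a_corrected = a + max(residuals)       # shift intercept to the frontier
    return [yi / (a_corrected + b * xi) for xi, yi in zip(x, y)]

# Synthetic schools: output roughly linear in input, with shortfalls.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 3.5, 4.2, 6.0, 6.5]
eff = cols_efficiency(x, y)
```

By construction the best-performing unit sits on the frontier with efficiency 1, and every other unit scores below it — which is also why measurement error in y contaminates the scores, as the article's simulations probe.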

  12. Advances in POST2 End-to-End Descent and Landing Simulation for the ALHAT Project

    NASA Technical Reports Server (NTRS)

    Davis, Jody L.; Striepe, Scott A.; Maddock, Robert W.; Hines, Glenn D.; Paschall, Stephen, II; Cohanim, Babak E.; Fill, Thomas; Johnson, Michael C.; Bishop, Robert H.; DeMars, Kyle J.; Sostaric, Ronald R.; Johnson, Andrew E.

    2008-01-01

    Program to Optimize Simulated Trajectories II (POST2) is used as a basis for an end-to-end descent and landing trajectory simulation that is essential in determining design and integration capability and system performance of the lunar descent and landing system and environment models for the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project. The POST2 simulation provides a six degree-of-freedom capability necessary to test, design and operate a descent and landing system for successful lunar landing. This paper presents advances in the development and model-implementation of the POST2 simulation, as well as preliminary system performance analysis, used for the testing and evaluation of ALHAT project system models.

  13. Advanced Initiatives in Medical Simulation, 3rd Annual Conference to Create Awareness of Medical Simulation

    DTIC Science & Technology

    2006-06-30

    expertise in psychomotor skills. That understanding makes it possible to predict which measures distinguish among levels of expertise. With a...students have “virtual mentors” that tell them whenever they make an error. Most simulators focus on psychomotor skills, but they need to also assess and...features at which the student is looking to assess the student’s judgment. Hand motions can be monitored to quantify psychomotor skills during the

  14. Improvements in the application and reporting of advanced Bland-Altman methods of comparison.

    PubMed

    Olofsen, Erik; Dahan, Albert; Borsboom, Gerard; Drummond, Gordon

    2015-02-01

    Bland and Altman have developed a measure called "limits of agreement" to assess correspondence of two methods of clinical measurement. In many circumstances, comparisons are made using several paired measurements in each individual subject. If such measurements are considered as statistically independent pairs, rather than as sets of measurements from separate individuals, limits of agreement will be too narrow. In addition, the confidence intervals for these limits will also be too narrow. Suitable software to compute valid limits of agreement and their confidence intervals is not readily available. Therefore, we set out to provide a freely available implementation accompanied by a formal description of the more advanced Bland-Altman comparison methods. We validate the implementation using simulated data, and demonstrate the effects caused by failing to take the presence of multiple paired measurements per individual properly into account. We propose a standard format of reporting that would improve analysis and interpretation of comparison studies.
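The core computation — limits of agreement widened to account for multiple paired measurements per subject — can be sketched with a simple variance-components decomposition. This is a simplified stand-in for the mixed-model implementation the authors provide, and the numbers are illustrative.

```python
import statistics

def limits_of_agreement(diffs_by_subject):
    """Limits of agreement for repeated paired measurements per subject.

    A simplified variance-components version of the repeated-measures
    Bland-Altman analysis: total variance = between-subject variance of
    the subject mean differences + mean within-subject variance.
    """
    subject_means = [statistics.mean(d) for d in diffs_by_subject]
    grand_mean = statistics.mean(subject_means)
    var_between = statistics.variance(subject_means)
    var_within = statistics.mean(
        [statistics.variance(d) for d in diffs_by_subject])
    sd_total = (var_between + var_within) ** 0.5
    return grand_mean - 1.96 * sd_total, grand_mean + 1.96 * sd_total

# Three subjects, four paired differences each (illustrative numbers).
diffs = [[0.1, 0.3, 0.2, 0.4],
         [0.6, 0.5, 0.7, 0.8],
         [-0.1, 0.0, 0.1, 0.2]]
low, high = limits_of_agreement(diffs)
```

Pooling all twelve differences as if they were independent would use only the pooled SD and yield narrower limits — the pitfall the article warns against.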

  15. Evaluation of Advanced Stirling Convertor Net Heat Input Correlation Methods Using a Thermal Standard

    NASA Technical Reports Server (NTRS)

    Briggs, Maxwell H.; Schifer, Nicholas A.

    2012-01-01

    The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot end and cold end temperatures, and specified electrical power output for a given net heat input. In an effort to improve net heat input predictions, numerous tasks have been performed which provided a more accurate value for net heat input into the ASCs, including testing validation hardware, known as the Thermal Standard, to provide a direct comparison to numerical and empirical models used to predict convertor net heat input. This validation hardware provided a comparison for scrutinizing and improving empirical correlations and numerical models of ASC-E2 net heat input. This hardware simulated the characteristics of an ASC-E2 convertor in both an operating and non-operating mode. This paper describes the Thermal Standard testing and the conclusions of the validation effort applied to the empirical correlation methods used by the Radioisotope Power System (RPS) team at NASA Glenn.

  16. Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5

    SciTech Connect

    McCoy, Michel; Archer, Bill; Matzen, M. Keith

    2014-09-16

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counternuclear terrorism.

  17. Advanced Methods for Aircraft Engine Thrust and Noise Benefits: Nozzle-Inlet Flow Analysis

    NASA Technical Reports Server (NTRS)

    Morgan, Morris H., III; Gilinsky, Mikhail M.

    2004-01-01

    In the first stage of this project (2000-01), we continued to develop the previous joint research between the Fluid Mechanics and Acoustics Laboratory (FM&AL) at Hampton University (HU) and the Jet Noise Team (JNT) at the NASA Langley Research Center (NASA LaRC). In the second stage (2001-03), the FM&AL team concentrated its efforts on solving problems of interest to Glenn Research Center (NASA GRC), especially in the field of propulsion system enhancement. The NASA GRC R&D Directorate and LaRC Hyper-X Program specialists in hypersonic technology, jointly with the FM&AL staff, conducted research on a wide range of problems in the propulsion field as well as in experimental testing and theoretical and numerical simulation analyses for advanced aircraft and rocket engines. Last year the Hampton University School of Engineering & Technology was awarded a NASA grant for the creation of the Aeropropulsion Center, and the FM&AL is a key team of the project, responsible for research in Aeropropulsion and Acoustics (Pillar I). This work is supported by joint research between the NASA GRC/FM&AL and the Institute of Mechanics at Moscow State University (IMMSU) in Russia under a CRDF grant. The main areas of current scientific interest of the FM&AL include investigation of the proposed and patented advanced methods for aircraft engine thrust and noise benefits. This is the main subject of our other projects, of which one is presented. 
Last year we concentrated our efforts on three main problems: (a) new effective methods of fuel injection into the flow stream in air-breathing engines; (b) a new re-circulation method for mixing, heat transfer, and combustion enhancement in propulsion systems and domestic industry applications; (c) convexity flow... The research is focused on a wide range of problems in the propulsion field as well as in experimental testing and theoretical and numerical simulation analyses for advanced aircraft and rocket engines (see, for

  18. Bioinformatics Methods and Tools to Advance Clinical Care

    PubMed Central

    Lecroq, T.

    2015-01-01

    Summary Objectives To summarize excellent current research in the field of Bioinformatics and Translational Informatics with applications in the health domain and clinical care. Method We provide a synopsis of the articles selected for the IMIA Yearbook 2015, from which we attempt to derive a synthetic overview of current and future activities in the field. As in previous years, a first selection step was performed by querying MEDLINE with a list of MeSH descriptors completed by a list of terms adapted to the section. Each section editor separately evaluated the set of 1,594 articles, and the evaluation results were merged to retain 15 articles for peer review. Results The selection and evaluation process of this Yearbook’s section on Bioinformatics and Translational Informatics yielded four excellent articles regarding data management and genome medicine, mainly tool-based papers. In the first article, the authors present PPISURV, a tool for uncovering the role of specific genes in cancer survival outcome. The second article describes the classifier PredictSNP, which combines six existing tools for predicting disease-related mutations. In the third article, by presenting a high-coverage map of the human proteome using high-resolution mass spectrometry, the authors highlight the need for mass spectrometry to complement genome annotation. The fourth article is also related to patient survival and decision support: the authors present data-mining methods applied to large-scale datasets of past transplants, with the objective of identifying chances of survival. Conclusions The current research activities still attest to the continuing convergence of Bioinformatics and Medical Informatics, with a focus this year on dedicated tools and methods to advance clinical care. 
Indeed, there is a need for powerful tools for managing and interpreting complex, large-scale genomic and biological datasets, but also a need for user-friendly tools developed for the clinicians in their

  19. Advances in Experimental Neuropathology: New Methods and Insights.

    PubMed

    Roth, Kevin A

    2016-03-01

    This Editorial introduces this month's special Neuropathology Theme Issue, a series of Reviews on advances in our understanding of rare human hereditary neuropathies, peripheral nervous system tumors, and common degenerative diseases.

  20. CAPE-OPEN Integration for Advanced Process Engineering Co-Simulation

    SciTech Connect

    Zitney, S.E.

    2006-11-01

    This paper highlights the use of the CAPE-OPEN (CO) standard interfaces in the Advanced Process Engineering Co-Simulator (APECS) developed at the National Energy Technology Laboratory (NETL). The APECS system uses the CO unit operation, thermodynamic, and reaction interfaces to provide its plug-and-play co-simulation capabilities, including the integration of process simulation with computational fluid dynamics (CFD) simulation. APECS also relies heavily on the use of a CO COM/CORBA bridge for running process/CFD co-simulations on multiple operating systems. For process optimization in the face of multiple and sometimes conflicting objectives, APECS offers stochastic modeling and multi-objective optimization capabilities developed to comply with the CO software standard. At NETL, system analysts are applying APECS to a wide variety of advanced power generation systems, ranging from small fuel cell systems to commercial-scale power plants including the coal-fired, gasification-based FutureGen power and hydrogen production plant.

  1. Simulation Study of Injection Performance for the Advanced Photon Source Upgrade

    SciTech Connect

    Xiao, A.; Sajaev, V.

    2015-01-01

    A vertical on-axis injection scheme has been proposed for the hybrid seven-bend-achromat (H7BA) [1] Advanced Photon Source upgrade (APSU) lattice. To evaluate the injection performance, various errors, such as injection-beam jitter, optical mismatch and errors, and injection-element errors, have been investigated and their significance assessed. Injection efficiency is then simulated under different error levels. Based on these simulation results, specifications and an error budget for individual systems have been defined.
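An error-budget study of this kind ultimately reduces to sampling each error source and counting the fraction of injected particles that land inside the machine acceptance. A deliberately minimal sketch with a single Gaussian jitter term (units, apertures, and jitter levels are arbitrary, not APSU values):

```python
import random

def injection_efficiency(jitter_sigma, acceptance=1.0, n=100_000, seed=7):
    """Toy error-budget estimate: the injected-beam offset is drawn from
    a Gaussian jitter distribution, and a particle counts as captured
    when it falls inside a fixed acceptance aperture (arbitrary units).
    """
    rng = random.Random(seed)
    captured = sum(abs(rng.gauss(0.0, jitter_sigma)) < acceptance
                   for _ in range(n))
    return captured / n

# Efficiency versus jitter level: larger jitter, lower efficiency.
effs = [injection_efficiency(s) for s in (0.2, 0.5, 1.0)]
```

A real study would sample every budgeted error source (optics errors, element errors, mismatch) jointly and track the full beam distribution, but the structure — sample, transport, count — is the same.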

  2. Advanced Simulation and Computing Fiscal Year 2011-2012 Implementation Plan, Revision 0

    SciTech Connect

    McCoy, M; Phillips, J; Hopson, J; Meisner, R

    2010-04-22

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model

  3. Advanced Simulation and Computing FY08-09 Implementation Plan Volume 2 Revision 0

    SciTech Connect

    McCoy, M; Kusnezov, D; Bickel, T; Hopson, J

    2007-04-25

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. 
Moreover, ASC has restructured its business model from one

  4. Advanced Simulation and Computing FY07-08 Implementation Plan Volume 2

    SciTech Connect

    Kusnezov, D; Hale, A; McCoy, M; Hopson, J

    2006-06-22

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program will require the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. 
Moreover, ASC has restructured its business model from

  5. Advanced Simulation & Computing FY09-FY10 Implementation Plan Volume 2, Rev. 0

    SciTech Connect

    Meisner, R; Peery, J; McCoy, M; Hopson, J

    2008-04-30

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. 
Moreover, ASC has restructured its business model from one

  6. Advanced Simulation and Computing FY09-FY10 Implementation Plan, Volume 2, Revision 0.5

    SciTech Connect

    Meisner, R; Hopson, J; Peery, J; McCoy, M

    2008-10-07

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  7. Advanced Simulation and Computing FY08-09 Implementation Plan, Volume 2, Revision 0.5

    SciTech Connect

    Kusnezov, D; Bickel, T; McCoy, M; Hopson, J

    2007-09-13

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  8. Advanced Simulation and Computing FY09-FY10 Implementation Plan Volume 2, Rev. 1

    SciTech Connect

    Kissel, L

    2009-04-01

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one that

  9. Advanced Simulation and Computing FY10-FY11 Implementation Plan Volume 2, Rev. 0.5

    SciTech Connect

    Meisner, R; Peery, J; McCoy, M; Hopson, J

    2009-09-08

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model

  10. Advanced Methodology for Simulation of Complex Flows Using Structured Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Modiano, David

    1995-01-01

    Detailed simulations of viscous flows in complicated geometries pose a significant challenge to current capabilities of Computational Fluid Dynamics (CFD). To enable routine application of CFD to this class of problems, advanced methodologies are required that employ (a) automated grid generation, (b) adaptivity, (c) accurate discretizations and efficient solvers, and (d) advanced software techniques. Each of these ingredients contributes to increased accuracy, efficiency (in terms of human effort and computer time), and/or reliability of CFD software. In the long run, methodologies employing structured grid systems will remain a viable choice for routine simulation of flows in complex geometries only if genuinely automatic grid generation techniques for structured grids can be developed and if adaptivity is employed more routinely. More research in both these areas is urgently needed.

  11. A method for simulating a flux-locked DC SQUID

    NASA Technical Reports Server (NTRS)

    Gutt, G. M.; Kasdin, N. J.; Condron, M. R., II; Muhlfelder, B.; Lockhart, J. M.; Cromar, M. W.

    1993-01-01

    The authors describe a computationally efficient and accurate method for simulating a dc SQUID's V-Phi (voltage-flux) and I-V characteristics, which has proven valuable in evaluating and improving various SQUID readout methods. The simulation of the SQUID is based on fitting previously acquired data from either a real or a modeled device using the Fourier transform of the V-Phi curve. This method does not predict SQUID behavior, but rather is a way of replicating a known behavior efficiently, with portability into various simulation programs such as SPICE. The authors discuss the methods used to simulate the SQUID and the flux-locking control electronics, and present specific examples of this approach. Results include an estimate of the slew rate and linearity of a simple flux-locked loop using a characterized dc SQUID.
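
    The replay approach described above can be sketched in a few lines: store a cosine-series fit of a measured V-Phi curve and evaluate it wherever a simulator needs the device voltage. The coefficients below are hypothetical placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical cosine-series coefficients (in volts) fitted to a measured
# V-Phi curve; the replay method stores such a fit for reuse.
A = [0.0, 12e-6, -3e-6, 0.8e-6]

def squid_voltage(phi, coeffs=A, phi0=1.0):
    """Evaluate the fitted V-Phi curve at applied flux phi (units of Phi_0)."""
    return sum(a * np.cos(2 * np.pi * k * phi / phi0)
               for k, a in enumerate(coeffs))
```

    Because the fit is a Fourier series in the flux, the replayed characteristic is automatically periodic in one flux quantum, which the flux-locked-loop electronics rely on.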

  12. Deployment Simulation Methods for Ultra-Lightweight Inflatable Structures

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Johnson, Arthur R.

    2003-01-01

    Two dynamic inflation simulation methods are employed for modeling the deployment of folded thin-membrane tubes. The simulations are necessary because ground tests include gravity effects and may poorly represent deployment in space. The two simulation methods are referred to as the Control Volume (CV) method and the Arbitrary Lagrangian Eulerian (ALE) method. They are available in the LS-DYNA nonlinear dynamic finite element code. Both methods are suitable for modeling the interactions between the inflation gas and the thin-membrane tube structures. The CV method considers only the pressure induced by the inflation gas in the simulation, while the ALE method models the actual flow of the inflation gas. Thus, the transient fluid properties at any location within the tube can be predicted by the ALE method. Deployment simulations of three packaged tube models, namely coiled, Z-folded, and telescopically folded configurations, are performed. Results predicted by both methods for the telescopically folded configuration are correlated, and computational efficiency issues are discussed.
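
    The contrast between the two methods can be made concrete on the CV side: the gas in the tube is treated as one lumped volume whose pressure alone loads the membrane, with no internal flow field. A minimal sketch, assuming an ideal gas at a uniform constant temperature; the function name and parameter values are illustrative, not LS-DYNA's:

```python
def cv_pressure_step(m, mdot, V, dt, R=287.0, T=300.0):
    """One lumped control-volume update: add injected gas mass, then
    recompute the single uniform pressure from the ideal-gas law.
    m    -- gas mass in the tube [kg]
    mdot -- inflation mass flow rate [kg/s]
    V    -- current enclosed tube volume [m^3]
    """
    m_new = m + mdot * dt      # mass added by the inflation source
    p_new = m_new * R * T / V  # uniform pressure loading the membrane
    return m_new, p_new
```

    An ALE computation would instead solve for the gas flow inside the tube, so pressure and velocity vary from point to point; the CV update above is what makes that method cheaper but blind to local transients.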

  13. Stress trajectory and advanced hydraulic-fracture simulations for the Eastern Gas Shales Project. Final report, April 30, 1981-July 30, 1983

    SciTech Connect

    Advani, S.H.; Lee, J.K.

    1983-01-01

    A summary review of hydraulic fracture modeling is given. Advanced hydraulic fracture model formulations and simulation, using the finite element method, are presented. The numerical examples include the determination of fracture width, height, length, and stress intensity factors with the effects of frac fluid properties, layered strata, in situ stresses, and joints. Future model extensions are also recommended. 66 references, 23 figures.

  14. Advanced Distributed Simulation Technology II Global Positioning System Interactive Simulation (GPS DIS) Experiment

    DTIC Science & Technology

    2007-11-02

    Excerpted contents include RWA manned simulators; voice radio communications (SRE and ASTi); ModSAF operations; data logger; and time stamper. Systems utilized were the Single Channel Ground and Airborne Radio System (SINCGARS) Radio Emulator (SRE), the ASTi Radio, and the Tactical Internet Model (TIM), with SGIs at the MWTB and ASTi radios at Ft. Rucker. Approved for public release; distribution is unlimited. (ADST-II-CDRL-GPSDIS-9800018A)

  15. Advanced view factor analysis method for radiation exchange

    NASA Astrophysics Data System (ADS)

    Park, Sookuk; Tuller, Stanton E.

    2014-03-01

    A raster-based method for determining complex view factor patterns, the HURES model, is presented. The model uses Johnson and Watson's view factor analysis method for fisheye lens photographs. The entire sphere is divided into 13 different view factors: open sky; sunny and shaded building walls, vegetation (trees), and ground surfaces above and below 1.2 m from the ground surface. The HURES model gave reasonable view factor results in tests at two urban study sites on summer days: downtown Nanaimo, B.C., Canada, and Changwon, Republic of Korea. HURES gave better estimates of open sky view factors determined from fisheye lens photographs than did ENVI-met 3.1 and RayMan Pro. However, all three models underestimated the sky view factor. For view factor analysis in outdoor urban areas, a 10° rotation-angle interval and a 100 m annulus distance are suitable settings for three-dimensional computer simulations. The HURES model can be used for the rapid determination of complex view factor patterns, which facilitates the analysis of their effects. Examples of how differing view factor patterns can affect human thermal sensation indices are given. The greater proportion of sunny view factors increased the computed predicted mean vote (PMV) by 1.3 on the sunny side of the street compared with the shady side during mid-morning in downtown Nanaimo. In another example, effects of differing amounts of open sky, sunny ground, sunny buildings, and vegetation combined to produce only slight differences in PMV and two other human thermal sensation indices, PET and UTCI.
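
    The annulus-and-sector discretization idea can be sketched as follows, assuming a cosine-weighted (projected solid angle) cell weighting. The `obstruction` callback is a hypothetical stand-in for the raster scene query, and the defaults are illustrative rather than the paper's settings:

```python
import numpy as np

def sky_view_factor(obstruction, n_annuli=9, n_sectors=36):
    """Estimate the sky view factor by dividing the upper hemisphere into
    altitude annuli and rotation sectors; each cell is weighted by its
    projected (cosine-weighted) solid angle. obstruction(alt, az) returns
    True when the sky is blocked in that direction."""
    svf, total = 0.0, 0.0
    for i in range(n_annuli):
        alt0 = i * (np.pi / 2) / n_annuli        # lower edge of the band
        alt1 = (i + 1) * (np.pi / 2) / n_annuli  # upper edge of the band
        alt = 0.5 * (alt0 + alt1)                # mid-altitude of the band
        # projected solid angle of the band, split evenly among sectors
        w = (np.sin(alt1) ** 2 - np.sin(alt0) ** 2) / n_sectors
        for j in range(n_sectors):
            az = (j + 0.5) * 2 * np.pi / n_sectors
            total += w
            if not obstruction(alt, az):
                svf += w
    return svf / total
```

    The same loop, run against obstruction tests for each surface class (sunny wall, shaded vegetation, and so on), yields the full set of view factors, which by construction sum to one.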

  16. Processing of alnico permanent magnets by advanced directional solidification methods

    SciTech Connect

    Zou, Min; Johnson, Francis; Zhang, Wanming; Zhao, Qi; Rutkowski, Stephen F.; Zhou, Lin; Kramer, Matthew J.

    2016-07-05

    Advanced directional solidification methods have been used to produce large (>15 cm length) castings of Alnico permanent magnets with highly oriented columnar microstructures. In combination with subsequent thermomagnetic and draw thermal treatment, this method was used to enable the high coercivity, high-Titanium Alnico composition of 39% Co, 29.5% Fe, 14% Ni, 7.5% Ti, 7% Al, 3% Cu (wt%) to have an intrinsic coercivity (Hci) of 2.0 kOe, a remanence (Br) of 10.2 kG, and an energy product (BH)max of 10.9 MGOe. These properties compare favorably to typical properties for the commercial Alnico 9. Directional solidification of higher Ti compositions yielded anisotropic columnar grained microstructures if high heat extraction rates through the mold surface of at least 200 kW/m2 were attained. This was achieved through the use of a thin walled (5 mm thick) high thermal conductivity SiC shell mold extracted from a molten Sn bath at a withdrawal rate of at least 200 mm/h. However, higher Ti compositions did not result in further increases in magnet performance. Images of the microstructures collected by scanning electron microscopy (SEM) reveal a majority α phase with inclusions of secondary αγ phase. Transmission electron microscopy (TEM) reveals that the α phase has a spinodally decomposed microstructure of FeCo-rich needles in a NiAl-rich matrix. In the 7.5% Ti composition the diameter distribution of the FeCo needles was bimodal with the majority having diameters of approximately 50 nm with a small fraction having diameters of approximately 10 nm. The needles formed a mosaic pattern and were elongated along one <001> crystal direction (parallel to the field used during magnetic annealing). Cu precipitates were observed between the needles. Regions of abnormal spinodal morphology appeared to correlate with secondary phase precipitates. The presence of these abnormalities did not prevent the material from displaying

  17. Processing of alnico permanent magnets by advanced directional solidification methods

    DOE PAGES

    Zou, Min; Johnson, Francis; Zhang, Wanming; ...

    2016-07-05

    Advanced directional solidification methods have been used to produce large (>15 cm length) castings of Alnico permanent magnets with highly oriented columnar microstructures. In combination with subsequent thermomagnetic and draw thermal treatment, this method was used to enable the high coercivity, high-Titanium Alnico composition of 39% Co, 29.5% Fe, 14% Ni, 7.5% Ti, 7% Al, 3% Cu (wt%) to have an intrinsic coercivity (Hci) of 2.0 kOe, a remanence (Br) of 10.2 kG, and an energy product (BH)max of 10.9 MGOe. These properties compare favorably to typical properties for the commercial Alnico 9. Directional solidification of higher Ti compositions yielded anisotropic columnar grained microstructures if high heat extraction rates through the mold surface of at least 200 kW/m2 were attained. This was achieved through the use of a thin walled (5 mm thick) high thermal conductivity SiC shell mold extracted from a molten Sn bath at a withdrawal rate of at least 200 mm/h. However, higher Ti compositions did not result in further increases in magnet performance. Images of the microstructures collected by scanning electron microscopy (SEM) reveal a majority α phase with inclusions of secondary αγ phase. Transmission electron microscopy (TEM) reveals that the α phase has a spinodally decomposed microstructure of FeCo-rich needles in a NiAl-rich matrix. In the 7.5% Ti composition the diameter distribution of the FeCo needles was bimodal with the majority having diameters of approximately 50 nm with a small fraction having diameters of approximately 10 nm. The needles formed a mosaic pattern and were elongated along one <001> crystal direction (parallel to the field used during magnetic annealing). Cu precipitates were observed between the needles. Regions of abnormal spinodal morphology appeared to correlate with secondary phase precipitates. The presence of these abnormalities did not prevent the material from displaying superior magnetic properties in the 7.5% Ti

  18. Processing of alnico permanent magnets by advanced directional solidification methods

    NASA Astrophysics Data System (ADS)

    Zou, Min; Johnson, Francis; Zhang, Wanming; Zhao, Qi; Rutkowski, Stephen F.; Zhou, Lin; Kramer, Matthew J.

    2016-12-01

    Advanced directional solidification methods have been used to produce large (>15 cm length) castings of Alnico permanent magnets with highly oriented columnar microstructures. In combination with subsequent thermomagnetic and draw thermal treatment, this method was used to enable the high coercivity, high-Titanium Alnico composition of 39% Co, 29.5% Fe, 14% Ni, 7.5% Ti, 7% Al, 3% Cu (wt%) to have an intrinsic coercivity (Hci) of 2.0 kOe, a remanence (Br) of 10.2 kG, and an energy product (BH)max of 10.9 MGOe. These properties compare favorably to typical properties for the commercial Alnico 9. Directional solidification of higher Ti compositions yielded anisotropic columnar grained microstructures if high heat extraction rates through the mold surface of at least 200 kW/m2 were attained. This was achieved through the use of a thin walled (5 mm thick) high thermal conductivity SiC shell mold extracted from a molten Sn bath at a withdrawal rate of at least 200 mm/h. However, higher Ti compositions did not result in further increases in magnet performance. Images of the microstructures collected by scanning electron microscopy (SEM) reveal a majority α phase with inclusions of secondary αγ phase. Transmission electron microscopy (TEM) reveals that the α phase has a spinodally decomposed microstructure of FeCo-rich needles in a NiAl-rich matrix. In the 7.5% Ti composition the diameter distribution of the FeCo needles was bimodal with the majority having diameters of approximately 50 nm with a small fraction having diameters of approximately 10 nm. The needles formed a mosaic pattern and were elongated along one <001> crystal direction (parallel to the field used during magnetic annealing). Cu precipitates were observed between the needles. Regions of abnormal spinodal morphology appeared to correlate with secondary phase precipitates. The presence of these abnormalities did not prevent the material from displaying superior magnetic properties in the 7.5% Ti

  19. A high-order immersed boundary method for high-fidelity turbulent combustion simulations

    NASA Astrophysics Data System (ADS)

    Minamoto, Yuki; Aoki, Kozo; Osawa, Kosuke; Shi, Tuo; Prodan, Alexandru; Tanahashi, Mamoru

    2016-11-01

    Direct numerical simulations (DNS) have played important roles in research on turbulent combustion. With recent advances in high-performance computing, DNS of slightly complicated configurations such as V-flames and various jet and swirl flames have been performed, and such DNS will further our understanding of the physics of turbulent combustion. Since these configurations include walls that do not necessarily conform to the preferred mesh coordinates for combustion DNS, most of these simulations use presumed profiles for inflow/near-wall flows as boundary conditions. A high-order immersed boundary method suited for parallel computation is one way to improve these simulations. The present research implements such a boundary technique in a combustion DNS code, and simulations are performed to confirm its accuracy and performance. This work was partly supported by the Council for Science, Technology and Innovation, Cross-ministerial Strategic Innovation Promotion Program (SIP), "Innovative Combustion Technology" (Funding agency: JST).
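
    The core idea of an immersed boundary treatment, that a wall cuts through a non-conforming grid and the near-wall solution is forced to honor the wall condition, can be sketched in 1D with simple linear (first-order) direct forcing. The paper's high-order parallel scheme is not specified in the abstract, so everything here is illustrative:

```python
import numpy as np

def apply_immersed_boundary(u, x, wall=0.35, u_wall=0.0):
    """Direct-forcing sketch on a 1D grid: points inside the body are set
    so that a straight line through the first fluid point passes through
    (wall, u_wall), enforcing the wall condition without a body-fitted mesh."""
    u = np.asarray(u, dtype=float).copy()
    inside = x < wall                 # grid points covered by the body
    k = np.argmax(~inside)            # index of the first fluid point
    slope = (u[k] - u_wall) / (x[k] - wall)
    u[inside] = u_wall + slope * (x[inside] - wall)
    return u

x = np.linspace(0.0, 1.0, 11)
u = apply_immersed_boundary(x, x)     # take u(x) = x as the fluid field
```

    A high-order variant replaces the linear extrapolation with a higher-degree polynomial through several fluid points, which is what makes such schemes attractive for DNS accuracy requirements.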

  20. A tool for simulating parallel branch-and-bound methods

    NASA Astrophysics Data System (ADS)

    Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail

    2016-01-01

    The Branch-and-Bound method is known as one of the most powerful but most resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution; the design and study of load balancing algorithms is therefore a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of resolving the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.
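
    The abstract's two modeling ideas, replacing subproblem solves with a stochastic branching process and tracking logical rather than wall-clock time, can be sketched as follows. All names, the work-stealing rule, and parameter values are illustrative, not taken from the paper:

```python
import random

def simulate_branch_and_bound(n_workers=4, p_branch=0.45, max_depth=12, seed=1):
    """Toy parallel B&B simulation: each node either dies (pruned by the
    bound) or branches into two children with probability p_branch, and
    each worker advances a logical clock per operation or message hop."""
    random.seed(seed)
    queues = [[0] for _ in range(n_workers)]  # worker-local frontiers (depths)
    clocks = [0] * n_workers                  # logical time per worker
    processed = 0
    while any(queues):
        for w in range(n_workers):
            if not queues[w]:
                # load balancing: steal half of the longest queue
                donor = max(range(n_workers), key=lambda i: len(queues[i]))
                half = len(queues[donor]) // 2
                queues[w], queues[donor] = queues[donor][:half], queues[donor][half:]
                clocks[w] = max(clocks[w], clocks[donor]) + 1  # message hop
                continue
            depth = queues[w].pop()
            processed += 1
            clocks[w] += 1
            # stochastic branching replaces the real bound evaluation
            if depth < max_depth and random.random() < p_branch:
                queues[w] += [depth + 1, depth + 1]
    return processed, max(clocks)
```

    Because the branching is seeded, runs are reproducible, so different load balancing rules can be compared on identical synthetic search trees.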

  1. LDRD Final Report: Adaptive Methods for Laser Plasma Simulation

    SciTech Connect

    Dorr, M R; Garaizar, F X; Hittinger, J A

    2003-01-29

    The goal of this project was to investigate the utility of parallel adaptive mesh refinement (AMR) in the simulation of laser plasma interaction (LPI). The scope of work included the development of new numerical methods and parallel implementation strategies. The primary deliverables were (1) parallel adaptive algorithms to solve a system of equations combining plasma fluid and light propagation models, (2) a research code implementing these algorithms, and (3) an analysis of the performance of parallel AMR on LPI problems. The project accomplished these objectives. New algorithms were developed for the solution of a system of equations describing LPI. These algorithms were implemented in a new research code named ALPS (Adaptive Laser Plasma Simulator) that was used to test the effectiveness of the AMR algorithms on the Laboratory's large-scale computer platforms. The details of the algorithm and the results of the numerical tests were documented in an article published in the Journal of Computational Physics [2]. A principal conclusion of this investigation is that AMR is most effective for LPI systems that are "hydrodynamically large", i.e., problems requiring the simulation of a large plasma volume relative to the volume occupied by the laser light. Since the plasma-only regions require less resolution than the laser light, AMR enables the use of efficient meshes for such problems. In contrast, AMR is less effective for, say, a single highly filamented beam propagating through a phase plate, since the resulting speckle pattern may be too dense to adequately separate scales with a locally refined mesh. Ultimately, the gain to be expected from the use of AMR is highly problem-dependent. One class of problems investigated in this project involved a pair of laser beams crossing in a plasma flow. Under certain conditions, energy can be transferred from one beam to the other via a resonant interaction with an ion acoustic wave in the crossing region. AMR provides an
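
    The "resolution only where the light is" argument rests on the standard flagging step of AMR: mark cells where a steepness criterion trips, so refinement concentrates at sharp features and the quiescent plasma keeps a coarse mesh. The gradient criterion below is a generic example, not the ALPS code's actual refinement test:

```python
import numpy as np

def flag_for_refinement(u, x, threshold):
    """Mark cells whose local solution gradient exceeds a threshold;
    flagged cells would be covered by a finer AMR patch."""
    grad = np.abs(np.diff(u) / np.diff(x))   # one-sided gradient per face
    flags = np.zeros(len(u), dtype=bool)
    flags[:-1] |= grad > threshold           # flag both cells sharing
    flags[1:] |= grad > threshold            # a steep face
    return flags

x = np.linspace(0.0, 1.0, 101)
u = np.tanh((x - 0.5) / 0.02)                # steep front at x = 0.5
flags = flag_for_refinement(u, x, threshold=5.0)
```

    On this profile only the cells around the front are flagged, which is exactly the regime where AMR pays off; a speckle pattern dense everywhere would flag nearly the whole domain and erase the gain.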

  2. Comparing Intravenous Insertion Instructional Methods with Haptic Simulators

    PubMed Central

    Malecha, Ann

    2017-01-01

    Objective. The objective of this review was to compare traditional intravenous (IV) insertion instructional methods with the use of haptic IV simulators. Design. An integrative research design was used to analyze the current literature. Data Sources. A search was conducted using key words intravenous (IV) insertion or cannulation or venipuncture and simulation from 2000 to 2015 in the English language. The databases included Academic Search Complete, CINAHL Complete, Education Resource Information Center, and Medline. Review Methods. Whittemore and Knafl's (2005) strategies were used to critique the articles for themes and similarities. Results. Comparisons of outcomes between traditional IV instructional methods and the use of haptic IV simulators continue to show various results. Positive results indicate that the use of the haptic IV simulator decreases both band constriction and total procedure time. While students are satisfied with practicing on the haptic simulators, they still desire faculty involvement. Conclusion. Combining the haptic IV simulator with practical experience on the IV arm may be the best practice for learning IV insertion. Research employing active learning strategies while using a haptic IV simulator during the learning process may reduce cost and faculty time. PMID:28250987

  3. Comparing Intravenous Insertion Instructional Methods with Haptic Simulators.

    PubMed

    McWilliams, Lenora A; Malecha, Ann

    2017-01-01

    Objective. The objective of this review was to compare traditional intravenous (IV) insertion instructional methods with the use of haptic IV simulators. Design. An integrative research design was used to analyze the current literature. Data Sources. A search was conducted using key words intravenous (IV) insertion or cannulation or venipuncture and simulation from 2000 to 2015 in the English language. The databases included Academic Search Complete, CINAHL Complete, Education Resource Information Center, and Medline. Review Methods. Whittemore and Knafl's (2005) strategies were used to critique the articles for themes and similarities. Results. Comparisons of outcomes between traditional IV instructional methods and the use of haptic IV simulators continue to show various results. Positive results indicate that the use of the haptic IV simulator decreases both band constriction and total procedure time. While students are satisfied with practicing on the haptic simulators, they still desire faculty involvement. Conclusion. Combining the haptic IV simulator with practical experience on the IV arm may be the best practice for learning IV insertion. Research employing active learning strategies while using a haptic IV simulator during the learning process may reduce cost and faculty time.

  4. Impact of an Advanced Cardiac Life Support Simulation Laboratory Experience on Pharmacy Student Confidence and Knowledge

    PubMed Central

    Mohorn, Phillip L.; Haney, Jason S.; Phillips, Cynthia M.; Lu, Z. Kevin; Clark, Kimberly; Corboy, Alex; Ragucci, Kelly R.

    2016-01-01

    Objective. To assess the impact of an advanced cardiac life support (ACLS) simulation on pharmacy student confidence and knowledge. Design. Third-year pharmacy students participated in a simulation experience that consisted of team roles training, high-fidelity ACLS simulations, and debriefing. Students completed a pre/postsimulation confidence and knowledge assessment. Assessment. Overall, student knowledge assessment scores and student confidence scores improved significantly. Student confidence and knowledge changes from baseline were not significantly correlated. Conversely, a significant, weak positive correlation between presimulation studying and both presimulation confidence and presimulation knowledge was discovered. Conclusions. Overall, student confidence and knowledge assessment scores in ACLS significantly improved from baseline; however, student confidence and knowledge were not significantly correlated. PMID:27899836

  5. Technical Basis for Physical Fidelity of NRC Control Room Training Simulators for Advanced Reactors

    SciTech Connect

    Minsk, Brian S.; Branch, Kristi M.; Bates, Edward K.; Mitchell, Mark R.; Gore, Bryan F.; Faris, Drury K.

    2009-10-09

    The objective of this study is to determine how simulator physical fidelity influences the effectiveness of training the regulatory personnel responsible for examination and oversight of operating personnel and inspection of technical systems at nuclear power reactors. It seeks to contribute to the U.S. Nuclear Regulatory Commission’s (NRC’s) understanding of the physical fidelity requirements of training simulators. The goal of the study is to provide an analytic framework, data, and analyses that inform NRC decisions about the physical fidelity requirements of the simulators it will need to train its staff for assignment at advanced reactors. These staff are expected to come from increasingly diverse educational and experiential backgrounds.

  6. Impact of an Advanced Cardiac Life Support Simulation Laboratory Experience on Pharmacy Student Confidence and Knowledge.

    PubMed

    Maxwell, Whitney D; Mohorn, Phillip L; Haney, Jason S; Phillips, Cynthia M; Lu, Z Kevin; Clark, Kimberly; Corboy, Alex; Ragucci, Kelly R

    2016-10-25

    Objective. To assess the impact of an advanced cardiac life support (ACLS) simulation on pharmacy student confidence and knowledge. Design. Third-year pharmacy students participated in a simulation experience that consisted of team roles training, high-fidelity ACLS simulations, and debriefing. Students completed a pre/postsimulation confidence and knowledge assessment. Assessment. Overall, student knowledge assessment scores and student confidence scores improved significantly. Student confidence and knowledge changes from baseline were not significantly correlated. Conversely, a significant, weak positive correlation between presimulation studying and both presimulation confidence and presimulation knowledge was discovered. Conclusions. Overall, student confidence and knowledge assessment scores in ACLS significantly improved from baseline; however, student confidence and knowledge were not significantly correlated.

  7. Do Advance Yield Markings Increase Safe Driver Behaviors at Unsignalized, Marked Midblock Crosswalks? Driving Simulator Study

    PubMed Central

    Gómez, Radhameris A.; Samuel, Siby; Gerardino, Luis Roman; Romoser, Matthew R. E.; Collura, John; Knodler, Michael; Fisher, Donald L.

    2012-01-01

    In the United States, 78% of pedestrian crashes occur at nonintersection crossings. As a result, unsignalized, marked midblock crosswalks are prime targets for remediation. Many of these crashes occur under sight-limited conditions in which the view of critical information by the driver or pedestrian is obstructed by a vehicle stopped in an adjacent travel or parking lane on the near side of the crosswalk. Study of such a situation on the open road is much too risky, but study of the situation in a driving simulator is not. This paper describes the development of scenarios with sight limitations to compare potential vehicle–pedestrian conflicts on a driving simulator under conditions with two different types of pavement markings. Under the first condition, advance yield markings and symbol signs (prompts) that indicated “yield here to pedestrians” were used to warn drivers of pedestrians at marked, midblock crosswalks. Under the second condition, standard crosswalk treatments and prompts were used to warn drivers of these hazards. Actual crashes as well as the drivers' point of gaze were measured to determine if the drivers approaching a marked midblock crosswalk looked for pedestrians in the crosswalk more frequently and sooner in high-risk scenarios when advance yield markings and prompts were present than when standard markings and prompts were used. Fewer crashes were found to occur with advance yield markings. Drivers were also found to look for pedestrians much more frequently and much sooner with advance yield markings. The advantages and limitations of the use of driving simulation to study problems such as these are discussed. PMID:23082040

  9. Retention of Advanced Cardiac Life Support Knowledge and Skills Following High-Fidelity Mannequin Simulation Training

    PubMed Central

    Sen, Sanchita; Finn, Laura A.; Cawley, Michael J.

    2015-01-01

    Objective. To assess pharmacy students’ ability to retain advanced cardiac life support (ACLS) knowledge and skills within 120 days of previous high-fidelity mannequin simulation training. Design. Students were randomly assigned to rapid response teams of 5-6. Skills in ACLS and mannequin survival were compared between teams some members of which had simulation training 120 days earlier and teams who had not had previous training. Assessment. A checklist was used to record and assess performance in the simulations. Teams with previous simulation training (n=10) demonstrated numerical superiority to teams without previous training (n=12) for 6 out of 8 (75%) ACLS skills observed, including time calculating accurate vasopressor infusion rate (83 sec vs 113 sec; p=0.01). Mannequin survival was 37% higher for teams who had previous simulation training, but this result was not significant (70% vs 33%; p=0.20). Conclusion. Teams with students who had previous simulation training demonstrated numerical superiority in ACLS knowledge and skill retention within 120 days of previous training compared to those who had no previous training. Future studies are needed to add to the current evidence of pharmacy students’ and practicing pharmacists’ ACLS knowledge and skill retention. PMID:25741028

  10. Mission simulation as an approach to develop requirements for automation in Advanced Life Support Systems

    NASA Technical Reports Server (NTRS)

    Erickson, J. D.; Eckelkamp, R. E.; Barta, D. J.; Dragg, J.; Henninger, D. L. (Principal Investigator)

    1996-01-01

    This paper examines mission simulation as an approach to develop requirements for automation and robotics for Advanced Life Support Systems (ALSS). The focus is on requirements and applications for command and control, control and monitoring, situation assessment and response, diagnosis and recovery, adaptive planning and scheduling, and other automation applications in addition to mechanized equipment and robotics applications to reduce the excessive human labor requirements to operate and maintain an ALSS. Based on principles of systems engineering, an approach is proposed to assess requirements for automation and robotics using mission simulation tools. First, the story of a simulated mission is defined in terms of processes with attendant types of resources needed, including options for use of automation and robotic systems. Next, systems dynamics models are used in simulation to reveal the implications for selected resource allocation schemes in terms of resources required to complete operational tasks. The simulations not only help establish ALSS design criteria, but also may offer guidance to ALSS research efforts by identifying gaps in knowledge about procedures and/or biophysical processes. Simulations of a planned one-year mission with 4 crewmembers in a Human Rated Test Facility are presented as an approach to evaluation of mission feasibility and definition of automation and robotics requirements.

  11. A neural-network-based method of model reduction for the dynamic simulation of MEMS

    NASA Astrophysics Data System (ADS)

    Liang, Y. C.; Lin, W. Z.; Lee, H. P.; Lim, S. P.; Lee, K. H.; Feng, D. P.

    2001-05-01

    This paper proposes a neural-network-based method for model reduction that combines the generalized Hebbian algorithm (GHA) with the Galerkin procedure to perform the dynamic simulation and analysis of nonlinear microelectromechanical systems (MEMS). An unsupervised neural network is adopted to find the principal eigenvectors of a correlation matrix of snapshots. Extensive computational results show that principal component analysis using the GHA neural network can extract an empirical basis from numerical or experimental data, which can be used to convert the original system into a lumped low-order macromodel. The macromodel can be employed to carry out the dynamic simulation of the original system, resulting in a dramatic reduction of computation time without losing flexibility and accuracy. Compared with other existing model reduction methods for the dynamic simulation of MEMS, the present method does not need to compute the input correlation matrix in advance. It needs only to find the few required basis functions, which can be learned directly from the input data, meaning that the method offers potential advantages when the measured data set is large. The method is evaluated by simulating the pull-in dynamics of a doubly clamped microbeam subjected to different input voltage spectra of electrostatic actuation. The efficiency and flexibility of the proposed method are examined by comparing the results with those of the fully meshed finite-difference method.
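
    The GHA at the heart of this approach can be sketched in a few lines. The following is a minimal illustration of Sanger's rule extracting leading principal directions from snapshot data without forming the correlation matrix; function names, learning rate, and the toy data are our own assumptions, not taken from the paper:

```python
import numpy as np

def gha(snapshots, n_components, lr=1e-3, epochs=200, seed=0):
    """Generalized Hebbian algorithm (Sanger's rule): learns the leading
    principal directions of zero-mean snapshots without ever forming the
    full correlation matrix."""
    rng = np.random.default_rng(seed)
    W = 0.01 * rng.standard_normal((n_components, snapshots.shape[1]))
    for _ in range(epochs):
        for x in snapshots:
            y = W @ x
            # Sanger's rule: dW = lr * (y x^T - lower_tri(y y^T) W)
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# Toy snapshots whose variance is dominated by the first coordinate
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 5)) * np.array([3.0, 1.0, 0.3, 0.1, 0.05])
W = gha(X - X.mean(axis=0), n_components=2)
leading = W[0] / np.linalg.norm(W[0])  # should align with the dominant axis
```

    Each learned row then serves as one basis vector for a Galerkin projection of the full model.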

  12. Advanced virtual energy simulation training and research: IGCC with CO2 capture power plant

    SciTech Connect

    Zitney, S.; Liese, E.; Mahapatra, P.; Bhattacharyya, D.; Provost, G.

    2011-01-01

    In this presentation, we highlight the deployment of a real-time dynamic simulator of an integrated gasification combined cycle (IGCC) power plant with CO2 capture at the Department of Energy's (DOE) National Energy Technology Laboratory's (NETL) Advanced Virtual Energy Simulation Training and Research (AVESTAR™) Center. The Center was established as part of the DOE's accelerating initiative to advance new clean coal technology for power generation. IGCC systems are an attractive technology option, generating low-cost electricity by converting coal and/or other fuels into a clean synthesis gas mixture in a process that is efficient and environmentally superior to conventional power plants. The IGCC dynamic simulator builds on, and reaches beyond, conventional power plant simulators to merge, for the first time, a 'gasification with CO2 capture' process simulator with a 'combined-cycle' power simulator. Fueled with coal, petroleum coke, and/or biomass, the gasification island of the simulated IGCC plant consists of two oxygen-blown, downward-fired, entrained-flow, slagging gasifiers with radiant syngas coolers and two-stage sour shift reactors, followed by a dual-stage acid gas removal process for CO2 capture. The combined cycle island consists of two F-class gas turbines, a steam turbine, and a heat recovery steam generator with three pressure levels. The dynamic simulator can be used for normal base-load operation, as well as plant start-up and shutdown. The real-time dynamic simulator also responds satisfactorily to process disturbances, feedstock blending and switchovers, fluctuations in ambient conditions, and power demand load shedding. In addition, the full-scope simulator handles a wide range of abnormal situations, including equipment malfunctions and failures, together with changes initiated through actions from plant field operators. By providing a comprehensive IGCC operator training system, the AVESTAR Center is poised to develop a

  13. Recent advances in 3D computed tomography techniques for simulation and navigation in hepatobiliary pancreatic surgery.

    PubMed

    Uchida, Masafumi

    2014-04-01

    A few years ago it could take several hours to complete a 3D image using a 3D workstation. Thanks to advances in computer science, obtaining results of interest now requires only a few minutes. Many recent 3D workstations or multimedia computers are equipped with onboard 3D virtual patient modeling software, which enables patient-specific preoperative assessment and virtual planning, navigation, and tool positioning. Although medical 3D imaging can now be conducted using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasonography (US) among others, the highest quality images are obtained using CT data, and CT images are now the most commonly used source of data for 3D simulation and navigation images. If the 2D source image is bad, no amount of 3D image manipulation in software will provide a quality 3D image. In this exhibition, recent advances in CT imaging technique and 3D visualization of hepatobiliary and pancreatic abnormalities are featured, including scan and image reconstruction techniques, contrast-enhanced techniques, new applications of advanced CT scan techniques, and new virtual reality simulation and navigation imaging.

  14. The Advanced Modeling, Simulation and Analysis Capability Roadmap Vision for Engineering

    NASA Technical Reports Server (NTRS)

    Zang, Thomas; Lieber, Mike; Norton, Charles; Fucik, Karen

    2006-01-01

    This paper summarizes a subset of the Advanced Modeling Simulation and Analysis (AMSA) Capability Roadmap that was developed for NASA in 2005. The AMSA Capability Roadmap Team was chartered to "To identify what is needed to enhance NASA's capabilities to produce leading-edge exploration and science missions by improving engineering system development, operations, and science understanding through broad application of advanced modeling, simulation and analysis techniques." The AMSA roadmap stressed the need for integration, not just within the science, engineering and operations domains themselves, but also across these domains. Here we discuss the roadmap element pertaining to integration within the engineering domain, with a particular focus on implications for future observatory missions. The AMSA products supporting the system engineering function are mission information, bounds on information quality, and system validation guidance. The Engineering roadmap element contains 5 sub-elements: (1) Large-Scale Systems Models, (2) Anomalous Behavior Models, (3) advanced Uncertainty Models, (4) Virtual Testing Models, and (5) space-based Robotics Manufacture and Servicing Models.

  15. Improved Collision Modeling for Direct Simulation Monte Carlo Methods

    DTIC Science & Technology

    2011-03-01

    The Knudsen number is a measure of the rarefaction of a gas, and will be explained more thoroughly in the following chapter. Continuum solvers that use Navier...Limits on Mathematical Models [4] Kn=0.1, and the flow can be considered rarefied above that value. Direct Simulation Monte Carlo (DSMC) is a stochastic...method which utilizes the Monte Carlo statistical model to simulate gas behavior, which is very useful for these rarefied atmosphere hypersonic
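
    As background to the rarefaction criterion quoted in this snippet, a hard-sphere estimate of the Knudsen number Kn = λ/L and the conventional regime boundaries (including the Kn = 0.1 threshold mentioned above) can be sketched as follows; the default molecule diameter and the band edges are textbook values, not taken from this report:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(temp_k, pressure_pa, diameter_m=3.7e-10):
    """Hard-sphere mean free path: lambda = k_B T / (sqrt(2) pi d^2 p).
    The default diameter is a textbook value for air."""
    return K_B * temp_k / (math.sqrt(2) * math.pi * diameter_m**2 * pressure_pa)

def knudsen(temp_k, pressure_pa, length_m, diameter_m=3.7e-10):
    """Kn = lambda / L for a characteristic body length L."""
    return mean_free_path(temp_k, pressure_pa, diameter_m) / length_m

def regime(kn):
    """Conventional Knudsen-number bands; Kn = 0.1 matches the
    rarefaction threshold quoted in the abstract."""
    if kn < 0.01:
        return "continuum"
    if kn < 0.1:
        return "slip"
    if kn < 10.0:
        return "transitional (DSMC applicable)"
    return "free molecular"

# A 1 m vehicle at sea level (288 K, 101325 Pa) is deep in the continuum regime
kn_sea_level = knudsen(288.0, 101325.0, 1.0)
```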

  16. Evaluation methods of a middleware for networked surgical simulations.

    PubMed

    Cai, Qingbo; Liberatore, Vincenzo; Cavuşoğlu, M Cenk; Yoo, Youngjin

    2006-01-01

    Distributed surgical virtual environments are desirable because they substantially extend the accessibility of computational resources through network communication. However, network conditions critically affect the quality of a networked surgical simulation in terms of bandwidth limits, delays, packet losses, etc. A solution to this problem is to introduce a middleware between the simulation application and the network so that it can take actions to enhance the user-perceived simulation performance. To comprehensively assess the effectiveness of such a middleware, we propose several evaluation methods in this paper: semi-automatic evaluation, middleware overhead measurement, and usability testing.

  17. Profile Evolution Simulation in Etching Systems Using Level Set Methods

    NASA Technical Reports Server (NTRS)

    Hwang, Helen H.; Govindan, T. R.; Meyyappan, M.

    1998-01-01

    Semiconductor device profiles are determined by the characteristics of both etching and deposition processes. In particular, a highly anisotropic etch is required to achieve vertical sidewalls. However, etching comprises both anisotropic and isotropic components, due to ion and neutral fluxes, respectively. In Ar/Cl2 plasmas, for example, neutral chlorine reacts with the Si surfaces to form silicon chlorides. These compounds are then removed by the impinging ion fluxes. Hence the directionality of the ions (and thus the ion angular distribution function, or IAD), as well as the relative fluxes of neutrals and ions, determines the amount of undercutting. One method of modeling device profile evolution is to simulate the moving solid-gas interface between the semiconductor and the plasma as a string of nodes. The velocity of each node is calculated and then the nodes are advanced accordingly. Although this technique appears to be relatively straightforward, extensive looping schemes are required at the profile corners. An alternate method is to use level set theory, which involves embedding the location of the interface in a field variable. The normal speed is calculated at each mesh point, and the field variable is updated. The profile corners are more accurately modeled as the need for looping algorithms is eliminated. The model we have developed is a 2-D Level Set Profile Evolution Simulation (LSPES). The LSPES calculates etch rates of a substrate in low pressure plasmas due to the incident ion and neutral fluxes. For a Si substrate in an Ar/Cl2 gas mixture, for example, the predictions of the LSPES are identical to those from a string evolution model for high neutral fluxes and two different ion angular distributions.(2) In the figure shown, the relative neutral to ion flux in the bulk plasma is 100 to 1. For a moderately isotropic ion angular distribution function as shown in the cases in the left hand column, both the LSPES (top row) and rude's string
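
    The level-set update that such a simulator performs can be illustrated with a minimal 1-D Godunov-upwind step for the evolution equation φ_t + F|∇φ| = 0. This sketch is our own illustration, not the LSPES code; it advances a front at a prescribed normal speed and reads off the new front position:

```python
import numpy as np

def level_set_step(phi, speed, dx, dt):
    """One Godunov-upwind update of phi_t + F |grad phi| = 0 in 1-D.
    The interface is the zero level set of phi; F is the normal speed."""
    padded = np.pad(phi, 1, mode="edge")
    dminus = (padded[1:-1] - padded[:-2]) / dx   # backward difference
    dplus = (padded[2:] - padded[1:-1]) / dx     # forward difference
    grad_pos = np.sqrt(np.maximum(dminus, 0.0)**2 + np.minimum(dplus, 0.0)**2)
    grad_neg = np.sqrt(np.minimum(dminus, 0.0)**2 + np.maximum(dplus, 0.0)**2)
    F = np.asarray(speed, dtype=float)
    return phi - dt * (np.maximum(F, 0.0) * grad_pos + np.minimum(F, 0.0) * grad_neg)

# Interface starts at x = 0.5 and is "etched" at unit normal speed
x = np.linspace(0.0, 1.0, 201)
phi = x - 0.5                       # signed distance to the front
for _ in range(100):
    phi = level_set_step(phi, 1.0, dx=x[1] - x[0], dt=0.0025)
front = x[np.argmin(np.abs(phi))]   # after t = 0.25 the front sits near x = 0.75
```

    No node bookkeeping or corner-looping logic is needed: the front position is implicit in the field variable, which is the advantage the abstract describes.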

  18. An experimental method for the assessment of color simulation tools.

    PubMed

    Lillo, Julio; Alvaro, Leticia; Moreira, Humberto

    2014-07-22

    The Simulcheck method for evaluating the accuracy of color simulation tools in relation to dichromats is described and used to test three color simulation tools: Variantor, Coblis, and Vischeck. A total of 10 dichromats (five protanopes, five deuteranopes) and 10 normal trichromats participated in the current study. Simulcheck includes two psychophysical tasks: the Pseudoachromatic Stimuli Identification task and the Minimum Achromatic Contrast task. The Pseudoachromatic Stimuli Identification task allows determination of the two chromatic angles (h(uv) values) that generate a minimum response in the yellow–blue opponent mechanism and, consequently, pseudoachromatic stimuli (greens or reds). The Minimum Achromatic Contrast task requires the selection of the gray background that produces minimum contrast (near zero change in the achromatic mechanism) for each pseudoachromatic stimulus selected in the previous task (L(R) values). Results showed important differences in the colorimetric transformations performed by the three evaluated simulation tools and their accuracy levels. Vischeck simulation accurately implemented the algorithm of Brettel, Viénot, and Mollon (1997). Only Vischeck appeared accurate (similarity in h(uv) and L(R) values between real and simulated dichromats) and, consequently, could render reliable color selections. It is concluded that Simulcheck is a consistent method because it provided an equivalent pattern of results for h(uv) and L(R) values irrespective of the stimulus set used to evaluate a simulation tool. Simulcheck was also considered valid because real dichromats provided the expected h(uv) and L(R) values when performing the two psychophysical tasks included in this method.
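
    For reference, the chromatic angle h(uv) used here is the standard CIELUV hue angle, measured counterclockwise from the +u* axis. A minimal helper, purely our own illustration, is:

```python
import math

def hue_angle_uv(u_star, v_star):
    """CIELUV hue angle h(uv), in degrees counterclockwise from +u*."""
    return math.degrees(math.atan2(v_star, u_star)) % 360.0

# Roughly: reds fall near h(uv) = 0 deg, yellows near 90, greens near 180,
# blues near 270; pseudoachromatic angles for a dichromat are the two
# directions that silence the yellow-blue opponent response
```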

  19. Conceptual frameworks and methods for advancing invasion ecology.

    PubMed

    Heger, Tina; Pahl, Anna T; Botta-Dukát, Zoltan; Gherardi, Francesca; Hoppe, Christina; Hoste, Ivan; Jax, Kurt; Lindström, Leena; Boets, Pieter; Haider, Sylvia; Kollmann, Johannes; Wittmann, Meike J; Jeschke, Jonathan M

    2013-09-01

    Invasion ecology has much advanced since its early beginnings. Nevertheless, explanation, prediction, and management of biological invasions remain difficult. We argue that progress in invasion research can be accelerated by, first, pointing out difficulties this field is currently facing and, second, looking for measures to overcome them. We see basic and applied research in invasion ecology confronted with difficulties arising from (A) societal issues, e.g., disparate perceptions of invasive species; (B) the peculiarity of the invasion process, e.g., its complexity and context dependency; and (C) the scientific methodology, e.g., imprecise hypotheses. To overcome these difficulties, we propose three key measures: (1) a checklist for definitions to encourage explicit definitions; (2) implementation of a hierarchy of hypotheses (HoH), where general hypotheses branch into specific and precisely testable hypotheses; and (3) platforms for improved communication. These measures may significantly increase conceptual clarity and enhance communication, thus advancing invasion ecology.

  20. Aquatic ecosystem protection and restoration: Advances in methods for assessment and evaluation

    USGS Publications Warehouse

    Bain, M.B.; Harig, A.L.; Loucks, D.P.; Goforth, R.R.; Mills, K.E.

    2000-01-01

    Many methods and criteria are available to assess aquatic ecosystems, and this review focuses on a set that demonstrates advancements from community analyses to methods spanning large spatial and temporal scales. Basic methods have been extended by incorporating taxa sensitivity to different forms of stress, adding measures linked to system function, synthesizing multiple faunal groups, integrating biological and physical attributes, spanning large spatial scales, and enabling simulations through time. These tools can be customized to meet the needs of a particular assessment and ecosystem. Two case studies are presented to show how new methods were applied at the ecosystem scale for achieving practical management goals. One case used an assessment of biotic structure to demonstrate how enhanced river flows can improve habitat conditions and restore a diverse fish fauna reflective of a healthy riverine ecosystem. In the second case, multitaxonomic integrity indicators were successful in distinguishing lake ecosystems that were disturbed, healthy, and in the process of restoration. Most methods strive to address the concept of biological integrity, and assessment effectiveness can often be impeded by the lack of more specific ecosystem management objectives. Scientific and policy explorations are needed to define new ways of designating a healthy system so as to allow specification of precise quality criteria that will promote further development of ecosystem analysis tools.

  1. Advanced materials and methods for next generation spintronics

    NASA Astrophysics Data System (ADS)

    Siegel, Gene Phillip

    The modern age is filled with ever-advancing electronic devices. The contents of this dissertation continue the pursuit of faster, smaller, better electronics. Specifically, this dissertation addresses the field known as "spintronics": electronic devices based on an electron's spin, not just its charge. The field of spintronics originated in 1990, when Datta and Das first proposed a "spin transistor" that would function by passing a spin-polarized current from a magnetic electrode into a semiconductor channel. The spins in the channel could then be manipulated by applying an electrical voltage across the gate of the device. However, it has since been found that a great amount of scattering occurs at the ferromagnet/semiconductor interface due to the large impedance mismatch between the two materials. Because of this, three updated versions of the spintronic transistor were proposed to improve spin injection: one that used a ferromagnetic semiconductor electrode, one that added a tunnel barrier between the ferromagnet and semiconductor, and one that utilized a ferromagnetic tunnel barrier that would act as a spin filter. It was next proposed that it may be possible to achieve a "pure spin current", that is, a spin current with no concurrent electric current (i.e., no net flow of electrons). One such method is the spin Seebeck effect (SSE), discovered in 2008 by Uchida et al., in which a thermal gradient in a magnetic material generates a spin current that can be injected into an adjacent material as a pure spin current. The first section of this dissertation addresses the SSE. The goal was to create such a device that both performs better than previously reported devices and is capable of operating without the aid of an external magnetic field. We were successful in this endeavor. The key to achieving both of these goals was found to be the roughness of the magnetic layer. A rougher magnetic

  2. ADVANCED UTILITY SIMULATION MODEL, REPORT OF SENSITIVITY TESTING, CALIBRATION, AND MODEL OUTPUT COMPARISONS (VERSION 3.0)

    EPA Science Inventory

    The report gives results of activities relating to the Advanced Utility Simulation Model (AUSM): sensitivity testing. comparison with a mature electric utility model, and calibration to historical emissions. The activities were aimed at demonstrating AUSM's validity over input va...

  3. Methods and Applications for Advancing Distance Education Technologies: International Issues and Solutions

    ERIC Educational Resources Information Center

    Syed, Mahbubur Rahman, Ed.

    2009-01-01

    The emerging field of advanced distance education delivers academic courses across time and distance, allowing educators and students to participate in a convenient learning method. "Methods and Applications for Advancing Distance Education Technologies: International Issues and Solutions" demonstrates communication technologies, intelligent…

  4. A novel load balancing method for hierarchical federation simulation system

    NASA Astrophysics Data System (ADS)

    Bin, Xiao; Xiao, Tian-yuan

    2013-07-01

    In contrast with the single-federation HLA framework, a hierarchical federation framework can improve the performance of large-scale simulation systems to a certain degree by distributing load across several RTIs. However, in a hierarchical federation framework the RTI is still the center of message exchange, and it remains the performance bottleneck of the federation: the data explosion in a large-scale HLA federation may overload the RTI, degrading federation performance or even causing fatal errors. To address this problem, this paper proposes a load balancing method for hierarchical federation simulation systems based on queuing theory, comprising three main modules: queue length prediction, a load control policy, and a controller. The method improves the utilization of federate node resources and improves the performance of the HLA simulation system by balancing load across the RTIG and federates. Finally, experiment results are presented to demonstrate the efficient control of the method.
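
    The paper's queuing-theoretic predictor is not reproduced in the abstract, but the flavor of such a load controller can be sketched with the classic M/M/1 mean queue length L = ρ/(1−ρ). The threshold policy and names below are illustrative assumptions, not the authors' design:

```python
def mm1_queue_length(arrival_rate, service_rate):
    """Expected number in an M/M/1 system: L = rho / (1 - rho) for rho < 1."""
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        return float("inf")  # unstable: the queue grows without bound
    return rho / (1.0 - rho)

def should_rebalance(arrival_rate, service_rate, max_queue=10.0):
    """Toy control policy: trigger federate migration when the predicted
    RTI message queue exceeds a threshold (names are illustrative)."""
    return mm1_queue_length(arrival_rate, service_rate) > max_queue

# An RTI serving 1000 msg/s with 950 msg/s arriving is predicted to hold
# rho/(1-rho) = 19 messages on average, so the toy policy fires
overloaded = should_rebalance(950.0, 1000.0)
lightly_loaded = should_rebalance(100.0, 1000.0)
```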

  5. A Method of Simulating Fluid Structure Interactions for Deformable Decelerators

    NASA Astrophysics Data System (ADS)

    Gidzak, Vladimyr Mykhalo

    A method is developed for performing simulations that contain fluid-structure interactions between deployable decelerators and a high-speed compressible flow. The problem of coupling together multiple physical systems is examined, with discussion of the strength of coupling for various methods. A non-monolithic, strongly coupled option is presented for fluid-structure systems based on grid deformation. A class of algebraic grid deformation methods is then presented, with examples of increasing complexity. The strength of the fluid-structure coupling is validated against two analytic problems, chosen to test the time-dependent behavior of structure-on-fluid interactions and of fluid-on-structure interactions. A one-dimensional material heating model is also validated against experimental data. Results are provided for simulations of a wind-tunnel-scale disk-gap-band parachute with comparison to experimental data. Finally, a simulation is performed on a flight-scale tension cone decelerator, with examination of time-dependent material stress and heating.
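
    A simple member of the class of algebraic grid deformation methods mentioned above is linear blending of the boundary displacement into the interior mesh. The 1-D sketch below is our own illustration under that assumption, not the dissertation's scheme; it keeps the far boundary fixed and the mesh monotone:

```python
import numpy as np

def deform_grid(x, moving_disp, decay_length):
    """Algebraic grid deformation: blend the displacement of the moving
    boundary (at x[0]) into the interior with a linear decay that
    vanishes at decay_length, keeping the far boundary fixed."""
    s = np.clip((x - x[0]) / decay_length, 0.0, 1.0)
    return x + (1.0 - s) * moving_disp

# Structural solver moves the wall by 0.05; the fluid grid follows
x = np.linspace(0.0, 1.0, 11)
x_new = deform_grid(x, moving_disp=0.05, decay_length=1.0)
# wall node moves to 0.05, far end stays at 1.0, spacing stays positive
```

    In a coupled solve, the structure supplies `moving_disp` each step and the deformed grid feeds back into the flow solver, which is what makes the coupling strong without being monolithic.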

  6. Advanced Computational Methods for Thermal Radiative Heat Transfer

    SciTech Connect

    Tencer, John; Carlberg, Kevin Thomas; Larsen, Marvin E.; Hogan, Roy E.

    2016-10-01

    Participating media radiation (PMR) in weapon safety calculations for abnormal thermal environments is too costly to do routinely. This cost may be substantially reduced by applying reduced order modeling (ROM) techniques. The application of ROM to PMR is a new and unique approach for this class of problems. This approach was investigated by the authors and shown to provide significant reductions in the computational expense associated with typical PMR simulations. Once this technology is migrated into production heat transfer analysis codes, this capability will enable the routine use of PMR heat transfer in higher-fidelity simulations of weapon response in fire environments.
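
    The abstract does not specify how the reduced order model is built, but a common starting point for ROM is a proper orthogonal decomposition (POD) basis computed from full-order simulation snapshots. The sketch below is a generic illustration under that assumption, not the authors' code:

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """Proper orthogonal decomposition basis from a snapshot matrix whose
    columns are state vectors; keeps enough modes to capture `energy`
    of the snapshot variance."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(frac, energy)) + 1
    return U[:, :r]

# Toy example: 200-dof "fields" that actually live on a 3-mode subspace
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 50))
basis = pod_basis(X)
# The ROM then evolves only basis.shape[1] coefficients instead of 200 dofs,
# and basis @ (basis.T @ X) reconstructs the snapshots almost exactly
```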

  7. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    SciTech Connect

    Diachin, L F; Garaizar, F X; Henson, V E; Pope, G

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

  8. Development of Computational Approaches for Simulation and Advanced Controls for Hybrid Combustion-Gasification Chemical Looping

    SciTech Connect

    Joshi, Abhinaya; Lou, Xinsheng; Neuschaefer, Carl; Chaudry, Majid; Quinn, Joseph

    2012-07-31

    This document provides the results of the project through September 2009. The Phase I project has recently been extended from September 2009 to March 2011. The project extension will begin work on Chemical Looping (CL) Prototype modeling and advanced control design exploration in preparation for a scale-up phase. The results to date include: successful development of dual loop chemical looping process models and dynamic simulation software tools, development and test of several advanced control concepts and applications for Chemical Looping transport control and investigation of several sensor concepts and establishment of two feasible sensor candidates recommended for further prototype development and controls integration. There are three sections in this summary and conclusions. Section 1 presents the project scope and objectives. Section 2 highlights the detailed accomplishments by project task area. Section 3 provides conclusions to date and recommendations for future work.

  9. Replica exchange simulation method using temperature and solvent viscosity

    NASA Astrophysics Data System (ADS)

    Nguyen, Phuong H.

    2010-04-01

    We propose an efficient and simple method for fast conformational sampling by introducing the solvent viscosity as a parameter in the conventional temperature replica exchange molecular dynamics (T-REMD) simulation method. The method, named V-REMD (V stands for viscosity), uses both low solvent viscosity and high temperature to enhance sampling for each replica; therefore it requires fewer replicas than the T-REMD method. To reduce the solvent viscosity by a factor of λ in a molecular dynamics simulation, one can simply reduce the mass of solvent molecules by a factor of λ². This makes the method as simple as the conventional one. Moreover, thermodynamic and conformational properties of structures in replicas remain useful as long as one has sufficiently sampled the Boltzmann ensemble. The advantage of the present method has been demonstrated with simulations of trialanine, deca-alanine, and a 16-residue β-hairpin peptide. The results show that the method can reduce the number of replicas by a factor of 1.5 to 2 compared with the T-REMD method.
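
    The two ingredients described above, solvent mass scaling and the standard temperature-exchange acceptance test, can be sketched as follows. The unit choice and function names are our own assumptions; note that the mass scaling leaves the exchange criterion unchanged because the potential energy does not depend on masses:

```python
import math
import random

K_B = 0.0019872041  # Boltzmann constant in kcal/(mol K), an assumed unit choice

def scaled_solvent_mass(mass, viscosity_factor):
    """Reducing solvent viscosity by a factor lambda is achieved by
    scaling solvent masses by lambda^2 (time scales as sqrt(m))."""
    return mass / viscosity_factor**2

def swap_accepted(energy_i, energy_j, temp_i, temp_j, rng=random.random):
    """Metropolis criterion for exchanging configurations between
    replicas i and j: accept with min(1, exp((b_i - b_j)(E_i - E_j)))."""
    beta_i = 1.0 / (K_B * temp_i)
    beta_j = 1.0 / (K_B * temp_j)
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    return delta >= 0.0 or rng() < math.exp(delta)
```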

  10. Implicit methods for efficient musculoskeletal simulation and optimal control

    PubMed Central

    van den Bogert, Antonie J.; Blana, Dimitra; Heinrich, Dieter

    2011-01-01

    The ordinary differential equations for musculoskeletal dynamics are often numerically stiff and highly nonlinear. Consequently, simulations require small time steps, and optimal control problems are slow to solve and have poor convergence. In this paper, we present an implicit formulation of musculoskeletal dynamics, which leads to new numerical methods for simulation and optimal control, with the expectation that we can mitigate some of these problems. A first order Rosenbrock method was developed for solving forward dynamic problems using the implicit formulation. It was used to perform real-time dynamic simulation of a complex shoulder arm system with extreme dynamic stiffness. Simulations had an RMS error of only 0.11 degrees in joint angles when running at real-time speed. For optimal control of musculoskeletal systems, a direct collocation method was developed for implicitly formulated models. The method was applied to predict gait with a prosthetic foot and ankle. Solutions were obtained in well under one hour of computation time and demonstrated how patients may adapt their gait to compensate for limitations of a specific prosthetic limb design. The optimal control method was also applied to a state estimation problem in sports biomechanics, where forces during skiing were estimated from noisy and incomplete kinematic data. Using a full musculoskeletal dynamics model for state estimation had the additional advantage that forward dynamic simulations could be done with the same implicitly formulated model to simulate injuries and perturbation responses. While these methods are powerful and allow solution of previously intractable problems, there are still considerable numerical challenges, especially related to the convergence of gradient-based solvers. PMID:22102983
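
    A first-order Rosenbrock step of the kind mentioned above amounts to one linearly implicit Euler solve per time step. The sketch below is a generic illustration, not the authors' implementation; it shows why such a step stays stable on a stiff test equation where explicit Euler would need a far smaller step:

```python
import numpy as np

def rosenbrock1_step(f, jac, x, h):
    """One step of a first-order Rosenbrock (linearly implicit Euler)
    method for x' = f(x): solve (I - h J) dx = h f(x), then x += dx."""
    J = jac(x)
    dx = np.linalg.solve(np.eye(x.size) - h * J, h * f(x))
    return x + dx

# Stiff test problem x' = -1000 x: explicit Euler requires h < 2e-3,
# while the linearly implicit step remains stable at h = 0.1
f = lambda x: -1000.0 * x
jac = lambda x: np.array([[-1000.0]])
x = np.array([1.0])
for _ in range(10):
    x = rosenbrock1_step(f, jac, x, h=0.1)
# x decays monotonically toward 0 instead of blowing up
```

    Only one Jacobian factorization per step is required, no nonlinear iteration, which is what makes real-time simulation of stiff musculoskeletal models plausible.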

  12. Projected role of advanced computational aerodynamic methods at the Lockheed-Georgia company

    NASA Technical Reports Server (NTRS)

    Lores, M. E.

    1978-01-01

    Experience with advanced computational methods being used at the Lockheed-Georgia Company to aid in the evaluation and design of new and modified aircraft indicates that large and specialized computers will be needed to make advanced three-dimensional viscous aerodynamic computations practical. The Numerical Aerodynamic Simulation Facility should provide a tool for designing better aerospace vehicles while reducing development costs, both by performing computations with Navier-Stokes solution algorithms and by permitting less sophisticated but nevertheless complex calculations to be made efficiently. Configuration definition procedures and data output formats can probably best be defined in cooperation with industry; therefore, the computer should handle many remote terminals efficiently. The capability of transferring data to and from other computers also needs to be provided. Because of the significant amount of input and output associated with 3-D viscous flow calculations, and because of the exceedingly fast computation speed envisioned for the computer, special attention should be paid to providing rapid, diversified, and efficient input and output.

  13. Application of particle method to the casting process simulation

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Zulaida, Y. M.; Anzai, K.

    2012-07-01

    Casting processes involve many significant phenomena such as fluid flow, solidification, and deformation, and it is known that casting defects are strongly influenced by these phenomena. However, the phenomena interact with each other in complex ways, and they are difficult to observe directly because the melt and other apparatus components are at very high temperatures and are generally opaque; computer simulation is therefore expected to offer substantial insight into what happens during the processes. Recently, particle methods, which are fully Lagrangian methods, have attracted considerable attention. Because they involve no computational lattice, they have developed rapidly owing to their applicability to multi-physics problems. In this study, we combined fluid flow, heat transfer, and solidification simulation programs and simulated various casting processes such as continuous casting, centrifugal casting, and ingot making. In the continuous casting simulation, the powder flow could be calculated as well as the melt flow, and the resulting shape of the interface between the melt and the powder was obtained. In the centrifugal casting simulation, the mold was smoothly modeled along the shape of the real mold, and the fluid flow and the rotating mold were simulated directly. As a result, the flow of the melt dragged by the rotating mold was calculated well, and the eccentric rotation and the influence of the Coriolis force were also reproduced directly and naturally. In the ingot making simulation, shrinkage formation behavior was calculated and the shape of the shrinkage agreed well with the experimental result.

  14. IMPACT OF SIMULANT PRODUCTION METHODS ON SRAT PRODUCT

    SciTech Connect

    EIBLING, R

    2006-03-22

    The research and development programs in support of the Defense Waste Processing Facility (DWPF) and other high level waste vitrification processes require the use of both nonradioactive waste simulants and actual waste samples. The nonradioactive waste simulants have been used for laboratory testing, pilot-scale testing and full-scale integrated facility testing. Recent efforts have focused on matching the physical properties of actual sludge. These waste simulants were designed to reproduce the chemical and, if possible, the physical properties of the actual high level waste. This technical report documents a study of simulant production methods for high level waste simulated sludge and their impact on the physical properties of the resultant SRAT product. The sludge simulants used in support of DWPF have been based on average waste compositions and on expected or actual batch compositions. These sludge simulants were created to primarily match the chemical properties of the actual waste. These sludges were produced by generating manganese dioxide, MnO{sub 2}, from permanganate ion (MnO{sub 4}{sup -}) and manganous nitrate, precipitating ferric nitrate and nickel nitrate with sodium hydroxide, washing with inhibited water and then addition of other waste species. While these simulated sludges provided a good match for chemical reaction studies, they did not adequately match the physical properties (primarily rheology) measured on the actual waste. A study was completed in FY04 to determine the impact of simulant production methods on the physical properties of Sludge Batch 3 simulant. This study produced eight batches of sludge simulant, all prepared to the same chemical target, by varying the sludge production methods. The sludge batch, which most closely duplicated the actual SB3 sludge physical properties, was Test 8. Test 8 sludge was prepared by coprecipitating all of the major metals (including Al). 
After the sludge was washed to meet the target, the sludge

  15. Training toward Advanced 3D Seismic Methods for CO2 Monitoring, Verification, and Accounting

    SciTech Connect

    Christopher Liner

    2012-05-31

    The objective of our work is graduate and undergraduate student training related to improved 3D seismic technology that addresses key challenges related to monitoring movement and containment of CO{sub 2}, specifically better quantification and sensitivity for mapping of caprock integrity, fractures, and other potential leakage pathways. We utilize data and results developed through a previous DOE-funded CO{sub 2} characterization project (DE-FG26-06NT42734) at the Dickman Field of Ness County, KS. Dickman is a type locality for the geology that will be encountered for CO{sub 2} sequestration projects from northern Oklahoma across the U.S. midcontinent to Indiana and Illinois. Since its discovery in 1962, the Dickman Field has produced about 1.7 million barrels of oil from porous Mississippian carbonates with a small structural closure at about 4400 ft drilling depth. Project data include 3.3 square miles of 3D seismic data and 142 wells, with log, some core, and oil/water production data available. Only two wells penetrate the deep saline aquifer. In a previous DOE-funded project, geological and seismic data were integrated to create a geological property model and a flow simulation grid. We believe that sequestration of CO{sub 2} will largely occur in areas of relatively flat geology and simple near surface, similar to Dickman. The challenge is not complex geology, but development of improved, lower-cost methods for detecting natural fractures and subtle faults. Our project used numerical simulation to test methods of gathering multicomponent, full azimuth data ideal for this purpose. Our specific objectives were to apply advanced seismic methods to aid in quantifying reservoir properties and lateral continuity of CO{sub 2} sequestration targets. The purpose of the current project is graduate and undergraduate student training related to improved 3D seismic technology that addresses key challenges related to monitoring movement and containment of CO{sub 2

  16. A General Simulation Method for Multiple Bodies in Proximate Flight

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    2003-01-01

    Methods of unsteady aerodynamic simulation for an arbitrary number of independent bodies flying in close proximity are considered. A novel method to efficiently detect collision contact points is described. A method to compute body trajectories in response to aerodynamic loads, applied loads, and inter-body collisions is also given. The physical correctness of the methods is verified by comparison to a set of analytic solutions. The methods, combined with a Navier-Stokes solver, are used to demonstrate the possibility of predicting the unsteady aerodynamics and flight trajectories of moving bodies that involve rigid-body collisions.

  17. Effects of Hourly, Low-Incentive, and High-Incentive Pay on Simulated Work Productivity: Initial Findings with a New Laboratory Method

    ERIC Educational Resources Information Center

    Oah, Shezeen; Lee, Jang-Han

    2011-01-01

    The failures of previous studies to demonstrate productivity differences across different percentages of incentive pay may be partially due to insufficient simulation fidelity. The present study compared the effects of different percentages of incentive pay using a more advanced simulation method. Three payment methods were tested: hourly,…

  18. Comparison of Different Methods of Grading a Level Turn Task on a Flight Simulator

    NASA Technical Reports Server (NTRS)

    Heath, Bruce E.; Crier, Tomyka

    2003-01-01

    With the advancements in the computing power of personal computers, PC-based flight simulators and trainers have opened new avenues in the training of airplane pilots. It may be desirable to have the flight simulator make a quantitative evaluation of a pilot's training progress, thereby reducing the need for the flight instructor to watch every flight. In an experiment, university students conducted six different flights, each consisting of two level turns and lasting three minutes. By evaluating videotapes, two certified flight instructors provided separate letter grades for each turn. The level turns were also evaluated using two computer-based grading methods: one determined automated grades based on prescribed tolerances in bank angle, airspeed, and altitude, while the other used deviations in altitude and bank angle to compute a performance index and performance grades.
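
    The tolerance-based automated grading described above can be sketched as follows. The tolerance values, grade cutoffs, and sample format are invented for illustration and are not taken from the paper.

```python
def grade_turn(samples, bank_target=30.0, bank_tol=5.0,
               alt_target=3000.0, alt_tol=100.0,
               speed_target=100.0, speed_tol=10.0):
    """Grade a level turn from (bank_deg, altitude_ft, airspeed_kt) samples:
    score the fraction of samples inside all three tolerance bands, then map
    that fraction to a letter grade. All thresholds here are illustrative."""
    in_tol = sum(1 for bank, alt, spd in samples
                 if abs(bank - bank_target) <= bank_tol
                 and abs(alt - alt_target) <= alt_tol
                 and abs(spd - speed_target) <= speed_tol)
    frac = in_tol / len(samples)
    for cutoff, letter in ((0.9, "A"), (0.8, "B"), (0.7, "C"), (0.6, "D")):
        if frac >= cutoff:
            return letter
    return "F"

# 9 of 10 samples inside tolerance -> 90% of the turn flown in limits -> "A"
samples = [(30.0, 3000.0, 100.0)] * 9 + [(45.0, 3200.0, 120.0)]
print(grade_turn(samples))  # A
```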

  19. The GEANT low energy Compton scattering (GLECS) package for use in simulating advanced Compton telescopes

    NASA Astrophysics Data System (ADS)

    Kippen, R. Marc

    2004-02-01

    Compton γ-ray imaging is inherently based on the assumption of γ-rays scattering with free electrons. In reality, the non-zero momentum of target electrons bound in atoms blurs this ideal scattering response in a process known as Doppler broadening. The design and understanding of advanced Compton telescopes, thus, depends critically on the ability to accurately account for Doppler broadening effects. For this purpose, a Monte Carlo package that simulates detailed Doppler broadening has been developed for use with the powerful, general-purpose GEANT3 and GEANT4 radiation transport codes. This paper describes the design of this package, and illustrates results of comparison with selected experimental data.

  20. On Simulation of Edge Stretchability of an 800MPa Advanced High Strength Steel

    NASA Astrophysics Data System (ADS)

    Pathak, Nikky; Butcher, Cliff; Worswick, Michael

    2016-08-01

    In the present work, the edge stretchability of advanced high strength steel (AHSS) was investigated experimentally and numerically using both a hole expansion test and a tensile specimen with a central hole. The experimental fracture strains obtained using the hole expansion and hole tension test in both reamed and sheared edge conditions were in very good agreement, suggesting the tests are equivalent for fracture characterization. Isotropic finite-element simulations of both tests were performed to compare the stress-state near the hole edge.

  1. Absolute Time Error Calibration of GPS Receivers Using Advanced GPS Simulators

    DTIC Science & Technology

    1997-12-01

    29th Annual Precise Time and Time Interval (PTTI) Meeting. ABSOLUTE TIME ERROR CALIBRATION OF GPS RECEIVERS USING ADVANCED GPS SIMULATORS E.D... DC 20375 USA Abstract: Precise time transfer experiments using GPS with time stabilities under ten nanoseconds are commonly being reported within the... time transfer community. Relative calibrations are done by measuring the time error of one GPS receiver versus a "known master reference receiver."

  2. Advances in Systems and Technologies Toward Interoperating Operational Military C2 and Simulation Systems

    DTIC Science & Technology

    2014-06-01

    Standards Organization (SISO) provides a collaborative environment for exchange of information about... 19th ICCRTS "C2 Agility: Lessons Learned from Research and Operations" Advances in Systems and... Their vision is a future where military organizations can link their C2 and simulation systems without special preparation in support of coalition

  3. Advanced 3D inverse method for designing turbomachine blades

    SciTech Connect

    Dang, T.

    1995-10-01

    To meet the goal of 60% plant-cycle efficiency or better set in the ATS Program for baseload utility scale power generation, several critical technologies need to be developed. One such need is the improvement of component efficiencies. This work addresses the issue of improving the performance of turbo-machine components in gas turbines through the development of an advanced three-dimensional and viscous blade design system. This technology is needed to replace some elements in current design systems that are based on outdated technology.

  4. Advancing digital methods in the fight against communicable diseases.

    PubMed

    Chabot-Couture, Guillaume; Seaman, Vincent Y; Wenger, Jay; Moonen, Bruno; Magill, Alan

    2015-03-01

    Important advances are being made in the fight against communicable diseases by using new digital tools. While they can be a challenge to deploy at scale, GPS-enabled smartphones, electronic dashboards and computer models have multiple benefits: they can facilitate program operations, lead to new insights about disease transmission and support strategic planning. Today, tools such as these are used to vaccinate more children against polio in Nigeria, reduce the malaria burden in Zambia and help predict the spread of the Ebola epidemic in West Africa.

  5. Monte Carlo simulation and scatter correction of the GE Advance PET scanner with SimSET and Geant4

    NASA Astrophysics Data System (ADS)

    Barret, Olivier; Carpenter, T. Adrian; Clark, John C.; Ansorge, Richard E.; Fryer, Tim D.

    2005-10-01

    For Monte Carlo simulations to be used as an alternative solution to perform scatter correction, accurate modelling of the scanner as well as speed is paramount. General-purpose Monte Carlo packages (Geant4, EGS, MCNP) allow a detailed description of the scanner but are not efficient at simulating voxel-based geometries (patient images). On the other hand, dedicated codes (SimSET, PETSIM) will perform well for voxel-based objects but will be poor in their capacity of simulating complex geometries such as a PET scanner. The approach adopted in this work was to couple a dedicated code (SimSET) with a general-purpose package (Geant4) to have the efficiency of the former and the capabilities of the latter. The combined SimSET+Geant4 code (SimG4) was assessed on the GE Advance PET scanner and compared to the use of SimSET only. A better description of the resolution and sensitivity of the scanner and of the scatter fraction was obtained with SimG4. The accuracy of scatter correction performed with SimG4 and SimSET was also assessed from data acquired with the 20 cm NEMA phantom. SimG4 was found to outperform SimSET and to give slightly better results than the GE scatter correction methods installed on the Advance scanner (curve fitting and scatter modelling for the 300-650 keV and 375-650 keV energy windows, respectively). In the presence of a hot source close to the edge of the field of view (as found in oxygen scans), the GE curve-fitting method was found to fail whereas SimG4 maintained its performance.

  7. Computational methods for coupling microstructural and micromechanical materials response simulations

    SciTech Connect

    HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK

    2000-04-01

    Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

  8. Classification methods for noise transients in advanced gravitational-wave detectors II: performance tests on Advanced LIGO data

    NASA Astrophysics Data System (ADS)

    Powell, Jade; Torres-Forné, Alejandro; Lynch, Ryan; Trifirò, Daniele; Cuoco, Elena; Cavaglià, Marco; Heng, Ik Siong; Font, José A.

    2017-02-01

    The data taken by the Advanced LIGO and Virgo gravitational-wave detectors contains short-duration noise transients that limit the significance of astrophysical detections and reduce the duty cycle of the instruments. As the advanced detectors are reaching sensitivity levels that allow for multiple detections of astrophysical gravitational-wave sources, it is crucial to achieve a fast and accurate characterization of non-astrophysical transient noise shortly after it occurs in the detectors. Previously we presented three methods for the classification of transient noise sources: Principal Component Analysis for Transients (PCAT), Principal Component LALInference Burst (PC-LIB) and Wavelet Detection Filter with Machine Learning (WDF-ML). In this study we carry out the first performance tests of these algorithms on gravitational-wave data from the Advanced LIGO detectors. We use the data taken between the 3rd of June 2015 and the 14th of June 2015 during the 7th engineering run (ER7), and outline the improvements made to increase the performance and lower the latency of the algorithms on real data. This work provides an important test for understanding the performance of these methods on real, non-stationary data in preparation for the second advanced gravitational-wave detector observation run, planned for later this year. We show that all methods can classify transients in non-stationary data with a high level of accuracy and show the benefits of using multiple classifiers.

  9. A fully nonlinear characteristic method for gyrokinetic simulation

    SciTech Connect

    Parker, S.E.; Lee, W.W.

    1992-07-01

    We present a new scheme which evolves the perturbed part of the distribution function along a set of characteristics that solves the fully nonlinear gyrokinetic equations. This nonlinear characteristic method for particle simulation is an extension of the partially linear weighting scheme, and may be considered an improvement of existing {delta}f methods. Some of the features of this new method are: the ability to keep all of the nonlinearities, particularly those associated with parallel acceleration; the loading of the physical equilibrium distribution function f{sub o} (e.g., a Maxwellian), with or without the multiple spatial scale approximation; the use of a single set of trajectories for the particles; and the retention of the conservation properties of the original gyrokinetic system in the numerically converged limit. Therefore, one can take advantage of the low noise property of the weighting scheme together with quiet start techniques to simulate weak instabilities with substantially fewer particles than required for a conventional simulation. The new method is used to study a one-dimensional drift wave model which isolates the parallel velocity nonlinearity. A mode coupling calculation of the saturation mechanism is given, which is in good agreement with the simulation results and predicts a considerably lower saturation level than the estimate of Sagdeev and Galeev. Finally, we extend the nonlinear characteristic method to the electromagnetic gyrokinetic equations in general geometry.
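
    As a heavily simplified illustration of the delta-f idea underlying such weighting schemes (not the paper's full gyrokinetic method), each marker carries a weight w = delta_f / f that evolves along its characteristic. For a loaded Maxwellian equilibrium the update can be sketched as follows; the trajectory quantities fed to the function are invented inputs.

```python
def delta_f_weight_update(w, v, dvdt, vth, dt):
    """One explicit-Euler update of a delta-f marker weight w = delta_f / f.
    Along a characteristic, dw/dt = -(1 - w) * d(ln f0)/dt; for a Maxwellian
    f0 ~ exp(-v**2 / (2 * vth**2)), d(ln f0)/dt = -(v * dvdt) / vth**2.
    Sampling only the deviation from f0 is what gives delta-f methods their
    low noise when simulating weak instabilities."""
    dlnf0_dt = -(v * dvdt) / vth**2
    return w + dt * (-(1.0 - w) * dlnf0_dt)

# A marker accelerated up the tail of f0 (f0 decreasing along its orbit)
# must pick up positive weight so that the total f is conserved:
print(delta_f_weight_update(w=0.0, v=1.0, dvdt=1.0, vth=1.0, dt=0.01))  # 0.01
```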

  10. Orthogonal Metal Cutting Simulation Using Advanced Constitutive Equations with Damage and Fully Adaptive Numerical Procedure

    NASA Astrophysics Data System (ADS)

    Saanouni, Khemais; Labergère, Carl; Issa, Mazen; Rassineux, Alain

    2010-06-01

    This work proposes a complete adaptive numerical methodology for 2D machining simulation which uses 'advanced' elastoplastic constitutive equations coupling thermal effects, large elasto-viscoplasticity with mixed nonlinear hardening, ductile damage, and contact with friction. Fully coupled (strong coupling) thermo-elasto-viscoplastic-damage constitutive equations, based on state variables under large plastic deformation and developed for metal forming simulation, are presented. The relevant numerical aspects concerning the local integration scheme as well as the global resolution strategy and the adaptive remeshing facility are briefly discussed. Applications are made to orthogonal metal cutting with chip formation and segmentation at high cutting velocity. The interactions between hardening, plasticity, ductile damage, and thermal effects, and their influence on adiabatic shear band formation including the formation of cracks, are investigated.

  11. Advanced thermal energy management: A thermal test bed and heat pipe simulation

    NASA Technical Reports Server (NTRS)

    Barile, Ronald G.

    1986-01-01

    Work initiated on a common-module thermal test simulation was continued, and a second project on heat pipe simulation was begun. The test bed, constructed from surplus Skylab equipment, was modeled and solved for various thermal load and flow conditions. At low thermal load, the radiator fluid, Coolanol 25, thickened because of its temperature-dependent viscosity; this was avoided by using a regenerator heat exchanger. Other possible solutions modeled include a radiator heater and shunting heat from the central thermal bus to the radiator. Also, module air temperature can become excessive under high avionics load. A second project concerning advanced heat pipe concepts was initiated. A program was written which calculates fluid physical properties, liquid and vapor pressure in the evaporator and condenser, fluid flow rates, and thermal flux. The program is directed at evaluating newer heat pipe wicks and geometries, especially water in an artery surrounded by six vapor channels. Effects of temperature, groove and slot dimensions, and wick properties are reported.

  12. Ejector nozzle test results at simulated flight conditions for an advanced supersonic transport propulsion system

    NASA Technical Reports Server (NTRS)

    Nelson, D. P.; Bresnahan, D. L.

    1983-01-01

    Results are presented of wind tunnel tests conducted to verify the performance improvements of a refined ejector nozzle design for advanced supersonic transport propulsion systems. The analysis of results obtained at simulated engine operating conditions is emphasized. Tests were conducted with models of approximately 1/10th scale which were configured to simulate nozzle operation at takeoff, subsonic cruise, transonic cruise, and supersonic cruise. Transonic cruise operation was not a consideration during the nozzle design phase, although an evaluation at this condition was later conducted. Test results, characterized by thrust and flow coefficients, are given for a range of nozzle pressure ratios, emphasizing the thrust performance at the engine operating conditions predicted for each flight Mach number. The results indicate that nozzle performance goals were met or closely approximated at takeoff and supersonic cruise, while subsonic cruise performance was within 2.3 percent of the goal with further improvement possible.

  13. [Research advances in simulating regional crop growth under water stress by remote sensing].

    PubMed

    Zhang, Li; Wang, Shili; Ma, Yuping

    2005-06-01

    It is of practical significance to simulate crop growth under water stress, especially at the regional scale. Combined with remote sensing information, crop growth simulation models can provide an effective way to estimate regional crop growth, development and yield formation under water stress. In this paper, related research methods and results are summarized, and some problems that need further study are discussed.

  14. Scalable Iterative Solvers Applied to 3D Parallel Simulation of Advanced Semiconductor Devices

    NASA Astrophysics Data System (ADS)

    García-Loureiro, A. J.; Aldegunde, M.; Seoane, N.

    2009-08-01

    We have studied the performance of a preconditioned iterative solver to speed up a 3D semiconductor device simulator. Since 3D simulations necessitate large computing resources, the choice of algorithms and their parameters become of utmost importance. This code uses a density gradient drift-diffusion semiconductor transport model based on the finite element method which is one of the most general and complex discretisation techniques. It has been implemented for a distributed memory multiprocessor environment using the Message Passing Interface (MPI) library. We have applied this simulator to a 67 nm effective gate length Si MOSFET.
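
    The flavour of the preconditioned iterative solvers studied here can be conveyed with a toy Jacobi-preconditioned conjugate gradient in pure Python. The paper's solver works on large distributed sparse systems arising from the finite element discretisation; this sketch, with an invented 2x2 symmetric positive-definite system, only shows the algorithmic skeleton.

```python
def jacobi_pcg(A, b, tol=1e-10, max_iter=200):
    """Jacobi-preconditioned conjugate gradient for a dense SPD matrix given
    as lists of lists. Production device simulators use distributed sparse
    storage and stronger preconditioners; this is only the skeleton."""
    n = len(b)
    x = [0.0] * n
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    r = [b[i] - y for i, y in enumerate(matvec(x))]       # initial residual
    z = [r[i] / A[i][i] for i in range(n)]                # Jacobi preconditioner
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:         # converged
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# Solve [[4, 1], [1, 3]] x = [1, 2]; exact solution is [1/11, 7/11].
print(jacobi_pcg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0]))
```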

  15. Atomistic hybrid DSMC/NEMD method for nonequilibrium multiscale simulations

    NASA Astrophysics Data System (ADS)

    Gu, Kai; Watkins, Charles B.; Koplik, Joel

    2010-03-01

    A multiscale hybrid method for coupling the direct simulation Monte Carlo (DSMC) method to the nonequilibrium molecular dynamics (NEMD) method is introduced. The method addresses Knudsen layer type gas flows within a few mean free paths of an interface or about an object with dimensions of the order of a few mean free paths. It employs the NEMD method to resolve nanoscale phenomena closest to the interface along with coupled DSMC simulation of the remainder of the Knudsen layer. The hybrid DSMC/NEMD method is a particle-based algorithm without a buffer zone. It incorporates a new, modified generalized soft sphere (MGSS) molecular collision model to improve the poor computational efficiency of the traditional generalized soft sphere (GSS) model and to achieve DSMC compatibility with Lennard-Jones NEMD molecular interactions. An equilibrium gas, a Fourier thermal flow, and an oscillatory Couette flow are simulated to validate the method. The method shows good agreement with Maxwell-Boltzmann theory for the equilibrium system, Chapman-Enskog theory for Fourier flow, and pure DSMC simulations for oscillatory Couette flow. Speedup in CPU time of the hybrid solver is benchmarked against a pure NEMD solver baseline for different system sizes and solver domain partitions. Finally, the hybrid method is applied to investigate the interaction of argon gas with solid surface molecules in a parametric study of the influence of wetting effects and solid molecular mass on energy transfer and thermal accommodation coefficients. It is determined that wetting effect strength and solid molecular mass have a significant impact on the energy transfer between gas and solid phases and on the thermal accommodation coefficient.

  16. Electron acceleration with advanced injection methods at the ASTRA laser

    NASA Astrophysics Data System (ADS)

    Poder, Kristjan; Carreira-Lopes, Nelson; Wood, Jonathan; Cole, Jason; Dangor, Bucker; Foster, Peta; Gopal, Ram; Kamperidis, Christos; Kononenko, Olena; Mangles, Stuart; Olgun, Halil; Palmer, Charlotte; Symes, Daniel; Pattathil, Rajeev; Najmudin, Zulfikar; Imperial College London Team; Central Laser Facility Collaboration; Tata Institute of Fundamental Research Collaboration; DESY Collaboration

    2015-11-01

    Recent electron acceleration results from the ASTRA laser facility are presented. Experiments were performed using both the 40 TW ASTRA and the 350 TW ASTRA-Gemini laser. Fundamental electron beam properties relating to its quality were investigated both experimentally and with PIC simulations. For increased control over such parameters, various injection mechanisms such as self-injection and ionization injection were employed. Particular interest is given to the dynamics of ionization injected electrons in strongly driven wakes.

  17. Advanced Productivity Analysis Methods for Air Traffic Control Operations

    DTIC Science & Technology

    1976-12-01

    games, corporate-planning models, freeway simulation, hospital simulation, etc. The types of users range from engineers and scientists to business...radio and interphone communications and direct-voice communication). For each identified task, we selected a "reasonable" minimum task performance...search parameters. To compute the Work Activity actual task times (e.g., for interphone communication, RDP/RDP operations, and flight strip processing

  18. Parallel methods for dynamic simulation of multiple manipulator systems

    NASA Technical Reports Server (NTRS)

    Mcmillan, Scott; Sadayappan, P.; Orin, David E.

    1993-01-01

    In this paper, efficient dynamic simulation algorithms for a system of m manipulators, cooperating to manipulate a large load, are developed; their performance, using two possible forms of parallelism on a general-purpose parallel computer, is investigated. One form, temporal parallelism, is obtained with the use of parallel numerical integration methods. A speedup of 3.78 on four processors of a CRAY Y-MP8 was achieved with a parallel four-point block predictor-corrector method for the simulation of a four-manipulator system. These multi-point methods suffer from reduced accuracy, and when comparing these runs with a serial integration method, the speedup can be as low as 1.83 for simulations with the same accuracy. To regain the performance lost due to accuracy problems, a second form of parallelism is employed. Spatial parallelism allows most of the dynamics of each manipulator chain to be computed simultaneously. Used exclusively in the four-processor case, this form of parallelism in conjunction with a serial integration method results in a speedup of 3.1 on four processors over the best serial method. In cases where there are either more processors available or fewer chains in the system, the multi-point parallel integration methods are still advantageous despite the reduced accuracy, because both forms of parallelism can then combine to generate more parallel tasks and achieve greater effective speedups. This paper also includes results for these cases.
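    The paper's four-point block method evaluates several integration points concurrently; as a simpler serial member of the same predictor-corrector family (an illustration, not the authors' scheme), a two-step Adams-Bashforth predictor with a trapezoidal Adams-Moulton corrector looks like this:

```python
import math

def f(t, y):
    return -y  # test ODE y' = -y, exact solution exp(-t)

def abm2(y0, t0, t1, n):
    """Two-step Adams-Bashforth predictor / Adams-Moulton corrector (PECE)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    f_prev = f(t, y)
    # bootstrap the first step with Heun's method
    y = y + 0.5 * h * (f_prev + f(t + h, y + h * f_prev))
    t += h
    for _ in range(n - 1):
        f_curr = f(t, y)
        y_pred = y + h * (1.5 * f_curr - 0.5 * f_prev)  # predictor (AB2)
        y = y + 0.5 * h * (f(t + h, y_pred) + f_curr)   # corrector (AM2)
        f_prev, t = f_curr, t + h
    return y

y_num = abm2(1.0, 0.0, 1.0, 200)  # approximates exp(-1)
```

    A block variant would advance a block of future points from the same history, which is what creates the independent work for temporal parallelism.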

  19. Low dimensional gyrokinetic PIC simulation by δf method

    NASA Astrophysics Data System (ADS)

    Chen, C. M.; Nishimura, Yasutaro; Cheng, C. Z.

    2015-11-01

    A step-by-step development of our low dimensional gyrokinetic Particle-in-Cell (PIC) simulation is reported. One-dimensional PIC simulation of Langmuir wave dynamics is benchmarked. We then take temporal plasma echo as a test problem to incorporate the δf method. Electrostatic driftwave simulation in one-dimensional slab geometry is then carried out in the presence of finite density gradients. By carefully diagnosing contour plots of the δf values in phase space, we discuss the saturation mechanism of the driftwave instabilities. A v∥ formulation is employed in our new electromagnetic gyrokinetic method by solving a Helmholtz equation for the time derivative of the vector potential. Electron and ion momentum balance equations are employed in the time derivative of Ampère's law. This work is supported by the Ministry of Science and Technology of Taiwan, MOST 103-2112-M-006-007 and MOST 104-2112-M-006-019.

  20. Hybrid-CVFE method for flexible-grid reservoir simulation

    SciTech Connect

    Fung, L.S.K.; Buchanan, L.; Sharma, R. )

    1994-08-01

    Well flows and pressures are the most important boundary conditions in reservoir simulation. In a typical simulation, rapid changes and large pressure, temperature, saturation, and composition gradients occur in near-well regions. Treatment of these near-well phenomena significantly affects the accuracy of reservoir simulation results; therefore, extensive efforts have been devoted to the numerical treatment of wells and near-well flows. The flexible control-volume finite-element (CVFE) method is used to construct hybrid grids. The method involves use of a local cylindrical or elliptical grid to represent near-well flow accurately while honoring complex reservoir boundaries. The grid transition is smooth without any special discretization approximation, which eliminates the grid transition problem experienced with Cartesian local grid refinement and hybrid Cartesian gridding techniques.

  1. Validation of chemistry models employed in a particle simulation method

    NASA Technical Reports Server (NTRS)

    Haas, Brian L.; Mcdonald, Jeffrey D.

    1991-01-01

    The chemistry models employed in a statistical particle simulation method, as implemented on the Intel iPSC/860 multiprocessor computer, are validated and applied. Chemical relaxation of five-species air in adiabatic gas reservoirs involves 34 simultaneous dissociation, recombination, and atomic-exchange reactions. The reaction rates employed in the analytic solutions are obtained from Arrhenius experimental correlations as functions of temperature for reservoirs in thermal equilibrium. Favorable agreement with the analytic solutions validates the simulation when applied to relaxation of O2 toward equilibrium in reservoirs dominated by dissociation and recombination, respectively, and when applied to relaxation of air in the temperature range 5000 to 30,000 K. A flow of O2 over a circular cylinder at high Mach number is simulated to demonstrate application of the method to multidimensional reactive flows.
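    The Arrhenius correlations referred to above take the standard modified form k(T) = A·T^n·exp(−Ea/RT). A minimal sketch, with hypothetical order-of-magnitude parameters rather than the paper's actual fit constants:

```python
import math

def arrhenius(T, A, n, Ea):
    """Modified Arrhenius rate k(T) = A * T**n * exp(-Ea / (R*T)).

    A, n, Ea are fit parameters (units set by the correlation);
    R is the gas constant in J/(mol*K).
    """
    R = 8.314462618
    return A * T**n * math.exp(-Ea / (R * T))

# illustrative dissociation-like parameters (order of magnitude only)
k_5000 = arrhenius(5000.0, 2.0e15, -0.5, 4.9e5)
k_10000 = arrhenius(10000.0, 2.0e15, -0.5, 4.9e5)
```

    For a strongly activated reaction the exponential dominates the weak T^n prefactor, so the rate rises steeply with temperature across the 5000-30,000 K range studied.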

  2. CFD Simulations of a Regenerative Process for Carbon Dioxide Capture in Advanced Gasification Based Power Systems

    SciTech Connect

    Arastoopour, Hamid; Abbasian, Javad

    2014-07-31

    A variant of the method of moments, the Finite size domain Complete set of trial functions Method Of Moments (FCMOM), was used to solve the population balance equations. The PBE model was implemented in a commercial CFD code, Ansys Fluent 13.0. The code was used to test the model in some simple cases, and the results were verified against available analytical solutions in the literature. Furthermore, the code was used to simulate CO2 capture in a packed bed, and the results were in excellent agreement with the experimental data obtained in the packed bed. The National Energy Technology Laboratory (NETL) Carbon Capture Unit (C2U) design was used to simulate the hydrodynamics of the cold-flow gas/solid system (Clark et al.58). The results indicate that the pressure drop predicted by the model is in good agreement with the experimental data. Furthermore, the model was shown to be able to predict the chugging behavior observed during the experiment. The model was used as a base case for simulations of reactive flow at elevated pressures and temperatures. The results indicate that by controlling the solid circulation rate, up to 70% CO2 removal can be achieved, and that the solid holdup in the riser is one of the main factors controlling the extent of CO2 removal. The CFD/PBE simulation model indicates that by using a simulated syngas with a composition of 20% CO2, 20% H2O, 30% CO, and 30% H2, the composition (wet basis) in the reactor outlet corresponded to about 60% CO2 capture with an exit gas containing 65% H2. A preliminary base-case design was developed for a regenerative MgO-based pre-combustion carbon capture process for a 500 MW IGCC power plant. To minimize the external energy requirement, an extensive heat integration network was developed in Aspen/HYSYS® to produce the steam required in the regenerator and heat integration. In this process, liquid CO2 produced at 50 atm can easily be pumped and sequestered or stored. The preliminary economic analyses indicate that the

  3. Advanced Methods for Aircraft Engine Thrust and Noise Benefits: Nozzle-Inlet Flow Analysis

    NASA Technical Reports Server (NTRS)

    Morgan, Morris H.; Gilinsky, Mikhail; Patel, Kaushal; Coston, Calvin; Blankson, Isaiah M.

    2003-01-01

    The research is focused on a wide range of problems in the propulsion field, as well as on experimental testing and theoretical and numerical simulation analyses for advanced aircraft and rocket engines. Results obtained are based on analytical methods, numerical simulations, and experimental tests at the NASA LaRC and Hampton University computer complexes and experimental facilities. The main objective of this research is injection, mixing, and combustion enhancement in propulsion systems. The sub-projects in the reporting period are: (A) Aero-performance and acoustics of telescope-shaped designs; the work included a pylon set application for a SCRAMJET. (B) An analysis of sharp-edged nozzle exit designs for effective fuel injection into the flow stream in air-breathing engines: triangular-round and diamond-round nozzles. (C) Measurement technique improvements for the HU Low Speed Wind Tunnel (HU LSWT), including an automatic data acquisition system and a two-component (drag-lift) balance system. In addition, a course in the field of aerodynamics was developed for the teaching and training of HU students.

  4. Numeric Modified Adomian Decomposition Method for Power System Simulations

    SciTech Connect

    Dimitrovski, Aleksandar D; Simunovic, Srdjan; Pannala, Sreekanth

    2016-01-01

    This paper investigates the applicability of the numeric Wazwaz-El-Sayed modified Adomian Decomposition Method (WES-ADM) for time-domain simulation of power systems. WES-ADM is a numerical approximation method, based on a modified Adomian decomposition (ADM) technique, for the solution of nonlinear ordinary differential equations. The nonlinear terms in the differential equations are approximated using Adomian polynomials. In this paper WES-ADM is applied to time-domain simulations of multimachine power systems. The WECC 3-generator, 9-bus system and the IEEE 10-generator, 39-bus system have been used to test the applicability of the approach. Several fault scenarios have been tested. It has been found that the proposed approach is faster than the trapezoidal method with comparable accuracy.
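    To make the ADM idea concrete (a generic textbook illustration, not WES-ADM itself), consider y' = −y², y(0) = 1, whose exact solution 1/(1+t) has the power series 1 − t + t² − t³ + …. The solution is expanded as y = Σ y_k, the nonlinearity N(y) = y² is expanded in Adomian polynomials A_k = Σ_{i+j=k} y_i·y_j, and each new term is y_{k+1} = −∫₀ᵗ A_k dt:

```python
import numpy as np

def poly_int(a):
    """Integrate ascending-order polynomial coefficients from 0 to t."""
    return np.concatenate(([0.0], a / np.arange(1.0, len(a) + 1.0)))

def adomian_series(n_terms):
    """ADM partial sum for y' = -y**2, y(0) = 1 (exact solution 1/(1 + t))."""
    ys = [np.array([1.0])]  # y_0 = initial condition
    for k in range(n_terms - 1):
        # Adomian polynomial A_k = sum over i+j=k of y_i * y_j
        prods = [np.convolve(ys[i], ys[k - i]) for i in range(k + 1)]
        A = np.zeros(max(len(p) for p in prods))
        for p in prods:
            A[:len(p)] += p
        ys.append(-poly_int(A))  # y_{k+1} = -integral of A_k
    total = np.zeros(max(len(y) for y in ys))
    for y in ys:
        total[:len(y)] += y
    return total  # ascending power-series coefficients of the partial sum

coeffs = adomian_series(6)  # recovers the geometric series of 1/(1+t)
```

    Each term costs only polynomial algebra, which is why ADM-based schemes can outpace implicit integrators such as the trapezoidal method on some problems.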

  5. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC).

    SciTech Connect

    Schultz, Peter Andrew

    2011-12-01

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontiunuum scale and the constitutive models they inform or generate. This Report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum scale modeling and simulation (M&S), and a plan to incrementally incorporate effective V&V into subcontinuum scale M&S destined for use in the NEAMS Waste IPSC work flow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum scale phenomena.

  6. A demonstration of motion base design alternatives for the National Advanced Driving Simulator

    NASA Technical Reports Server (NTRS)

    Mccauley, Michael E.; Sharkey, Thomas J.; Sinacori, John B.; Laforce, Soren; Miller, James C.; Cook, Anthony

    1992-01-01

    A demonstration of the capability of NASA's Vertical Motion Simulator (VMS) to simulate two alternative motion base designs for the National Advanced Driving Simulator (NADS) is reported. The VMS is located at ARC. The motion base conditions used in this demonstration were as follows: (1) a large translational motion base; and (2) a motion base design with limited translational capability. The latter had translational capability representative of a typical synergistic motion platform. These alternatives were selected to test the prediction that large-amplitude translational motion would result in a lower incidence or severity of simulator-induced sickness (SIS) than would a limited translational motion base. A total of 10 drivers performed two tasks, slaloms and quick-stops, using each of the motion bases. Physiological, objective, and subjective measures were collected. No reliable differences in SIS between the motion base conditions were found in this demonstration. However, in light of the cost considerations and engineering challenges associated with implementing a large translational motion base, performance of a formal study is recommended.

  7. Space-based radar representation in the advanced warfighting simulation (AWARS)

    NASA Astrophysics Data System (ADS)

    Phend, Andrew E.; Buckley, Kathryn; Elliott, Steven R.; Stanley, Page B.; Shea, Peter M.; Rutland, Jimmie A.

    2004-09-01

    Space and orbiting systems impact multiple battlefield operating systems (BOS). Space support to current operations is a perfect example of how the United States fights. Satellite-aided munitions, communications, navigation, and weather systems combine to achieve military objectives in a relatively short amount of time. Through representation of space capabilities within models and simulations, the military will have the ability to train and educate officers and soldiers to fight from the high ground of space, or to conduct analysis and determine the requirements or utility of transformed forces empowered with advanced space-based capabilities. The Army Vice Chief of Staff acknowledged deficiencies in space modeling and simulation during the September 2001 Space Force Management Analysis Review (FORMAL) and directed that a multi-disciplinary team be established to recommend a service-wide roadmap to address shortcomings. A Focus Area Collaborative Team (FACT), led by the U.S. Army Space & Missile Defense Command with participation across the Army, confirmed the weaknesses in scope, consistency, correctness, completeness, availability, and usability of space modeling and simulation (M&S) for Army applications. The FACT addressed the need to develop a roadmap to remedy space M&S deficiencies using a highly parallelized process and schedule designed to support a recommendation during the Sep 02 meeting of the Army Model and Simulation Executive Council (AMSEC).

  8. Direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)

    SciTech Connect

    Carroll, C.C.; Owen, J.E.

    1988-05-01

    A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code, since the need for a high-level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer, without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.

  9. A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Owen, Jeffrey E.

    1988-01-01

    A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code, since the need for a high-level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer, without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.

  10. Simulation as a Teaching Method in Family Communication Class.

    ERIC Educational Resources Information Center

    Parmenter, C. Irvin

    Simulation was used as a teaching method in a family communication class to foster a feeling of empathy with others. Although the course was originally designed to be taught as a seminar, the large number of students prompted the division of students into groups of five or six, characterized as families, each of which was to discuss concepts and…

  11. A multiscale quantum mechanics/electromagnetics method for device simulations.

    PubMed

    Yam, ChiYung; Meng, Lingyi; Zhang, Yu; Chen, GuanHua

    2015-04-07

    Multiscale modeling has become a popular tool for research in areas including materials science, microelectronics, biology, and chemistry. In this tutorial review, we describe a newly developed multiscale computational method that incorporates quantum mechanics into electronic device modeling, with the electromagnetic environment included through classical electrodynamics. In the quantum mechanics/electromagnetics (QM/EM) method, the regions of the system where active electron scattering processes take place are treated quantum mechanically, while the surroundings are described by Maxwell's equations and a semiclassical drift-diffusion model. The QM model and the EM model are solved, respectively, in different regions of the system in a self-consistent manner. Potential distributions and current densities at the interface between the QM and EM regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. The method is illustrated in the simulation of several realistic systems. In the case of junctionless field-effect transistors, transfer characteristics are obtained and a good agreement between experiments and simulations is achieved. Optical properties of a tandem photovoltaic cell are studied, and the simulations demonstrate that multiple QM regions are coupled through the classical EM model. Finally, the study of a carbon nanotube-based molecular device shows the accuracy and efficiency of the QM/EM method.
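    The self-consistent exchange of interface boundary data between two domain solvers can be caricatured by an alternating Schwarz iteration on a toy 1D Laplace problem. Everything below is illustrative (it is not the QM/EM method): each "solver" owns one overlapping subdomain and repeatedly takes its interface boundary condition from the other's latest solution.

```python
def schwarz_coupling(n_iter=50):
    """Alternating Schwarz iteration for u'' = 0 on [0, 1], u(0) = 0, u(1) = 1.

    Two overlapping regions, [0, 0.6] and [0.4, 1], stand in for the two
    coupled domains; each region solve is exact (linear in x), and interface
    values are exchanged until the global solution u(x) = x is recovered.
    """
    u06 = 0.0  # initial guess for u at the left region's right boundary, x = 0.6
    for _ in range(n_iter):
        u04 = u06 * (0.4 / 0.6)  # left solve (linear from u(0)=0) evaluated at x = 0.4
        u06 = u04 + (1.0 - u04) * (0.6 - 0.4) / (1.0 - 0.4)  # right solve at x = 0.6
    return u06  # exact global solution has u(0.6) = 0.6

val = schwarz_coupling()
```

    Each pass contracts the interface error by a fixed factor, so the coupled iteration converges geometrically to the single-domain solution.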

  12. Fast simulation method for airframe analysis based on big data

    NASA Astrophysics Data System (ADS)

    Liu, Dongliang; Zhang, Lixin

    2016-10-01

    In this paper, we apply a big-data method to structural analysis by considering the correlations between loads, between loads and results, and between results. By means of fundamental mathematics and physical rules, the principle, feasibility, and error control of the method are discussed. We then establish the analysis process and procedures. The method is validated by two examples. The results show that the fast simulation method based on big data is fast and precise when applied to structural analysis.

  13. On the potential of computational methods and numerical simulation in ice mechanics

    NASA Astrophysics Data System (ADS)

    Bergan, Pål G.; Cammaert, Gus; Skeie, Geir; Tharigopula, Venkatapathi

    2010-06-01

    This paper deals with the challenge of developing better methods and tools for analysing interaction between sea ice and structures and, in particular, to be able to calculate ice loads on these structures. Ice loads have traditionally been estimated using empirical data and "engineering judgment". However, it is believed that computational mechanics and advanced computer simulations of ice-structure interaction can play an important role in developing safer and more efficient structures, especially for irregular structural configurations. The paper explains the complexity of ice as a material in computational mechanics terms. Some key words here are large displacements and deformations, multi-body contact mechanics, instabilities, multi-phase materials, inelasticity, time dependency and creep, thermal effects, fracture and crushing, and multi-scale effects. The paper points towards the use of advanced methods like ALE formulations, mesh-less methods, particle methods, XFEM, and multi-domain formulations in order to deal with these challenges. Some examples involving numerical simulation of interaction and loads between level sea ice and offshore structures are presented. It is concluded that computational mechanics may prove to become a very useful tool for analysing structures in ice; however, much research is still needed to achieve satisfactory reliability and versatility of these methods.

  14. Numerical Simulation of Turbulent Flames using Vortex Methods.

    DTIC Science & Technology

    1987-10-05

    layer," Phys. Fluids, 30, pp. 706-721, 1987. (11) Ghoniem, A.F., and Knio, O.M., "Numerical Simulation of Flame Propagation in Constant Volume Chambers...1985. 4. "Numerical solution of a confined shear layer using vortex methods," The International Symposium on Computational Fluid Dynamics, Tokyo...Symposium on Computational Fluid Dynamics, Tokyo, Japan, September 1985. 8. "Application of Computational Methods in Turbulent Reacting Flow

  15. Efficient method for transport simulations in quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Maczka, Mariusz; Pawlowski, Stanislaw

    2016-12-01

    An efficient method for simulating quantum transport in quantum cascade lasers is presented. The calculations are performed within a simple approximation inspired by Büttiker probes and based on a finite model for semiconductor superlattices. The formalism of non-equilibrium Green's functions is applied to determine selected transport parameters in a typical structure of a terahertz laser. Results were compared with those obtained for an infinite model as well as with other methods described in the literature.
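    As a minimal illustration of the non-equilibrium Green's function formalism the abstract refers to (a textbook toy, not the authors' superlattice model), the transmission through a single tight-binding site coupled to two semi-infinite 1D leads has a closed form; for a perfect chain it should equal 1 inside the lead band:

```python
def transmission(E, t=1.0, eps=0.0):
    """NEGF transmission of a one-site device between two 1D tight-binding
    leads with hopping t and device on-site energy eps.

    For energies inside the lead band (|E| < 2t), the lead surface
    self-energy is Sigma(E) = (E - i*sqrt(4*t**2 - E**2)) / 2.
    """
    sigma = (E - 1j * (4.0 * t * t - E * E) ** 0.5) / 2.0
    gamma = -2.0 * sigma.imag           # lead broadening Gamma = i(Sigma - Sigma*)
    G = 1.0 / (E - eps - 2.0 * sigma)   # retarded Green's function of the site
    return gamma * gamma * abs(G) ** 2  # Fisher-Lee: T = Gamma_L * Gamma_R * |G|^2

T_mid = transmission(0.5)  # perfect chain (eps = 0): expect T = 1 in the band
```

    Büttiker probes extend this picture with additional fictitious leads whose chemical potentials are tuned to carry no net current, giving a cheap model of scattering and dephasing.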

  16. Advanced methods of microscope control using μManager software

    PubMed Central

    Edelstein, Arthur D.; Tsuchida, Mark A.; Amodaj, Nenad; Pinkard, Henry; Vale, Ronald D.; Stuurman, Nico

    2014-01-01

    μManager is an open-source, cross-platform desktop application for controlling a wide variety of motorized microscopes, scientific cameras, stages, illuminators, and other microscope accessories. Since its inception in 2005, μManager has grown to support a wide range of microscopy hardware and is now used by thousands of researchers around the world. The application provides a mature graphical user interface and offers open programming interfaces to facilitate plugins and scripts. Here, we present a guide to using some of the recently added advanced μManager features, including hardware synchronization, simultaneous use of multiple cameras, projection of patterned light onto a specimen, live slide mapping, imaging with multi-well plates, particle localization and tracking, and high-speed imaging. PMID:25606571

  17. An advanced deterministic method for spent fuel criticality safety analysis

    SciTech Connect

    DeHart, M.D.

    1998-01-01

    Over the past two decades, criticality safety analysts have come to rely to a large extent on Monte Carlo methods for criticality calculations. Monte Carlo has become popular because of its capability to model complex, non-orthogonal configurations of fissile materials typical of real-world problems. Over the last few years, however, interest in deterministic transport methods has been revived, due to shortcomings in the stochastic nature of Monte Carlo approaches for certain types of analyses. Specifically, deterministic methods are superior to stochastic methods for calculations requiring accurate neutron density distributions or differential fluxes. Although Monte Carlo methods are well suited for eigenvalue calculations, they lack the localized detail necessary to assess uncertainties and sensitivities important in determining a range of applicability. Monte Carlo methods are also inefficient as a transport solution for multiple-pin depletion methods. Discrete ordinates methods have long been recognized as one of the most rigorous and accurate approximations used to solve the transport equation. However, until recently, geometric constraints in finite-differencing schemes have made discrete ordinates methods impractical for non-orthogonal configurations such as reactor fuel assemblies. The development of an extended step characteristic (ESC) technique removes the grid-structure limitations of traditional discrete ordinates methods. The NEWT computer code, a discrete ordinates code built upon the ESC formalism, is being developed as part of the SCALE code system. This paper will demonstrate the power, versatility, and applicability of NEWT as a state-of-the-art solution for current computational needs.

  18. A driver linac for the Advanced Exotic Beam Laboratory : physics design and beam dynamics simulations.

    SciTech Connect

    Ostroumov, P. N.; Mustapha, B.; Nolen, J.; Physics

    2007-01-01

    The Advanced Exotic Beam Laboratory (AEBL) being developed at ANL consists of an 833 MV heavy-ion driver linac capable of producing uranium ions up to 200 MeV/u and protons to 580 MeV with 400 kW beam power. We have designed all accelerator components, including a two-charge-state LEBT, an RFQ, a MEBT, a superconducting linac, a stripper station, and a chicane. We present the results of an optimized linac design and end-to-end simulations, including machine errors and a detailed beam loss analysis. The AEBL has been proposed at ANL as a reduced-scale version of the original Rare Isotope Accelerator (RIA) project, with about half the cost but the same beam power. AEBL will address 90% or more of the RIA physics but with reduced multi-user capabilities. The focus of this paper is the physics design and beam dynamics simulations of the AEBL driver linac. The reported results are for a multiple-charge-state 238U beam.

  19. Surrogate modeling of ultrasonic simulations using data-driven methods

    NASA Astrophysics Data System (ADS)

    Du, Xiaosong; Grandin, Robert; Leifsson, Leifur

    2017-02-01

    Ultrasonic testing (UT) is used to detect internal flaws in materials and to characterize material properties. In many applications, computational simulations are an important part of the inspection-design and analysis processes. Having fast surrogate models for UT simulations is key for enabling efficient inverse analysis and model-assisted probability of detection (MAPOD). In many cases, it is impractical to perform the aforementioned tasks in a timely manner using current simulation models directly. Fast surrogate models can make these processes computationally tractable. This paper presents investigations of using surrogate modeling techniques to create fast approximate models of UT simulator responses. In particular, we propose to integrate data-driven methods (here, kriging interpolation) with variable-fidelity models to construct an accurate and fast surrogate model. These techniques are investigated using test cases involving UT simulations of solid components immersed in a water bath during the inspection process. We apply both the full ultrasonic solver and the surrogate model to the detection and characterization of the flaw, and compare the methods in terms of the quality of the responses.
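    A minimal kriging (simple Gaussian-process) interpolator of the kind such surrogates build on can be written in a few lines of numpy. The Gaussian kernel, length scale, and the sine "simulator" below are illustrative stand-ins, not the paper's models:

```python
import numpy as np

def kriging_fit_predict(x_train, y_train, x_test, length_scale=0.2):
    """Minimal kriging (GP mean) interpolator with a Gaussian kernel.

    Stands in for a surrogate of an expensive UT solver; a tiny jitter
    on the diagonal keeps the kernel matrix invertible.
    """
    def kernel(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length_scale) ** 2)

    K = kernel(x_train, x_train) + 1e-10 * np.eye(len(x_train))
    weights = np.linalg.solve(K, y_train)
    return kernel(x_test, x_train) @ weights

# cheap analytic stand-in for an expensive simulator response
x = np.linspace(0.0, 1.0, 8)
y = np.sin(2.0 * np.pi * x)
y_back = kriging_fit_predict(x, y, x)  # should reproduce the training data
```

    In a variable-fidelity setting, the same machinery is typically trained on cheap low-fidelity runs and corrected with a handful of expensive high-fidelity samples.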

  20. Crystal level simulations using Eulerian finite element methods

    SciTech Connect

    Becker, R; Barton, N R; Benson, D J

    2004-02-06

    Over the last several years, significant progress has been made in the use of crystal level material models in simulations of forming operations. However, in Lagrangian finite element approaches simulation capabilities are limited in many cases by mesh distortion associated with deformation heterogeneity. Contexts in which such large distortions arise include: bulk deformation to strains approaching or exceeding unity, especially in highly anisotropic or multiphase materials; shear band formation and intersection of shear bands; and indentation with sharp indenters. Investigators have in the past used Eulerian finite element methods with material response determined from crystal aggregates to study steady state forming processes. However, Eulerian and Arbitrary Lagrangian-Eulerian (ALE) finite element methods have not been widely utilized for simulation of transient deformation processes at the crystal level. The advection schemes used in Eulerian and ALE codes control mesh distortion and allow for simulation of much larger total deformations. We will discuss material state representation issues related to advection and will present results from ALE simulations.
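    The conservative advection (remap) step at the heart of such Eulerian/ALE codes can be sketched with first-order upwinding on a periodic 1D grid. This is an illustration of the idea, not the authors' scheme; note that the total of the cell averages is conserved:

```python
import numpy as np

def upwind_advect(u, c, dx, dt, steps):
    """First-order upwind advection of cell averages u with speed c > 0.

    Periodic domain; the CFL number c*dt/dx must not exceed 1.
    """
    nu = c * dt / dx
    assert 0.0 <= nu <= 1.0, "CFL condition violated"
    for _ in range(steps):
        u = u - nu * (u - np.roll(u, 1))  # flux difference taken from the upwind side
    return u

n = 100
u0 = np.zeros(n)
u0[40:60] = 1.0  # square pulse of material state
u1 = upwind_advect(u0, c=1.0, dx=1.0, dt=0.5, steps=20)
```

    The scheme is diffusive but monotone: it never creates new extrema, which is exactly the property that keeps advected material state (here standing in for crystal orientation or history variables) bounded.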

  1. Advances in rapid detection methods for foodborne pathogens.

    PubMed

    Zhao, Xihong; Lin, Chii-Wann; Wang, Jun; Oh, Deog Hwan

    2014-03-28

    Food safety is increasingly becoming an important public health issue, as foodborne diseases present a widespread and growing public health problem in both developed and developing countries. The rapid and precise monitoring and detection of foodborne pathogens are some of the most effective ways to control and prevent human foodborne infections. Traditional microbiological detection and identification methods for foodborne pathogens are well known to be time-consuming and laborious, and they are increasingly perceived as insufficient to meet the demands of rapid food testing. Recently, various kinds of rapid detection, identification, and monitoring methods have been developed for foodborne pathogens, including nucleic-acid-based methods, immunological methods, and biosensor-based methods. This article reviews the principles, characteristics, and applications of recent rapid detection methods for foodborne pathogens.

  2. System and Method for Finite Element Simulation of Helicopter Turbulence

    NASA Technical Reports Server (NTRS)

    McFarland, R. E. (Inventor); Dulsenberg, Ken (Inventor)

    1999-01-01

    The present invention provides a turbulence model that has been developed for blade-element helicopter simulation. This model uses an innovative temporal and geometrical distribution algorithm that preserves the statistical characteristics of the turbulence spectra over the rotor disc, while providing velocity components in real time to each of five blade-element stations along each of four blades, for a total of twenty blade-element stations. The simulator system includes a software implementation of flight dynamics that adheres to the guidelines for turbulence set forth in military specifications. One of the features of the present simulator system is that it applies simulated turbulence to the rotor blades of the helicopter, rather than to its center of gravity. The simulator system accurately models the rotor penetration into a gust field. It includes time correlation between the front and rear of the main rotor, as well as between the side forces felt at the center of gravity and at the tail rotor. It also includes features for added realism, such as patchy turbulence and vertical gusts into which the rotor disc penetrates. These features are realized by a unique real-time implementation of the turbulence filters. The new simulator system uses two arrays, one on either side of the main rotor, to record the turbulence field and to produce time correlation from the front to the rear of the rotor disc. The use of Gaussian interpolation between the two arrays maintains the statistical properties of the turbulence across the rotor disc. The present simulator system and method may be used in future and existing real-time helicopter simulations with minimal increase in computational workload.
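
The Gaussian interpolation idea can be illustrated with a toy blend of front- and rear-array samples. The weight shape and normalization below are hypothetical, chosen so that blending independent, equal-variance signals preserves unit variance, in the spirit of the patent's claim that statistical properties are maintained across the disc; they are not the patented algorithm:

```python
import math

def gaussian_blend(front, rear, x, width=0.35):
    """Blend turbulence samples recorded at the front and rear of the
    rotor disc at normalized chordwise position x in [0, 1].
    Gaussian weights (hypothetical form) are normalized so that
    wf**2 + wr**2 = 1, which preserves the variance of independent,
    equal-variance inputs."""
    wf = math.exp(-(x / width) ** 2)          # weight toward the front array
    wr = math.exp(-((1.0 - x) / width) ** 2)  # weight toward the rear array
    s = math.hypot(wf, wr)
    wf, wr = wf / s, wr / s
    return wf * front + wr * rear
```

At x = 0 the blend returns essentially the front sample, at x = 1 the rear sample, and in between a variance-preserving mixture of the two.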

  3. Comparison of advanced distillation control methods. Second annual report

    SciTech Connect

    1996-11-01

    Detailed dynamic simulations of three industrial distillation columns (a propylene/propane splitter, a xylene/toluene column, and a depropanizer) have been used to study the issue of configuration selection for diagonal PI dual composition controls. ATV identification with on-line detuning was used for tuning the diagonal PI composition controllers. Each configuration was evaluated with respect to steady-state RGA values, sensitivity to feed composition changes, and open loop dynamic performance. Each configuration was tuned using setpoint changes over a wider range of operation for robustness and tested for feed composition upsets. Overall, configuration selection was shown to have a dominant effect upon control performance. Configuration analysis tools (e.g., RGA, condition number, disturbance sensitivity) were found to reject configuration choices that are obviously poor, but were unable to critically differentiate between the remaining viable choices. Configuration selection guidelines are given although it is demonstrated that the most reliable configuration selection approach is based upon testing the viable configurations using dynamic column simulators.
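
The RGA screening step referred to above is a small computation: for a square steady-state gain matrix K, the RGA is the elementwise product of K with the transpose of its inverse. A sketch using gains resembling a textbook LV distillation example, not the columns studied in this report:

```python
import numpy as np

def rga(K):
    """Relative gain array of a square steady-state gain matrix K:
    RGA = K .* inv(K).T (elementwise product)."""
    return K * np.linalg.inv(K).T

# illustrative LV gains for a high-purity column (textbook-style numbers)
K = np.array([[0.878, -0.864],
              [1.082, -1.096]])
Lam = rga(K)
```

Rows and columns of the RGA always sum to one; a diagonal element far from unity (here around 35) flags the strong interaction that makes dual composition control of such a pairing difficult.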

  4. Comparison of advanced distillation control methods. Second annual report

    SciTech Connect

    Riggs, J.B.

    1996-11-01

    Detailed dynamic simulations of two industrial distillation columns (a propylene/propane splitter and a xylene/toluene column) have been used to study the issue of configuration selection for diagonal PI dual composition controls. Auto Tune Variation (ATV) identification with on-line detuning was used for tuning the diagonal proportional integral (PI) composition controls. Each configuration was evaluated with respect to steady-state relative gain array (RGA) values, sensitivity to feed composition changes, and open loop dynamic performance. Each configuration was tuned using setpoint changes over a wider range of operation for robustness and tested for feed composition upsets. Overall, configuration selection was shown to have a dominant effect upon control performance. Configuration analysis tools (e.g., RGA, condition number, disturbance sensitivity) were found to reject configuration choices that are obviously poor choices, but were unable to critically differentiate between the remaining viable choices. Configuration selection guidelines are given although it is demonstrated that the most reliable configuration selection approach is based upon testing the viable configurations using dynamic column simulators.

  5. Numerical simulation of the blast impact problem using the Direct Simulation Monte Carlo (DSMC) method

    NASA Astrophysics Data System (ADS)

    Sharma, Anupam; Long, Lyle N.

    2004-10-01

    A particle approach using the Direct Simulation Monte Carlo (DSMC) method is used to solve the problem of blast impact with structures. A novel approach to model the solid boundary condition for particle methods is presented. The solver is validated against an analytical solution of the Riemann shock-tube problem and against experiments on the interaction of a planar shock with a square cavity. Blast impact simulations are performed for two model shapes, a box and an I-shaped beam, assuming that the solid body does not deform. The solver uses a domain decomposition technique to run in parallel. The parallel performance of the solver on two Beowulf clusters is also presented.
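
A generic flavor of solid boundary treatment in particle methods can be sketched for the simplest case: specular reflection of DSMC particles at a planar wall during free flight. The paper's novel boundary model is more elaborate, so this is illustrative only:

```python
import numpy as np

def move_specular(x, v, dt, wall=1.0):
    """Free-flight move of DSMC particles in one coordinate, with
    specular reflection at a planar wall x = wall (generic sketch,
    not the paper's boundary scheme)."""
    x_new = x + v * dt
    hit = x_new > wall
    x_new[hit] = 2.0 * wall - x_new[hit]   # mirror position back inside
    v = v.copy()
    v[hit] = -v[hit]                       # reverse the normal velocity
    return x_new, v

# demo: one particle hits the wall, one does not
x = np.array([0.9, 0.1])
v = np.array([1.0, 1.0])
x2, v2 = move_specular(x, v, 0.2)
```

Specular reflection conserves particle speed, so kinetic energy at the wall is unchanged; diffuse (thermal) wall models would instead resample the reflected velocity.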

  6. Advanced Methods for Aircraft Engine Thrust and Noise Benefits: Nozzle-Inlet Flow Analysis

    NASA Technical Reports Server (NTRS)

    Morgan, Morris H.; Gilinsky, Mikhail M.

    2001-01-01

    Three connected sub-projects were conducted under the reported project. In part, these sub-projects address problems pursued by the HU/FM&AL under two other NASA grants. The fundamental idea uniting these projects is to use untraditional 3D corrugated nozzle designs and additional methods for exhaust jet noise reduction without appreciable thrust loss, and even with thrust augmentation. Such additional approaches are: (1) to add some solid, fluid, or gas mass at discrete locations to the main supersonic gas stream to minimize the negative influence of strong shock waves forming in propulsion systems; this mass addition may be accompanied by heat addition to the main stream as a result of fuel combustion, or by cooling of this stream as a result of liquid mass evaporation and boiling; (2) to use porous or permeable nozzles and additional shells at the nozzle exit for preliminary cooling of the hot exhaust jet and pressure compensation at off-design conditions (a so-called continuous ejector with small mass flow rate); and (3) to propose and analyze new, effective methods of fuel injection into the flow stream in air-breathing engines. Note that all these problems were formulated based on detailed descriptions of the main experimental facts observed at NASA Glenn Research Center. The HU/FM&AL Team has been involved in joint research aimed at finding theoretical explanations for these experimental facts and at creating accurate numerical simulation techniques and prediction theory for current propulsion-system problems addressed by NASA and Navy agencies. The research covers a wide range of problems in the propulsion field, as well as experimental testing and theoretical and numerical simulation analysis for advanced aircraft and rocket engines. The F&AL Team uses analytical methods, numerical simulations, and possible experimental tests at the Hampton University campus. We will present some management activity.

  7. Adherence to Scientific Method while Advancing Exposure Science

    EPA Science Inventory

    Paul Lioy was simultaneously a staunch adherent to the scientific method and an innovator of new ways to conduct science, particularly related to human exposure. Current challenges to science and the application of the scientific method are presented as they relate to the approaches...

  8. Advances in validation, risk and uncertainty assessment of bioanalytical methods.

    PubMed

    Rozet, E; Marini, R D; Ziemons, E; Boulanger, B; Hubert, Ph

    2011-06-25

    Bioanalytical method validation is a mandatory step to evaluate the ability of developed methods to provide accurate results for their routine application, in order to trust the critical decisions that will be made with them. Even if several guidelines exist to help perform bioanalytical method validations, there is still a need to clarify the meaning and interpretation of bioanalytical method validation criteria and methodology. Different interpretations can be made of the validation guidelines, as well as of the definitions of the validation criteria. This leads to diverse experimental designs implemented to try to fulfil these criteria. Finally, different decision methodologies can also be interpreted from these guidelines. Therefore, the risk that a validated bioanalytical method may be unfit for its future purpose will depend on the analysts' personal interpretation of these guidelines. The objective of this review is thus to discuss and highlight several essential aspects of method validation, not restricted to chromatographic methods but extended to ligand binding assays owing to their increasing role in biopharmaceutical industries. The points that will be reviewed are the common validation criteria: selectivity, standard curve, trueness, precision, accuracy, limits of quantification and range, dilutional integrity and analyte stability. Definitions, methodology, experimental design and decision criteria are reviewed. Two other points closely connected to method validation are also examined: incurred sample reproducibility testing and measurement uncertainty, as they are highly linked to the reliability of bioanalytical results. Their additional implementation is foreseen to strongly reduce the risk of having validated a bioanalytical method unfit for its purpose.

  9. [Recent advances in sample preparation methods of plant hormones].

    PubMed

    Wu, Qian; Wang, Lus; Wu, Dapeng; Duan, Chunfeng; Guan, Yafeng

    2014-04-01

    Plant hormones are a group of naturally occurring trace substances which play a crucial role in controlling plant development, growth, and environmental response. With the development of chromatography and mass spectrometry techniques, chromatographic analysis has become a widely used approach for plant hormone analysis. Among the steps of chromatographic analysis, sample preparation is undoubtedly the most vital one. Thus, a highly selective and efficient sample preparation method is critical for accurate identification and quantification of phytohormones. For the three major kinds of plant hormones, namely acidic and basic plant hormones, brassinosteroids, and plant polypeptides, the sample preparation methods are reviewed in sequence, with emphasis on recently developed methods. The review covers novel methods, devices, extractive materials and derivatization reagents for sample preparation in phytohormone analysis. Some related work of our group is also included. Finally, future developments in this field are discussed.

  10. Advances in spectroscopic methods for quantifying soil carbon

    USGS Publications Warehouse

    Liebig, Mark; Franzluebbers, Alan J.; Follett, Ronald F.; Hively, W. Dean; Reeves, James B.; McCarty, Gregory W.; Calderon, Francisco

    2012-01-01

    The gold standard for soil C determination is combustion. However, this method requires expensive consumables, is restricted to the determination of total carbon, and is limited in the number of samples that can be processed (~100/d). With increased interest in soil C sequestration, faster methods are needed; hence the interest in methods based on diffuse reflectance spectroscopy in the visible, near-infrared, or mid-infrared ranges using either proximal or remote sensing. These methods can analyze more samples (2 to 3X/d) or huge areas (imagery) and determine multiple analytes simultaneously, but they require calibrations relating spectral and reference data and have specific problems; for example, remote sensing is capable of scanning entire watersheds, thus reducing the sampling needed, but it is limited to the surface layer of tilled soils and by the difficulty of obtaining proper calibration reference values. The objective of this discussion is to present the state of spectroscopic methods for soil C determination.

  11. Non-equilibrium Green function method: theory and application in simulation of nanometer electronic devices

    NASA Astrophysics Data System (ADS)

    Do, Van-Nam

    2014-09-01

    We review fundamental aspects of the non-equilibrium Green function method in the simulation of nanometer electronic devices. The method is implemented in our recently developed computer package OPEDEVS to investigate transport properties of electrons in nano-scale devices and low-dimensional materials. Concretely, we present the definition of the four real-time Green functions: the retarded, advanced, lesser and greater functions. Basic relations among these functions and their equations of motion are also presented in detail as the basis for performing analytical and numerical calculations. In particular, we review in detail two recursive algorithms, which are implemented in OPEDEVS to solve the Green functions defined in finite-size opened systems and in the surface layer of semi-infinite homogeneous ones. Operation of the package is then illustrated through the simulation of the transport characteristics of a typical semiconductor device structure, the resonant tunneling diode.
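
For a single-orbital 1D tight-binding lead, the recursive solution of the surface-layer Green function mentioned above reduces to a scalar fixed-point iteration, g <- 1/(E + i*eta - eps - t^2 * g). This is a minimal sketch of the idea, not the algorithms implemented in OPEDEVS:

```python
def surface_gf(E, eps=0.0, t=1.0, eta=1e-6, tol=1e-12, max_iter=100000):
    """Retarded surface Green's function of a semi-infinite 1D
    tight-binding chain (on-site energy eps, hopping t), computed by
    simple recursion. A scalar analogue of the layer-recursive schemes
    used in NEGF device codes; convergence is slow near band edges."""
    z = E + 1j * eta
    g = 1.0 / z                      # bare starting guess
    for _ in range(max_iter):
        g_new = 1.0 / (z - eps - t * t * g)
        if abs(g_new - g) < tol:
            return g_new
        g = g_new
    return g
```

Outside the band (|E| > 2t) the iteration converges quickly to the real, bounded root; inside the band the imaginary part of g is negative, as required of a retarded function, and gives the lead's surface density of states via -Im g / pi.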

  12. Smoothed Profile Method to Simulate Colloidal Particles in Complex Fluids

    NASA Astrophysics Data System (ADS)

    Yamamoto, Ryoichi; Nakayama, Yasuya; Kim, Kang

    A new direct numerical simulation scheme, called the "Smoothed Profile (SP) method," is presented. The SP method, as a direct numerical simulation of particulate flow, provides a way to couple continuum fluid dynamics with rigid-body dynamics through a smoothed profile of the colloidal particles. Our formulation includes extensions to colloids in multicomponent solvents, such as charged colloids in electrolyte solutions. This method enables us to compute the time evolution of colloidal particles, ions, and host fluids simultaneously by solving the Newton, advection-diffusion, and Navier-Stokes equations, so that the electro-hydrodynamic couplings can be fully taken into account. The electrophoretic mobilities of charged spherical particles are calculated in several situations. Comparisons with approximation theories show quantitative agreement for dilute dispersions without any empirical parameters.
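
The smoothed profile itself is a simple interface function: 1 inside the particle, 0 in the fluid, with a smooth crossover of thickness xi. A common tanh form is sketched below; the paper's exact profile may differ:

```python
import numpy as np

def smoothed_profile(r, radius, xi):
    """Smoothed profile phi(r) for a spherical particle of the given
    radius: 1 deep inside, 0 far outside, smooth over a thickness xi.
    A common tanh form (assumed here, not necessarily the authors')."""
    return 0.5 * (np.tanh((radius - r) / xi) + 1.0)
```

Because phi is smooth, the particle can live on the same fixed grid as the fluid and ion fields, which is what lets Newton, advection-diffusion, and Navier-Stokes equations be advanced together without body-fitted meshes.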

  13. Simulation of ground motion using the stochastic method

    USGS Publications Warehouse

    Boore, D.M.

    2003-01-01

    A simple and powerful method for simulating ground motions is to combine parametric or functional descriptions of the ground motion's amplitude spectrum with a random phase spectrum modified such that the motion is distributed over a duration related to the earthquake magnitude and to the distance from the source. This method of simulating ground motions often goes by the name "the stochastic method." It is particularly useful for simulating the higher-frequency ground motions of most interest to engineers (generally, f>0.1 Hz), and it is widely used to predict ground motions for regions of the world in which recordings of motion from potentially damaging earthquakes are not available. This simple method has been successful in matching a variety of ground-motion measures for earthquakes with seismic moments spanning more than 12 orders of magnitude and in diverse tectonic environments. One of the essential characteristics of the method is that it distills what is known about the various factors affecting ground motions (source, path, and site) into simple functional forms. This provides a means by which the results of the rigorous studies reported in other papers in this volume can be incorporated into practical predictions of ground motion.
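
The core recipe of the stochastic method can be sketched in a few lines: window Gaussian noise, keep its random phase, and impose a target amplitude spectrum. This is a simplified sketch in the spirit of Boore's method; the window shape and the omega-squared source spectrum below are illustrative assumptions:

```python
import numpy as np

def stochastic_motion(target_amp, duration, dt, rng):
    """Minimal stochastic-method sketch: shape the spectrum of windowed
    Gaussian noise to a target amplitude spectrum |A(f)| while keeping
    the noise's random phase (cf. Boore's approach, much simplified)."""
    n = int(round(duration / dt))
    noise = rng.standard_normal(n)
    t = np.arange(n) * dt
    window = t * np.exp(-t / (0.2 * duration))      # simple exponential window
    spec = np.fft.rfft(noise * window)
    phase = spec / np.maximum(np.abs(spec), 1e-30)  # unit-modulus random phase
    return np.fft.irfft(phase * target_amp, n)      # impose target amplitudes

# illustrative omega-squared acceleration spectrum with corner frequency f0
n, dt, f0 = 1024, 0.01, 1.0
f = np.fft.rfftfreq(n, dt)
amp = f ** 2 / (1.0 + (f / f0) ** 2)
acc = stochastic_motion(amp, n * dt, dt, np.random.default_rng(0))
```

The resulting time series has exactly the imposed amplitude spectrum but random phase, so repeated draws give an ensemble of realizations sharing the same spectral description of source, path, and site.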

  14. Preface: Special Topic Section on Advanced Electronic Structure Methods for Solids and Surfaces

    SciTech Connect

    Michaelides, Angelos; Martinez, Todd J.; Alavi, Ali; Kresse, Georg

    2015-09-14

    This Special Topic section on Advanced Electronic Structure Methods for Solids and Surfaces contains a collection of research papers that showcase recent advances in the high accuracy prediction of materials and surface properties. It provides a timely snapshot of a growing field that is of broad importance to chemistry, physics, and materials science.

  15. Assessment of Crack Detection in Cast Austenitic Piping Components Using Advanced Ultrasonic Methods.

    SciTech Connect

    Anderson, Michael T.; Crawford, Susan L.; Cumblidge, Stephen E.; Diaz, Aaron A.; Doctor, Steven R.

    2007-01-01

    Studies conducted at the Pacific Northwest National Laboratory (PNNL) in Richland, Washington, have focused on developing and evaluating the reliability of nondestructive examination (NDE) approaches for inspecting coarse-grained, cast stainless steel reactor components. The objective of this work is to provide information to the United States Nuclear Regulatory Commission (NRC) on the utility, effectiveness and limitations of ultrasonic testing (UT) inspection techniques as related to the in-service inspection of primary system piping components in pressurized water reactors (PWRs). Cast stainless steel pipe specimens were examined that contain thermal and mechanical fatigue cracks located close to the weld roots and have inside/outside surface geometrical conditions that simulate several PWR primary piping configurations. In addition, segments of vintage centrifugally cast piping were also examined to understand inherent acoustic noise and scattering due to grain structures and determine consistency of UT responses from different locations. The advanced UT methods were applied from the outside surface of these specimens using automated scanning devices and water coupling. The low-frequency ultrasonic method employed a zone-focused, multi-incident angle inspection protocol (operating at 250-450 kHz) coupled with a synthetic aperture focusing technique (SAFT) for improved signal-to-noise and advanced imaging capabilities. The phased array approach was implemented with a modified instrument operating at 500 kHz and composite volumetric images of the specimens were generated. Results from laboratory studies for assessing detection, localization and sizing effectiveness are discussed in this paper.
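
The SAFT step mentioned above is, at its core, delay-and-sum focusing: each image point coherently accumulates every A-scan at the corresponding round-trip time of flight. A minimal monostatic sketch follows; the geometry, sampling rate, and wave speed are illustrative, and PNNL's actual processing chain is more elaborate:

```python
import numpy as np

def saft(scans, positions, depths, c, dt):
    """Delay-and-sum SAFT sketch (monostatic): for each image point,
    coherently sum every A-scan at its round-trip time of flight."""
    image = np.zeros((len(depths), len(positions)))
    for iz, z in enumerate(depths):
        for ix, x in enumerate(positions):
            total = 0.0
            for k, xk in enumerate(positions):
                r = np.hypot(x - xk, z)             # element-to-point range
                idx = int(round(2.0 * r / c / dt))  # round-trip sample index
                if idx < scans.shape[1]:
                    total += scans[k, idx]
            image[iz, ix] = total
    return image

# demo: point scatterer at (4 mm, 5 mm) seen by an 8-element aperture
c, dt = 6000.0, 1e-7                 # steel-like wave speed, 10 MHz sampling
positions = np.arange(8) * 1e-3
scans = np.zeros((8, 400))
for k, xk in enumerate(positions):
    r = np.hypot(4e-3 - xk, 5e-3)
    scans[k, int(round(2.0 * r / c / dt))] = 1.0
depths = np.array([4e-3, 5e-3, 6e-3])
img = saft(scans, positions, depths, c, dt)
```

Only at the true scatterer location do all eight delays line up, so the image peaks there; this coherent gain over incoherent grain noise is the signal-to-noise benefit cited in the abstract.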

  16. Parallel processing of real-time dynamic systems simulation on OSCAR (Optimally SCheduled Advanced multiprocessoR)

    NASA Technical Reports Server (NTRS)

    Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke

    1989-01-01

    Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, the same calculations are generally repeated at every time step. However, Do-all or Do-across techniques cannot be applied to parallelize the simulation, since there exist data dependencies from the end of an iteration to the beginning of the next iteration, and furthermore data input and data output are required every sampling period. Therefore, parallelism inside the calculation required for a single time step, or a large basic block consisting of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating point operations, are generated to extract the parallelism from the calculation and are assigned to processors using optimal static scheduling at compile time, in order to reduce the large run-time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantageous features of static scheduling algorithms to the maximum extent.
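
Compile-time static scheduling of the kind described can be sketched with critical-path list scheduling: order tasks by longest remaining path and assign each to the earliest-free processor. This is a toy stand-in for OSCAR's scheduler; the actual algorithm differs:

```python
import functools

def list_schedule(tasks, deps, nproc):
    """Critical-path list scheduling of a task DAG at compile time.
    tasks: {name: cost}; deps: [(before, after)]. Returns finish times.
    A toy analogue of static scheduling of near-fine-grain tasks."""
    succ = {t: [] for t in tasks}
    for a, b in deps:
        succ[a].append(b)

    @functools.lru_cache(maxsize=None)
    def level(t):
        # priority: longest-path cost from t to an exit task
        return tasks[t] + max((level(s) for s in succ[t]), default=0)

    finish, free = {}, [0.0] * nproc
    for t in sorted(tasks, key=level, reverse=True):
        ready = max((finish[a] for a, b in deps if b == t), default=0.0)
        p = min(range(nproc), key=free.__getitem__)   # earliest-free processor
        start = max(free[p], ready)
        finish[t] = start + tasks[t]
        free[p] = finish[t]
    return finish

# toy DAG: c waits for a and b; d waits for c
sched = list_schedule({'a': 2, 'b': 2, 'c': 3, 'd': 1},
                      [('a', 'c'), ('b', 'c'), ('c', 'd')], nproc=2)
```

On two processors, a and b run concurrently, so the makespan drops from the sequential 8 time units to 6; because all decisions are made before execution, no run-time scheduling overhead is incurred, which is the point of the OSCAR approach for near-fine-grain tasks.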

  17. Health, wealth, and air pollution: advancing theory and methods.

    PubMed Central

    O'Neill, Marie S; Jerrett, Michael; Kawachi, Ichiro; Levy, Jonathan I; Cohen, Aaron J; Gouveia, Nelson; Wilkinson, Paul; Fletcher, Tony; Cifuentes, Luis; Schwartz, Joel

    2003-01-01

    The effects of both ambient air pollution and socioeconomic position (SEP) on health are well documented. A limited number of recent studies suggest that SEP may itself play a role in the epidemiology of disease and death associated with exposure to air pollution. Together with evidence that poor and working-class communities are often more exposed to air pollution, these studies have stimulated discussion among scientists, policy makers, and the public about the differential distribution of the health impacts from air pollution. Science and public policy would benefit from additional research that integrates the theory and practice of both air pollution epidemiology and social epidemiology to gain a better understanding of this issue. In this article we aim to promote such research by introducing readers to methodologic and conceptual approaches in the fields of air pollution and social epidemiology; by proposing theories and hypotheses about how air pollution and socioeconomic factors may interact to influence health, drawing on studies conducted worldwide; by discussing methodologic issues in the design and analysis of studies to determine whether health effects of exposure to ambient air pollution are modified by SEP; and by proposing specific steps that will advance knowledge in this field, fill information gaps, and apply research results to improve public health in collaboration with affected communities. PMID:14644658

  18. Nuclear methods of analysis in the advanced neutron source

    SciTech Connect

    Robinson, L.; Dyer, F.F.

    1994-12-31

    The Advanced Neutron Source (ANS) research reactor is presently in the conceptual design phase. The thermal power of this heavy water cooled and moderated reactor will be about 350 megawatts. The core volume of 27 liters is designed to provide the optimum neutron fluence rate for the numerous experimental facilities. The peak thermal neutron fluence rate is expected to be slightly less than 10²⁰ neutrons/m²s. In addition to the more than 40 neutron scattering stations, there will be extensive facilities for isotope production, material irradiation and analytical chemistry, including neutron activation analysis (NAA) and a slow positron source. The highlight of this reactor will be the capability that it will provide for conducting research using cold neutrons. Two cryostats containing helium-cooled liquid deuterium will be located in the heavy water reflector tank. Each cryostat will provide low-temperature neutrons to researchers via numerous guides. A hot source with two beam tubes and several thermal beam tubes will also be available. The NAA facilities in the ANS will consist of seven pneumatic tubes, one cold neutron guide for prompt gamma-ray neutron activation analysis (PGNAA), and one cold neutron slanted guide for neutron depth profiling (NDP). In addition to these neutron interrogation systems, a gamma-ray irradiation facility for materials testing will be housed in a spent fuel storage pool. This paper will provide detailed information regarding the design and use of these various experimental systems.

  19. Investigation of advancing front method for generating unstructured grid

    NASA Astrophysics Data System (ADS)

    Thomas, A. M.; Tiwari, S. N.

    1992-06-01

    The advancing front technique is used to generate an unstructured grid about simple aerodynamic geometries. Unstructured grids are generated using VGRID2D and VGRID3D software. Specific problems considered are a NACA 0012 airfoil, a bi-plane consisting of two NACA 0012 airfoils, a four-element airfoil in its landing configuration, and an ONERA M6 wing. Inviscid time-dependent solutions are computed on these geometries using USM3D, and the results are compared with standard test results obtained by other investigators. A grid convergence study is conducted for the NACA 0012 airfoil and compared with a structured grid. A structured grid is generated using GRIDGEN software and inviscid solutions are computed using the CFL3D flow solver. The results obtained by the unstructured grid for the NACA 0012 airfoil showed an asymmetric distribution of flow quantities, and a fine distribution of grid points was required to remove this asymmetry. The structured grid, on the other hand, predicted a very symmetric distribution, but when the total number of points required to obtain the same results was compared, the structured grid required more grid points.

  20. Investigation of advancing front method for generating unstructured grid

    NASA Technical Reports Server (NTRS)

    Thomas, A. M.; Tiwari, S. N.

    1992-01-01

    The advancing front technique is used to generate an unstructured grid about simple aerodynamic geometries. Unstructured grids are generated using VGRID2D and VGRID3D software. Specific problems considered are a NACA 0012 airfoil, a bi-plane consisting of two NACA 0012 airfoils, a four-element airfoil in its landing configuration, and an ONERA M6 wing. Inviscid time-dependent solutions are computed on these geometries using USM3D, and the results are compared with standard test results obtained by other investigators. A grid convergence study is conducted for the NACA 0012 airfoil and compared with a structured grid. A structured grid is generated using GRIDGEN software and inviscid solutions are computed using the CFL3D flow solver. The results obtained by the unstructured grid for the NACA 0012 airfoil showed an asymmetric distribution of flow quantities, and a fine distribution of grid points was required to remove this asymmetry. The structured grid, on the other hand, predicted a very symmetric distribution, but when the total number of points required to obtain the same results was compared, the structured grid required more grid points.

  1. The Saccharomyces Genome Database: Advanced Searching Methods and Data Mining.

    PubMed

    Cherry, J Michael

    2015-12-02

    At the core of the Saccharomyces Genome Database (SGD) are chromosomal features that encode a product. These include protein-coding genes and major noncoding RNA genes, such as tRNA and rRNA genes. The basic entry point into SGD is a gene or open-reading frame name that leads directly to the locus summary information page. A keyword describing function, phenotype, selective condition, or text from abstracts will also provide a door into the SGD. A DNA or protein sequence can be used to identify a gene or a chromosomal region using BLAST. Protein and DNA sequence identifiers, PubMed and NCBI IDs, author names, and function terms are also valid entry points. The information in SGD has been gathered and is maintained by a group of scientific biocurators and software developers who are devoted to providing researchers with up-to-date information from the published literature, connections to all the major research resources, and tools that allow the data to be explored. Not all the collected information can be represented or summarized for every possible question; therefore, it is necessary to be able to search the structured data in the database. This protocol describes the YeastMine tool, which provides an advanced search capability via an interactive tool. The SGD also archives results from microarray expression experiments, and a strategy designed to explore these data using the SPELL (Serial Pattern of Expression Levels Locator) tool is provided.

  2. Advanced scanning methods with tracking optical coherence tomography

    PubMed Central

    Ferguson, R. Daniel; Iftimia, Nicusor V.; Ustun, Teoman; Wollstein, Gadi; Ishikawa, Hiroshi; Gabriele, Michelle L.; Dilworth, William D.; Kagemann, Larry; Schuman, Joel S.

    2013-01-01

    An upgraded optical coherence tomography system with integrated retinal tracker (TOCT) was developed. The upgraded system uses improved components to extend the tracking bandwidth, fully integrates the tracking hardware into the optical head of the clinical OCT system, and operates from a single software platform. The system was able to achieve transverse scan registration with sub-pixel accuracy (~10 μm). We demonstrate several advanced scan sequences with the TOCT, including composite scans averaged (co-added) from multiple B-scans taken consecutively and several hours apart, en face images collected by summing the A-scans of circular, line, and raster scans, and three-dimensional (3D) retinal maps of the fovea and optic disc. The new system achieves highly accurate OCT scan registration yielding composite images with significantly improved spatial resolution, increased signal-to-noise ratio, and reduced speckle while maintaining well-defined boundaries and sharp fine structure compared to single scans. Precise re-registration of multiple scans over separate imaging sessions demonstrates TOCT utility for longitudinal studies. En face images and 3D data cubes generated from these data reveal high fidelity image registration with tracking, despite scan durations of more than one minute. PMID:19498823

  3. How to qualify and validate wear simulation devices and methods.

    PubMed

    Heintze, S D

    2006-08-01

    The clinical significance of increased wear can mainly be attributed to impaired aesthetic appearance and/or functional restrictions. Little is known about the systemic effects of swallowed or inhaled worn particles that derive from restorations. As wear measurements in vivo are complicated and time-consuming, wear simulation devices and methods were developed, without, however, systematic examination of the factors that influence important wear parameters. Wear simulation devices should simulate processes that occur in the oral cavity during mastication, namely force, force profile, contact time, sliding movement, clearance of worn material, etc. Different devices that use different force actuator principles are available. Those with the highest citation frequency in the literature are, in descending order, the Alabama, ACTA, OHSU, Zurich and MTS wear simulators. When following the FDA guidelines on good laboratory practice (GLP), only the expensive MTS wear simulator is a qualified machine to test wear in vitro; the force exerted by the hydraulic actuator is controlled and regulated during all movements of the stylus. All the other simulators lack control and regulation of force development during dynamic loading of the flat specimens. This may explain the high coefficient of variation of the results in some wear simulators (28-40%) and the poor reproducibility of wear results when dental databases are searched for wear results of specific dental materials (differences of 22-72% for the same material). As most of the machines are not qualifiable, wear methods applying these machines may have a sound concept but cannot be validated. Only with the MTS method have wear parameters and influencing factors been documented and verified. A good compromise with regard to costs, practicability and robustness is the Willytec chewing simulator, which uses weights as force actuators and step motors for vertical and lateral movements. 
The Ivoclar wear method run on

  4. Comparison of advanced distillation control methods. Third annual report

    SciTech Connect

    Riggs, J.B.

    1997-07-01

    Detailed dynamic simulations of three industrial distillation columns (a propylene/propane splitter, a xylene/toluene column, and a depropanizer) have been used to study the issue of configuration selection for diagonal PI dual composition controls, feedforward from a feed composition analyzer, and decouplers. Auto Tune Variation (ATV) identification with on-line detuning for setpoint changes was used for tuning the diagonal proportional integral (PI) composition controls. In addition, robustness tests were conducted by inducing reboiler duty upsets. For single composition control, the (L, V) configuration was found to be best. For dual composition control, the optimum configuration changes from one column to another. Moreover, the use of analysis tools, such as RGA, appears to be of little value in identifying the optimum configuration for dual composition control. Using feedforward from a feed composition analyzer and using decouplers are shown to offer significant advantages for certain specific cases.
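
The ATV step referred to above has a compact core: a relay test of height h that produces an output oscillation of amplitude a and period Pu gives the ultimate gain Ku = 4h/(pi*a), from which detuned PI settings follow. The sketch below uses the Tyreus-Luyben constants and interprets the report's "on-line detuning" loosely as an extra multiplicative factor (an assumption):

```python
import math

def atv_pi_tuning(h, a, Pu, detune=1.0):
    """PI settings from an ATV (relay) test: relay height h, observed
    output amplitude a, oscillation period Pu. Ultimate gain from the
    describing-function result Ku = 4h/(pi*a); Tyreus-Luyben PI
    settings, with an optional extra detune factor (hypothetical
    stand-in for the report's on-line detuning)."""
    Ku = 4.0 * h / (math.pi * a)
    Kc = Ku / (3.22 * detune)      # controller gain
    tau_I = 2.2 * Pu * detune      # integral time
    return Kc, tau_I

# demo: a relay of height 1 giving amplitude 4/pi yields Ku = 1
Kc, tau_I = atv_pi_tuning(1.0, 4.0 / math.pi, 10.0)
```

Increasing the detune factor trades performance for robustness, which is the knob such studies turn when a configuration oscillates under feed composition upsets.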

  5. Investigating the Limitations of Advanced Design Methods through Real World Application

    DTIC Science & Technology

    2016-03-31

    Aerospace Systems Design Laboratory (ASDL), 275 Ferst Drive, Atlanta, GA 30332-0150. Doc ID#: 116361. This final report details the results of the partnership between the Aerospace Systems Design Laboratory (ASDL) at the... architectures. Subject terms: Naval Engineering, Advanced Systems Design, Modeling & Simulation

  6. Prediction of Dynamic Stall Characteristics Using Advanced Nonlinear Panel Methods,

    DTIC Science & Technology

    This paper presents preliminary results of work in which a surface singularity panel method is being extended for modelling the dynamic interaction... between a separated wake and a surface undergoing an unsteady motion. The method combines the capabilities of an unsteady time-stepping code and a... technique for modelling extensive separation using free vortex sheets. Routines are developed for treating the dynamic interaction between the separated

  7. Advances in spectroscopic methods for quantifying soil carbon

    USGS Publications Warehouse

    Reeves, James B.; McCarty, Gregory W.; Calderon, Francisco; Hively, W. Dean

    2012-01-01

    The current gold standard for soil carbon (C) determination is elemental C analysis using dry combustion. However, this method requires expensive consumables, is limited by the number of samples that can be processed (~100/d), and is restricted to the determination of total carbon. With increased interest in soil C sequestration, faster methods of analysis are needed, and there is growing interest in methods based on diffuse reflectance spectroscopy in the visible, near-infrared or mid-infrared spectral ranges. These spectral methods can decrease analytical requirements and speed sample processing, be applied to large landscape areas using remote sensing imagery, and be used to predict multiple analytes simultaneously. However, the methods require localized calibrations to establish the relationship between spectral data and reference analytical data, and also have additional, specific problems. For example, remote sensing is capable of scanning entire watersheds for soil carbon content but is limited to the surface layer of tilled soils and may require difficult and extensive field sampling to obtain proper localized calibration reference values. The objective of this chapter is to discuss the present state of spectroscopic methods for determination of soil carbon.

  8. Simulated scaling method for localized enhanced sampling and simultaneous "alchemical" free energy simulations: a general method for molecular mechanical, quantum mechanical, and quantum mechanical/molecular mechanical simulations.

    PubMed

    Li, Hongzhi; Fajer, Mikolai; Yang, Wei

    2007-01-14

    A potential scaling version of simulated tempering is presented to efficiently sample configuration space in a localized region. The present "simulated scaling" method is developed with a Wang-Landau type of updating scheme in order to quickly flatten the distributions in the scaling parameter λm space. This proposal is meaningful for a broad range of biophysical problems in which localized sampling is required. Besides its superior capability and robustness in localized conformational sampling, this simulated scaling method can also naturally lead to efficient "alchemical" free energy predictions when a dual-topology alchemical hybrid potential is applied; thereby both the chemically and conformationally distinct portions of the two end-point chemical states can be sampled efficiently and simultaneously. As demonstrated in this work, the present method is also feasible for quantum mechanical and quantum mechanical/molecular mechanical simulations.
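    The Wang-Landau style flattening in λ space can be illustrated on a toy ladder of discrete λ states with made-up free-energy offsets F[m]: the running bias g[m] is bumped on every visit, and the modification factor is halved whenever the visitation histogram is flat, so g converges to F up to a constant. Everything below (state count, offsets, flatness criterion) is a schematic assumption, not the authors' implementation.

    ```python
    import math
    import random

    def flatten_lambda(n_lambda=6, lnf_final=1e-4, flat=0.8, seed=1):
        """Wang-Landau random walk over discrete scaling-parameter states.
        F[m] are hypothetical free-energy offsets of the scaled potentials;
        the learned bias g[m] flattens the histogram and estimates F[m]."""
        random.seed(seed)
        F = [0.5 * m for m in range(n_lambda)]
        g = [0.0] * n_lambda                        # running log-bias
        hist = [0] * n_lambda
        m, lnf = 0, 1.0
        while lnf > lnf_final:
            t = m + random.choice((-1, 1))          # propose neighboring lambda
            if 0 <= t < n_lambda:
                dln = (F[m] - F[t]) + (g[m] - g[t]) # biased Metropolis ratio
                if dln >= 0 or random.random() < math.exp(dln):
                    m = t
            g[m] += lnf                             # penalize the visited state
            hist[m] += 1
            if min(hist) > flat * sum(hist) / n_lambda:
                hist = [0] * n_lambda               # histogram flat: refine
                lnf /= 2.0
        return [gi - g[0] for gi in g]              # approaches F[m] - F[0]

    est = flatten_lambda()
    ```

    In the full method each λ state carries a scaled molecular potential and the walk is coupled to configurational moves; here the "energies" are fixed scalars so the convergence of the bias is easy to check.
    
    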

  9. Annoyance response to simulated advanced turboprop aircraft interior noise containing tonal beats

    NASA Technical Reports Server (NTRS)

    Leatherwood, Jack D.

    1987-01-01

    A study is done to investigate the effects on subjective annoyance of simulated advanced turboprop (ATP) interior noise environments containing tonal beats. The simulated environments consisted of low-frequency tones superimposed on a turbulent-boundary-layer noise spectrum. The variables used in the study included propeller tone frequency (100 to 250 Hz), propeller tone levels (84 to 105 dB), and tonal beat frequency (0 to 1.0 Hz). Results indicated that propeller tones within the simulated ATP environment resulted in increased annoyance response that was fully predictable in terms of the increase in overall sound pressure level due to the tones. Implications for ATP aircraft include the following: (1) the interior noise environment with propeller tones is more annoying than an environment without tones if the tone is present at a level sufficient to increase the overall sound pressure level; (2) the increased annoyance due to the fundamental propeller tone frequency without harmonics is predictable from the overall sound pressure level; and (3) no additional noise penalty due to the perception of single discrete-frequency tones and/or beats was observed.
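    The finding that tone annoyance tracks the increase in overall sound pressure level comes down to energy summation of the component levels. The 95/100 dB figures below are hypothetical, chosen only to show the arithmetic:

    ```python
    import math

    def overall_spl(levels_db):
        """Energy-sum individual component levels (dB) into an overall SPL."""
        return 10.0 * math.log10(sum(10.0 ** (L / 10.0) for L in levels_db))

    background = 95.0                 # broadband boundary-layer noise, dB (assumed)
    tone = 100.0                      # propeller tone level, dB (assumed)
    oaspl = overall_spl([background, tone])
    penalty = oaspl - background      # annoyance-relevant increase from the tone
    # two tones at f and f + df beat at df Hz, e.g. 160.0 and 160.5 Hz -> 0.5 Hz
    ```

    Per the abstract, a tone only raises annoyance when it is strong enough to raise this overall level; the beat frequency itself added no extra penalty.
    
    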

  10. Efficient Multi-Dimensional Simulation of Quantum Confinement Effects in Advanced MOS Devices

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A.; Ancona, Mario G.; Rafferty, Conor S.; Yu, Zhiping

    2000-01-01

    We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thickness down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short channel effects (in particular, drain-induced barrier lowering) are noticeably increased.

  11. Efficient Multi-Dimensional Simulation of Quantum Confinement Effects in Advanced MOS Devices

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A.; Rafferty, Conor S.; Ancona, Mario G.; Yu, Zhi-Ping

    2000-01-01

    We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thickness down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short channel effects (in particular, drain-induced barrier lowering) are noticeably increased.

  12. Simulation of Thin-Film Damping and Thermal Mechanical Noise Spectra for Advanced Micromachined Microphone Structures

    PubMed Central

    Hall, Neal A.; Okandan, Murat; Littrell, Robert; Bicen, Baris; Degertekin, F. Levent

    2008-01-01

    In many micromachined sensors the thin (2–10 μm thick) air film between a compliant diaphragm and backplate electrode plays a dominant role in shaping both the dynamic and thermal noise characteristics of the device. Silicon microphone structures used in grating-based optical-interference microphones have recently been introduced that employ backplates with minimal area to achieve low damping and low thermal noise levels. Finite-element based modeling procedures based on 2-D discretization of the governing Reynolds equation are ideally suited for studying thin-film dynamics in such structures which utilize relatively complex backplate geometries. In this paper, the dynamic properties of both the diaphragm and thin air film are studied using a modal projection procedure in a commonly used finite element software and the results are used to simulate the dynamic frequency response of the coupled structure to internally generated electrostatic actuation pressure. The model is also extended to simulate thermal mechanical noise spectra of these advanced sensing structures. In all cases simulations are compared with measured data and show excellent agreement—demonstrating 0.8 pN/√Hz and 1.8 μPa/√Hz thermal force and thermal pressure noise levels, respectively, for the 1.5 mm diameter structures under study which have a fundamental diaphragm resonance-limited bandwidth near 20 kHz. PMID:19081811

  13. Simulation of Thin-Film Damping and Thermal Mechanical Noise Spectra for Advanced Micromachined Microphone Structures.

    PubMed

    Hall, Neal A; Okandan, Murat; Littrell, Robert; Bicen, Baris; Degertekin, F Levent

    2008-06-01

    In many micromachined sensors the thin (2-10 μm thick) air film between a compliant diaphragm and backplate electrode plays a dominant role in shaping both the dynamic and thermal noise characteristics of the device. Silicon microphone structures used in grating-based optical-interference microphones have recently been introduced that employ backplates with minimal area to achieve low damping and low thermal noise levels. Finite-element based modeling procedures based on 2-D discretization of the governing Reynolds equation are ideally suited for studying thin-film dynamics in such structures which utilize relatively complex backplate geometries. In this paper, the dynamic properties of both the diaphragm and thin air film are studied using a modal projection procedure in a commonly used finite element software and the results are used to simulate the dynamic frequency response of the coupled structure to internally generated electrostatic actuation pressure. The model is also extended to simulate thermal mechanical noise spectra of these advanced sensing structures. In all cases simulations are compared with measured data and show excellent agreement, demonstrating 0.8 pN/√Hz and 1.8 μPa/√Hz thermal force and thermal pressure noise levels, respectively, for the 1.5 mm diameter structures under study which have a fundamental diaphragm resonance-limited bandwidth near 20 kHz.

  14. The advance of non-invasive detection methods in osteoarthritis

    NASA Astrophysics Data System (ADS)

    Dai, Jiao; Chen, Yanping

    2011-06-01

    Osteoarthritis (OA) is one of the most prevalent chronic diseases and severely affects patients' quality of life while imposing a substantial economic burden. Detection and evaluation technology can provide basic information for early treatment. A variety of imaging methods in OA were reviewed, such as conventional X-ray, computed tomography (CT), ultrasound (US), magnetic resonance imaging (MRI) and near-infrared spectroscopy (NIRS). Among the existing imaging modalities, the spatial resolution of X-ray is extremely high; CT is a three-dimensional method with high density resolution; US as an evaluation method of knee OA sensitively discriminates between normal and degenerative cartilage; as a sensitive and nonionizing method, MRI is suitable for the detection of early OA, but it is too expensive for routine use; NIRS is a safe, low-cost modality that is also good at detecting early-stage OA. In summary, each method has its own advantages, but NIRS offers the broadest application prospects and is likely to enter clinical daily routine and become the gold standard for diagnostic detection.

  15. Meshless lattice Boltzmann method for the simulation of fluid flows.

    PubMed

    Musavi, S Hossein; Ashrafizaadeh, Mahmud

    2015-02-01

    A meshless lattice Boltzmann numerical method is proposed. The collision and streaming operators of the lattice Boltzmann equation are separated, as in the usual lattice Boltzmann models. While the purely local collision equation remains the same, we rewrite the streaming equation as a pure advection equation and discretize the resulting partial differential equation using the Lax-Wendroff scheme in time and the meshless local Petrov-Galerkin scheme based on augmented radial basis functions in space. The meshless feature of the proposed method makes it a more powerful lattice Boltzmann solver, especially for cases in which using meshes introduces significant numerical errors into the solution, or when improving the mesh quality is a complex and time-consuming process. Three well-known benchmark fluid flow problems, namely the plane Couette flow, the circular Couette flow, and the impulsively started cylinder flow, are simulated for the validation of the proposed method. Excellent agreement with analytical solutions or with previous experimental and numerical results in the literature is observed in all the simulations. Although the computational resources required for the meshless method per node are higher compared to that of the standard lattice Boltzmann method, it is shown that for cases in which the total number of nodes is significantly reduced, the present method actually outperforms the standard lattice Boltzmann method.
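    The streaming step rewritten as a pure advection equation can be sketched with the same Lax-Wendroff time discretization on a 1-D periodic grid; the regular grid here is a stand-in for the paper's meshless local Petrov-Galerkin RBF discretization in space, and all parameters are illustrative.

    ```python
    import numpy as np

    def lax_wendroff_advect(f, c, dx, dt, steps):
        """1-D Lax-Wendroff advection of f at constant speed c on a periodic
        grid -- a schematic version of the streaming-as-advection step."""
        nu = c * dt / dx                       # Courant number (|nu| <= 1)
        for _ in range(steps):
            fp = np.roll(f, -1)                # f_{i+1}
            fm = np.roll(f, 1)                 # f_{i-1}
            f = f - 0.5 * nu * (fp - fm) + 0.5 * nu ** 2 * (fp - 2 * f + fm)
        return f

    x = np.linspace(0.0, 1.0, 100, endpoint=False)
    f0 = np.exp(-200.0 * (x - 0.3) ** 2)       # Gaussian pulse at x = 0.3
    # 40 steps of dt = 0.005 at c = 1 shift the pulse by 0.2, to x = 0.5
    f1 = lax_wendroff_advect(f0, c=1.0, dx=0.01, dt=0.005, steps=40)
    ```

    On the periodic grid this update conserves the total distribution exactly, mirroring the conservation property needed for the collision-streaming split.
    
    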

  16. Application of advanced reliability methods to local strain fatigue analysis

    NASA Technical Reports Server (NTRS)

    Wu, T. T.; Wirsching, P. H.

    1983-01-01

    When design factors are considered as random variables and the failure condition cannot be expressed by a closed form algebraic inequality, computations of risk (or probability of failure) might become extremely difficult or very inefficient. This study suggests using a simple, and easily constructed, second degree polynomial to approximate the complicated limit state in the neighborhood of the design point; a computer analysis relates the design variables at selected points. Then a fast probability integration technique (i.e., the Rackwitz-Fiessler algorithm) can be used to estimate risk. The capability of the proposed method is demonstrated in an example of a low cycle fatigue problem for which a computer analysis is required to perform local strain analysis to relate the design variables. A comparison of the performance of this method is made with a far more costly Monte Carlo solution. Agreement of the proposed method with Monte Carlo is considered to be good.
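    The core idea, replacing the expensive limit state with a second-degree polynomial fitted near the design point and then running the cheap probability computation on the surrogate, can be sketched as follows. The quadratic limit state, the nine fitting points, and the plain Monte Carlo on the surrogate (standing in for the Rackwitz-Fiessler fast probability integration) are all illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def g_true(x):
        """Hypothetical 'expensive' limit state; failure when g < 0."""
        return 3.0 - x[..., 0] ** 2 - 0.5 * x[..., 1]

    # fit a second-degree polynomial surrogate from a handful of g evaluations
    pts = np.array([[a, b] for a in (-1.0, 0.0, 1.0) for b in (-1.0, 0.0, 1.0)])
    A = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                         pts[:, 0] ** 2, pts[:, 0] * pts[:, 1], pts[:, 1] ** 2])
    coef, *_ = np.linalg.lstsq(A, g_true(pts), rcond=None)

    def g_fit(x):
        """Evaluate the fitted quadratic response surface."""
        return (coef[0] + coef[1] * x[:, 0] + coef[2] * x[:, 1]
                + coef[3] * x[:, 0] ** 2 + coef[4] * x[:, 0] * x[:, 1]
                + coef[5] * x[:, 1] ** 2)

    X = rng.standard_normal((200_000, 2))      # standard-normal design variables
    pf = float(np.mean(g_fit(X) < 0.0))        # failure probability on surrogate
    ```

    Only the nine surrogate-fitting points require the expensive analysis; the probability integration then costs essentially nothing, which is the economy the abstract describes relative to brute-force Monte Carlo on the true model.
    
    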

  17. ADVANCED URBANIZED METEOROLOGICAL MODELING AND AIR QUALITY SIMULATIONS WITH CMAQ AT NEIGHBORHOOD SCALES

    EPA Science Inventory

    We present results from a study testing the new boundary layer parameterization method, the canopy drag approach (DA) which is designed to explicitly simulate the effects of buildings, street and tree canopies on the dynamic, thermodynamic structure and dispersion fields in urban...

  18. Supernova Simulations with Boltzmann Neutrino Transport: A Comparison of Methods

    SciTech Connect

    Liebendoerfer, M.; Rampp, M.; Janka, H.-Th.; Mezzacappa, Anthony

    2005-02-01

    Accurate neutrino transport has been built into spherically symmetric simulations of stellar core collapse and postbounce evolution. The results of such simulations agree that spherically symmetric models with standard microphysical input fail to explode by the delayed, neutrino-driven mechanism. Independent groups implemented fundamentally different numerical methods to tackle the Boltzmann neutrino transport equation. Here we present a direct and detailed comparison of such neutrino radiation-hydrodynamics simulations for two codes, AGILE-BOLTZTRAN of the Oak Ridge-Basel group and VERTEX of the Garching group. The former solves the Boltzmann equation directly by an implicit, general relativistic discrete-angle method on the adaptive grid of a conservative implicit hydrodynamics code with second-order TVD advection. In contrast, the latter couples a variable Eddington factor technique with an explicit, moving-grid, conservative high-order Riemann solver with important relativistic effects treated by an effective gravitational potential. The presented study is meant to test our neutrino radiation-hydrodynamics implementations and to provide a data basis for comparisons and verifications of supernova codes to be developed in the future. Results are discussed for simulations of the core collapse and postbounce evolution of a 13 M{sub {circle_dot}} star with Newtonian gravity and a 15 M{sub {circle_dot}} star with relativistic gravity.

  19. Use of advanced particle methods in modeling space propulsion and its supersonic expansions

    NASA Astrophysics Data System (ADS)

    Borner, Arnaud

    This research discusses the use of advanced kinetic particle methods such as Molecular Dynamics (MD) and direct simulation Monte Carlo (DSMC) to model space propulsion systems such as electrospray thrusters and their supersonic expansions. MD simulations are performed to model an electrospray thruster for the ionic liquid (IL) EMIM--BF4 using coarse-grained (CG) potentials. The model is initially featuring a constant electric field applied in the longitudinal direction. Two coarse-grained potentials are compared, and the effective-force CG (EFCG) potential is found to predict the formation of the Taylor cone, the cone-jet, and other extrusion modes for similar electric fields and mass flow rates observed in experiments of a IL fed capillary-tip-extractor system better than the simple CG potential. Later, one-dimensional and fully transient three-dimensional electric fields, the latter solving Poisson's equation to take into account the electric field due to space charge at each timestep, are computed by coupling the MD model to a Poisson solver. It is found that the inhomogeneous electric field as well as that of the IL space-charge improve agreement between modeling and experiment. The boundary conditions (BCs) are found to have a substantial impact on the potential and electric field, and the tip BC is introduced and compared to the two previous BCs, named plate and needle, showing good improvement by reducing unrealistically high radial electric fields generated in the vicinity of the capillary tip. The influence of the different boundary condition models on charged species currents as a function of the mass flow rate is studied, and it is found that a constant electric field model gives similar agreement to the more rigorous and computationally expensive tip boundary condition at lower flow rates. 
However, at higher mass flow rates the MD simulations with the constant electric field produces extruded particles with higher Coulomb energy per ion, consistent with

  20. Protein Microarrays with Novel Microfluidic Methods: Current Advances.

    PubMed

    Dixit, Chandra K; Aguirre, Gerson R

    2014-07-01

    Microfluidic-based micromosaic technology has allowed the patterning of recognition elements in restricted micrometer-scale areas with high precision. This controlled patterning enabled the development of highly multiplexed arrays for multiple-analyte detection. This arraying technology was first introduced at the beginning of 2001 and holds tremendous potential to revolutionize microarray development and analyte detection. Later, several microfluidic methods were developed for microarray application. In this review we discuss these novel methods and approaches, which leverage the properties of microfluidic technologies to significantly improve various physical aspects of microarray technology, such as enhanced imprinting homogeneity, stability of the immobilized biomolecules, decreased assay times, and reduction of costs and of bulky instrumentation.

  1. Advances in dual algorithms and convex approximation methods

    NASA Technical Reports Server (NTRS)

    Smaoui, H.; Fleury, C.; Schmit, L. A.

    1988-01-01

    A new algorithm for solving the duals of separable convex optimization problems is presented. The algorithm is based on an active set strategy in conjunction with a variable metric method. This first order algorithm is more reliable than Newton's method used in DUAL-2 because it does not break down when the Hessian matrix becomes singular or nearly singular. A perturbation technique is introduced in order to remove the nondifferentiability of the dual function which arises when linear constraints are present in the approximate problem.

  2. Recent advances in the modeling of plasmas with the Particle-In-Cell methods

    NASA Astrophysics Data System (ADS)

    Vay, Jean-Luc; Lehe, Remi; Vincenti, Henri; Godfrey, Brendan; Lee, Patrick; Haber, Irv

    2015-11-01

    The Particle-In-Cell (PIC) approach is the method of choice for self-consistent simulations of plasmas from first principles. The fundamentals of the PIC method were established decades ago but improvements or variations are continuously being proposed. We report on several recent advances in PIC related algorithms, including: (a) detailed analysis of the numerical Cherenkov instability and its remediation, (b) analytic pseudo-spectral electromagnetic solvers in Cartesian and cylindrical (with azimuthal modes decomposition) geometries, (c) arbitrary-order finite-difference and generalized pseudo-spectral Maxwell solvers, (d) novel analysis of Maxwell's solvers' stencil variation and truncation, in application to domain decomposition strategies and implementation of Perfectly Matched Layers in high-order and pseudo-spectral solvers. Work supported by US-DOE Contracts DE-AC02-05CH11231 and the US-DOE SciDAC program ComPASS. Used resources of NERSC, supported by US-DOE Contract DE-AC02-05CH11231.

  3. Remembrance of phases past: An autoregressive method for generating realistic atmospheres in simulations

    NASA Astrophysics Data System (ADS)

    Srinath, Srikar; Poyneer, Lisa A.; Rudy, Alexander R.; Ammons, S. M.

    2014-08-01

    The advent of expensive, large-aperture telescopes and complex adaptive optics (AO) systems has strengthened the need for detailed simulation of such systems from the top of the atmosphere to the control algorithms. The credibility of any simulation is underpinned by the quality of the atmosphere model used for introducing phase variations into the incident photons. Hitherto, simulations which incorporate wind layers have relied upon phase-screen generation methods that tax the computation and memory capacities of the platforms on which they run. This places limits on parameters of a simulation, such as exposure time or resolution, thus compromising its utility. As aperture sizes and fields of view increase, the problem will only get worse. We present an autoregressive method for evolving atmospheric phase that is efficient in its use of computational resources and allows for variability in the power contained in the frozen-flow or stochastic components of the atmosphere. Users have the flexibility of generating atmosphere datacubes in advance of runs, where memory constraints allow, to save on computation time, or of computing the phase at each time step for long exposure times. Preliminary tests of model atmospheres generated using this method show power spectral density and rms phase in accordance with established metrics for Kolmogorov models.
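    The autoregressive evolution is easiest to see per Fourier mode: each spatial frequency is advanced by a frozen-flow phase shift (a wind translation) scaled by a decorrelation factor |α| < 1, while Kolmogorov-weighted white noise replaces the power the α factor removes, keeping the screen statistically stationary. Grid size, r0, wind speed, and the normalization below are schematic assumptions, not the authors' parameters.

    ```python
    import numpy as np

    def ar_phase_screens(n=64, steps=50, r0=0.15, dx=0.1, wind=(5.0, 0.0),
                         dt=0.01, decorr=0.999, seed=0):
        """Evolve an atmospheric phase screen with a per-mode AR(1) update:
        phi_hat <- alpha * phi_hat + sqrt(1 - decorr**2) * fresh Kolmogorov noise,
        where alpha combines a frozen-flow translation with mild decorrelation.
        Normalization constants are schematic."""
        rng = np.random.default_rng(seed)
        fx = np.fft.fftfreq(n, dx)
        FX, FY = np.meshgrid(fx, fx, indexing="ij")
        f2 = FX ** 2 + FY ** 2
        f2[0, 0] = np.inf                              # suppress the piston mode
        amp = np.sqrt(0.023 * r0 ** (-5.0 / 3.0) * f2 ** (-11.0 / 6.0))
        alpha = decorr * np.exp(-2j * np.pi * dt * (FX * wind[0] + FY * wind[1]))
        noise = np.sqrt(1.0 - decorr ** 2)

        def kolmogorov_noise():
            z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
            return amp * z

        phi_hat = kolmogorov_noise()                   # initial screen
        screens = []
        for _ in range(steps):
            phi_hat = alpha * phi_hat + noise * kolmogorov_noise()
            screens.append(np.fft.ifft2(phi_hat).real)
        return screens

    screens = ar_phase_screens()
    ```

    Because |α|² + (noise scale)² = 1, each mode's variance is preserved in expectation, which is what lets the screen be generated step by step instead of as one enormous precomputed ribbon.
    
    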

  4. An Efficient Ray-Tracing Method for Determining Terrain Intercepts in EDL Simulations

    NASA Technical Reports Server (NTRS)

    Shidner, Jeremy D.

    2016-01-01

    The calculation of a ray's intercept from an arbitrary point in space to a prescribed surface is a common task in computer simulations. The arbitrary point often represents an object that is moving according to the simulation, while the prescribed surface is fixed in a defined frame. For detailed simulations, this surface becomes complex, taking the form of real-world objects such as mountains, craters or valleys which require more advanced methods to accurately calculate a ray's intercept location. Incorporation of these complex surfaces has commonly been implemented in graphics systems that utilize highly optimized graphics processing units to analyze such features. This paper proposes a simplified method that does not require computationally intensive graphics solutions, but rather an optimized ray-tracing method for an assumed terrain dataset. This approach was developed for the Mars Science Laboratory mission which landed on the complex terrain of Gale Crater. First, this paper begins with a discussion of the simulation used to implement the model and the applicability of finding surface intercepts with respect to atmosphere modeling, altitude determination, radar modeling, and contact forces influencing vehicle dynamics. Next, the derivation and assumptions of the intercept finding method are presented. Key assumptions are noted making the routines specific to only certain types of surface data sets that are equidistantly spaced in longitude and latitude. The derivation of the method relies on ray-tracing, requiring discussion on the formulation of the ray with respect to the terrain datasets. Further discussion includes techniques for ray initialization in order to optimize the intercept search. Then, the model implementation for various new applications in the simulation are demonstrated. Finally, a validation of the accuracy is presented along with the corresponding data sets used in the validation. 
A performance summary of the method will be shown using
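    The intercept search over an equidistantly spaced height grid can be sketched as a ray march with bilinear interpolation of the surface under the current sample point. Uniform stepping here stands in for the optimized per-cell traversal and initialization techniques the paper describes, and the grid, spacing, and step size are illustrative assumptions.

    ```python
    import numpy as np

    def terrain_intercept(origin, direction, heights, cell=1.0, max_t=1e4, dt=0.5):
        """March a ray origin + t*direction across an equidistantly spaced
        height grid; return the first sample point that falls below the
        bilinearly interpolated surface, or None if the ray misses."""
        o = np.asarray(origin, dtype=float)
        d = np.asarray(direction, dtype=float)
        d = d / np.linalg.norm(d)
        ni, nj = heights.shape
        t = 0.0
        while t < max_t:
            p = o + t * d
            i, j = p[0] / cell, p[1] / cell
            if not (0 <= i < ni - 1 and 0 <= j < nj - 1):
                return None                      # ray left the terrain patch
            i0, j0 = int(i), int(j)
            u, v = i - i0, j - j0                # bilinear weights in the cell
            h = ((1 - u) * (1 - v) * heights[i0, j0]
                 + u * (1 - v) * heights[i0 + 1, j0]
                 + (1 - u) * v * heights[i0, j0 + 1]
                 + u * v * heights[i0 + 1, j0 + 1])
            if p[2] <= h:
                return p                         # first sample below the surface
            t += dt
        return None
    ```

    In an EDL simulation the same query serves altitude determination (ray straight down), radar beam modeling (ray along the boresight), and contact detection, which is why a fast intercept routine pays off so broadly.
    
    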

  5. Advanced Methods for Aircraft Engine Thrust and Noise Benefits: Nozzle-Inlet Flow Analysis

    NASA Technical Reports Server (NTRS)

    Gilinsky, Mikhail; Morgan, Morris H.; Povitsky, Alex; Schkolnikov, Natalia; Njoroge, Norman; Coston, Calvin; Blankson, Isaiah M.

    2001-01-01

    The Fluid Mechanics and Acoustics Laboratory at Hampton University (HU/FM&AL) jointly with the NASA Glenn Research Center has conducted four connected subprojects under the reporting project. Basically, the HU/FM&AL Team has been involved in joint research with the purpose of theoretical explanation of experimental facts and creation of accurate numerical simulation techniques and prediction theory for solution of current problems in propulsion systems of interest to the NAVY and NASA agencies. This work is also supported by joint research between the NASA GRC and the Institute of Mechanics at Moscow State University (IM/MSU) in Russia under a CRDF grant. The research is focused on a wide range of problems in the propulsion field as well as on experimental testing and theoretical and numerical simulation analyses for advanced aircraft and rocket engines. The FM&AL Team uses analytical methods, numerical simulations and possible experimental tests at the Hampton University campus. The fundamental idea uniting these subprojects is to use nontraditional 3D corrugated and composite nozzle and inlet designs and additional methods for exhaust jet noise reduction without essential thrust loss and even with thrust augmentation. These subprojects are: (1) Aeroperformance and acoustics of Bluebell-shaped and Telescope-shaped designs; (2) An analysis of sharp-edged nozzle exit designs for effective fuel injection into the flow stream in air-breathing engines: triangular-round, diamond-round and other nozzles; (3) Measurement technique improvement for the HU Low Speed Wind Tunnel; a new course in the field of aerodynamics, teaching and training of HU students; experimental tests of Mobius-shaped screws: research and training; (4) Supersonic inlet shape optimization. The main outcomes during this reporting period are: (1) Publications: The AIAA Paper #00-3170 was presented at the 36th AIAA/ASME/SAE/ASEE Joint Propulsion Conference, 17-19 June, 2000, Huntsville, AL. The AIAA

  6. Application of advanced Monte Carlo Methods in numerical dosimetry.

    PubMed

    Reichelt, U; Henniger, J; Lange, C

    2006-01-01

    Many tasks in different sectors of dosimetry are very complex and highly sensitive to changes in the radiation field. Often, only the simulation of radiation transport is capable of describing the radiation field completely. Down to sub-cellular dimensions, the energy deposition by cascades of secondary electrons is the main pathway for damage induction in matter. A large number of interactions take place until such electrons are slowed down to thermal energies. Also for some problems of photon transport a large number of photon histories need to be processed. Thus, the efficient non-analogue Monte Carlo program AMOS has been developed for photon and electron transport. Various applications and benchmarks are presented demonstrating its capabilities. For radiotherapy purposes the radiation field of a brachytherapy source is calculated according to the American Association of Physicists in Medicine Task Group Report 43 (AAPM/TG43). As additional examples, results for the detector efficiency of a high-purity germanium (HPGe) detector and a dose estimation for an X-ray shielding for radiation protection are shown.
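    The non-analogue idea can be shown in a few lines: instead of killing absorbed photons, implicit capture multiplies each history's statistical weight by the scattering probability at every collision, and Russian roulette trims low-weight histories without biasing the mean. The slab with forward-only scattering below is a made-up benchmark with a closed-form answer exp(-(1-c)·μt·L), not the AMOS code.

    ```python
    import math
    import random

    def slab_transmission(mu_t=1.0, c=0.7, L=5.0, n=50_000, seed=2):
        """Implicit-capture estimate of transmission through a slab with
        forward-only scattering. Each collision multiplies the weight by the
        scattering probability c instead of sampling absorption outcomes."""
        random.seed(seed)
        total = 0.0
        for _ in range(n):
            x, w = 0.0, 1.0
            while True:
                x += -math.log(1.0 - random.random()) / mu_t   # free path
                if x >= L:
                    total += w                                  # escaped: tally
                    break
                w *= c                                          # implicit capture
                if w < 1e-3:                                    # Russian roulette
                    if random.random() < 0.5:
                        break                                   # kill history
                    w *= 2.0                                    # or double weight
        return total / n

    t_mc = slab_transmission()
    t_exact = math.exp(-(1.0 - 0.7) * 1.0 * 5.0)
    ```

    Every history now contributes to the tally, which is why the non-analogue estimator reaches a given variance with far fewer histories than analog sampling for deep-penetration problems.
    
    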

  7. A CTSA agenda to advance methods for comparative effectiveness research.

    PubMed

    Helfand, Mark; Tunis, Sean; Whitlock, Evelyn P; Pauker, Stephen G; Basu, Anirban; Chilingerian, Jon; Harrell, Frank E; Meltzer, David O; Montori, Victor M; Shepard, Donald S; Kent, David M

    2011-06-01

    Clinical research needs to be more useful to patients, clinicians, and other decision makers. To meet this need, more research should focus on patient-centered outcomes, compare viable alternatives, and be responsive to individual patients' preferences, needs, pathobiology, settings, and values. These features, which make comparative effectiveness research (CER) fundamentally patient-centered, challenge researchers to adopt or develop methods that improve the timeliness, relevance, and practical application of clinical studies. In this paper, we describe 10 priority areas that address 3 critical needs for research on patient-centered outcomes (PCOR): (1) developing and testing trustworthy methods to identify and prioritize important questions for research; (2) improving the design, conduct, and analysis of clinical research studies; and (3) linking the process and outcomes of actual practice to priorities for research on patient-centered outcomes. We argue that the National Institutes of Health, through its clinical and translational research program, should accelerate the development and refinement of methods for CER by linking a program of methods research to the broader portfolio of large, prospective clinical and health system studies it supports. Insights generated by this work should be of enormous value to PCORI and to the broad range of organizations that will be funding and implementing CER.

  8. Advanced Methods for the Solution of Differential Equations.

    ERIC Educational Resources Information Center

    Goldstein, Marvin E.; Braun, Willis H.

    This is a textbook, originally developed for scientists and engineers, which stresses the actual solutions of practical problems. Theorems are precisely stated, but the proofs are generally omitted. Sample contents include first-order equations, equations in the complex plane, irregular singular points, and numerical methods. A more recent idea,…

  9. Origins, Methods and Advances in Qualitative Meta-Synthesis

    ERIC Educational Resources Information Center

    Nye, Elizabeth; Melendez-Torres, G. J.; Bonell, Chris

    2016-01-01

    Qualitative research is a broad term encompassing many methods. Critiques of the field of qualitative research argue that while individual studies provide rich descriptions and insights, the absence of connections drawn between studies limits their usefulness. In response, qualitative meta-synthesis serves as a design to interpret and synthesise…

  10. Generalized Weighted Residual Method; Advancements and Current Studies

    NASA Astrophysics Data System (ADS)

    Scheffel, Jan; Lindvall, Kristoffer

    2016-10-01

The Generalized Weighted Residual Method (GWRM) is a time-spectral method for solving initial-value partial differential equations. The GWRM treats the temporal, spatial, and parameter domains by projecting the residual onto a Chebyshev polynomial space, with the variational principle that the residual vanish. This treatment provides a global semi-analytical solution. However, a straightforward global solution is not economical. One remedy is the introduction of spatial and temporal sub-domains with coupled internal boundary conditions, which decreases memory requirements and introduces sparse matrices. Only the equations pertaining to the boundary conditions need be solved globally, making the method parallelizable in time. Efficient solution of the linearized ideal MHD stability equations for screw-pinch equilibria is shown to be possible. The GWRM has also been used to solve strongly nonlinear ODEs such as the Lorenz equations (1984), and it is capable of competing with finite-difference time-stepping schemes in terms of both accuracy and efficiency. GWRM solutions of linear and nonlinear model problems of interest for stability and turbulence modelling will be presented, including detailed comparisons with time-stepping methods.
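
The flavor of a Chebyshev weighted-residual solve can be illustrated with a minimal collocation sketch (delta-function weights, not the GWRM's Galerkin projection or its sub-domain machinery): expand the unknown in Chebyshev polynomials, force the residual of u'(x) + u(x) = 0, u(0) = 1, to vanish at the nodes, and solve the resulting linear system for the coefficients. The degree N = 12 and the test point are arbitrary illustrative choices.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

N = 12                      # polynomial degree (illustrative choice)
k = np.arange(N + 1)
t = np.cos(np.pi * k / N)   # Chebyshev-Gauss-Lobatto nodes on [-1, 1]

# Vandermonde matrices for T_j(t) and T_j'(t) at the nodes.
V = C.chebvander(t, N)
Vd = np.zeros_like(V)
for j in range(N + 1):
    e = np.zeros(N + 1); e[j] = 1.0
    Vd[:, j] = C.chebval(t, C.chebder(e))

# Residual of u'(x) + u(x) = 0 on [0, 1]; x = (t + 1)/2 gives d/dx = 2 d/dt.
A = 2.0 * Vd + V
A[-1, :] = V[-1, :]         # last node is t = -1, i.e. x = 0: impose u(0) = 1
b = np.zeros(N + 1); b[-1] = 1.0
c = np.linalg.solve(A, b)   # Chebyshev coefficients of the semi-analytical solution

u = C.chebval(2 * 0.7 - 1, c)        # evaluate the solution at x = 0.7
print(abs(u - np.exp(-0.7)))         # spectral accuracy: error near machine precision
```

Because the answer is a polynomial in the Chebyshev basis, it can be evaluated anywhere in the domain, which is the "global semi-analytical" character the abstract refers to.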

  11. Innovative and Advanced Coupled Neutron Transport and Thermal Hydraulic Method (Tool) for the Design, Analysis and Optimization of VHTR/NGNP Prismatic Reactors

    SciTech Connect

    Rahnema, Farzad; Garimeela, Srinivas; Ougouag, Abderrafi; Zhang, Dingkang

    2013-11-29

This project will develop a 3D, advanced coarse mesh transport method (COMET-Hex) for steady-state and transient analyses in advanced very high-temperature reactors (VHTRs). The project will lead to a coupled neutronics and thermal hydraulic (T/H) core simulation tool with fuel depletion capability. The computational tool will be developed in hexagonal geometry, based solely on transport theory without (spatial) homogenization in complicated 3D geometries. In addition to the hexagonal geometry extension, collaborators will concurrently develop three additional capabilities to increase the code’s versatility as an advanced and robust core simulator for VHTRs. First, the project team will develop and implement a depletion method within the core simulator. Second, the team will develop an elementary (proof-of-concept) 1D time-dependent transport method for efficient transient analyses. The third capability will be a thermal hydraulic method coupled to the neutronics transport module for VHTRs. Current advancements in reactor core design are pushing VHTRs toward greater core and fuel heterogeneity to pursue higher burn-ups, efficiently transmute used fuel, maximize energy production, and improve plant economics and safety. As a result, an accurate and efficient neutron transport method, with the capability to treat heterogeneous burnable poison effects, is highly desirable for predicting VHTR neutronics performance. This research project’s primary objective is to advance the state of the art for reactor analysis.

  12. Calibration of three rainfall simulators with automatic measurement methods

    NASA Astrophysics Data System (ADS)

    Roldan, Margarita

    2010-05-01

CALIBRATION OF THREE RAINFALL SIMULATORS WITH AUTOMATIC MEASUREMENT METHODS M. Roldán (1), I. Martín (2), F. Martín (2), S. de Alba (3), M. Alcázar (3), F.I. Cermeño (3) 1 Grupo de Investigación Ecología y Gestión Forestal Sostenible. ECOGESFOR-Universidad Politécnica de Madrid. E.U.I.T. Forestal. Avda. Ramiro de Maeztu s/n. Ciudad Universitaria. 28040 Madrid. margarita.roldan@upm.es 2 E.U.I.T. Forestal. Avda. Ramiro de Maeztu s/n. Ciudad Universitaria. 28040 Madrid. 3 Facultad de Ciencias Geológicas. Universidad Complutense de Madrid. Ciudad Universitaria s/n. 28040 Madrid Rainfall erosivity is the potential ability of rain to cause erosion. It is a function of the physical characteristics of rainfall (Hudson, 1971). Most expressions describing erosivity are related to kinetic energy or momentum, and thus to drop mass or size and fall velocity. Therefore, research on the factors determining erosivity leads to the need to study the relation between fall height and fall velocity for different drop sizes generated in a rainfall simulator (Epema, G.F. and Riezebos, H.Th., 1983). Rainfall simulators are among the most widely used tools for erosion studies and are used to determine fall velocity and drop size; they allow repeated and multiple measurements. The main reason for using rainfall simulation as a research tool is to reproduce in a controlled way the behaviour expected in the natural environment. However, when simulated rain is compared with natural rain, a lack of correspondence between the two can cast doubt on the validity of the data, because the characteristics of natural rain are not adequately represented in rainfall simulation research (Dunkerley, D., 2008). Rainfall simulations often use high rain rates that do not resemble natural rain events, so the measurements are not comparable. Moreover, the intensity is related to the kinetic energy, which

  13. A new rapid method of solar simulator calibration

    NASA Technical Reports Server (NTRS)

    Ross, B.

    1976-01-01

A quick method for checking solar simulator spectral content is presented. The method is based upon a solar cell of extended spectral sensitivity and known absolute response, and a dichroic mirror with its reflection-transmission transition close to the peak wavelength of the Thekaekara AM0 distribution. It balances the need for spectral discrimination against the ability to integrate over wide spectral regions of the distribution, which was considered important because of the spiky nature of the high-pressure xenon lamps in common use. The results are expressed as a single number, the blue/red ratio, which, combined with the total (unfiltered) output, provides a simple but adequate characterization. Measurements were conducted at eleven major facilities across the country, and a total of eighteen simulators were measured, including five pulsed units.

  14. Lattice Boltzmann Method for Spacecraft Propellant Slosh Simulation

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.; Powers, Joseph F.; Yang, Hong Q.

    2015-01-01

    A scalable computational approach to the simulation of propellant tank sloshing dynamics in microgravity is presented. In this work, we use the lattice Boltzmann equation (LBE) to approximate the behavior of two-phase, single-component isothermal flows at very low Bond numbers. Through the use of a non-ideal gas equation of state and a modified multiple relaxation time (MRT) collision operator, the proposed method can simulate thermodynamically consistent phase transitions at temperatures and density ratios consistent with typical spacecraft cryogenic propellants, for example, liquid oxygen. Determination of the tank forces and moments relies upon the global momentum conservation of the fluid domain, and a parametric wall wetting model allows tuning of the free surface contact angle. Development of the interface is implicit and no interface tracking approach is required. Numerical examples illustrate the method's application to predicting bulk fluid motion including lateral propellant slosh in low-g conditions.

  15. Lattice Boltzmann Method for Spacecraft Propellant Slosh Simulation

    NASA Technical Reports Server (NTRS)

Orr, Jeb S.; Powers, Joseph F.; Yang, Hong Q.

    2015-01-01

    A scalable computational approach to the simulation of propellant tank sloshing dynamics in microgravity is presented. In this work, we use the lattice Boltzmann equation (LBE) to approximate the behavior of two-phase, single-component isothermal flows at very low Bond numbers. Through the use of a non-ideal gas equation of state and a modified multiple relaxation time (MRT) collision operator, the proposed method can simulate thermodynamically consistent phase transitions at temperatures and density ratios consistent with typical spacecraft cryogenic propellants, for example, liquid oxygen. Determination of the tank forces and moments is based upon a novel approach that relies on the global momentum conservation of the closed fluid domain, and a parametric wall wetting model allows tuning of the free surface contact angle. Development of the interface is implicit and no interface tracking approach is required. A numerical example illustrates the method's application to prediction of bulk fluid behavior during a spacecraft ullage settling maneuver.
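The collide-and-stream structure underlying both records above can be shown in a minimal single-phase D2Q9 BGK sketch — a deliberate simplification of the papers' two-phase MRT scheme (no non-ideal equation of state, no wall-wetting model); the relaxation time, grid size, and initial density perturbation are illustrative choices.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and their equilibrium weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
tau = 0.8                                   # BGK relaxation time (illustrative)

nx = ny = 32
rho = 1.0 + 0.01 * np.sin(2 * np.pi * np.arange(nx) / nx)[:, None] * np.ones((1, ny))
u = np.zeros((nx, ny, 2))

def feq(rho, u):
    """Second-order equilibrium distribution (lattice units, c_s^2 = 1/3)."""
    cu = np.einsum('qd,xyd->xyq', c, u)
    usq = np.einsum('xyd,xyd->xy', u, u)[..., None]
    return w * rho[..., None] * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

f = feq(rho, u)
for _ in range(100):
    f += (feq(rho, u) - f) / tau            # BGK collision, relax toward equilibrium
    for q in range(9):                      # streaming on a periodic grid
        f[..., q] = np.roll(f[..., q], shift=c[q], axis=(0, 1))
    rho = f.sum(axis=-1)                    # recover macroscopic moments
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]

print(abs(rho.mean() - 1.0))                # mass conserved to machine precision
```

The exact local conservation visible here is what makes the papers' global-momentum approach to tank forces and moments natural: forces follow from bookkeeping the momentum of the whole fluid domain rather than from interface tracking.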

  16. Methods for variance reduction in Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Bixler, Joel N.; Hokr, Brett H.; Winblad, Aidan; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J.

    2016-03-01

Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, due to the probabilistic nature of these simulations, large numbers of photons are often required in order to generate relevant results. Here, we present methods for reducing the variance of the dose distribution in a computational volume. The dose distribution is computed by tracing a large number of rays and tracking the absorption and scattering of the rays within the discrete voxels that comprise the volume. Variance reduction is shown here using quasi-random sampling, interaction forcing for weakly scattering media, and dose smoothing via bilateral filtering. These methods, along with the corresponding performance enhancements, are detailed here.
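
Quasi-random sampling, the first technique listed, replaces pseudorandom draws with a low-discrepancy sequence so that samples cover the domain more evenly. A minimal sketch (not the authors' photon-transport code): estimate a smooth 1-D integral with a hand-rolled van der Corput sequence versus plain Monte Carlo; the integrand and sample count are illustrative.

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-2 radical-inverse (low-discrepancy) sequence."""
    seq = np.zeros(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

f = lambda x: np.exp(-x)               # toy attenuation kernel; exact integral 1 - e^-1
exact = 1 - np.exp(-1)
n = 4096

rng = np.random.default_rng(0)
err_mc = abs(f(rng.random(n)).mean() - exact)    # pseudorandom: error ~ n^(-1/2)
err_qmc = abs(f(van_der_corput(n)).mean() - exact)  # quasi-random: near ~ n^(-1)
print(err_mc, err_qmc)
```

For smooth integrands the quasi-random estimate converges close to O(1/n) rather than O(1/sqrt(n)), which is the variance reduction being exploited.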

  17. Experiences using DAKOTA stochastic expansion methods in computational simulations.

    SciTech Connect

    Templeton, Jeremy Alan; Ruthruff, Joseph R.

    2012-01-01

Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experimental data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report presents results on the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels that may be needed for the methodologies to achieve convergence.
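
The core idea of a stochastic (polynomial chaos) expansion is to write a random response as a series in orthogonal polynomials of the input, so that the mean and variance fall out of the coefficients. A minimal Hermite-chaos sketch for Y = exp(X) with X ~ N(0, 1) — not DAKOTA's implementation, and the expansion order P = 8 is an illustrative choice:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial

# Coefficients c_k = E[Y He_k(X)] / k!, computed by Gauss-Hermite quadrature.
x, w = hermegauss(40)
w = w / np.sqrt(2 * np.pi)          # normalize to the standard normal measure
P = 8                               # truncation order (illustrative)
c = np.array([(w * np.exp(x) * hermeval(x, np.eye(P + 1)[k])).sum() / factorial(k)
              for k in range(P + 1)])

mean = c[0]                                             # PCE mean = zeroth coefficient
var = sum(factorial(k) * c[k]**2 for k in range(1, P + 1))  # variance from the rest
print(mean, var)   # exact values: e^0.5 ≈ 1.6487, e^2 - e ≈ 4.6708
```

Once the coefficients are in hand, statistics at any probability level come from cheap evaluations of the polynomial surrogate rather than from rerunning the expensive simulation, which is why convergence in expansion order matters so much in practice.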

  18. A modified captive bubble method for determining advancing and receding contact angles

    NASA Astrophysics Data System (ADS)

    Xue, Jian; Shi, Pan; Zhu, Lin; Ding, Jianfu; Chen, Qingmin; Wang, Qingjun

    2014-03-01

In this work, a modification to the captive bubble method was proposed for measuring the advancing and receding contact angle. The modification adds a pressure chamber with a pressure control system to the original experimental setup, which comprises an optical angle meter with a high-speed CCD camera, a temperature control system, and a computer. A series of samples with highly hydrophilic, hydrophilic, hydrophobic, and superhydrophobic surfaces were prepared. For the highly hydrophilic, hydrophilic, and hydrophobic surfaces, the advancing and receding contact angles measured by the new method were comparable to the results of the traditional sessile drop method. This shows that the method overcomes the limitation of the traditional captive bubble method, and the modified captive bubble method yields a smaller measurement error. However, owing to the nature of the captive bubble technique, the method remains suitable only for surfaces with advancing or receding contact angles below 130°.

  19. Advanced finite-difference time-domain techniques for simulation of optical devices with complex material properties and geometric configurations

    NASA Astrophysics Data System (ADS)

    Zhou, Dong

    2005-11-01

Modeling and simulation play increasingly important roles in the development and commercialization of optical devices and integrated circuits. The current trend in photonic technologies is to push the level of integration and to utilize materials and structures of increasing complexity. On the other hand, the superb characteristics of free-space and fiber optics continue to hold a strong position in serving a wide range of applications. All of this constitutes a significant challenge for the computer-aided modeling, simulation, and design of such optical devices and systems. The research work in this thesis deals with the investigation and development of advanced finite-difference time-domain (FDTD) methods, with a focus on emerging optical devices and integrated circuits with complex material and/or structural properties. On the material side, we consider in a systematic fashion the dispersive and anisotropic characteristics of different materials (i.e., insulators, semiconductors, and conductors) over a broad wavelength range. The Lorentz model is examined and adapted as a general model for treating material dispersion in the context of FDTD solutions. A dispersive FDTD method based on the multi-term Lorentz dispersion model is developed and employed for the modeling and design of optical devices. In the FDTD scheme, the perfectly matched layer (PML) boundary condition is extended to dispersive media with arbitrarily high-order Lorentz terms. Finally, a parameter extraction scheme that links the Lorentz model to experimental results is established. The dispersive FDTD method is then applied to the modeling and simulation of a magneto-optical (MO) disk system, in combination with vector diffraction theory. The former is used to analyze the interaction of the focused optical field with the conducting materials on the surface of the disk, while the latter simulates the beam propagation from the objective lens to the disk surface. The
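
The Yee leapfrog update at the heart of any FDTD scheme can be shown in a few lines. This is a bare 1-D vacuum sketch with a soft Gaussian source and simple reflecting ends — the thesis's dispersive extension would add auxiliary polarization-current equations for each Lorentz term; grid size, Courant number, and source parameters are illustrative.

```python
import numpy as np

nz, nt = 200, 300
S = 0.5                          # Courant number; 1-D Yee scheme is stable for S <= 1
Ey = np.zeros(nz)                # E field at integer grid points
Hx = np.zeros(nz - 1)            # H field staggered at half grid points

for n in range(nt):
    Hx += S * np.diff(Ey)                        # H update (leapfrog half-step)
    Ey[1:-1] += S * np.diff(Hx)                  # E update; fixed ends act as PEC walls
    Ey[50] += np.exp(-((n - 30) / 10) ** 2)      # soft Gaussian source injection

print(np.abs(Ey).max())          # a bounded pulse propagating across the grid
```

A Lorentz-dispersive medium would replace the vacuum E update with one driven by polarization accumulators, and a PML would replace the reflecting ends, but the staggered leapfrog structure stays the same.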

  20. Transformation-optics simulation method for stimulated Brillouin scattering

    NASA Astrophysics Data System (ADS)

    Zecca, Roberto; Bowen, Patrick T.; Smith, David R.; Larouche, Stéphane

    2016-12-01

    We develop an approach to enable the full-wave simulation of stimulated Brillouin scattering and related phenomena in a frequency-domain, finite-element environment. The method uses transformation-optics techniques to implement a time-harmonic coordinate transform that reconciles the different frames of reference used by electromagnetic and mechanical finite-element solvers. We show how this strategy can be successfully applied to bulk and guided systems, comparing the results with the predictions of established theory.