Sample records for "simulation requires accurate"

  1. Conservative properties of finite difference schemes for incompressible flow

    NASA Technical Reports Server (NTRS)

    Morinishi, Youhei

    1995-01-01

    The purpose of this research is to construct accurate finite difference schemes for incompressible unsteady flow simulations such as LES (large-eddy simulation) or DNS (direct numerical simulation). In this report, conservation properties of the continuity, momentum, and kinetic energy equations for incompressible flow are specified as analytical requirements for a proper set of discretized equations. Existing finite difference schemes in staggered grid systems are checked for satisfaction of the requirements. Proper higher order accurate finite difference schemes in a staggered grid system are then proposed. Plane channel flow is simulated using the proposed fourth order accurate finite difference scheme and the results compared with those of the second order accurate Harlow and Welch algorithm.
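
    A minimal numpy sketch (not Morinishi's schemes themselves) of the discrete continuity requirement the abstract refers to: on a staggered (MAC) grid, face velocities derived from a discrete streamfunction satisfy the second-order discrete divergence exactly, illustrating conservation by construction. Grid sizes and the streamfunction are illustrative.

```python
import numpy as np

def face_velocities_from_streamfunction(psi, dx, dy):
    """psi at cell corners, shape (nx+1, ny+1); returns staggered face velocities."""
    u = (psi[:, 1:] - psi[:, :-1]) / dy    # (nx+1, ny), on vertical faces
    v = -(psi[1:, :] - psi[:-1, :]) / dx   # (nx, ny+1), on horizontal faces
    return u, v

def discrete_divergence(u, v, dx, dy):
    """Second-order per-cell divergence on the staggered grid."""
    return (u[1:, :] - u[:-1, :]) / dx + (v[:, 1:] - v[:, :-1]) / dy

nx = ny = 64
dx = dy = 1.0 / nx
x = np.linspace(0.0, 1.0, nx + 1)
X, Y = np.meshgrid(x, x, indexing="ij")
psi = np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y)  # arbitrary streamfunction

u, v = face_velocities_from_streamfunction(psi, dx, dy)
# The discrete differences telescope, so the divergence is zero to roundoff.
print(np.abs(discrete_divergence(u, v, dx, dy)).max())
```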

  2. An implicit higher-order spatially accurate scheme for solving time dependent flows on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Tomaro, Robert F.

    1998-07-01

    The present research is aimed at developing a higher-order, spatially accurate scheme for both steady and unsteady flow simulations using unstructured meshes. The resulting scheme must work on a variety of general problems to ensure the creation of a flexible, reliable and accurate aerodynamic analysis tool. To calculate the flow around complex configurations, unstructured grids and the associated flow solvers have been developed. Efficient simulations require the minimum use of computer memory and computational times. Unstructured flow solvers typically require more computer memory than a structured flow solver due to the indirect addressing of the cells. The approach taken in the present research was to modify an existing three-dimensional unstructured flow solver to first decrease the computational time required for a solution and then to increase the spatial accuracy. The terms required to simulate flow involving non-stationary grids were also implemented. First, an implicit solution algorithm was implemented to replace the existing explicit procedure. Several test cases, including internal and external, inviscid and viscous, two-dimensional, three-dimensional and axi-symmetric problems, were simulated for comparison between the explicit and implicit solution procedures. The increased efficiency and robustness of the modified code due to the implicit algorithm were demonstrated. Two unsteady test cases, a plunging airfoil and a wing undergoing bending and torsion, were simulated using the implicit algorithm modified to include the terms required for a moving and/or deforming grid. Second, a higher-than-second-order spatially accurate scheme was developed and implemented into the baseline code. Third- and fourth-order spatially accurate schemes were implemented and tested. The original dissipation was modified to include higher-order terms and modified near shock waves to limit pre- and post-shock oscillations. The unsteady cases were repeated using the higher-order spatially accurate code. The new solutions were compared with those obtained using the second-order spatially accurate scheme. Finally, the increased efficiency of using an implicit solution algorithm in a production Computational Fluid Dynamics flow solver was demonstrated for steady and unsteady flows. Third- and fourth-order spatially accurate schemes have been implemented, creating a basis for a state-of-the-art aerodynamic analysis tool.
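
    As a generic illustration of the explicit-to-implicit switch discussed above (a sketch, not Tomaro's solver), the following takes one backward-Euler step of du/dt = R(u) via Newton iteration; stability at time steps far beyond the explicit limit is what buys the efficiency gains the abstract reports. The stiff test system is hypothetical.

```python
import numpy as np

def backward_euler_step(R, dRdu, u_n, dt, tol=1e-12, max_iter=20):
    """One implicit step of du/dt = R(u), solved with Newton's method."""
    u = u_n.copy()
    for _ in range(max_iter):
        F = u - u_n - dt * R(u)             # residual of the implicit equation
        J = np.eye(len(u)) - dt * dRdu(u)   # Jacobian of the residual
        du = np.linalg.solve(J, -F)
        u += du
        if np.linalg.norm(du) < tol:
            break
    return u

# Stiff linear test problem du/dt = A u; dt = 0.1 is far beyond the explicit
# stability limit (about 2/1000 here), yet the implicit step remains stable.
A = np.array([[-1000.0, 0.0], [0.0, -1.0]])
u1 = backward_euler_step(lambda u: A @ u, lambda u: A, np.array([1.0, 1.0]), dt=0.1)
print(u1)
```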

  3. Anisotropic Effects on Constitutive Model Parameters of Aluminum Alloys

    DTIC Science & Technology

    2012-01-01

    ...model constants are required input to computer codes (LS-DYNA, DYNA3D, or SPH) to accurately simulate fragment impact on structural components made of high... different temperatures.

  4. Cognitive task analysis-based design and authoring software for simulation training.

    PubMed

    Munro, Allen; Clark, Richard E

    2013-10-01

    The development of more effective medical simulators requires a collaborative team effort where three kinds of expertise are carefully coordinated: (1) exceptional medical expertise focused on providing complete and accurate information about the medical challenges (i.e., critical skills and knowledge) to be simulated; (2) instructional expertise focused on the design of simulation-based training and assessment methods that produce maximum learning and transfer to patient care; and (3) software development expertise that permits the efficient design and development of the software required to capture expertise, present it in an engaging way, and assess student interactions with the simulator. In this discussion, we describe a method of capturing more complete and accurate medical information for simulators and combine it with new instructional design strategies that emphasize the learning of complex knowledge. Finally, we describe three different types of software support (Development/Authoring, Run Time, and Post Run Time) required at different stages in the development of medical simulations and the instructional design elements of the software required at each stage. We describe the contributions expected of each kind of software and the different instructional control authoring support required. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.

  5. New Automotive Air Conditioning System Simulation Tool Developed in MATLAB/Simulink

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiss, T.; Chaney, L.; Meyer, J.

    Further improvements in vehicle fuel efficiency require accurate evaluation of the vehicle's transient total power requirement. When operated, the air conditioning (A/C) system is the largest auxiliary load on a vehicle; therefore, accurate evaluation of the load it places on the vehicle's engine and/or energy storage system is especially important. Vehicle simulation software, such as 'Autonomie,' has been used by OEMs to evaluate vehicles' energy performance. A transient A/C simulation tool incorporated into vehicle simulation models would also provide a tool for developing more efficient A/C systems through a thorough consideration of the transient A/C system performance. The dynamic system simulation software Matlab/Simulink was used to develop new and more efficient vehicle energy system controls. The various modeling methods used for the new simulation tool are described in detail. Comparison with measured data is provided to demonstrate the validity of the model.

  6. Real-time, haptics-enabled simulator for probing ex vivo liver tissue.

    PubMed

    Lister, Kevin; Gao, Zhan; Desai, Jaydev P

    2009-01-01

    The advent of complex surgical procedures has driven the need for realistic surgical training simulators. Comprehensive simulators that provide realistic visual and haptic feedback during surgical tasks are required to familiarize surgeons with the procedures they are to perform. Complex organ geometry inherent to biological tissues and intricate material properties drive the need for finite element methods to assure accurate tissue displacement and force calculations. Advances in real-time finite element methods have not reached the state where they are applicable to soft tissue surgical simulation. Therefore, a real-time, haptics-enabled simulator for probing of soft tissue has been developed which utilizes preprocessed finite element data (derived from an accurate constitutive model of the soft tissue, obtained from carefully collected experimental data) to accurately replicate the probing task in real time.

  7. Accurate Behavioral Simulator of All-Digital Time-Domain Smart Temperature Sensors by Using SIMULINK

    PubMed Central

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, You-Ting

    2016-01-01

    This study proposes a new behavioral simulator that uses SIMULINK for all-digital CMOS time-domain smart temperature sensors (TDSTSs) for performing rapid and accurate simulations. Inverter-based TDSTSs offer the benefits of low cost and simple structure for temperature-to-digital conversion and have been developed. Typically, electronic design automation tools, such as HSPICE, are used to simulate TDSTSs for performance evaluations. However, such tools require extremely long simulation time and complex procedures to analyze the results and generate figures. In this paper, we organize simple but accurate equations into a temperature-dependent model (TDM) by which the TDSTSs evaluate temperature behavior. Furthermore, temperature-sensing models of a single CMOS NOT gate were devised using HSPICE simulations. Using the TDM and these temperature-sensing models, a novel simulator in SIMULINK environment was developed to substantially accelerate the simulation and simplify the evaluation procedures. Experiments demonstrated that the simulation results of the proposed simulator have favorable agreement with those obtained from HSPICE simulations, showing that the proposed simulator functions successfully. This is the first behavioral simulator addressing the rapid simulation of TDSTSs. PMID:27509507
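
    A toy behavioral model in the same spirit, purely illustrative (it is not the paper's TDM, and the delay coefficients are hypothetical): temperature modulates gate delay, and the digital code is the number of ring-oscillator periods counted in a fixed reference window.

```python
# Hypothetical behavioral sketch of time-domain temperature sensing.

def gate_delay_ps(temp_c, tau0_ps=25.0, tc=0.002):
    """CMOS gate delay, assumed to grow roughly linearly with temperature."""
    return tau0_ps * (1.0 + tc * (temp_c - 25.0))

def sensor_code(temp_c, n_stages=101, window_ns=10_000):
    period_ps = 2 * n_stages * gate_delay_ps(temp_c)  # ring-oscillator period
    return int(window_ns * 1000 / period_ps)          # periods counted in the window

for t in (0, 25, 50, 75, 100):
    print(t, "degC ->", sensor_code(t))
```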

  8. Algorithms and architecture for multiprocessor based circuit simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deutsch, J.T.

    Accurate electrical simulation is critical to the design of high performance integrated circuits. Logic simulators can verify function and give first-order timing information. Switch level simulators are more effective at dealing with charge sharing than standard logic simulators, but cannot provide accurate timing information or discover DC problems. Delay estimation techniques and cell level simulation can be used in constrained design methods, but must be tuned for each application, and circuit simulation must still be used to generate the cell models. None of these methods has the guaranteed accuracy that many circuit designers desire, and none can provide detailed waveform information. Detailed electrical-level simulation can predict circuit performance if devices and parasitics are modeled accurately. However, the computational requirements of conventional circuit simulators make it impractical to simulate current large circuits. In this dissertation, the implementation of Iterated Timing Analysis (ITA), a relaxation-based technique for accurate circuit simulation, on a special-purpose multiprocessor is presented. The ITA method is an SOR-Newton, relaxation-based method which uses event-driven analysis and selective trace to exploit the temporal sparsity of the electrical network. Because event-driven selective trace techniques are employed, this algorithm lends itself to implementation on a data-driven computer.

  9. On the use of satellite-based estimates of rainfall temporal distribution to simulate the potential for malaria transmission in rural Africa

    NASA Astrophysics Data System (ADS)

    Yamana, Teresa K.; Eltahir, Elfatih A. B.

    2011-02-01

    This paper describes the use of satellite-based estimates of rainfall to force the Hydrology, Entomology and Malaria Transmission Simulator (HYDREMATS), a hydrology-based mechanistic model of malaria transmission. We first examined the temporal resolution of rainfall input required by HYDREMATS. Simulations conducted over Banizoumbou village in Niger showed that for reasonably accurate simulation of mosquito populations, the model requires rainfall data with at least 1 h resolution. We then investigated whether HYDREMATS could be effectively forced by satellite-based estimates of rainfall instead of ground-based observations. The Climate Prediction Center morphing technique (CMORPH) precipitation estimates distributed by the National Oceanic and Atmospheric Administration are available at a 30 min temporal resolution and 8 km spatial resolution. We compared mosquito populations simulated by HYDREMATS when the model is forced by adjusted CMORPH estimates and by ground observations. The results demonstrate that adjusted rainfall estimates from satellites can be used with a mechanistic model to accurately simulate the dynamics of mosquito populations.
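
    A trivial numpy sketch of the preprocessing step implied above: summing CMORPH-style 30-min accumulations into the hourly series that meets the model's 1 h resolution requirement. The rainfall values are synthetic.

```python
import numpy as np

# Synthetic half-hourly accumulations (mm per 30 min), standing in for CMORPH.
rain_30min_mm = np.array([0.0, 1.2, 3.4, 0.6, 0.0, 0.0, 5.1, 2.2])

# Pairs of 30-min values summed into the hourly series the model ingests.
rain_hourly_mm = rain_30min_mm.reshape(-1, 2).sum(axis=1)
print(rain_hourly_mm)  # [1.2 4.0 0.0 7.3]
```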

  10. Advanced EUV mask and imaging modeling

    NASA Astrophysics Data System (ADS)

    Evanschitzky, Peter; Erdmann, Andreas

    2017-10-01

    The exploration and optimization of image formation in partially coherent EUV projection systems with complex source shapes requires flexible, accurate, and efficient simulation models. This paper reviews advanced mask diffraction and imaging models for the highly accurate and fast simulation of EUV lithography systems, addressing important aspects of the current technical developments. The simulation of light diffraction from the mask employs an extended rigorous coupled wave analysis (RCWA) approach, which is optimized for EUV applications. In order to be able to deal with current EUV simulation requirements, several additional models are included in the extended RCWA approach: a field decomposition and a field stitching technique enable the simulation of larger complex structured mask areas. An EUV multilayer defect model including a database approach makes the fast and fully rigorous defect simulation and defect repair simulation possible. A hybrid mask simulation approach combining real and ideal mask parts allows the detailed investigation of the origin of different mask 3-D effects. The image computation is done with a fully vectorial Abbe-based approach. Arbitrary illumination and polarization schemes and adapted rigorous mask simulations guarantee a high accuracy. A fully vectorial sampling-free description of the pupil with Zernikes and Jones pupils and an optimized representation of the diffraction spectrum enable the computation of high-resolution images with high accuracy and short simulation times. A new pellicle model supports the simulation of arbitrary membrane stacks, pellicle distortions, and particles/defects on top of the pellicle. Finally, an extension for highly accurate anamorphic imaging simulations is included. The application of the models is demonstrated by typical use cases.

  11. Subthreshold SPICE Model Optimization

    NASA Astrophysics Data System (ADS)

    Lum, Gregory; Au, Henry; Neff, Joseph; Bozeman, Eric; Kamin, Nick; Shimabukuro, Randy

    2011-04-01

    The first step in integrated circuit design is the simulation of said design in software to verify proper functionality and design requirements. Properties of the process are provided by fabrication foundries in the form of SPICE models. These SPICE models contain the electrical data and physical properties of the basic circuit elements. A limitation of these models is that the data collected by the foundry only accurately model the saturation region. This is fine for most users, but when operating devices in the subthreshold region they are inadequate for accurate simulation results. This is why optimizing the current SPICE models to characterize the subthreshold region is so important. In order to accurately simulate this region of operation, MOSFETs of varying widths and lengths are fabricated and the electrical test data is collected. From the data collected, the parameters of the model files are optimized through parameter extraction rather than curve fitting. With the completed optimized models the circuit designer is able to simulate circuit designs for the subthreshold region accurately.
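
    For context, a sketch of the textbook weak-inversion (subthreshold) drain-current expression that dominates this operating region; parameter values below are hypothetical placeholders, not foundry data.

```python
import numpy as np

def subthreshold_id(vgs, vds, w_over_l, i0=1e-7, vth=0.45, n=1.4, vt=0.0259):
    """Textbook weak-inversion drain current: exponential in VGS, scaled by W/L."""
    return (i0 * w_over_l
            * np.exp((vgs - vth) / (n * vt))
            * (1.0 - np.exp(-vds / vt)))

# Current changes ~10x per ~84 mV of gate drive (n * VT * ln 10) in this region,
# which is why saturation-fitted models extrapolate poorly below threshold.
for vgs in (0.25, 0.30, 0.35):
    print(vgs, subthreshold_id(vgs, vds=0.5, w_over_l=10))
```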

  12. Impact of reconstruction parameters on quantitative I-131 SPECT

    NASA Astrophysics Data System (ADS)

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high energy photons however render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods on these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs as well as (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast to noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was found to be <1%, -26%, and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
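
    A sketch of the standard triple-energy-window estimator referenced above: photopeak scatter is approximated by a trapezoid spanned by the count rates in two narrow windows flanking the peak. Window widths and counts here are illustrative, not the study's acquisition settings.

```python
# Standard TEW scatter estimate: scatter under the photopeak is approximated
# from the counts in narrow lower and upper sub-windows.

def tew_corrected_counts(c_peak, c_lower, c_upper, w_peak, w_lower, w_upper):
    scatter = (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0
    return c_peak - scatter

# Example: I-131 364 keV photopeak, 20% main window, 6 keV sub-windows.
print(tew_corrected_counts(c_peak=1.0e5, c_lower=4.0e3, c_upper=2.5e3,
                           w_peak=72.8, w_lower=6.0, w_upper=6.0))
```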

  13. Molecular Simulation of the Free Energy for the Accurate Determination of Phase Transition Properties of Molecular Solids

    NASA Astrophysics Data System (ADS)

    Sellers, Michael; Lisal, Martin; Brennan, John

    2015-06-01

    Investigating the ability of a molecular model to accurately represent a real material is crucial to model development and use. When the model simulates materials in extreme conditions, one such property worth evaluating is the phase transition point. However, phase transitions are often overlooked or approximated because of difficulty or inaccuracy when simulating them. Techniques such as super-heating or super-squeezing a material to induce a phase change suffer from inherent timescale limitations leading to "over-driving," and dual-phase simulations require many long-time runs to seek out what frequently results in an inexact location of phase coexistence. We present a compilation of methods for the determination of solid-solid and solid-liquid phase transition points through the accurate calculation of the chemical potential. The methods are applied to the Smith-Bharadwaj atomistic potential's representation of cyclotrimethylene trinitramine (RDX) to accurately determine its melting point (Tm) and the alpha-to-gamma solid phase transition pressure. We also determine Tm for a coarse-grain model of RDX, and compare its value to experiment and to its atomistic counterpart. All methods are employed via the LAMMPS simulator, resulting in 60-70 simulations that total 30-50 ns. Approved for public release. Distribution is unlimited.

  14. Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear-Layer. Part 2

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Singer, Bart A.; Lockard, David P.

    2002-01-01

    Unsteady computational simulations of a multi-element, high-lift configuration are performed. Emphasis is placed on accurate spatiotemporal resolution of the free shear layer in the slat-cove region. The excessive dissipative effects of the turbulence model, so prevalent in previous simulations, are circumvented by switching off the turbulence-production term in the slat cove region. The justifications and physical arguments for taking such a step are explained in detail. The removal of this excess damping allows the shear layer to amplify large-scale structures, to achieve a proper non-linear saturation state, and to permit vortex merging. The large-scale disturbances are self-excited, and unlike our prior fully turbulent simulations, no external forcing of the shear layer is required. To obtain the farfield acoustics, the Ffowcs Williams and Hawkings equation is evaluated numerically using the simulated time-accurate flow data. The present comparison between the computed and measured farfield acoustic spectra shows much better agreement for the amplitude and frequency content than past calculations. The effects of the angle of attack on the slat's flow features and radiated acoustic field are also simulated and presented.

  15. Validation of the solar heating and cooling high speed performance (HISPER) computer code

    NASA Technical Reports Server (NTRS)

    Wallace, D. B.

    1980-01-01

    Developed to give quick and accurate predictions, HISPER, a simplification of the TRNSYS program, achieves its computational speed by not simulating detailed system operations or performing detailed load computations. In order to validate the HISPER computer code for air systems, the simulation was compared to the actual performance of an operational test site. Solar insolation, ambient temperature, water usage rate, and water main temperatures from the data tapes for an office building in Huntsville, Alabama, were used as input. The HISPER program was found to predict the heating loads and solar fraction of the loads with errors of less than ten percent. Good correlation was found on both a seasonal basis and a monthly basis. Several parameters (such as infiltration rate and the outside ambient temperature above which heating is not required) were found to require careful selection for accurate simulation.

  16. Digital simulation of scalar optical diffraction: revisiting chirp function sampling criteria and consequences.

    PubMed

    Voelz, David G; Roggemann, Michael C

    2009-11-10

    Accurate simulation of scalar optical diffraction requires consideration of the sampling requirement for the phase chirp function that appears in the Fresnel diffraction expression. We describe three sampling regimes for FFT-based propagation approaches: ideally sampled, oversampled, and undersampled. Ideal sampling, where the chirp and its FFT both have values that match analytic chirp expressions, usually provides the most accurate results but can be difficult to realize in practical simulations. Under- or oversampling leads to a reduction in the available source plane support size, the available source bandwidth, or the available observation support size, depending on the approach and simulation scenario. We discuss three Fresnel propagation approaches: the impulse response/transfer function (angular spectrum) method, the single FFT (direct) method, and the two-step method. With illustrations and simulation examples we show the form of the sampled chirp functions and their discrete transforms, common relationships between the three methods under ideal sampling conditions, and define conditions and consequences to be considered when using nonideal sampling. The analysis is extended to describe the sampling limitations for the more exact Rayleigh-Sommerfeld diffraction solution.
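
    The sketch below encodes the chirp-sampling criterion behind the paper's three regimes: for an N-point grid of spacing dx, the source-plane Fresnel chirp exp(i*pi*x^2/(lambda*z)) is critically sampled when dx = lambda*z/(N*dx). The tolerance and example numbers are arbitrary choices.

```python
def fresnel_sampling_regime(n, dx, wavelength, z, rtol=0.01):
    """Classify chirp sampling for an n-point grid of spacing dx (meters)."""
    dx_critical = wavelength * z / (n * dx)  # spacing at which sampling is ideal
    if abs(dx - dx_critical) <= rtol * dx_critical:
        return "ideally sampled"
    # Coarser than critical means the chirp's edge frequency exceeds Nyquist.
    return "undersampled" if dx > dx_critical else "oversampled"

n, wavelength, dx = 1024, 0.5e-6, 10e-6
z_ideal = n * dx**2 / wavelength                 # distance giving ideal sampling
print(z_ideal)                                   # ~0.2048 m
print(fresnel_sampling_regime(n, dx, wavelength, z_ideal))      # ideally sampled
print(fresnel_sampling_regime(n, dx, wavelength, 2 * z_ideal))  # oversampled
```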

  17. Requirements for Large Eddy Simulation Computations of Variable-Speed Power Turbine Flows

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2016-01-01

    Variable-speed power turbines (VSPTs) operate at low Reynolds numbers and with a wide range of incidence angles. Transition, separation, and the relevant physics leading to them are important to VSPT flow. Higher fidelity tools such as large eddy simulation (LES) may be needed to resolve the flow features necessary for accurate predictive capability and design of such turbines. A survey conducted for this report explores the requirements for such computations. The survey is limited to the simulation of two-dimensional flow cases and endwalls are not included. It suggests that the grid resolution necessary for this type of simulation to accurately represent the physics may be of the order of Delta(x)+ = 45, Delta(y)+ = 2, and Delta(z)+ = 17. Various subgrid-scale (SGS) models have been used and, except for the Smagorinsky model, all seem to perform well; in some instances the simulations worked well without SGS modeling. A method of specifying the inlet conditions such as synthetic eddy modeling (SEM) is necessary to correctly represent the inlet conditions.
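
    A small helper showing how such targets translate into physical grid spacings: wall units are defined by Delta(x)+ = Delta(x) * u_tau / nu, with u_tau the friction velocity. The flow values below are illustrative only.

```python
def to_wall_units(spacing, u_tau, nu):
    """dx+ = dx * u_tau / nu, with u_tau the friction velocity (m/s)."""
    return spacing * u_tau / nu

nu = 1.5e-5     # kinematic viscosity of air, m^2/s (illustrative)
u_tau = 0.5     # friction velocity, m/s (illustrative)
for name, dx in (("dx", 1.35e-3), ("dy", 6.0e-5), ("dz", 5.1e-4)):
    print(name, "+ =", round(to_wall_units(dx, u_tau, nu), 1))  # 45, 2, 17
```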

  18. Communication: Adaptive boundaries in multiscale simulations

    NASA Astrophysics Data System (ADS)

    Wagoner, Jason A.; Pande, Vijay S.

    2018-04-01

    Combined-resolution simulations are an effective way to study molecular properties across a range of length and time scales. These simulations can benefit from adaptive boundaries that allow the high-resolution region to adapt (change size and/or shape) as the simulation progresses. The number of degrees of freedom required to accurately represent even a simple molecular process can vary by several orders of magnitude throughout the course of a simulation, and adaptive boundaries react to these changes to include an appropriate but not excessive amount of detail. Here, we derive the Hamiltonian and distribution function for such a molecular simulation. We also design an algorithm that can efficiently sample the boundary as a new coordinate of the system. We apply this framework to a mixed explicit/continuum simulation of a peptide in solvent. We use this example to discuss the conditions necessary for a successful implementation of adaptive boundaries that is both efficient and accurate in reproducing molecular properties.

  19. Pathloss Calculation Using the Transmission Line Matrix and Finite Difference Time Domain Methods With Coarse Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James; Kuruganti, Teja

    Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for free-space propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.
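
    For reference, the analytic benchmark such coarse-grid runs are compared against is standard free-space pathloss; the frequency and distance below are arbitrary examples.

```python
import math

def free_space_pathloss_db(distance_m, freq_hz, c=299_792_458.0):
    """FSPL(dB) = 20 * log10(4 * pi * d / lambda)."""
    wavelength = c / freq_hz
    return 20.0 * math.log10(4.0 * math.pi * distance_m / wavelength)

print(round(free_space_pathloss_db(100.0, 2.4e9), 1))  # ~80 dB at 100 m, 2.4 GHz
```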

  20. Parallel methodology to capture cyclic variability in motored engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ameen, Muhsin M.; Yang, Xiaofeng; Kuo, Tang-Wei

    2016-07-28

    Numerical prediction of cycle-to-cycle variability (CCV) in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are required to accurately capture the in-cylinder turbulent flowfield, and (ii) CCV is experienced over long timescales and hence the simulations need to be performed for hundreds of consecutive cycles. In this study, a new methodology is proposed to dissociate this long time-scale problem into several shorter time-scale problems, which can considerably reduce the computational time without sacrificing the fidelity of the simulations. The strategy is to perform multiple single-cycle simulations in parallel by effectively perturbing the simulation parameters such as the initial and boundary conditions. It is shown that by perturbing the initial velocity field effectively based on the intensity of the in-cylinder turbulence, the mean and variance of the in-cylinder flowfield are captured reasonably well. Adding perturbations in the initial pressure field and the boundary pressure improves the predictions. It is shown that this new approach is able to give accurate predictions of the flowfield statistics in less than one-tenth of the time required for the conventional approach of simulating consecutive engine cycles.

  1. BlazeDEM3D-GPU A Large Scale DEM simulation code for GPUs

    NASA Astrophysics Data System (ADS)

    Govender, Nicolin; Wilke, Daniel; Pizette, Patrick; Khinast, Johannes

    2017-06-01

    Accurately predicting the dynamics of particulate materials is of importance to numerous scientific and industrial areas, with applications ranging across particle scales from powder flow to ore crushing. Computational discrete element simulations are a viable option to aid in the understanding of particulate dynamics and the design of devices such as mixers, silos and ball mills, as laboratory-scale tests come at a significant cost. However, the computational time required to run an industrial-scale simulation consisting of tens of millions of particles can take months to complete on large CPU clusters, making the Discrete Element Method (DEM) unfeasible for industrial applications. Simulations are therefore typically restricted to tens of thousands of particles with highly detailed particle shapes, or a few million particles with often oversimplified particle shapes. However, a number of applications require accurate representation of the particle shape to capture the macroscopic behaviour of the particulate system. In this paper we give an overview of the recent extensions to the open source GPU-based DEM code, BlazeDEM3D-GPU, that can simulate millions of polyhedra and tens of millions of spheres on a desktop computer with a single or multiple GPUs.
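
    The per-pair kernel at the core of any such DEM code, sketched here as a basic linear spring-dashpot normal contact for spheres (BlazeDEM3D-GPU's actual contact models and polyhedral collision detection are far more involved; the stiffness and damping values are made up).

```python
import numpy as np

def normal_contact_force(x1, x2, v1, v2, r1, r2, k=1.0e5, c=50.0):
    """Linear spring-dashpot normal force on particle 1 from particle 2."""
    d = x2 - x1
    dist = np.linalg.norm(d)
    overlap = (r1 + r2) - dist
    if overlap <= 0.0:
        return np.zeros(3)                 # not in contact
    n = d / dist                           # unit normal from particle 1 to 2
    vn = np.dot(v2 - v1, n)                # normal relative velocity
    return -(k * overlap - c * vn) * n     # repulsion plus damping

f = normal_contact_force(np.zeros(3), np.array([0.0, 0.0, 0.009]),
                         np.zeros(3), np.array([0.0, 0.0, -0.1]),
                         r1=0.005, r2=0.005)
print(f)  # force on particle 1 points away from the approaching particle 2
```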

  2. A new Scheme for ATLAS Trigger Simulation using Legacy Code

    NASA Astrophysics Data System (ADS)

    Galster, Gorm; Stelzer, Joerg; Wiedenmann, Werner

    2014-06-01

    Analyses at the LHC which search for rare physics processes or determine with high precision Standard Model parameters require accurate simulations of the detector response and the event selection processes. The accurate determination of the trigger response is crucial for the determination of overall selection efficiencies and signal sensitivities. For the generation and the reconstruction of simulated event data, the most recent software releases are usually used to ensure the best agreement between simulated data and real data. For the simulation of the trigger selection process, however, ideally the same software release that was deployed when the real data were taken should be used. This potentially requires running software dating many years back. Having a strategy for running old software in a modern environment thus becomes essential when data simulated for past years start to present a sizable fraction of the total. We examined the requirements and possibilities for such a simulation scheme within the ATLAS software framework and successfully implemented a proof-of-concept simulation chain. One of the greatest challenges was the choice of a data format which promises long-term compatibility with old and new software releases. Over the time periods envisaged, data format incompatibilities are also likely to emerge in databases and other external support services. Software availability may become an issue when, for example, support for the underlying operating system stops. In this paper we present the encountered problems and developed solutions, and discuss proposals for future development. Some ideas reach beyond the retrospective trigger simulation scheme in ATLAS, as they also touch more general aspects of data preservation.

  3. Parameterized reduced-order models using hyper-dual numbers.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fike, Jeffrey A.; Brake, Matthew Robert

    2013-10-01

    The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
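
    A minimal sketch of the hyper-dual idea the report builds on: numbers carrying two nilpotent parts (eps1^2 = eps2^2 = 0) propagate exact first and second derivatives through ordinary arithmetic, with none of the step-size error of finite differences. The polynomial test function is illustrative.

```python
class HyperDual:
    """Value f plus eps1, eps2 and eps1*eps2 components; eps1^2 = eps2^2 = 0."""
    def __init__(self, f, e1=0.0, e2=0.0, e12=0.0):
        self.f, self.e1, self.e2, self.e12 = f, e1, e2, e12

    def __add__(self, o):
        return HyperDual(self.f + o.f, self.e1 + o.e1,
                         self.e2 + o.e2, self.e12 + o.e12)

    def __mul__(self, o):
        return HyperDual(self.f * o.f,
                         self.f * o.e1 + self.e1 * o.f,
                         self.f * o.e2 + self.e2 * o.f,
                         self.f * o.e12 + self.e12 * o.f
                         + self.e1 * o.e2 + self.e2 * o.e1)

def f(x):          # f(x) = x**3, built only from * so HyperDual can trace it
    return x * x * x

x = HyperDual(2.0, 1.0, 1.0, 0.0)   # seed both eps parts with 1
y = f(x)
print(y.f, y.e1, y.e12)  # f(2)=8, f'(2)=12, f''(2)=12, all exact
```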

  4. Accurate approximation of in-ecliptic trajectories for E-sail with constant pitch angle

    NASA Astrophysics Data System (ADS)

    Huo, Mingying; Mengali, Giovanni; Quarta, Alessandro A.

    2018-05-01

    Propellantless continuous-thrust propulsion systems, such as electric solar wind sails, may be successfully used for new space missions, especially those requiring high-energy orbit transfers. When the mass-to-thrust ratio is sufficiently large, the spacecraft trajectory is characterized by long flight times with a number of revolutions around the Sun. The corresponding mission analysis, especially when addressed within an optimal context, requires a significant amount of simulation effort. Analytical trajectories are therefore useful aids in a preliminary phase of mission design, even though exact solutions are very difficult to obtain. The aim of this paper is to present an accurate analytical approximation of the spacecraft trajectory generated by an electric solar wind sail with a constant pitch angle, using the latest mathematical model of the thrust vector. Assuming a heliocentric circular parking orbit and a two-dimensional scenario, the simulation results show that the proposed equations are able to accurately describe the actual spacecraft trajectory for a long time interval when the propulsive acceleration magnitude is sufficiently small.

  5. Utility of a novel error-stepping method to improve gradient-based parameter identification by increasing the smoothness of the local objective surface: a case-study of pulmonary mechanics.

    PubMed

    Docherty, Paul D; Schranz, Christoph; Chase, J Geoffrey; Chiew, Yeong Shiong; Möller, Knut

    2014-05-01

    Accurate model parameter identification relies on accurate forward model simulations to guide convergence. However, some forward simulation methodologies lack the precision required to properly define the local objective surface and can cause failed parameter identification. The role of objective surface smoothness in identification of a pulmonary mechanics model was assessed using forward simulation from a novel error-stepping method and a proprietary Runge-Kutta method. The objective surfaces were compared via the identified parameter discrepancy generated in a Monte Carlo simulation and the local smoothness of the objective surfaces they generate. The error-stepping method generated significantly smoother error surfaces in each of the cases tested (p<0.0001) and more accurate model parameter estimates than the Runge-Kutta method in three of the four cases tested (p<0.0001), despite a 75% reduction in computational cost. Of note, parameter discrepancy in most cases was limited to a particular oblique plane, indicating a non-intuitive multi-parameter trade-off was occurring. The error-stepping method consistently improved or equalled the outcomes of the Runge-Kutta time-integration method for forward simulations of the pulmonary mechanics model. This study indicates that accurate parameter identification relies on accurate definition of the local objective function, and that parameter trade-off can occur on oblique planes, resulting in prematurely halted parameter convergence. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  6. Time Step Considerations when Simulating Dynamic Behavior of High Performance Homes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabares-Velasco, Paulo Cesar

    2016-09-01

    Building energy simulations, especially those concerning pre-cooling strategies and cooling/heating peak demand management, require careful analysis and detailed understanding of building characteristics. Accurate modeling of the building thermal response and of material properties for thermally massive walls or advanced materials like phase change materials (PCMs) is critically important.

  7. Higher Order Time Integration Schemes for the Unsteady Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.

    2002-01-01

    The rapid increase in available computational power over the last decade has enabled higher resolution flow simulations and more widespread use of unstructured grid methods for complex geometries. While much of this effort has been focused on steady-state calculations in the aerodynamics community, the need to accurately predict off-design conditions, which may involve substantial amounts of flow separation, points to the need to efficiently simulate unsteady flow fields. Accurate unsteady flow simulations can easily require several orders of magnitude more computational effort than a corresponding steady-state simulation. For this reason, techniques for improving the efficiency of unsteady flow simulations are required in order to make such calculations feasible in the foreseeable future. The purpose of this work is to investigate possible reductions in computer time due to the choice of an efficient time-integration scheme from a series of schemes differing in the order of time-accuracy, and by the use of more efficient techniques to solve the nonlinear equations which arise while using implicit time-integration schemes. This investigation is carried out in the context of a two-dimensional unstructured mesh laminar Navier-Stokes solver.
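
    A generic order-of-accuracy experiment in the spirit of the comparison described above (a sketch, not the paper's Navier-Stokes solver): integrating du/dt = -u with first-order BDF1 and second-order BDF2, the error falls roughly 2x vs 4x per halving of the step.

```python
import numpy as np

def bdf1(dt, n):
    u = 1.0
    for _ in range(n):
        u /= (1.0 + dt)                  # backward Euler for u' = -u
    return u

def bdf2(dt, n):
    u_prev, u = 1.0, np.exp(-dt)         # exact first step to start BDF2
    for _ in range(n - 1):
        # 3u^{n+1} - 4u^n + u^{n-1} = -2*dt*u^{n+1}, solved for u^{n+1}
        u_prev, u = u, (4.0 * u - u_prev) / (3.0 + 2.0 * dt)
    return u

exact = np.exp(-1.0)
for dt in (0.1, 0.05, 0.025):
    n = int(round(1.0 / dt))
    print(dt, abs(bdf1(dt, n) - exact), abs(bdf2(dt, n) - exact))
```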

  8. Simulating immersed particle collisions: the Devil's in the details

    NASA Astrophysics Data System (ADS)

    Biegert, Edward; Vowinckel, Bernhard; Meiburg, Eckart

    2015-11-01

    Simulating densely-packed particle-laden flows with any degree of confidence requires accurate modeling of particle-particle collisions. To this end, we investigate a few collision models from the fluids and granular flow communities using sphere-wall collisions, which have been studied by a number of experimental groups. These collisions involve enough complexities--gravity, particle-wall lubrication forces, particle-wall contact stresses, particle-wake interactions--to challenge any collision model. Evaluating the successes and shortcomings of the collision models, we seek improvements in order to obtain more consistent results. We will highlight several implementation details that are crucial for obtaining accurate results.

  9. Direct Numerical Simulation of a Coolant Jet in a Periodic Crossflow

    NASA Technical Reports Server (NTRS)

    Sharma, Chirdeep; Acharya, Sumanta

    1998-01-01

    A Direct Numerical Simulation of a coolant jet injected normally into a periodic crossflow is presented. The physical situation simulated represents a periodic module in a coolant hole array with a heated crossflow. A collocated finite difference scheme is used which is fifth-order accurate spatially and second-order accurate temporally. The scheme is based on a fractional step approach and requires the solution of a pressure-Poisson equation. The simulations are obtained for a blowing ratio of 0.25 and a channel Reynolds number of 5600. The simulations reveal the dynamics of several large scale structures including the Counter-rotating Vortex Pair (CVP), the horse-shoe vortex, the shear layer vortex, the wall vortex and the wake vortex. The origins and the interactions of these vortical structures are identified and explored. Also presented are the turbulence statistics and how they relate to the flow structures.

  10. Simulation of DKIST solar adaptive optics system

    NASA Astrophysics Data System (ADS)

    Marino, Jose; Carlisle, Elizabeth; Schmidt, Dirk

    2016-07-01

    Solar adaptive optics (AO) simulations are a valuable tool to guide the design and optimization process of current and future solar AO and multi-conjugate AO (MCAO) systems. Solar AO and MCAO systems rely on extended object cross-correlating Shack-Hartmann wavefront sensors to measure the wavefront. Accurate solar AO simulations require computationally intensive operations, which have until recently presented a prohibitive computational cost. We present an update on the status of a solar AO and MCAO simulation tool being developed at the National Solar Observatory. The simulation tool is a multi-threaded application written in the C++ language that takes advantage of current large multi-core CPU computer systems and fast ethernet connections to provide accurate full simulation of solar AO and MCAO systems. It interfaces with KAOS, a state of the art solar AO control software developed by the Kiepenheuer-Institut fuer Sonnenphysik, that provides reliable AO control. We report on the latest results produced by the solar AO simulation tool.
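
    The measurement kernel such simulations must reproduce, sketched with numpy: an extended-object Shack-Hartmann sensor cross-correlates each subaperture image against a reference and reads the local tilt off the correlation peak (real solar AO adds windowing, subpixel interpolation, and more). The images below are synthetic stand-ins for solar granulation.

```python
import numpy as np

def image_shift(ref, img):
    """Integer-pixel shift of img relative to ref via FFT cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref).conj() * np.fft.fft2(img)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map circular peak indices to signed shifts.
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

rng = np.random.default_rng(0)
ref = rng.random((32, 32))                       # reference subaperture image
img = np.roll(ref, shift=(3, -2), axis=(0, 1))   # same scene shifted by (3, -2)
print(image_shift(ref, img))                     # [3, -2]
```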

  11. Growing C4 perennial grass for bioenergy using a new Agro-BGC ecosystem model

    NASA Astrophysics Data System (ADS)

    di Vittorio, A. V.; Anderson, R. S.; Miller, N. L.; Running, S. W.

    2009-12-01

    Accurate, spatially gridded estimates of bioenergy crop yields require 1) biophysically accurate crop growth models and 2) careful parameterization of unavailable inputs to these models. To meet the first requirement we have added the capacity to simulate C4 perennial grass as a bioenergy crop to the Biome-BGC ecosystem model. This new model, hereafter referred to as Agro-BGC, includes enzyme driven C4 photosynthesis, individual live and dead leaf, stem, and root carbon/nitrogen pools, separate senescence and litter fall processes, fruit growth, optional annual seeding, flood irrigation, a growing degree day phenology with a killing frost option, and a disturbance handler that effectively simulates fertilization, harvest, fire, and incremental irrigation. There are four Agro-BGC vegetation parameters that are unavailable for Panicum virgatum (switchgrass), and to meet the second requirement we have optimized the model across multiple calibration sites to obtain representative values for these parameters. We have verified simulated switchgrass yields against observations at three non-calibration sites in IL. Agro-BGC simulates switchgrass growth and yield at harvest very well at a single site. Our results suggest that a multi-site optimization scheme would be adequate for producing regional-scale estimates of bioenergy crop yields on high spatial resolution grids.

  12. Incorporation of Fixed Installation Costs into Optimization of Groundwater Remediation with a New Efficient Surrogate Nonlinear Mixed Integer Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Shoemaker, Christine; Wan, Ying

    2016-04-01

    Optimization of nonlinear water resources management issues which have a mixture of fixed (e.g. construction cost for a well) and variable (e.g. cost per gallon of water pumped) costs has not been well addressed because prior algorithms for the resulting nonlinear mixed integer problems have required many groundwater simulations (with different configurations of the decision variables), especially when the solution space is multimodal. In particular, heuristic methods like genetic algorithms have often been used in the water resources area, but they require so many groundwater simulations that only small systems have been solved. Hence there is a need for a method that reduces the number of expensive groundwater simulations. A recently published algorithm for nonlinear mixed integer programming using surrogates was shown in this study to greatly reduce the computational effort for obtaining accurate answers to problems involving fixed costs for well construction as well as variable costs for pumping, because of a substantial reduction in the number of groundwater simulations required to obtain an accurate answer. Results are presented for a US EPA hazardous waste site. The nonlinear mixed integer surrogate algorithm is general and can be used on other problems arising in hydrology, with open source codes in Matlab and python ("pySOT" in Bitbucket).

  13. Time-Spectral Rotorcraft Simulations on Overset Grids

    NASA Technical Reports Server (NTRS)

    Leffell, Joshua I.; Murman, Scott M.; Pulliam, Thomas H.

    2014-01-01

    The Time-Spectral method is derived as a Fourier collocation scheme and applied to NASA's overset Reynolds-averaged Navier-Stokes (RANS) solver OVERFLOW. The paper outlines the Time-Spectral OVERFLOW implementation. Successful low-speed laminar plunging NACA 0012 airfoil simulations demonstrate the capability of the Time-Spectral method to resolve the highly-vortical wakes typical of more expensive three-dimensional rotorcraft configurations. Dealiasing, in the form of spectral vanishing viscosity (SVV), facilitates the convergence of Time-Spectral calculations of high-frequency flows. Finally, simulations of the isolated V-22 Osprey tiltrotor for both hover and forward (edgewise) flight validate the three-dimensional Time-Spectral OVERFLOW implementation. The Time-Spectral hover simulation matches the time-accurate calculation using a single harmonic. Significantly more temporal modes and SVV are required to accurately compute the forward flight case because of its more active, high-frequency wake.

  14. Development of a station based climate database for SWAT and APEX assessments in the U.S.

    USDA-ARS?s Scientific Manuscript database

    Water quality simulation models such as the Soil and Water Assessment Tool (SWAT) and Agricultural Policy EXtender (APEX) are widely used in the U.S. These models require large amounts of spatial and tabular data to simulate the natural world. Accurate and seamless daily climatic data are critical...

  15. Pathloss Calculation Using the Transmission Line Matrix and Finite Difference Time Domain Methods With Coarse Grids

    DOE PAGES

    Nutaro, James; Kuruganti, Teja

    2017-02-24

    Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for free-space propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.

  16. Large eddy simulation modeling of particle-laden flows in complex terrain

    NASA Astrophysics Data System (ADS)

    Salesky, S.; Giometto, M. G.; Chamecki, M.; Lehning, M.; Parlange, M. B.

    2017-12-01

    The transport, deposition, and erosion of heavy particles over complex terrain in the atmospheric boundary layer is an important process for hydrology, air quality forecasting, biology, and geomorphology. However, in situ observations can be challenging in complex terrain due to spatial heterogeneity. Furthermore, there is a need to develop numerical tools that can accurately represent the physics of these multiphase flows over complex surfaces. We present a new numerical approach to accurately model the transport and deposition of heavy particles in complex terrain using large eddy simulation (LES). Particle transport is represented through solution of the advection-diffusion equation including terms that represent gravitational settling and inertia. The particle conservation equation is discretized in a cut-cell finite volume framework in order to accurately enforce mass conservation. Simulation results will be validated with experimental data, and numerical considerations required to enforce boundary conditions at the surface will be discussed. Applications will be presented in the context of snow deposition and transport, as well as urban dispersion.

  17. The AAO fiber instrument data simulator

    NASA Astrophysics Data System (ADS)

    Goodwin, Michael; Farrell, Tony; Smedley, Scott; Heald, Ron; Heijmans, Jeroen; De Silva, Gayandhi; Carollo, Daniela

    2012-09-01

    The fiber instrument data simulator is an in-house software tool that simulates detector images of fiber-fed spectrographs developed by the Australian Astronomical Observatory (AAO). In addition to helping validate the instrument designs, the resulting simulated images are used to develop the required data reduction software. Example applications that have benefited from the tool usage are the HERMES and SAMI instrumental projects for the Anglo-Australian Telescope (AAT). Given the sophistication of these projects an end-to-end data simulator that accurately models the predicted detector images is required. The data simulator encompasses all aspects of the transmission and optical aberrations of the light path: from the science object, through the atmosphere, telescope, fibers, spectrograph and finally the camera detectors. The simulator runs under a Linux environment that uses pre-calculated information derived from ZEMAX models and processed data from MATLAB. In this paper, we discuss the aspects of the model, software, example simulations and verification.

  18. METEOROLOGICAL AND TRANSPORT MODELING

    EPA Science Inventory

    Advanced air quality simulation models, such as CMAQ, as well as other transport and dispersion models, require accurate and detailed meteorology fields. These meteorology fields include primary 3-dimensional dynamical and thermodynamical variables (e.g., winds, temperature, mo...

  19. Validation of Solar Sail Simulations for the NASA Solar Sail Demonstration Project

    NASA Technical Reports Server (NTRS)

    Braafladt, Alexander C.; Artusio-Glimpse, Alexandra B.; Heaton, Andrew F.

    2014-01-01

    NASA's Solar Sail Demonstration project partner L'Garde is currently assembling a flight-like sail assembly for a series of ground demonstration tests beginning in 2015. For future missions of this sail that might validate solar sail technology, it is necessary to have an accurate sail thrust model. One of the primary requirements of a proposed potential technology validation mission will be to demonstrate solar sail thrust over a set time period, which for this project is nominally 30 days. This requirement would be met by comparing a L'Garde-developed trajectory simulation to the as-flown trajectory. The current sail simulation baseline for L'Garde is a Systems Tool Kit (STK) plug-in that includes a custom-designed model of the L'Garde sail. The STK simulation has been verified for a flat plate model by comparing it to the NASA-developed Solar Sail Spaceflight Simulation Software (S5). S5 matched STK with a high degree of accuracy, and the results of the validation indicate that the L'Garde STK model is accurate enough to meet the potential future mission requirements. Additionally, since the L'Garde sail deviates considerably from a flat plate, a force model for the non-flat sail provided by L'Garde was also tested and compared to a flat plate model in S5. This result will be used in the future as a basis of comparison to the non-flat sail model being developed for STK.

  20. A machine learning method for fast and accurate characterization of depth-of-interaction gamma cameras

    NASA Astrophysics Data System (ADS)

    Pedemonte, Stefano; Pierce, Larry; Van Leemput, Koen

    2017-11-01

    Measuring the depth-of-interaction (DOI) of gamma photons enables increasing the resolution of emission imaging systems. Several design variants of DOI-sensitive detectors have been recently introduced to improve the performance of scanners for positron emission tomography (PET). However, the accurate characterization of the response of DOI detectors, necessary to accurately measure the DOI, remains an unsolved problem. Numerical simulations are, at the state of the art, imprecise, while directly measuring the characteristics of DOI detectors is hindered by the impossibility of imposing the depth of interaction in an experimental set-up. In this article we introduce a machine learning approach for extracting accurate forward models of gamma imaging devices from simple pencil-beam measurements, using a nonlinear dimensionality reduction technique in combination with a finite mixture model. The method is purely data-driven, not requiring simulations, and is applicable to a wide range of detector types. The proposed method was evaluated both in a simulation study and with data acquired using a monolithic gamma camera designed for PET (the cMiCE detector), demonstrating the accurate recovery of the DOI characteristics. The combination of the proposed calibration technique with maximum a posteriori estimation of the coordinates of interaction provided a depth resolution of ≈1.14 mm for the simulated PET detector and ≈1.74 mm for the cMiCE detector. The software and experimental data are made available at http://occiput.mgh.harvard.edu/depthembedding/.

  1. Large Eddy Simulation of Turbulent Combustion

    DTIC Science & Technology

    2006-03-15

    ...described accurately by the skeletal mechanism, usually the major reactants and products, NO and NO2 if we are interested in NOx formation, and any... Principal Investigator: Heinz Pitsch, Flow Physics and Computation, Department of Mechanical Engineering. ...are identified. These detailed mechanisms are reduced independently for various conditions and accuracy requirements. The skeletal mechanisms form...

  2. Direct numerical simulation of transition and turbulence in a spatially evolving boundary layer

    NASA Technical Reports Server (NTRS)

    Rai, Man M.; Moin, Parviz

    1991-01-01

    A high-order-accurate finite-difference approach to direct simulations of transition and turbulence in compressible flows is described. Attention is given to the high-free-stream disturbance case in which transition to turbulence occurs close to the leading edge. In effect, computation requirements are reduced. A method for numerically generating free-stream disturbances is presented.

  3. Simulation of Electric Propulsion Thrusters (Preprint)

    DTIC Science & Technology

    2011-02-07

    activity concerns the plumes produced by electric thrusters. Detailed information on the plumes is required for safe integration of the thruster...ground-based laboratory facilities. Device modelling also plays an important role in plume simulations by providing accurate boundary conditions at...methods used to model the flow of gas and plasma through electric propulsion devices. Discussion of the numerical analysis of other aspects of

  4. SLTCAP: A Simple Method for Calculating the Number of Ions Needed for MD Simulation.

    PubMed

    Schmit, Jeremy D; Kariyawasam, Nilusha L; Needham, Vince; Smith, Paul E

    2018-04-10

    An accurate depiction of electrostatic interactions in molecular dynamics requires the correct number of ions in the simulation box to capture screening effects. However, the number of ions that should be added to the box is seldom given by the bulk salt concentration because a charged biomolecule solute will perturb the local solvent environment. We present a simple method for calculating the number of ions that requires only the total solute charge, solvent volume, and bulk salt concentration as inputs. We show that the most commonly used method for adding salt to a simulation results in an effective salt concentration that is too high. These findings are confirmed using simulations of lysozyme. We have established a web server where these calculations can be readily performed to aid simulation setup.
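
    A sketch of the counting rule as we read the abstract's reasoning, assuming a Donnan-style balance in which the cation/anion counts neutralize the solute charge Q while preserving the bulk ion product; consult the paper or its web server for the authoritative formulation.

```python
import math

AVOGADRO = 6.02214076e23

def sltcap_ion_counts(solute_charge_e, solvent_volume_l, conc_molar):
    """Cation/anion counts under the assumed Donnan-equilibrium balance."""
    n0 = conc_molar * solvent_volume_l * AVOGADRO   # bulk ion pairs in the box
    s = math.asinh(solute_charge_e / (2.0 * n0))
    n_plus = n0 * math.exp(-s)                      # cations
    n_minus = n0 * math.exp(+s)                     # anions
    # By construction n_plus - n_minus = -Q and n_plus * n_minus = n0**2.
    return round(n_plus), round(n_minus)

# Example: a -8e protein in 8e-23 L of solvent at 150 mM salt.
print(sltcap_ion_counts(-8.0, 8.0e-23, 0.150))  # (12, 4)
```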

  5. Audio Taping Simulated Patient Encounters in Community Pharmacy to Enhance the Reliability of Assessments

    PubMed Central

    Werner, Joel Benjamin

    2008-01-01

    Objectives: To assess whether audio taping simulated patient interactions can improve the reliability of manually documented data and result in more accurate assessments. Methods: Over a 3-month period, 1340 simulated patient visits were made to community pharmacies. Following the encounters, data gathered by the simulated patient were relayed to a coordinator who completed a rating form. Data recorded on the forms were later compared to an audiotape of the interaction. Corrections were tallied and reasons for making them were coded. Results: Approximately 10% of cases required corrections, resulting in a 10%-20% modification in the pharmacy's total score. The difference between postcorrection and precorrection scores was significant. Conclusions: Audio taping simulated patient visits enhances data integrity. Most corrections were required because of the simulated patients' poor recall abilities. PMID:19325956

  6. NAVSTAR GPS Simulation and Analysis Program (Interim Report)

    DOT National Transportation Integrated Search

    1983-10-01

    This study assesses the capability of the planned NAVSTAR Global Positioning System (GPS) to meet civil navigation requirements. When it becomes operational in about 1983, NAVSTAR GPS will provide accurate two-dimensional and three-dimensional servic...

  7. Fitting neuron models to spike trains.

    PubMed

    Rossant, Cyrille; Goodman, Dan F M; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K; Brette, Romain

    2011-01-01

    Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.
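
    The Brian fitting toolbox itself is not reproduced here; as a self-contained sketch of the same idea, the code below simulates a leaky integrate-and-fire candidate model against an injected current and scores it with a simple spike-coincidence measure, using a brute-force parameter search where the toolbox would use vectorized, parallel optimization. All parameters are illustrative.

```python
import numpy as np

def lif_spikes(current, dt, tau, R, v_thresh, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron; return spike times (s)."""
    v, spikes = 0.0, []
    for i, I in enumerate(current):
        v += dt * (-v + R * I) / tau
        if v >= v_thresh:
            spikes.append(i * dt)
            v = v_reset
    return np.asarray(spikes)

def coincidence_rate(model_t, data_t, window=2e-3):
    """Fraction of recorded spikes matched by a model spike within ±window."""
    if len(data_t) == 0:
        return 0.0
    hits = sum(np.any(np.abs(model_t - t) <= window) for t in data_t)
    return hits / len(data_t)

# Hypothetical recording: noisy step current and "observed" spike train.
rng = np.random.default_rng(1)
dt, T = 1e-4, 1.0
I = 1.5 + 0.2 * rng.standard_normal(int(T / dt))
data_spikes = lif_spikes(I, dt, tau=20e-3, R=1.0, v_thresh=1.0)

# Brute-force fit over (tau, threshold); the Brian toolbox does this far
# more efficiently with vectorized, parallel model evaluations.
best = max(((tau, th, coincidence_rate(lif_spikes(I, dt, tau, 1.0, th),
                                       data_spikes))
            for tau in np.linspace(5e-3, 40e-3, 8)
            for th in np.linspace(0.6, 1.4, 9)),
           key=lambda x: x[2])
print("best (tau, threshold, gamma):", best)
```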

  8. A New Indoor Positioning System Architecture Using GPS Signals.

    PubMed

    Xu, Rui; Chen, Wu; Xu, Ying; Ji, Shengyue

    2015-04-29

    The pseudolite system is a good alternative for indoor positioning systems due to its large coverage area and accurate positioning solution. However, for common Global Positioning System (GPS) receivers, the pseudolite system requires some modifications of the user terminals. To solve this problem, this paper proposes a new pseudolite-based indoor positioning system architecture. The main idea is to receive real-world GPS signals, repeat each satellite signal, and transmit them using indoor transmitting antennas. The transmitted GPS-like signal can be processed (signal acquisition and tracking, navigation data decoding) by a general receiver, and thus no hardware-level modification of the receiver is required. In addition, all Tx can be synchronized with each other, since a single clock is used in Rx/Tx. The proposed system is simulated using a software GPS receiver. The simulation results show the indoor positioning system is able to provide highly accurate horizontal positioning in both static and dynamic situations.

  9. Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems

    NASA Astrophysics Data System (ADS)

    Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo

    With the currently available methods of computational fluid dynamics (CFD), the task of simulating full scale circulating fluidized bed combustors is very challenging. In order to simulate the complex fluidization process, the size of the calculation cells should be small and the calculation should be transient with a small time step size. For full scale systems, these requirements lead to very large meshes and very long calculation times, so that simulation is difficult in practice. This study investigates the cell size and the time step size required for accurate simulations, and the filtering effects caused by a coarser mesh and a longer time step. A modeling study of a full scale CFB furnace is presented and the model results are compared with experimental data.

  10. Simulation of wave propagation in three-dimensional random media

    NASA Technical Reports Server (NTRS)

    Coles, William A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.

    1993-01-01

    A quantitative error analysis for simulation of wave propagation in three-dimensional random media, assuming narrow angular scattering, is presented for the plane wave and spherical wave geometries. This includes the errors resulting from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive index of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared to the spatial spectra of intensity. The numerical requirements for a simulation of given accuracy are determined for realizations of the field. The numerical requirements for accurate estimation of higher moments of the field are less stringent.
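
    The simulations analyzed above follow the standard split-step (phase-screen) construction: narrow-angle propagation between discrete two-dimensional screens. A minimal sketch, with placeholder screen statistics rather than the power-law spectra studied in the paper:

```python
import numpy as np

def angular_spectrum_step(field, dx, wavelength, dz):
    """Propagate a 2-D complex field by dz via the angular-spectrum method."""
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
    # Narrow-angle (Fresnel) transfer function, as assumed in the abstract.
    H = np.exp(-1j * (kx**2 + ky**2) * dz / (2 * k))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def propagate_through_screens(field, screens, dx, wavelength, dz):
    """Alternate thin phase screens with vacuum propagation between them."""
    for phase in screens:
        field = field * np.exp(1j * phase)        # apply 2-D phase screen
        field = angular_spectrum_step(field, dx, wavelength, dz)
    return field

# Plane-wave example with weak random screens (placeholder statistics; a
# real study would synthesize screens from a power-law refractive spectrum).
rng = np.random.default_rng(0)
n, dx, lam, dz = 256, 0.01, 1e-6, 100.0
u0 = np.ones((n, n), dtype=complex)
screens = [0.1 * rng.standard_normal((n, n)) for _ in range(5)]
u = propagate_through_screens(u0, screens, dx, lam, dz)
print("scintillation index:",
      (np.abs(u)**4).mean() / (np.abs(u)**2).mean()**2 - 1)
```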

  11. Molecular simulation of the thermophysical properties and phase behaviour of impure CO2 relevant to CCS.

    PubMed

    Cresswell, Alexander J; Wheatley, Richard J; Wilkinson, Richard D; Graham, Richard S

    2016-10-20

    Impurities from the CCS chain can greatly influence the physical properties of CO2. This has important design, safety and cost implications for the compression, transport and storage of CO2. There is an urgent need to understand and predict the properties of impure CO2 to assist with CCS implementation. However, CCS presents demanding modelling requirements. A suitable model must both accurately and robustly predict CO2 phase behaviour over a wide range of temperatures and pressures, and maintain that predictive power for CO2 mixtures with numerous, mutually interacting chemical species. A promising technique to address this task is molecular simulation. It offers a molecular approach, with foundations in firmly established physical principles, along with the potential to predict the wide range of physical properties required for CCS. The quality of predictions from molecular simulation depends on accurate force-fields to describe the interactions between CO2 and other molecules. Unfortunately, there is currently no universally applicable method to obtain force-fields suitable for molecular simulation. In this paper we present two methods of obtaining force-fields: the first being semi-empirical and the second using ab initio quantum-chemical calculations. In the first approach we optimise the impurity force-field against measurements of the phase and pressure-volume behaviour of CO2 binary mixtures with N2, O2, Ar and H2. A gradient-free optimiser allows us to use the simulation itself as the underlying model. This leads to accurate and robust predictions under conditions relevant to CCS. In the second approach we use quantum-chemical calculations to produce ab initio evaluations of the interactions between CO2 and relevant impurities, taking N2 as an exemplar. We use a modest number of these calculations to train a machine-learning algorithm, known as a Gaussian process, to describe these data. The resulting model is then able to accurately predict a much broader set of ab initio force-field calculations at comparatively low numerical cost. Although our method is not yet ready to be implemented in a molecular simulation, we outline the necessary steps here. Such simulations have the potential to deliver first-principles simulation of the thermodynamic properties of impure CO2, without fitting to experimental data.

  12. Stochastic locality and master-field simulations of very large lattices

    NASA Astrophysics Data System (ADS)

    Lüscher, Martin

    2018-03-01

    In lattice QCD and other field theories with a mass gap, the field variables in distant regions of a physically large lattice are only weakly correlated. Accurate stochastic estimates of the expectation values of local observables may therefore be obtained from a single representative field. Such master-field simulations potentially allow very large lattices to be simulated, but require various conceptual and technical issues to be addressed. In this talk, an introduction to the subject is provided and some encouraging results of master-field simulations of the SU(3) gauge theory are reported.

  13. Automated watershed subdivision for simulations using multi-objective optimization

    USDA-ARS?s Scientific Manuscript database

    The development of watershed management plans to evaluate placement of conservation practices typically involves application of watershed models. Incorporating spatially variable watershed characteristics into a model often requires subdividing the watershed into small areas to accurately account f...

  14. Real-Time Simulation for Verification and Validation of Diagnostic and Prognostic Algorithms

    NASA Technical Reports Server (NTRS)

    Aguilar, Robet; Luu, Chuong; Santi, Louis M.; Sowers, T. Shane

    2005-01-01

    To verify that a health management system (HMS) performs as expected, a virtual system simulation capability, including interaction with the associated platform or vehicle, very likely will need to be developed. The rationale for developing this capability is discussed and includes the limited capability to seed faults into the actual target system due to the risk of potential damage to high value hardware. The capability envisioned would accurately reproduce the propagation of a fault or failure as observed by sensors located at strategic locations on and around the target system and would also accurately reproduce the control system and vehicle response. In this way, HMS operation can be exercised over a broad range of conditions to verify that it meets requirements for accurate, timely response to actual faults with adequate margin against false and missed detections. An overview is also presented of a real-time rocket propulsion health management system laboratory which is available for future rocket engine programs. The health management elements and approaches of this lab are directly applicable for future space systems. In this paper the various components are discussed and the general fault detection, diagnosis, isolation and the response (FDIR) concept is presented. Additionally, the complexities of V&V (Verification and Validation) for advanced algorithms and the simulation capabilities required to meet the changing state-of-the-art in HMS are discussed.

  15. Do dichromats see colours in this way? Assessing simulation tools without colorimetric measurements.

    PubMed

    Lillo Jover, Julio A; Álvaro Llorente, Leticia; Moreira Villegas, Humberto; Melnikova, Anna

    2016-11-01

    Simulcheck evaluates Colour Simulation Tools (CSTs), which transform colours to mimic those seen by colour-vision deficients. Two CSTs (Variantor and Coblis) were used to determine whether the standard Simulcheck version (direct measurement based, DMB) can be substituted by another (RGB values based) that does not require sophisticated measurement instruments. Ten normal trichromats performed the two psychophysical tasks included in the Simulcheck method. The Pseudoachromatic Stimuli Identification task provided the h_uv (hue angle) values of the pseudoachromatic stimuli: colours seen as red or green by normal trichromats but as grey by colour-deficient people. The Minimum Achromatic Contrast task was used to compute the L_R (relative luminance) values of the pseudoachromatic stimuli. The Simulcheck DMB version showed that Variantor was accurate in simulating protanopia, but neither Variantor nor Coblis was accurate in simulating deuteranopia. The Simulcheck RGB version provided accurate h_uv values, so this variable can be adequately estimated when lacking a colorimeter (an expensive and unusual apparatus). In contrast, the inaccuracy of the L_R estimations provided by the Simulcheck RGB version makes it advisable to compute this variable from measurements performed with a photometer, a cheap and readily available apparatus.
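
    For readers reproducing the RGB-based variant, the hue angle is the standard CIE 1976 quantity; a minimal sketch (the white-point values are assumed, D65-like, and should be replaced by the display's measured white point):

```python
import math

def hue_angle_uv(u, v, u_n=0.1978, v_n=0.4683):
    """CIE 1976 u'v' hue angle (degrees) relative to a white point.

    Defaults are D65-like chromaticities, assumed for illustration; use
    the display's measured white point in practice.
    """
    return math.degrees(math.atan2(v - v_n, u - u_n)) % 360.0

def relative_luminance(L_stim, L_background):
    """Relative luminance from photometer readings of stimulus and background."""
    return L_stim / L_background

print(hue_angle_uv(0.25, 0.48))   # illustrative chromaticity
```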

  16. Simulation of floods caused by overloaded sewer systems: extensions of shallow-water equations

    NASA Astrophysics Data System (ADS)

    Hilden, Michael

    2005-03-01

    The outflow of water from a manhole onto a street is a typical flow problem within the simulation of floods in urban areas that are caused by overloaded sewer systems in the event of heavy rain. The reliable assessment of the flood risk for the connected houses requires accurate simulation of the water flow processes in the sewer system and in the street. The Navier-Stokes equations (NSEs) describe the free-surface flow of water accurately, but since their numerical solution requires long CPU times and much memory, their application is not practical. However, their solutions for selected flow problems are used as reference states to assess the results of other model approaches. The classical shallow-water equations (SWEs) require only a fraction (about 1/100) of the NSEs' computational effort. They assume a hydrostatic pressure distribution and depth-averaged horizontal velocities, and neglect vertical velocities. These shallow-water assumptions are not fulfilled for the outflow of water from a manhole onto the street; accordingly, calculations show differences between NSE and SWE solutions. The SWEs are therefore extended in order to assess flood risks in urban areas reliably within an applicable computational effort. Separating vortex regions from the main flow and approximating vertical velocities to incorporate their contributions into a pressure correction yield suitable results.
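
    As a reference point for the scale of an SWE computation, the sketch below advances the classical one-dimensional shallow-water equations with a Lax-Friedrichs flux; it implements only the classical SWEs, not the paper's extensions for vortex separation and vertical-velocity pressure correction.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def swe_step(h, hu, dx, dt):
    """One Lax-Friedrichs update of the 1-D shallow-water equations.

    State: water depth h and discharge hu.  Fluxes follow the classical
    SWEs (hydrostatic pressure, depth-averaged velocity); none of the
    paper's extensions for manhole outflow are included here.
    """
    f_h = hu
    f_hu = hu**2 / h + 0.5 * G * h**2
    def lf(q, f):
        qm, qp = np.roll(q, 1), np.roll(q, -1)
        fm, fp = np.roll(f, 1), np.roll(f, -1)
        return 0.5 * (qm + qp) - dt / (2 * dx) * (fp - fm)
    return lf(h, f_h), lf(hu, f_hu)

# Dam-break style initial condition on a periodic domain.
x = np.linspace(0, 10, 200)
h = np.where(x < 5, 2.0, 1.0)
hu = np.zeros_like(x)
dx = x[1] - x[0]
dt = 0.4 * dx / np.sqrt(G * h.max())   # CFL-limited time step
for _ in range(100):
    h, hu = swe_step(h, hu, dx, dt)
print("depth range after 100 steps:", h.min(), h.max())
```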

  17. Chemical vapor deposition fluid flow simulation modelling tool

    NASA Technical Reports Server (NTRS)

    Bullister, Edward T.

    1992-01-01

    Accurate numerical simulation of chemical vapor deposition (CVD) processes requires a general-purpose computational fluid dynamics package combined with specialized capabilities for high-temperature chemistry. In this report, we describe the implementation of these specialized capabilities in the spectral element code NEKTON. The thermal expansion of the gases involved is shown to be accurately approximated by the low-Mach-number perturbation expansion of the incompressible Navier-Stokes equations. The radiative heat transfer between multiple interacting radiating surfaces is shown to be tractable using the method of Gebhart. The disparate rates of reaction and diffusion in CVD processes are handled via a point-implicit time integration scheme. We demonstrate the use of the above capabilities on prototypical CVD applications.

  18. Thermostating extended Lagrangian Born-Oppenheimer molecular dynamics.

    PubMed

    Martínez, Enrique; Cawkwell, Marc J; Voter, Arthur F; Niklasson, Anders M N

    2015-04-21

    Extended Lagrangian Born-Oppenheimer molecular dynamics is developed and analyzed for applications in canonical (NVT) simulations. Three different approaches are considered: the Nosé and Andersen thermostats and Langevin dynamics. We have tested the temperature distribution under different conditions of self-consistent field (SCF) convergence and time step and compared the results to analytical predictions. We find that the simulations based on the extended Lagrangian Born-Oppenheimer framework provide accurate canonical distributions even under approximate SCF convergence, often requiring only a single diagonalization per time step, whereas regular Born-Oppenheimer formulations exhibit unphysical fluctuations unless a sufficiently high degree of convergence is reached at each time step. The thermostated extended Lagrangian framework thus offers an accurate approach to sample processes in the canonical ensemble at a fraction of the computational cost of regular Born-Oppenheimer molecular dynamics simulations.
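
    Of the three thermostats named above, Langevin dynamics is the simplest to sketch in isolation. The following BAOAB-style step samples the canonical ensemble for a classical toy system; it is independent of the extended-Lagrangian electronic-structure machinery, and all parameters are illustrative.

```python
import numpy as np

def baoab_step(x, v, force, m, dt, gamma, kT, rng):
    """One BAOAB Langevin step; samples the canonical (NVT) ensemble."""
    v += 0.5 * dt * force(x) / m                    # B: half kick
    x += 0.5 * dt * v                               # A: half drift
    c1 = np.exp(-gamma * dt)                        # O: exact OU update
    c2 = np.sqrt((1 - c1**2) * kT / m)
    v = c1 * v + c2 * rng.standard_normal(x.shape)
    x += 0.5 * dt * v                               # A: half drift
    v += 0.5 * dt * force(x) / m                    # B: half kick
    return x, v

# Harmonic-oscillator test: long-run kinetic temperature should approach kT.
rng = np.random.default_rng(0)
m, kT, dt, gamma = 1.0, 0.5, 0.01, 1.0
x = np.zeros(1000)
v = np.zeros(1000)
force = lambda q: -q
temps = []
for step in range(20000):
    x, v = baoab_step(x, v, force, m, dt, gamma, kT, rng)
    if step > 5000:
        temps.append(m * np.mean(v**2))   # m<v^2> = kT per degree of freedom
print("kinetic temperature ≈", np.mean(temps), "(target", kT, ")")
```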

  19. A new lumped-parameter model for flow in unsaturated dual-porosity media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimmerman, Robert W.; Hadgu, Teklu; Bodvarsson, Gudmundur S.

    A new lumped-parameter approach to simulating unsaturated flow processes in dual-porosity media such as fractured rocks or aggregated soils is presented. Fluid flow between the fracture network and the matrix blocks is described by a non-linear equation that relates the imbibition rate to the local difference in liquid-phase pressure between the fractures and the matrix blocks. Unlike a Warren-Root-type equation, this equation is accurate in both the early and late time regimes. The fracture/matrix interflow equation has been incorporated into an existing unsaturated flow simulator, to serve as a source/sink term for fracture gridblocks. Flow processes are then simulated using only fracture gridblocks in the computational grid. This new lumped-parameter approach has been tested on two problems involving transient flow in fractured/porous media, and compared with simulations performed using explicit discretization of the matrix blocks. The new procedure seems to accurately simulate flow processes in unsaturated fractured rocks, and typically requires an order of magnitude less computational time than do simulations using fully-discretized matrix blocks.

  20. Ab initio molecular dynamics with nuclear quantum effects at classical cost: Ring polymer contraction for density functional theory.

    PubMed

    Marsalek, Ondrej; Markland, Thomas E

    2016-02-07

    Path integral molecular dynamics simulations, combined with an ab initio evaluation of interactions using electronic structure theory, incorporate the quantum mechanical nature of both the electrons and nuclei, which are essential to accurately describe systems containing light nuclei. However, path integral simulations have traditionally required a computational cost around two orders of magnitude greater than treating the nuclei classically, making them prohibitively costly for most applications. Here we show that the cost of path integral simulations can be dramatically reduced by extending our ring polymer contraction approach to ab initio molecular dynamics simulations. By using density functional tight binding as a reference system, we show that our ring polymer contraction scheme gives rapid and systematic convergence to the full path integral density functional theory result. We demonstrate the efficiency of this approach in ab initio simulations of liquid water and the reactive protonated and deprotonated water dimer systems. We find that the vast majority of the nuclear quantum effects are accurately captured using contraction to just the ring polymer centroid, which requires the same number of density functional theory calculations as a classical simulation. Combined with a multiple time step scheme using the same reference system, which allows the time step to be increased, this approach is as fast as a typical classical ab initio molecular dynamics simulation and 35× faster than a full path integral calculation, while still exactly including the quantum sampling of nuclei. This development thus offers a route to routinely include nuclear quantum effects in ab initio molecular dynamics simulations at negligible computational cost.
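
    The contraction idea can be sketched in a few lines: the expensive force is evaluated once at the ring-polymer centroid, and the cheap reference force corrects each bead, so the costly model is called as often as in a classical run. This is a schematic reconstruction of the scheme, with toy force models standing in for DFT and tight binding.

```python
import numpy as np

def contracted_forces(beads, cheap_force, expensive_force):
    """Ring-polymer contraction to the centroid (sketch of the idea).

    Each bead feels the cheap reference force plus a correction evaluated
    once at the centroid:
        F_k ≈ F_cheap(q_k) + [F_expensive(q_c) - F_cheap(q_c)]
    so the expensive model (DFT in the paper) is called once per step, as
    in a classical simulation, instead of once per bead.
    """
    centroid = beads.mean(axis=0)
    correction = expensive_force(centroid) - cheap_force(centroid)
    return np.array([cheap_force(q) for q in beads]) + correction

# Toy example: cheap harmonic reference vs. slightly anharmonic "expensive" model.
cheap = lambda q: -q
expensive = lambda q: -q - 0.1 * q**3
beads = np.linspace(-0.1, 0.1, 32)[:, None]   # 32 beads, 1-D coordinate
print(contracted_forces(beads, cheap, expensive)[:3])
```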

  1. Large-eddy simulation of wind turbine wake interactions on locally refined Cartesian grids

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2014-11-01

    Performing high-fidelity numerical simulations of turbulent flow in wind farms remains a challenging issue mainly because of the large computational resources required to accurately simulate the turbine wakes and turbine/turbine interactions. The discretization of the governing equations on structured grids for mesoscale calculations may not be the most efficient approach for resolving the large disparity of spatial scales. A 3D Cartesian grid refinement method enabling the efficient coupling of the Actuator Line Model (ALM) with locally refined unstructured Cartesian grids adapted to accurately resolve tip vortices and multi-turbine interactions, is presented. Second order schemes are employed for the discretization of the incompressible Navier-Stokes equations in a hybrid staggered/non-staggered formulation coupled with a fractional step method that ensures the satisfaction of local mass conservation to machine zero. The current approach enables multi-resolution LES of turbulent flow in multi-turbine wind farms. The numerical simulations are in good agreement with experimental measurements and are able to resolve the rich dynamics of turbine wakes on grids containing only a small fraction of the grid nodes that would be required in simulations without local mesh refinement. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the National Science Foundation under Award number NSF PFI:BIC 1318201.

  2. Development of deformable moving lung phantom to simulate respiratory motion in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jina; Lee, Youngkyu; Shin, Hunjoo

    Radiation treatment requires high accuracy to protect healthy organs and destroy the tumor. However, tumors located near the diaphragm move constantly during treatment. Respiration-gated radiotherapy has significant potential for improving the irradiation of tumor sites affected by respiratory motion, such as lung and liver tumors. To measure and minimize the effects of respiratory motion, a realistic deformable phantom is required for use as a gold standard. The purpose of this study was to develop a deformable moving lung (DML) phantom and to study its characteristics, such as simulation fidelity, tissue equivalence, and rate of deformation. The rate of change of the lung volume, target deformation, and respiratory signals were measured in this study using the realistic deformable phantom. The measured volume difference was 31%, which closely corresponds to the average difference in human respiration, and the target movement was −30 to +32 mm. The measured signals accurately reproduced human respiratory signals. This DML phantom would be useful for the evaluation of deformable image registration and in respiration-gated radiotherapy. This study shows that the developed DML phantom can closely simulate a patient's respiratory signal and act as a deformable 4-dimensional simulation of a patient's lung with sufficient volume change.

  3. A Comparison of Classical Force-Fields for Molecular Dynamics Simulations of Lubricants

    PubMed Central

    Ewen, James P.; Gattinoni, Chiara; Thakkar, Foram M.; Morgan, Neal; Spikes, Hugh A.; Dini, Daniele

    2016-01-01

    For the successful development and application of lubricants, a full understanding of their complex nanoscale behavior under a wide range of external conditions is required, but this is difficult to obtain experimentally. Nonequilibrium molecular dynamics (NEMD) simulations can be used to yield unique insights into the atomic-scale structure and friction of lubricants and additives; however, the accuracy of the results depends on the chosen force-field. In this study, we demonstrate that the use of an accurate, all-atom force-field is critical in order to: (i) accurately predict important properties of long-chain, linear molecules; and (ii) reproduce experimental friction behavior of multi-component tribological systems. In particular, we focus on n-hexadecane, an important model lubricant with a wide range of industrial applications. Moreover, simulating conditions common in tribological systems, i.e., high temperatures and pressures (HTHP), allows the limits of the selected force-fields to be tested. In the first section, a large number of united-atom and all-atom force-fields are benchmarked in terms of their density and viscosity prediction accuracy for n-hexadecane using equilibrium molecular dynamics (EMD) simulations at ambient and HTHP conditions. Whilst united-atom force-fields accurately reproduce experimental density, the viscosity is significantly under-predicted compared to all-atom force-fields and experiments. Moreover, some all-atom force-fields yield elevated melting points, leading to significant overestimation of both the density and viscosity. In the second section, the most accurate united-atom and all-atom force-fields are compared in confined NEMD simulations which probe the structure and friction of stearic acid adsorbed on iron oxide and separated by a thin layer of n-hexadecane. The united-atom force-field provides an accurate representation of the structure of the confined stearic acid film; however, friction coefficients are consistently under-predicted and the friction-coverage and friction-velocity behavior deviates from that observed using all-atom force-fields and experimentally. This has important implications regarding force-field selection for NEMD simulations of systems containing long-chain, linear molecules; specifically, it is recommended that accurate all-atom potentials, such as L-OPLS-AA, are employed. PMID:28773773

  4. A Comparison of Classical Force-Fields for Molecular Dynamics Simulations of Lubricants.

    PubMed

    Ewen, James P; Gattinoni, Chiara; Thakkar, Foram M; Morgan, Neal; Spikes, Hugh A; Dini, Daniele

    2016-08-02

    For the successful development and application of lubricants, a full understanding of their complex nanoscale behavior under a wide range of external conditions is required, but this is difficult to obtain experimentally. Nonequilibrium molecular dynamics (NEMD) simulations can be used to yield unique insights into the atomic-scale structure and friction of lubricants and additives; however, the accuracy of the results depends on the chosen force-field. In this study, we demonstrate that the use of an accurate, all-atom force-field is critical in order to: (i) accurately predict important properties of long-chain, linear molecules; and (ii) reproduce experimental friction behavior of multi-component tribological systems. In particular, we focus on n-hexadecane, an important model lubricant with a wide range of industrial applications. Moreover, simulating conditions common in tribological systems, i.e., high temperatures and pressures (HTHP), allows the limits of the selected force-fields to be tested. In the first section, a large number of united-atom and all-atom force-fields are benchmarked in terms of their density and viscosity prediction accuracy for n-hexadecane using equilibrium molecular dynamics (EMD) simulations at ambient and HTHP conditions. Whilst united-atom force-fields accurately reproduce experimental density, the viscosity is significantly under-predicted compared to all-atom force-fields and experiments. Moreover, some all-atom force-fields yield elevated melting points, leading to significant overestimation of both the density and viscosity. In the second section, the most accurate united-atom and all-atom force-fields are compared in confined NEMD simulations which probe the structure and friction of stearic acid adsorbed on iron oxide and separated by a thin layer of n-hexadecane. The united-atom force-field provides an accurate representation of the structure of the confined stearic acid film; however, friction coefficients are consistently under-predicted and the friction-coverage and friction-velocity behavior deviates from that observed using all-atom force-fields and experimentally. This has important implications regarding force-field selection for NEMD simulations of systems containing long-chain, linear molecules; specifically, it is recommended that accurate all-atom potentials, such as L-OPLS-AA, are employed.

  5. Simulating observations with HARMONI: the integral field spectrograph for the European Extremely Large Telescope

    NASA Astrophysics Data System (ADS)

    Zieleniewski, Simon; Thatte, Niranjan; Kendrew, Sarah; Houghton, Ryan; Tecza, Matthias; Clarke, Fraser; Fusco, Thierry; Swinbank, Mark

    2014-07-01

    With the next generation of extremely large telescopes commencing construction, there is an urgent need for detailed quantitative predictions of the scientific observations that these new telescopes will enable. Most of these new telescopes will have adaptive optics fully integrated with the telescope itself, allowing unprecedented spatial resolution combined with enormous sensitivity. However, the adaptive optics point spread function will be strongly wavelength dependent, requiring detailed simulations that accurately model these variations. We have developed a simulation pipeline for the HARMONI integral field spectrograph, a first light instrument for the European Extremely Large Telescope. The simulator takes high-resolution input data-cubes of astrophysical objects and processes them with accurate atmospheric, telescope and instrumental effects, to produce mock observed cubes for chosen observing parameters. The output cubes represent the result of a perfect data reduction process, enabling a detailed analysis and comparison between input and output, showcasing HARMONI's capabilities. The simulations utilise a detailed knowledge of the telescope's wavelength-dependent adaptive optics point spread function. We discuss the simulation pipeline and present an early example of the pipeline functionality for simulating observations of high redshift galaxies.
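
    A toy version of the core loop, blurring each wavelength slice of an input cube with its own PSF and adding noise, is sketched below; the Gaussian PSF and noise model are stand-ins for the modelled AO PSFs and detector effects in the actual pipeline.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size, fwhm_pix):
    """Simple stand-in PSF; the real pipeline uses modelled AO PSFs."""
    y, x = np.mgrid[:size, :size] - size // 2
    sigma = fwhm_pix / 2.355
    psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return psf / psf.sum()

def simulate_cube(cube, fwhm_per_slice, read_noise=1.0, rng=None):
    """Blur each spectral slice with its own PSF, then add Poisson+read noise."""
    rng = rng or np.random.default_rng(0)
    out = np.empty_like(cube, dtype=float)
    for i, (img, fwhm) in enumerate(zip(cube, fwhm_per_slice)):
        blurred = fftconvolve(img, gaussian_psf(31, fwhm), mode="same")
        out[i] = (rng.poisson(np.clip(blurred, 0, None))
                  + read_noise * rng.standard_normal(img.shape))
    return out

# Toy cube: 16 wavelength slices of a point source; AO correction (and thus
# the PSF width) varies with wavelength, as the abstract emphasizes.
cube = np.zeros((16, 64, 64))
cube[:, 32, 32] = 1000.0
fwhms = np.linspace(4.0, 2.0, 16)   # sharper PSF toward longer wavelengths
mock = simulate_cube(cube, fwhms)
print(mock.shape, mock.max())
```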

  6. Effects of including electrojet turbulence in LFM-RCM simulations of geospace storms

    NASA Astrophysics Data System (ADS)

    Oppenheim, M. M.; Wiltberger, M. J.; Merkin, V. G.; Zhang, B.; Toffoletto, F.; Wang, W.; Lyon, J.; Liu, J.; Dimant, Y. S.

    2016-12-01

    Global geospace system simulations need to incorporate nonlinear and small-scale physical processes in order to accurately model storms and other intense events. During times of strong magnetospheric disturbances, large-amplitude electric fields penetrate from the Earth's magnetosphere to the E-region ionosphere where they drive Farley-Buneman instabilities (FBI) that create small-scale plasma density turbulence. This induces nonlinear currents and leads to anomalous electron heating. Current global Magnetosphere-Ionosphere-Thermosphere (MIT) models disregard these effects by assuming simple laminar ionospheric currents. This paper discusses the effects of incorporating accurate turbulent conductivities into MIT models. Recently, we showed in Liu et al. (2016) that during storm-time, turbulence increases the electron temperatures and conductivities more than precipitation. In this talk, we present the effect of adding this physics to the combined Lyon-Fedder-Mobarry (LFM) global MHD magnetosphere simulator and the Rice Convection Model (RCM). The LFM combines a magnetohydrodynamic (MHD) simulation of the magnetosphere with a 2D electrostatic solution of the ionosphere. The RCM uses drift physics to accurately model the inner magnetosphere, including a storm enhanced ring current. The LFM and coupled LFM-RCM simulations have previously shown unrealistically high cross-polar-cap potentials during strong solar wind driving conditions. We have recently implemented an LFM module that modifies the ionospheric conductivity to account for FBI driven anomalous electron heating and non-linear cross-field current enhancements as a function of the predicted ionospheric electric field. We have also improved the LFM-RCM code by making it capable of handling dipole tilts and asymmetric ionospheric solutions. We have tested this new LFM version by simulating the March 17, 2013 geomagnetic storm. These simulations showed a significant reduction in the cross-polar-cap potential during the strongest driving conditions, significant increases in the ionospheric conductivity in the auroral oval, and better agreement with DMSP observations of sub-auroral polarization streams. We conclude that accurate MIT simulations of geospace storms require the inclusion of turbulent conductivities.

  7. Simulation framework for electromagnetic effects in plasmonics, filter apertures, wafer scattering, grating mirrors, and nano-crystals

    NASA Astrophysics Data System (ADS)

    Ceperley, Daniel Peter

    This thesis presents a Finite-Difference Time-Domain (FDTD) simulation framework as well as both scientific observations and quantitative design data for emerging optical devices. These emerging applications required the development of simulation capabilities to carefully control numerical experimental conditions, isolate and quantify specific scattering processes, and overcome memory and run-time limitations on large device structures. The framework consists of a new version 7 of TEMPEST and auxiliary tools implemented as Matlab scripts. In improving the geometry representation and absorbing boundary conditions relative to v6, accuracy has been sustained and key improvements have yielded application-specific gains in speed and accuracy. These extensions include pulsed methods, PML for plasmon termination, and plasmon and scattered-field sources. The auxiliary tools include application-specific methods such as signal flow graphs of plasmon couplers, Bloch-mode expansions of sub-wavelength grating waves, and back-propagation methods to characterize edge scattering in diffraction masks. Each application posed different numerical hurdles and physical questions for the simulation framework. The Terrestrial Planet Finder Coronagraph required accurate modeling of diffraction mask structures too large for FDTD analysis alone. This analysis was achieved through a combination of targeted TEMPEST simulations and a full system simulator based on thin-mask scalar diffraction models by Ball Aerospace for JPL. TEMPEST simulation showed that vertical sidewalls were the strongest scatterers, adding nearly 2λ of light per mask edge, which could be reduced by 20° undercuts. TEMPEST assessment of coupling in rapid thermal annealing was complicated by extremely sub-wavelength features and fine meshes. Near-100% coupling and low variability were confirmed even in the presence of unidirectional dense metal gates. Accurate analysis of surface-plasmon coupling efficiency by small surface features required capabilities to isolate these features and cleanly illuminate them with plasmons and plane waves. These features were shown to have coupling cross-sections up to, and slightly exceeding, their physical size. Long run-times for TEMPEST simulations of finite-length gratings were overcome with a signal flow graph method. With these methods a plasmon coupler with a 100% capture length of over 10λ was demonstrated. Simulation of 3D nano-particle arrays utilized TEMPEST v7's pulsed methods to minimize the number of multi-day simulations. These simulations led to the discovery that interstitial plasmons were responsible for resonant absorption and transmission but not reflection. Simulation of a sub-wavelength grating mirror using pulsed sources to map resonant spectra showed that neither coupled guided waves nor coupled isolated resonators accurately described the operation. However, a new model based on vertical propagation of lateral Bloch modes with zero phase progression efficiently characterized the device and provided principles for designing similar devices at other wavelengths.
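
    The core FDTD update that frameworks like TEMPEST generalize (to 3-D, materials, PML absorbers, and the sources described above) is compact; a one-dimensional Yee-scheme sketch in normalized units:

```python
import numpy as np

# 1-D FDTD (Yee scheme) in vacuum, normalized units: the core update that
# codes such as TEMPEST generalize to 3-D with materials, PML and sources.
n_cells, n_steps = 400, 800
Ez = np.zeros(n_cells)
Hy = np.zeros(n_cells - 1)
S = 0.5                               # Courant number (stable for S <= 1 in 1-D)

for step in range(n_steps):
    Hy += S * (Ez[1:] - Ez[:-1])              # update H from the curl of E
    Ez[1:-1] += S * (Hy[1:] - Hy[:-1])        # update E from the curl of H
    Ez[n_cells // 4] += np.exp(-((step - 60) / 20.0) ** 2)  # soft Gaussian source

print("peak |Ez| after propagation:", np.abs(Ez).max())
```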

  8. Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldhaber, Steve; Holland, Marika

    The major goal of this project was to contribute improvements to the infrastructure of an Earth System Model in order to support research in the Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System project. In support of this, the NCAR team accomplished two main tasks: improving input/output performance of the model and improving atmospheric model simulation quality. Improvement of the performance and scalability of data input and diagnostic output within the model required a new infrastructure which can efficiently handle the unstructured grids common in multiscale simulations. This allows for a more computationally efficient model, enabling more years of Earth System simulation. The quality of the model simulations was improved by reducing grid-point noise in the spectral element version of the Community Atmosphere Model (CAM-SE). This was achieved by running the physics of the model using grid-cell data on a finite-volume grid.

  9. Modifying scoping codes to accurately calculate TMI-cores with lifetimes greater than 500 effective full-power days

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, D.; Levine, S.L.; Luoma, J.

    1992-01-01

    The Three Mile Island unit 1 core reloads have been designed using fast but accurate scoping codes, PSUI-LEOPARD and ADMARC. PSUI-LEOPARD has been normalized to EPRI-CPM2 results and used to calculate the two-group constants, whereas ADMARC is a modern two-dimensional, two-group diffusion theory nodal code. Problems in accuracy were encountered for cycles 8 and higher as the core lifetime was increased beyond 500 effective full-power days. This is because the cores more heavily loaded in both 235U and 10B have harder neutron spectra, which produces a change in the transport effect in the baffle reflector region, and the burnable poison (BP) simulations were not accurate enough for the cores containing the increased amount of 10B required in the BP rods. In the authors' study, a technique has been developed to take into account the change in the transport effect in the baffle region by modifying the fast-neutron diffusion coefficient as a function of cycle length and core exposure or burnup. A more accurate BP simulation method is also developed, using integral transport theory and CPM2 data, to calculate the BP contribution to the equivalent fuel assembly (supercell) two-group constants. The net result is that the accuracy of the scoping codes is as good as that produced by CASMO/SIMULATE or CPM2/SIMULATE when comparing with measured data.

  10. Aircraft Flight Modeling During the Optimization of Gas Turbine Engine Working Process

    NASA Astrophysics Data System (ADS)

    Tkachenko, A. Yu; Kuz'michev, V. S.; Krupenich, I. N.

    2018-01-01

    The article describes a method for simulating the flight of an aircraft along a predetermined path, establishing a functional connection between the parameters of the working process of a gas turbine engine and the efficiency criteria of the aircraft. This connection is necessary for solving the optimization tasks of the conceptual design stage of the engine according to the systems approach. Engine thrust, in turn, influences the operation of the aircraft, making accurate simulation of the aircraft's behavior during flight necessary for obtaining a correct solution. The described mathematical model of aircraft flight provides the functional connection between the airframe characteristics, the working process of the gas turbine engines (propulsion system), ambient and flight conditions, and flight profile features. This model provides accurate flight simulation results and the resulting aircraft efficiency criteria required for optimization of the working process and control function of a gas turbine engine.
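
    A point-mass version of such a flight model can be sketched compactly; the code below couples thrust, drag, and fuel burn in the way the abstract describes, with all aircraft and engine parameters invented for the example.

```python
import numpy as np

def fly_segment(mass, thrust, v, h, dt, steps,
                S=120.0, cd0=0.02, k=0.045, rho0=1.225, H=8500.0, sfc=1.7e-5):
    """Point-mass cruise integration (all parameters illustrative).

    Couples engine working process to aircraft state the way the abstract
    describes: thrust minus drag accelerates the aircraft, and fuel burn
    (thrust * SFC) reduces mass, so engine parameters feed back into
    mission-level efficiency criteria such as fuel used per segment.
    """
    fuel = 0.0
    for _ in range(steps):
        rho = rho0 * np.exp(-h / H)             # simple exponential atmosphere
        q = 0.5 * rho * v**2
        cl = mass * 9.81 / (q * S)              # level-flight lift coefficient
        drag = q * S * (cd0 + k * cl**2)        # parabolic drag polar
        v += dt * (thrust - drag) / mass
        dm = sfc * thrust * dt                  # fuel flow from thrust and SFC
        mass -= dm
        fuel += dm
    return v, mass, fuel

v, m, fuel = fly_segment(mass=60000.0, thrust=60000.0, v=200.0, h=10000.0,
                         dt=1.0, steps=600)
print(f"speed {v:.1f} m/s, fuel burned {fuel:.1f} kg over 10 min")
```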

  11. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation.

    PubMed

    Gray, Alan; Harlen, Oliver G; Harris, Sarah A; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J; Pearson, Arwen R; Read, Daniel J; Richardson, Robin A

    2015-01-01

    Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  12. Thermostating extended Lagrangian Born-Oppenheimer molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martínez, Enrique; Cawkwell, Marc J.; Voter, Arthur F.

    Here, Extended Lagrangian Born-Oppenheimer molecular dynamics is developed and analyzed for applications in canonical (NVT) simulations. Three different approaches are considered: the Nosé and Andersen thermostats and Langevin dynamics. We have tested the temperature distribution under different conditions of self-consistent field (SCF) convergence and time step and compared the results to analytical predictions. We find that the simulations based on the extended Lagrangian Born-Oppenheimer framework provide accurate canonical distributions even under approximate SCF convergence, often requiring only a single diagonalization per time step, whereas regular Born-Oppenheimer formulations exhibit unphysical fluctuations unless a sufficiently high degree of convergence is reached at each time step. Lastly, the thermostated extended Lagrangian framework thus offers an accurate approach to sample processes in the canonical ensemble at a fraction of the computational cost of regular Born-Oppenheimer molecular dynamics simulations.

  13. Thermostating extended Lagrangian Born-Oppenheimer molecular dynamics

    DOE PAGES

    Martínez, Enrique; Cawkwell, Marc J.; Voter, Arthur F.; ...

    2015-04-21

    Here, Extended Lagrangian Born-Oppenheimer molecular dynamics is developed and analyzed for applications in canonical (NVT) simulations. Three different approaches are considered: the Nosé and Andersen thermostats and Langevin dynamics. We have tested the temperature distribution under different conditions of self-consistent field (SCF) convergence and time step and compared the results to analytical predictions. We find that the simulations based on the extended Lagrangian Born-Oppenheimer framework provide accurate canonical distributions even under approximate SCF convergence, often requiring only a single diagonalization per time step, whereas regular Born-Oppenheimer formulations exhibit unphysical fluctuations unless a sufficiently high degree of convergence is reached at each time step. Lastly, the thermostated extended Lagrangian framework thus offers an accurate approach to sample processes in the canonical ensemble at a fraction of the computational cost of regular Born-Oppenheimer molecular dynamics simulations.

  14. Particle kinetic simulation of high altitude hypervelocity flight

    NASA Technical Reports Server (NTRS)

    Boyd, Iain; Haas, Brian L.

    1994-01-01

    Rarefied flows about hypersonic vehicles entering the upper atmosphere or through nozzles expanding into a near vacuum may only be simulated accurately with a direct simulation Monte Carlo (DSMC) method. Under this grant, researchers enhanced the models employed in the DSMC method and performed simulations in support of existing NASA projects or missions. DSMC models were developed and validated for simulating rotational, vibrational, and chemical relaxation in high-temperature flows, including effects of quantized anharmonic oscillators and temperature-dependent relaxation rates. State-of-the-art advancements were made in simulating coupled vibration-dissociation recombination for post-shock flows. Models were also developed to compute vehicle surface temperatures directly in the code rather than requiring isothermal estimates. These codes were instrumental in simulating aerobraking of NASA's Magellan spacecraft during orbital maneuvers to assess heat transfer and aerodynamic properties of the delicate satellite. NASA also depended upon simulations of entry of the Galileo probe into the atmosphere of Jupiter to provide drag and flow field information essential for accurate interpretation of an onboard experiment. Finally, the codes have been used extensively to simulate expanding nozzle flows in low-power thrusters in support of propulsion activities at NASA-Lewis. Detailed comparisons between continuum calculations and DSMC results helped to quantify the limitations of continuum CFD codes in rarefied applications.

  15. Inferring Lower Boundary Driving Conditions Using Vector Magnetic Field Observations

    NASA Technical Reports Server (NTRS)

    Schuck, Peter W.; Linton, Mark; Leake, James; MacNeice, Peter; Allred, Joel

    2012-01-01

    Low-beta coronal MHD simulations of realistic CME events require the detailed specification of the magnetic fields, velocities, densities, temperatures, etc., in the low corona. Presently, the most accurate estimates of solar vector magnetic fields are made in the high-beta photosphere. Several techniques have been developed that provide accurate estimates of the associated photospheric plasma velocities such as the Differential Affine Velocity Estimator for Vector Magnetograms and the Poloidal/Toroidal Decomposition. Nominally, these velocities are consistent with the evolution of the radial magnetic field. To evolve the tangential magnetic field radial gradients must be specified. In addition to estimating the photospheric vector magnetic and velocity fields, a further challenge involves incorporating these fields into an MHD simulation. The simulation boundary must be driven, consistent with the numerical boundary equations, with the goal of accurately reproducing the observed magnetic fields and estimated velocities at some height within the simulation. Even if this goal is achieved, many unanswered questions remain. How can the photospheric magnetic fields and velocities be propagated to the low corona through the transition region? At what cadence must we observe the photosphere to realistically simulate the corona? How do we model the magnetic fields and plasma velocities in the quiet Sun? How sensitive are the solutions to other unknowns that must be specified, such as the global solar magnetic field, and the photospheric temperature and density?

  16. Parallel Multiscale Algorithms for Astrophysical Fluid Dynamics Simulations

    NASA Technical Reports Server (NTRS)

    Norman, Michael L.

    1997-01-01

    Our goal is to develop software libraries and applications for astrophysical fluid dynamics simulations in multidimensions that will enable us to resolve the large spatial and temporal variations that inevitably arise due to gravity, fronts and microphysical phenomena. The software must run efficiently on parallel computers and be general enough to allow the incorporation of a wide variety of physics. Cosmological structure formation with realistic gas physics is the primary application driver in this work. Accurate simulations of e.g. galaxy formation require a spatial dynamic range (i.e., ratio of system scale to smallest resolved feature) of 10^4 or more in three dimensions in arbitrary topologies. We take this as our technical requirement. We have achieved, and in fact surpassed, these goals.

  17. Validity of the Born approximation for beyond Gaussian weak lensing observables

    DOE PAGES

    Petri, Andrea; Haiman, Zoltan; May, Morgan

    2017-06-06

    Accurate forward modeling of weak lensing (WL) observables from cosmological parameters is necessary for upcoming galaxy surveys. Because WL probes structures in the nonlinear regime, analytical forward modeling is very challenging, if not impossible. Numerical simulations of WL features rely on ray tracing through the outputs of N-body simulations, which requires knowledge of the gravitational potential and accurate solvers for light ray trajectories. A less accurate procedure, based on the Born approximation, only requires knowledge of the density field, and can be implemented more efficiently and at a lower computational cost. In this work, we use simulations to show that deviations of the Born-approximated convergence power spectrum, skewness and kurtosis from their fully ray-traced counterparts are consistent with the smallest nontrivial O(Φ^3) post-Born corrections (so-called geodesic and lens-lens terms). Our results imply a cancellation among the larger O(Φ^4) (and higher order) terms, consistent with previous analytic work. We also find that cosmological parameter bias induced by the Born-approximated power spectrum is negligible even for an LSST-like survey, once galaxy shape noise is considered. When considering higher order statistics such as the κ skewness and kurtosis, however, we find significant bias of up to 2.5σ. Using the LensTools software suite, we show that the Born approximation saves a factor of 4 in computing time with respect to the full ray tracing in reconstructing the convergence.
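
    The Born-approximated convergence named above is a single weighted line-of-sight integral over the density contrast, kappa = (3 H0^2 Om / 2 c^2) * ∫ dchi [chi (chi_s - chi)/chi_s] * delta / a(chi); a discretized toy version, with random numbers standing in for N-body density planes:

```python
import numpy as np

# Born-approximated convergence: one weighted integral of the density
# contrast delta along the unperturbed line of sight,
#   kappa = (3 H0^2 Om / 2 c^2) * sum_i w(chi_i) * delta_i / a(chi_i) * dchi
# with w(chi) = chi (chi_s - chi) / chi_s.  Toy arrays stand in for the
# lens planes extracted from an N-body simulation.

H0 = 70.0 / 2.998e5          # H0/c in 1/Mpc (70 km/s/Mpc over c in km/s)
Om = 0.3
chi_s = 3000.0               # comoving distance to sources, Mpc (illustrative)

n_planes = 60
chi = np.linspace(25.0, chi_s - 25.0, n_planes)        # lens-plane distances
dchi = chi[1] - chi[0]
rng = np.random.default_rng(0)
delta = 0.1 * rng.standard_normal(n_planes)            # toy density contrasts
a = 1.0 / (1.0 + chi / 3000.0)                         # crude a(chi) stand-in

weight = chi * (chi_s - chi) / chi_s
kappa = 1.5 * H0**2 * Om * np.sum(weight * delta / a) * dchi
print("Born-approximated convergence:", kappa)
```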

  18. The level of detail required in a deformable phantom to accurately perform quality assurance of deformable image registration

    NASA Astrophysics Data System (ADS)

    Saenz, Daniel L.; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil

    2016-09-01

    The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be to adequately simulate human anatomy and accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu's method was employed to create additional segmented image pairs of n distinct soft-tissue CT number ranges (fat, muscle, etc.). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (the magnitude of the vector difference between known and predicted deformation) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft-tissue levels for Velocity DMP and MIM, though MIM exhibited a persistent difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with Velocity and MIM deformation algorithms.
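
    A compact version of the virtual-phantom construction and the error metric is sketched below, using scikit-image's multi-Otsu thresholding in place of the paper's exact segmentation pipeline; the images and vector fields are synthetic.

```python
import numpy as np
from skimage.filters import threshold_multiotsu

def segment_levels(ct_image, n_levels=3):
    """Quantize a CT slice into n tissue classes via multi-Otsu thresholds,
    mimicking the paper's construction of virtual phantoms with a
    controlled number of distinct tissues."""
    thresholds = threshold_multiotsu(ct_image, classes=n_levels)
    return np.digitize(ct_image, bins=thresholds)

def deformation_error(dvf_known, dvf_predicted):
    """Mean magnitude of the vector difference between known and predicted
    deformation fields (arrays shaped [..., n_dims])."""
    return np.linalg.norm(dvf_known - dvf_predicted, axis=-1).mean()

# Toy example with a synthetic "CT" slice and two nearby vector fields.
rng = np.random.default_rng(0)
ct = rng.normal(0, 300, size=(128, 128))
labels = segment_levels(ct, n_levels=3)
dvf_a = rng.normal(0, 1, size=(128, 128, 2))
dvf_b = dvf_a + rng.normal(0, 0.1, size=(128, 128, 2))
print("classes:", np.unique(labels), " error:", deformation_error(dvf_a, dvf_b))
```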

  19. Validity of the Born approximation for beyond Gaussian weak lensing observables

    NASA Astrophysics Data System (ADS)

    Petri, Andrea; Haiman, Zoltán; May, Morgan

    2017-06-01

    Accurate forward modeling of weak lensing (WL) observables from cosmological parameters is necessary for upcoming galaxy surveys. Because WL probes structures in the nonlinear regime, analytical forward modeling is very challenging, if not impossible. Numerical simulations of WL features rely on ray tracing through the outputs of N-body simulations, which requires knowledge of the gravitational potential and accurate solvers for light ray trajectories. A less accurate procedure, based on the Born approximation, only requires knowledge of the density field, and can be implemented more efficiently and at a lower computational cost. In this work, we use simulations to show that deviations of the Born-approximated convergence power spectrum, skewness and kurtosis from their fully ray-traced counterparts are consistent with the smallest nontrivial O(Φ^3) post-Born corrections (so-called geodesic and lens-lens terms). Our results imply a cancellation among the larger O(Φ^4) (and higher order) terms, consistent with previous analytic work. We also find that cosmological parameter bias induced by the Born-approximated power spectrum is negligible even for an LSST-like survey, once galaxy shape noise is considered. When considering higher order statistics such as the κ skewness and kurtosis, however, we find significant bias of up to 2.5σ. Using the LensTools software suite, we show that the Born approximation saves a factor of 4 in computing time with respect to the full ray tracing in reconstructing the convergence.

  20. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
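
    The unscented transform itself is standard and small; the sketch below propagates a two-parameter state distribution through a stand-in "EOL simulation" (a toy damage-growth model, not the paper's solenoid-valve physics).

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=1.0):
    """Propagate (mean, cov) through nonlinear f with 2n+1 sigma points."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + L.T, mean - L.T])   # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    y = np.array([f(s) for s in sigma])                 # the "simulations"
    y_mean = wm @ y
    y_cov = sum(w * np.outer(d, d) for w, d in zip(wc, y - y_mean))
    return y_mean, y_cov

# Stand-in "EOL simulation": time for a damage state to reach a threshold
# under a linear growth model (parameters are illustrative).
def eol(state):
    d0, rate = state
    return np.array([(1.0 - d0) / max(rate, 1e-9)])     # hours to failure

mean = np.array([0.2, 0.01])                  # current damage, growth rate
cov = np.diag([0.02**2, 0.002**2])
m, P = unscented_transform(mean, cov, eol)
print(f"EOL mean {m[0]:.1f} h, std {np.sqrt(P[0, 0]):.1f} h")
```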

  1. Refractive indices of layers and optical simulations of Cu(In,Ga)Se2 solar cells

    PubMed Central

    Avancini, Enrico; Losio, Paolo A.; Figi, Renato; Schreiner, Claudia; Bürki, Melanie; Bourgeois, Emilie; Remes, Zdenek; Nesladek, Milos; Tiwari, Ayodhya N.

    2018-01-01

    Cu(In,Ga)Se2-based solar cells have reached efficiencies close to 23%. Further knowledge-driven improvements require accurate determination of the material properties. Here, we present refractive indices for all layers in high-efficiency Cu(In,Ga)Se2 solar cells. The optical bandgap of Cu(In,Ga)Se2 does not depend on the Cu content in the explored composition range, while the absorption coefficient is primarily determined by the Cu content. An expression for the absorption spectrum is proposed, with Ga and Cu compositions as parameters. This set of parameters allows accurate device simulations to understand the remaining absorption and carrier-collection losses and to develop strategies to improve performance. PMID:29785230

  2. Development and experimental validation of computational methods to simulate abnormal thermal and structural environments

    NASA Astrophysics Data System (ADS)

    Moya, J. L.; Skocypec, R. D.; Thomas, R. K.

    1993-09-01

    Over the past 40 years, Sandia National Laboratories (SNL) has been actively engaged in research to improve the ability to accurately predict the response of engineered systems to abnormal thermal and structural environments. These engineered systems contain very hazardous materials. Assessing the degree of safety and risk afforded the public and the environment by these engineered systems is therefore of utmost importance. The ability to accurately predict the response of these systems to accidents (to abnormal environments) is required to assess that degree of safety. Before the effect of the abnormal environment on these systems can be determined, it is necessary to ascertain the nature of the environment, which in turn requires the ability to physically characterize and numerically simulate it. Historically, SNL has demonstrated the level of safety provided by these engineered systems by one of two approaches: a purely regulatory approach or a probabilistic risk assessment (PRA). This paper addresses the latter of the two approaches.

  3. A Probabilistic Model of Local Sequence Alignment That Simplifies Statistical Significance Estimation

    PubMed Central

    Eddy, Sean R.

    2008-01-01

    Sequence database searches require accurate estimation of the statistical significance of scores. Optimal local sequence alignment scores follow Gumbel distributions, but determining an important parameter of the distribution (λ) requires time-consuming computational simulation. Moreover, optimal alignment scores are less powerful than probabilistic scores that integrate over alignment uncertainty (“Forward” scores), but the expected distribution of Forward scores remains unknown. Here, I conjecture that both expected score distributions have simple, predictable forms when full probabilistic modeling methods are used. For a probabilistic model of local sequence alignment, optimal alignment bit scores (“Viterbi” scores) are Gumbel-distributed with constant λ = log 2, and the high scoring tail of Forward scores is exponential with the same constant λ. Simulation studies support these conjectures over a wide range of profile/sequence comparisons, using 9,318 profile-hidden Markov models from the Pfam database. This enables efficient and accurate determination of expectation values (E-values) for both Viterbi and Forward scores for probabilistic local alignments. PMID:18516236
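
    Under the conjectures above, E-values follow from closed-form tails; a hedged Python sketch (mu and tau are fitted location offsets and n_comparisons the database size — illustrative names, not HMMER's API):

        import math

        LAMBDA = math.log(2)  # the conjectured constant lambda for probabilistic models

        def viterbi_evalue(score, mu, n_comparisons):
            # Gumbel right tail: P(S >= x) = 1 - exp(-exp(-lambda*(x - mu)))
            p = 1.0 - math.exp(-math.exp(-LAMBDA * (score - mu)))
            return n_comparisons * p

        def forward_evalue(score, tau, n_comparisons):
            # Exponential high-scoring tail with the same lambda, above offset tau
            p = min(1.0, math.exp(-LAMBDA * (score - tau)))
            return n_comparisons * p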

  4. Fitting Neuron Models to Spike Trains

    PubMed Central

    Rossant, Cyrille; Goodman, Dan F. M.; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K.; Brette, Romain

    2011-01-01

    Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input–output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model. PMID:21415925

  5. Application of CT-PSF-based computer-simulated lung nodules for evaluating the accuracy of computer-aided volumetry.

    PubMed

    Funaki, Ayumu; Ohkubo, Masaki; Wada, Shinichi; Murao, Kohei; Matsumoto, Toru; Niizuma, Shinji

    2012-07-01

    With the wide dissemination of computed tomography (CT) screening for lung cancer, measuring nodule volume accurately with computer-aided volumetry software is increasingly important. Many studies for determining the accuracy of volumetry software have been performed using a phantom with artificial nodules. These phantom studies are limited, however, in their ability to reproduce nodules accurately and in the variety of sizes and densities required. Therefore, we propose a new approach of using computer-simulated nodules based on the point spread function measured in a CT system. The validity of the proposed method was confirmed by the excellent agreement obtained between computer-simulated nodules and phantom nodules regarding the volume measurements. A practical clinical evaluation of the accuracy of volumetry software was achieved by adding simulated nodules onto clinical lung images, including noise and artifacts. The tested volumetry software was revealed to be accurate to within 20% for nodules >5 mm with nodule-to-background (lung) CT-value differences of 400-600 HU. Such a detailed analysis can provide clinically useful information on the use of volumetry software in CT screening for lung cancer. We conclude that the proposed method is effective for evaluating the performance of computer-aided volumetry software.

  6. Successes and Challenges of Incompressible Flow Simulation

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, Cetin

    2003-01-01

    During the past thirty years, numerical methods and simulation tools for incompressible flows have been advanced as a subset of CFD discipline. Even though incompressible flows are encountered in many areas of engineering, simulation of compressible flow has been the major driver for developing computational algorithms and tools. This is probably due to rather stringent requirements for predicting aerodynamic performance characteristics of flight vehicles, while flow devices involving low speed or incompressible flow could be reasonably well designed without resorting to accurate numerical simulations. As flow devices are required to be more sophisticated and highly efficient, CFD tools become indispensable in fluid engineering for incompressible and low speed flow. This paper is intended to review some of the successes made possible by advances in computational technologies during the same period, and discuss some of the current challenges.

  7. How many molecules are required to measure a cyclic voltammogram?

    NASA Astrophysics Data System (ADS)

    Cutress, Ian J.; Compton, Richard G.

    2011-05-01

    The stochastic limit at which fully-reversible cyclic voltammetry can accurately be measured is investigated. Specifically, Monte Carlo GPU simulation is used to study low concentration cyclic voltammetry at a microdisk electrode over a range of scan rates and concentrations, and the results compared to the statistical limit as predicted by finite difference simulation based on Fick's Laws of Diffusion. Both Butler-Volmer and Marcus-Hush electrode kinetics are considered, simulated via random-walk methods, and shown to give identical results in the fast kinetic limit.
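
    A hedged one-dimensional Python sketch of the random-walk ingredient; the study's geometry is a three-dimensional microdisk, so the absorbing-boundary simplification and all names here are ours:

        import numpy as np

        rng = np.random.default_rng(0)

        def random_walk_transfers(n_mol, n_steps, dt, D, x0):
            # Molecules diffuse in 1-D toward an absorbing electrode at x = 0;
            # each arrival transfers one electron (current i ~ e * N(k) / dt).
            step = np.sqrt(2 * D * dt)          # RMS displacement per time step
            x = np.full(n_mol, x0)
            alive = np.ones(n_mol, dtype=bool)
            transfers = np.zeros(n_steps, dtype=int)
            for k in range(n_steps):
                x[alive] += step * rng.standard_normal(alive.sum())
                hit = alive & (x <= 0.0)
                transfers[k] = hit.sum()
                alive &= ~hit                   # absorbed molecules stop diffusing
            return transfers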

  8. An innovative exercise method to simulate orbital EVA work - Applications to PLSS automatic controls

    NASA Technical Reports Server (NTRS)

    Lantz, Renee; Vykukal, H.; Webbon, Bruce

    1987-01-01

    An exercise method has been proposed which may satisfy the current need for a laboratory simulation representative of muscular, cardiovascular, respiratory, and thermoregulatory responses to work during orbital extravehicular activity (EVA). The simulation incorporates arm crank ergometry with a unique body support mechanism that allows all body position stabilization forces to be reacted at the feet. By instituting this exercise method in laboratory experimentation, an advanced portable life support system (PLSS) thermoregulatory control system can be designed to more accurately reflect the specific work requirements of orbital EVA.

  9. Parametric model of human body shape and ligaments for patient-specific epidural simulation.

    PubMed

    Vaughan, Neil; Dubey, Venketesh N; Wee, Michael Y K; Isaacs, Richard

    2014-10-01

    This work builds upon the concept of matching a person's weight, height and age to their overall body shape to create an adjustable three-dimensional model. A versatile and accurate predictor of body size, shape and ligament thickness is required to improve simulation for medical procedures. A model which is adjustable for any size, shape, body mass, age or height would provide the ability to simulate procedures on patients of various body compositions. Three methods are provided for estimating body circumferences and ligament thicknesses for each patient. The first method uses empirical relations from body shape and size. The second method loads a dataset from a magnetic resonance imaging (MRI) scan or ultrasound scan containing accurate ligament measurements. The third method is a developed artificial neural network (ANN) which uses the MRI dataset as a training set and improves accuracy using error back-propagation, learning to increase accuracy as more patient data are added. The ANN is trained and tested with clinical data from 23,088 patients. The ANN can predict subscapular skinfold thickness within 3.54 mm, waist circumference 3.92 cm, thigh circumference 2.00 cm, arm circumference 1.21 cm, calf circumference 1.40 cm, and triceps skinfold thickness 3.43 mm. An alternative regression-analysis method gave slightly less accurate predictions overall: subscapular skinfold thickness within 3.75 mm, waist circumference 3.84 cm, thigh circumference 2.16 cm, arm circumference 1.34 cm, calf circumference 1.46 cm, triceps skinfold thickness 3.89 mm. These calculations are used to display a 3D graphics model of the patient's body shape using OpenGL and adjusted by 3D mesh deformations. A patient-specific epidural simulator is presented using the developed body shape model, able to simulate needle insertion procedures on a 3D model of any patient size and shape. The developed ANN gave the most accurate results for body shape, size and ligament thickness. The resulting simulator offers the experience of simulating needle insertions accurately whilst allowing for variation in patient body mass, height or age.
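
    As a hedged illustration of the ANN regression step, using scikit-learn's MLPRegressor (trained by error back-propagation) as a stand-in; the feature and target columns are illustrative, not the study's schema:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Inputs: weight (kg), height (m), age; targets: waist and thigh circumference (cm).
        X = np.array([[70.0, 1.75, 34], [55.0, 1.62, 28]])
        y = np.array([[88.0, 52.0], [74.0, 47.5]])

        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
        model.fit(X, y)                       # back-propagation on the toy data
        print(model.predict([[62.0, 1.68, 40]]))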

  10. Numerical Simulations of Hypersonic Boundary Layer Transition

    NASA Astrophysics Data System (ADS)

    Bartkowicz, Matthew David

    Numerical schemes for supersonic flows tend to use large amounts of artificial viscosity for stability. This tends to damp out the small scale structures in the flow. Recently some low-dissipation methods have been proposed which selectively eliminate the artificial viscosity in regions which do not require it. This work builds upon the low-dissipation method of Subbareddy and Candler, which uses the flux vector splitting method of Steger and Warming but identifies the dissipation portion in order to eliminate it. Computing accurate fluxes typically relies on large grid stencils or coupled linear systems that become computationally expensive to solve. Unstructured grids allow for CFD solutions to be obtained on complex geometries; unfortunately, it then becomes difficult to create a large stencil or the coupled linear system. Accurate solutions require grids that quickly become too large to be feasible. In this thesis a method is proposed to obtain more accurate solutions using relatively local data, making it suitable for unstructured grids composed of hexahedral elements. Fluxes are reconstructed using local gradients to extend the range of data used. The method is then validated on several test problems. Simulations of boundary layer transition are then performed. An elliptic cone at Mach 8 is simulated based on an experiment at the Princeton Gasdynamics Laboratory. A simulated acoustic noise boundary condition is imposed to model the noisy conditions of the wind tunnel, and the transitioning boundary layer is observed. A computation of an isolated roughness element is performed based on an experiment in Purdue's Mach 6 quiet wind tunnel. The mechanism for transition is identified as an instability in the upstream separation region and a comparison is made to experimental data. In the CFD, a fully turbulent boundary layer is observed downstream.

  11. PASTA: Ultra-Large Multiple Sequence Alignment for Nucleotide and Amino-Acid Sequences.

    PubMed

    Mirarab, Siavash; Nguyen, Nam; Guo, Sheng; Wang, Li-San; Kim, Junhyong; Warnow, Tandy

    2015-05-01

    We introduce PASTA, a new multiple sequence alignment algorithm. PASTA uses a new technique to produce an alignment given a guide tree that enables it to be both highly scalable and very accurate. We present a study on biological and simulated data with up to 200,000 sequences, showing that PASTA produces highly accurate alignments, improving on the accuracy and scalability of the leading alignment methods (including SATé). We also show that trees estimated on PASTA alignments are highly accurate--slightly better than SATé trees, but with substantial improvements relative to other methods. Finally, PASTA is faster than SATé, highly parallelizable, and requires relatively little memory.

  12. Ab initio molecular dynamics with nuclear quantum effects at classical cost: Ring polymer contraction for density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marsalek, Ondrej; Markland, Thomas E., E-mail: tmarkland@stanford.edu

    Path integral molecular dynamics simulations, combined with an ab initio evaluation of interactions using electronic structure theory, incorporate the quantum mechanical nature of both the electrons and nuclei, which are essential to accurately describe systems containing light nuclei. However, path integral simulations have traditionally required a computational cost around two orders of magnitude greater than treating the nuclei classically, making them prohibitively costly for most applications. Here we show that the cost of path integral simulations can be dramatically reduced by extending our ring polymer contraction approach to ab initio molecular dynamics simulations. By using density functional tight binding as a reference system, we show that our ring polymer contraction scheme gives rapid and systematic convergence to the full path integral density functional theory result. We demonstrate the efficiency of this approach in ab initio simulations of liquid water and the reactive protonated and deprotonated water dimer systems. We find that the vast majority of the nuclear quantum effects are accurately captured using contraction to just the ring polymer centroid, which requires the same number of density functional theory calculations as a classical simulation. Combined with a multiple time step scheme using the same reference system, which allows the time step to be increased, this approach is as fast as a typical classical ab initio molecular dynamics simulation and 35× faster than a full path integral calculation, while still exactly including the quantum sampling of nuclei. This development thus offers a route to routinely include nuclear quantum effects in ab initio molecular dynamics simulations at negligible computational cost.
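
    A minimal sketch of the contraction-to-centroid idea, assuming placeholder force routines f_ref (cheap reference, e.g. tight binding) and f_dft (expensive DFT): the expensive correction is evaluated only once, at the bead centroid:

        import numpy as np

        def contracted_forces(beads, f_ref, f_dft):
            # beads: array of shape (P, N, 3) -- P ring-polymer replicas of N atoms.
            centroid = beads.mean(axis=0)
            # Expensive minus cheap forces, computed at the centroid only.
            df = f_dft(centroid) - f_ref(centroid)
            # Every bead gets the cheap force plus the shared centroid correction,
            # so only one expensive evaluation is needed per step.
            return np.array([f_ref(q) + df for q in beads])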

  13. Development of Improved Models, Stochasticity, and Frameworks for the MIT Extensible Air Network Simulation

    NASA Technical Reports Server (NTRS)

    Clarke, John-Paul

    2004-01-01

    MEANS, the MIT Extensible Air Network Simulation, was created in February of 2001, and has been developed with support from NASA Ames since August of 2001. MEANS is a simulation tool which is designed to maximize fidelity without requiring data of such a low level as to preclude easy examination of alternative scenarios. To this end, MEANS is structured in a modular fashion to allow more detailed components to be brought in when desired, and left out when they would only be an impediment. Traditionally, one of the difficulties with high-fidelity models is that they require a level of detail in their data that is difficult to obtain. For analysis of past scenarios, the required data may not have been collected, or may be considered proprietary and thus difficult for independent researchers to obtain. For hypothetical scenarios, generation of the data is sufficiently difficult to be a task in and of itself. Often, simulations designed by a researcher will model exactly one element of the problem well and in detail, while assuming away other parts of the problem which are not of interest or for which data is not available. While these models are useful for working with the task at hand, they are very often not applicable to future problems. The MEANS simulation attempts to address these problems by using a modular design which provides components of varying fidelity for each aspect of the simulation. This allows the most accurate model for which data is available to be used. It also provides for easy analysis of sensitivity to data accuracy. This can be particularly useful in the case where accurate data is available for some subset of the situations that are to be considered. Furthermore, the ability to use the same model while examining effects on different parts of a system reduces the time spent learning the simulation, and provides for easier comparisons between changes to different parts of the system.

  14. Novel high-fidelity realistic explosion damage simulation for urban environments

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoqing; Yadegar, Jacob; Zhu, Youding; Raju, Chaitanya; Bhagavathula, Jaya

    2010-04-01

    Realistic building damage simulation has a significant impact in modern modeling and simulation systems, especially in the diverse panoply of military and civil applications where these simulation systems are widely used for personnel training, critical mission planning, disaster management, etc. Realistic building damage simulation should incorporate accurate physics-based explosion models, rubble generation, rubble flyout, and interactions between flying rubble and their surrounding entities. However, none of the existing building damage simulation systems achieves the level of realism required for effective military applications. In this paper, we present a novel physics-based, high-fidelity, and runtime efficient explosion simulation system to realistically simulate destruction to buildings. In the proposed system, a family of novel blast models is applied to accurately and realistically simulate explosions based on static and/or dynamic detonation conditions. The system also accounts for rubble pile formation and applies a generic and scalable multi-component based object representation to describe scene entities, and a highly scalable agent-subsumption architecture and scheduler to schedule clusters of sequential and parallel events. The proposed system utilizes a highly efficient and scalable tetrahedral decomposition approach to realistically simulate rubble formation. Experimental results demonstrate that the proposed system has the capability to realistically simulate rubble generation, rubble flyout and their primary and secondary impacts on surrounding objects including buildings, constructions, vehicles and pedestrians in clusters of sequential and parallel damage events.

  15. Comparison of Varied Precipitation and Soil Data Types for Use in Watershed Modeling.

    EPA Science Inventory

    The accuracy of water quality and quantity models depends on calibration to ensure reliable simulations of streamflow, which in turn requires accurate climatic forcing data. Precipitation is widely acknowledged to be the largest source of uncertainty in watershed modeling, and so...

  16. COMPARISON OF MEASURED AND MODELED SURFACE FLUXES OF HEAT, MOISTURE, AND CHEMICAL DRY DEPOSITION

    EPA Science Inventory

    Realistic air quality modeling requires accurate simulation of both meteorological and chemical processes within the planetary boundary layer (PBL). n vegetated areas, the primary pathway for surface fluxes of moisture as well a many gaseous chemicals is through vegetative transp...

  17. Models to teach lung sonopathology and ultrasound-guided thoracentesis.

    PubMed

    Wojtczak, Jacek A

    2014-12-01

    Lung sonography allows rapid diagnosis of lung emergencies such as pulmonary edema, hemothorax or pneumothorax. The ability to timely diagnose an intraoperative pneumothorax is an important skill for the anesthesiologist. However, lung ultrasound exams require an interpretation of not only real images but also complex acoustic artifacts such as A-lines and B-lines. Therefore, appropriate training to gain proficiency is important. A simulated environment using ultrasound phantom models allows controlled, supervised learning. We have developed hybrid models that combine dry or wet polyurethane foams, porcine rib cages and a human hand simulating a rib cage. These models simulate pulmonary sonopathology fairly accurately and allow supervised teaching of lung sonography with immediate feedback. In-vitro models can also facilitate learning of procedural skills, improving transducer and needle positioning and movement, rapid recognition of thoracic anatomy and hand-eye coordination skills. We describe a new model to teach ultrasound-guided thoracentesis. This model consists of the experimenter's hand placed on top of a water-filled container with a wet foam. Metacarpal bones of the human hand simulate a rib cage and a wet foam simulates a diseased lung immersed in the pleural fluid. Positive fluid flow offers users feedback when a simulated pleural effusion is accurately assessed.

  18. Climate specific thermomechanical fatigue of flat plate photovoltaic module solder joints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosco, Nick; Silverman, Timothy J.; Kurtz, Sarah

    FEM simulations of PbSn solder fatigue damage are used to evaluate seven cities that represent a variety of climatic zones. It is shown that the rate of solder fatigue damage is not ranked with the cities' climate designations. For an accurate ranking, the mean maximum daily temperature, daily temperature change and a characteristic of clouding events are all required. A physics-based empirical equation is presented that accurately calculates solder fatigue damage according to these three factors. An FEM comparison of solder damage accumulated through service and thermal cycling demonstrates the number of cycles required for an equivalent exposure. For an equivalent 25-year exposure, the number of thermal cycles (-40 degrees C to 85 degrees C) required ranged from roughly 100 to 630 for the cities examined. It is demonstrated that increasing the maximum cycle temperature may significantly reduce the number of thermal cycles required for an equivalent exposure.
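
    The paper's empirical equation is not reproduced in the abstract; purely as a placeholder for its three-factor form, a hypothetical Coffin-Manson/Arrhenius-style sketch (all coefficients and the functional form are ours, not the authors'):

        import math

        def damage_rate(mean_tmax_c, delta_t, n_cloud_events, a=2.0, ea_ev=0.12, c=1.0):
            # Hypothetical form only: damage grows with a power of the daily
            # temperature swing, an Arrhenius factor in the peak temperature,
            # and the count of clouding-induced cycling events.
            k_ev = 8.617e-5                      # Boltzmann constant, eV/K
            t_max = mean_tmax_c + 273.15         # peak temperature in kelvin
            return c * (delta_t ** a) * math.exp(-ea_ev / (k_ev * t_max)) * n_cloud_events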

  19. Atmospheric density models

    NASA Technical Reports Server (NTRS)

    Mueller, A. C.

    1977-01-01

    An atmospheric model developed by Jacchia, quite accurate but requiring a large amount of computer storage and execution time, was found to be ill-suited for the space shuttle onboard program. The development of a simple atmospheric density model to simulate the Jacchia model was studied. Required characteristics including variation with solar activity, diurnal variation, variation with geomagnetic activity, semiannual variation, and variation with height were met by the new atmospheric density model.

  20. Ground-water/surface-water responses to global climate simulations, Santa Clara-Calleguas basin, Ventura County, California, 1950-93

    USGS Publications Warehouse

    Hanson, Randall T.; Dettinger, Michael D.

    2005-01-01

    Climate variations can play an important, if not always crucial, role in successful conjunctive management of ground water and surface water resources. This will require accurate accounting of the links between variations in climate, recharge, and withdrawal from the resource systems, accurate projection or predictions of the climate variations, and accurate simulation of the responses of the resource systems. To assess linkages and predictability of climate influences on conjunctive management, global climate model (GCM) simulated precipitation rates were used to estimate inflows and outflows from a regional ground water model (RGWM) of the coastal aquifers of the Santa Clara-Calleguas Basin at Ventura, California, for 1950 to 1993. Interannual to interdecadal time scales of the El Niño Southern Oscillation (ENSO) and Pacific Decadal Oscillation (PDO) climate variations are imparted to simulated precipitation variations in the Southern California area and are realistically imparted to the simulated ground water level variations through the climate-driven recharge (and discharge) variations. For example, the simulated average ground water level response at a key observation well in the basin to ENSO variations of tropical Pacific sea surface temperatures is 1.2 m/°C, compared to 0.9 m/°C in observations. This close agreement shows that the GCM-RGWM combination can translate global scale climate variations into realistic local ground water responses. Probability distributions of simulated ground water level excursions above a local water level threshold for potential seawater intrusion compare well to the corresponding distributions from observations and historical RGWM simulations, demonstrating the combination's potential usefulness for water management and planning. Thus the GCM-RGWM combination could be used for planning purposes and — when the GCM forecast skills are adequate — for near term predictions.

  2. Flight Validation of Mars Mission Technologies

    NASA Technical Reports Server (NTRS)

    Eberspeaker, P. J.

    2000-01-01

    Effective exploration and characterization of Mars will require the deployment of numerous surface probes, tethered balloon stations and free-flying balloon systems as well as larger landers and orbiting satellite systems. Since launch opportunities exist approximately every two years it is extremely critical that each and every mission maximize its potential for success. This will require significant testing of each system in an environment that simulates the actual operational environment as closely as possible. Analytical techniques and laboratory testing go a long way toward mitigating the inherent risks associated with space exploration; however, they fall short of accurately simulating the unpredictable operational environment in which these systems must function.

  3. Z-scan theoretical and experimental studies for accurate measurements of the nonlinear refractive index and absorption of optical glasses near damage threshold

    NASA Astrophysics Data System (ADS)

    Olivier, Thomas; Billard, Franck; Akhouayri, Hassan

    2004-06-01

    Self-focusing is one of the dramatic phenomena that may occur during the propagation of a high power laser beam in a nonlinear material. This phenomenon leads to a degradation of the wave front and may also lead to a photoinduced damage of the material. Realistic simulations of the propagation of high power laser beams require an accurate knowledge of the nonlinear refractive index γ. In the particular case of fused silica and in the nanosecond regime, it seems that electronic mechanisms as well as electrostriction and thermal effects can lead to a significant refractive index variation. Compared to the different methods used to measure this parameter, the Z-scan method is simple, offers good sensitivity and may give absolute measurements if the incident beam is accurately studied. However, this method requires a very good knowledge of the incident beam and of its propagation inside a nonlinear sample. We used a split-step propagation algorithm to simulate Z-scan curves for arbitrary beam shape, sample thickness and nonlinear phase shift. According to our simulations and a rigorous analysis of the Z-scan measured signal, it appears that some abusive approximations lead to very important errors. Thus, by reducing possible errors on the interpretation of Z-scan experimental studies, we performed accurate measurements of the nonlinear refractive index of fused silica that show the significant contribution of nanosecond mechanisms.

  4. Integrated Medical Model (IMM) 4.0 Enhanced Functionalities

    NASA Technical Reports Server (NTRS)

    Young, M.; Keenan, A. B.; Saile, L.; Boley, L. A.; Walton, M. E.; Shah, R. V.; Kerstman, E. L.; Myers, J. G.

    2015-01-01

    The Integrated Medical Model is a probabilistic simulation model that uses input data on 100 medical conditions to simulate expected medical events, the resources required to treat, and the resulting impact to the mission for specific crew and mission characteristics. The newest development version of IMM, IMM v4.0, adds capabilities that remove some of the conservative assumptions that underlie the current operational version, IMM v3. While IMM v3 provides the framework to simulate whether a medical event occurred, IMM v4.0 also simulates when the event occurred during a mission timeline. This allows for more accurate estimation of mission time lost and resource utilization. In addition to the mission timeline, IMM v4.0 features two enhancements that address IMM v3 assumptions regarding medical event treatment. Medical events in IMM v3 are assigned the untreated outcome if any resource required to treat the event was unavailable. IMM v4.0 allows for partially treated outcomes that are proportional to the amount of required resources available, thus removing the dichotomous treatment assumption. An additional capability of IMM v4.0 is to use an alternative medical resource when the primary resource assigned to the condition is depleted, more accurately reflecting the real-world system. The additional capabilities defining IMM v4.0 (the mission timeline, partial treatment, and alternative resources) result in more realistic predicted mission outcomes. The primary model outcomes of IMM v4.0 for the ISS6 mission, including mission time lost, probability of evacuation, and probability of loss of crew life, are compared to those produced by the current operational version of IMM to showcase enhanced prediction capabilities.

  5. Predictions of Transient Flame Lift-Off Length With Comparison to Single-Cylinder Optical Engine Experiments

    DOE PAGES

    Senecal, P. K.; Pomraning, E.; Anders, J. W.; ...

    2014-05-28

    A state-of-the-art, grid-convergent simulation methodology was applied to three-dimensional calculations of a single-cylinder optical engine. A mesh resolution study on a sector-based version of the engine geometry further verified the RANS-based cell size recommendations previously presented by Senecal et al. (“Grid Convergent Spray Models for Internal Combustion Engine CFD Simulations,” ASME Paper No. ICEF2012-92043). Convergence of cylinder pressure, flame lift-off length, and emissions was achieved for an adaptive mesh refinement cell size of 0.35 mm. Furthermore, full geometry simulations, using mesh settings derived from the grid convergence study, resulted in excellent agreement with measurements of cylinder pressure, heat release rate, and NOx emissions. On the other hand, the full geometry simulations indicated that the flame lift-off length is not converged at 0.35 mm for jets not aligned with the computational mesh. Further simulations suggested that the flame lift-off lengths for both the nonaligned and aligned jets appear to be converged at 0.175 mm. With this increased mesh resolution, both the trends and magnitudes in flame lift-off length were well predicted with the current simulation methodology. Good agreement between the overall predicted flame behavior and the available chemiluminescence measurements was also achieved. Our present study indicates that cell size requirements for accurate prediction of full geometry flame lift-off lengths may be stricter than those for global combustion behavior. This may be important when accurate soot predictions are required.

  7. Initial conditions for accurate N-body simulations of massive neutrino cosmologies

    NASA Astrophysics Data System (ADS)

    Zennaro, M.; Bel, J.; Villaescusa-Navarro, F.; Carbone, C.; Sefusatti, E.; Guzzo, L.

    2017-04-01

    The set-up of the initial conditions in cosmological N-body simulations is usually implemented by rescaling the desired low-redshift linear power spectrum to the required starting redshift consistently with the Newtonian evolution of the simulation. The implementation of this practical solution requires more care in the context of massive neutrino cosmologies, mainly because of the non-trivial scale-dependence of the linear growth that characterizes these models. In this work, we consider a simple two-fluid, Newtonian approximation for cold dark matter and massive neutrino perturbations that can reproduce the cold matter linear evolution predicted by Boltzmann codes such as CAMB or CLASS with a 0.1 per cent accuracy or below for all redshifts relevant to non-linear structure formation. We use this description, in the first place, to quantify the systematic errors induced by several approximations often assumed in numerical simulations, including the typical set-up of the initial conditions for massive neutrino cosmologies adopted in previous works. We then take advantage of the flexibility of this approach to rescale the late-time linear power spectra to the simulation initial redshift, in order to be as consistent as possible with the dynamics of the N-body code and the approximations it assumes. We implement our method in a public code (REPS, “rescaled power spectra for initial conditions with massive neutrinos”, https://github.com/matteozennaro/reps) providing the initial displacements and velocities for cold dark matter and neutrino particles that will allow accurate, i.e. 1 per cent level, numerical simulations for this cosmological scenario.
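
    The rescaling step reduces to dividing by the squared scale-dependent growth between the two redshifts; a hedged sketch with growth_two_fluid as a placeholder for such a two-fluid growth solver (cf. the REPS code):

        import numpy as np

        def rescale_pk(k, pk_target, z_ini, z_target, growth_two_fluid):
            # Scale-dependent growth ratio D(k; z_ini) / D(k; z_target) from the
            # same two-fluid approximation the N-body dynamics assumes.
            D = growth_two_fluid(k, z_ini) / growth_two_fluid(k, z_target)
            # P(k, z_ini) consistent with the simulation's Newtonian evolution.
            return pk_target * D ** 2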

  9. The inverse electroencephalography pipeline

    NASA Astrophysics Data System (ADS)

    Weinstein, David Michael

    The inverse electroencephalography (EEG) problem is defined as determining which regions of the brain are active based on remote measurements recorded with scalp EEG electrodes. An accurate solution to this problem would benefit both fundamental neuroscience research and clinical neuroscience applications. However, constructing accurate patient-specific inverse EEG solutions requires complex modeling, simulation, and visualization algorithms, and to date only a few systems have been developed that provide such capabilities. In this dissertation, a computational system for generating and investigating patient-specific inverse EEG solutions is introduced, and the requirements for each stage of this Inverse EEG Pipeline are defined and discussed. While the requirements of many of the stages are satisfied with existing algorithms, others have motivated research into novel modeling and simulation methods. The principal technical results of this work include novel surface-based volume modeling techniques, an efficient construction for the EEG lead field, and the Open Source release of the Inverse EEG Pipeline software for use by the bioelectric field research community. In this work, the Inverse EEG Pipeline is applied to three research problems in neurology: comparing focal and distributed source imaging algorithms; separating measurements into independent activation components for multifocal epilepsy; and localizing the cortical activity that produces the P300 effect in schizophrenia.
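
    For orientation, one standard linear inverse built on a lead-field matrix L (a regularized minimum-norm estimate); this is a generic sketch, not the dissertation's specific pipeline, and lam is an illustrative regularization weight:

        import numpy as np

        def minimum_norm(L, y, lam=1e-2):
            # L: (m electrodes x n sources) lead field; y: scalp measurements.
            # x_hat = L^T (L L^T + lam I)^(-1) y  -- Tikhonov-regularized inverse.
            m = L.shape[0]
            G = L @ L.T + lam * np.eye(m)
            return L.T @ np.linalg.solve(G, y)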

  10. Optical laboratory solution and error model simulation of a linear time-varying finite element equation

    NASA Technical Reports Server (NTRS)

    Taylor, B. K.; Casasent, D. P.

    1989-01-01

    The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.

  11. Power Control and Monitoring Requirements for Thermal Vacuum/Thermal Balance Testing of the MAP Observatory

    NASA Technical Reports Server (NTRS)

    Johnson, Chris; Hinkle, R. Kenneth (Technical Monitor)

    2002-01-01

    The specific heater control requirements for the thermal vacuum and thermal balance testing of the Microwave Anisotropy Probe (MAP) Observatory at the Goddard Space Flight Center (GSFC) in Greenbelt, Maryland are described. The testing was conducted in the 10m wide x 18.3m high Space Environment Simulator (SES) Thermal Vacuum Facility. The MAP thermal testing required accurate quantification of spacecraft and fixture power levels while minimizing heater electrical emissions. The special requirements of the MAP test necessitated construction of five (5) new heater racks.

  12. Satellite Communication Hardware Emulation System (SCHES)

    NASA Technical Reports Server (NTRS)

    Kaplan, Ted

    1993-01-01

    The Satellite Communication Hardware Emulation System (SCHES) is a powerful simulator that emulates the hardware used in TDRSS links. SCHES is a true bit-by-bit simulator that models communications hardware accurately enough to be used as a verification mechanism for actual hardware tests on user spacecraft. As a credit to its modular design, SCHES is easily configurable to model any user satellite communication link, though some development may be required to tailor existing software to user-specific hardware.

  13. ASSESSING CHILDREN'S EXPOSURES TO PESTICIDES: AN IMPORTANT APPLICATION OF THE STOCHASTIC HUMAN EXPOSURE AND DOSE SIMULATION MODEL (SHEDS)

    EPA Science Inventory

    Accurately quantifying human exposures and doses of various populations to environmental pollutants is critical for the Agency to assess and manage human health risks. For example, the Food Quality Protection Act of 1996 (FQPA) requires EPA to consider aggregate human exposure ...

  14. Inertial Sensor Assisted Acquisition, Tracking, and Pointing for High Data Rate Free Space Optical Communications

    NASA Technical Reports Server (NTRS)

    Lee, Shinhak; Ortiz, Gerry G.

    2003-01-01

    We discuss use of inertial sensors to facilitate deep space optical communications. Implementation of this concept requires accurate and wide bandwidth inertial sensors. In this presentation, the principal concept and algorithm using linear accelerometers will be given along with the simulation and experimental results.

  15. Modeling Cometary Coma with a Three Dimensional, Anisotropic Multiple Scattering Distributed Processing Code

    NASA Technical Reports Server (NTRS)

    Luchini, Chris B.

    1997-01-01

    Development of camera and instrument simulations for space exploration requires the development of scientifically accurate models of the objects to be studied. Several planned cometary missions have prompted the development of a three dimensional, multi-spectral, anisotropic multiple scattering model of cometary coma.

  16. Development of cropland management dataset to support U.S. SWAT assessments

    USDA-ARS?s Scientific Manuscript database

    The Soil and Water Assessment Tool (SWAT) is a widely used hydrologic/water quality simulation model in the U.S. Process-based models like SWAT require a great deal of data to accurately represent the natural world, including topography, landuse, soils, weather, and management. With the exception ...

  17. Color visual simulation applications at the Defense Mapping Agency

    NASA Astrophysics Data System (ADS)

    Simley, J. D.

    1984-09-01

    The Defense Mapping Agency (DMA) produces the Digital Landmass System data base to provide culture and terrain data in support of numerous aircraft simulators. In order to conduct data base and simulation quality control and requirements analysis, DMA has developed the Sensor Image Simulator which can rapidly generate visual and radar static scene digital simulations. The use of color in visual simulation allows the clear portrayal of both landcover and terrain data, whereas the initial black and white capabilities were restricted in this role and thus found limited use. Color visual simulation has many uses in analysis to help determine the applicability of current and prototype data structures to better meet user requirements. Color visual simulation is also significant in quality control since anomalies can be more easily detected in natural appearing forms of the data. The realism and efficiency possible with advanced processing and display technology, along with accurate data, make color visual simulation a highly effective medium in the presentation of geographic information. As a result, digital visual simulation is finding increased potential as a special purpose cartographic product. These applications are discussed and related simulation examples are presented.

  18. Unbiased simulation of near-Clifford quantum circuits

    DOE PAGES

    Bennink, Ryan S.; Ferragut, Erik M.; Humble, Travis S.; ...

    2017-06-28

    Modeling and simulation are essential for predicting and verifying the behavior of fabricated quantum circuits, but existing simulation methods are either impractically costly or require an unrealistic simplification of error processes. In this paper, we present a method of simulating noisy Clifford circuits that is both accurate and practical in experimentally relevant regimes. In particular, the cost is weakly exponential in the size and the degree of non-Cliffordness of the circuit. Our approach is based on the construction of exact representations of quantum channels as quasiprobability distributions over stabilizer operations, which are then sampled, simulated, and weighted to yield unbiased statistical estimates of circuit outputs and other observables. As a demonstration of these techniques, we simulate a Steane [[7,1,3]] code.
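
    A hedged sketch of the sampling-and-weighting step: stabilizer operations are drawn with probability proportional to |q_i|, and the sign and 1-norm reweighting keeps the estimator unbiased (run_circuit is a placeholder for a stabilizer simulation of the circuit with the sampled operation substituted in):

        import numpy as np

        rng = np.random.default_rng(1)

        def quasiprob_estimate(quasi_p, ops, n_samples, run_circuit):
            # quasi_p: quasiprobabilities (may be negative) over stabilizer ops.
            q = np.asarray(quasi_p, dtype=float)
            norm = np.abs(q).sum()              # sampling overhead ("negativity")
            probs = np.abs(q) / norm
            total = 0.0
            for _ in range(n_samples):
                i = rng.choice(len(ops), p=probs)
                weight = norm * np.sign(q[i])   # sign correction keeps it unbiased
                total += weight * run_circuit(ops[i])
            return total / n_samples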

  19. Method for simulating discontinuous physical systems

    DOEpatents

    Baty, Roy S.; Vaughn, Mark R.

    2001-01-01

    The mathematical foundations of conventional numerical simulation of physical systems provide no consistent description of the behavior of such systems when subjected to discontinuous physical influences. As a result, the numerical simulation of such problems requires ad hoc encoding of specific experimental results in order to address the behavior of such discontinuous physical systems. In the present invention, these foundations are replaced by a new combination of generalized function theory and nonstandard analysis. The result is a class of new approaches to the numerical simulation of physical systems which allows the accurate and well-behaved simulation of discontinuous and other difficult physical systems, as well as simpler physical systems. Applications of this new class of numerical simulation techniques to process control, robotics, and apparatus design are outlined.

  20. An experimental comparison of the Marcus-Hush and Butler-Volmer descriptions of electrode kinetics applied to cyclic voltammetry. The one electron reductions of europium (III) and 2-methyl-2-nitropropane studied at a mercury microhemisphere electrode

    NASA Astrophysics Data System (ADS)

    Henstridge, Martin C.; Wang, Yijun; Limon-Petersen, Juan G.; Laborda, Eduardo; Compton, Richard G.

    2011-11-01

    We present a comparative experimental evaluation of the Butler-Volmer and Marcus-Hush models using cyclic voltammetry at a microelectrode. Numerical simulations are used to fit experimental voltammetry of the one electron reductions of europium (III) and 2-methyl-2-nitropropane, in water and acetonitrile, respectively, at a mercury microhemisphere electrode. For Eu (III) very accurate fits to experiment were obtained over a wide range of scan rates using Butler-Volmer kinetics, whereas the Marcus-Hush model was less accurate. The reduction of 2-methyl-2-nitropropane was well simulated by both models, however Marcus-Hush required a reorganisation energy lower than expected.

  1. Challenges in reducing the computational time of QSTS simulations for distribution system analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deboever, Jeremiah; Zhang, Xiaochen; Reno, Matthew J.

    The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.
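
    The first challenge, the sheer number of power flows, is easy to quantify; a back-of-the-envelope sketch with placeholder solver hooks (solve_power_flow and update_controllers are assumptions, not a real solver's API):

        # A year at 1-second resolution is ~31.5 million sequential power flows.
        SECONDS_PER_YEAR = 365 * 24 * 3600      # 31,536,000 time steps

        def qsts(feeder_state, solve_power_flow, update_controllers):
            for t in range(SECONDS_PER_YEAR):
                v = solve_power_flow(feeder_state)                     # one flow per step
                feeder_state = update_controllers(feeder_state, v, t)  # time coupling
            return feeder_state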

  2. Low-dimensional, morphologically accurate models of subthreshold membrane potential

    PubMed Central

    Kellems, Anthony R.; Roos, Derrick; Xiao, Nan; Cox, Steven J.

    2009-01-01

    The accurate simulation of a neuron’s ability to integrate distributed synaptic input typically requires the simultaneous solution of tens of thousands of ordinary differential equations, for in order to understand how a cell distinguishes between input patterns we apparently need a model that is biophysically accurate down to the space scale of a single spine, i.e., 1 μm. We argue here that one can retain this highly detailed input structure while dramatically reducing the overall system dimension if one is content to accurately reproduce the associated membrane potential at a small number of places, e.g., at the site of action potential initiation, under subthreshold stimulation. The latter hypothesis permits us to approximate the active cell model with an associated quasi-active model, which in turn we reduce by both time-domain (Balanced Truncation) and frequency-domain (ℋ2 approximation of the transfer function) methods. We apply and contrast these methods on a suite of typical cells, achieving up to four orders of magnitude in dimension reduction and an associated speed-up in the simulation of dendritic democratization and resonance. We also append a threshold mechanism and indicate that this reduction has the potential to deliver an accurate quasi-integrate-and-fire model. PMID:19172386
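
    As a hedged sketch of the Balanced Truncation step named above (standard square-root form, assuming a stable, minimal quasi-active system so that both Gramians are positive definite):

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

        def balanced_truncation(A, B, C, r):
            # Reduce the stable system x' = Ax + Bu, y = Cx to order r.
            Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
            Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
            Lc = cholesky(Wc, lower=True)
            Lo = cholesky(Wo, lower=True)
            U, s, Vt = svd(Lo.T @ Lc)          # s holds the Hankel singular values
            S = np.diag(1.0 / np.sqrt(s[:r]))
            T = Lc @ Vt[:r].T @ S              # keep the r largest Hankel directions
            Ti = S @ U[:, :r].T @ Lo.T
            return Ti @ A @ T, Ti @ B, C @ T   # reduced (Ar, Br, Cr)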

  3. Research on the Rapid and Accurate Positioning and Orientation Approach for Land Missile-Launching Vehicle

    PubMed Central

    Li, Kui; Wang, Lei; Lv, Yanhong; Gao, Pengyu; Song, Tianxiao

    2015-01-01

    Getting a land vehicle’s accurate position, azimuth and attitude rapidly is significant for vehicle based weapons’ combat effectiveness. In this paper, a new approach to acquire vehicle’s accurate position and orientation is proposed. It uses biaxial optical detection platform (BODP) to aim at and lock in no less than three pre-set cooperative targets, whose accurate positions are measured beforehand. Then, it calculates the vehicle’s accurate position, azimuth and attitudes by the rough position and orientation provided by vehicle based navigation systems and no less than three couples of azimuth and pitch angles measured by BODP. The proposed approach does not depend on Global Navigation Satellite System (GNSS), thus it is autonomous and difficult to interfere. Meanwhile, it only needs a rough position and orientation as algorithm’s iterative initial value, consequently, it does not have high performance requirement for Inertial Navigation System (INS), odometer and other vehicle based navigation systems, even in high precise applications. This paper described the system’s working procedure, presented theoretical deviation of the algorithm, and then verified its effectiveness through simulation and vehicle experiments. The simulation and experimental results indicate that the proposed approach can achieve positioning and orientation accuracy of 0.2 m and 20″ respectively in less than 3 min. PMID:26492249
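
    A hedged sketch of the position fix from measured azimuth/pitch pairs to known targets, seeded by the rough INS solution; this simplification solves position only, with vehicle attitude assumed known, unlike the full algorithm, and ignores angle wrap-around:

        import numpy as np
        from scipy.optimize import least_squares

        def angles_to(target, pos):
            # Azimuth and pitch of the line of sight from pos to a known target.
            d = target - pos
            az = np.arctan2(d[1], d[0])
            pitch = np.arctan2(d[2], np.hypot(d[0], d[1]))
            return az, pitch

        def locate(targets, measured, rough_pos):
            # measured: flat array [az1, pitch1, az2, pitch2, ...] from the BODP;
            # rough_pos: coarse INS/odometer fix used as the iterative initial value.
            def residuals(p):
                pred = np.concatenate([angles_to(t, p) for t in targets])
                return pred - measured
            return least_squares(residuals, rough_pos).x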

  5. Fast skin dose estimation system for interventional radiology

    PubMed Central

    Takata, Takeshi; Kotoku, Jun’ichi; Maejima, Hideyuki; Kumagai, Shinobu; Arai, Norikazu; Kobayashi, Takenori; Shiraishi, Kenshiro; Yamamoto, Masayoshi; Kondo, Hiroshi; Furui, Shigeru

    2018-01-01

    To minimise the radiation dermatitis related to interventional radiology (IR), rapid and accurate dose estimation has been sought for all procedures. We propose a technique for estimating the patient skin dose rapidly and accurately using Monte Carlo (MC) simulation with a graphical processing unit (GPU, GTX 1080; Nvidia Corp.). The skin dose distribution is simulated based on an individual patient’s computed tomography (CT) dataset for fluoroscopic conditions after the CT dataset has been segmented into air, water and bone based on pixel values. The skin is assumed to be one layer at the outer surface of the body. Fluoroscopic conditions are obtained from a log file of a fluoroscopic examination. Estimating the absorbed skin dose distribution requires calibration of the dose simulated by our system. For this purpose, a linear function was used to approximate the relation between the simulated dose and the measured dose using radiophotoluminescence (RPL) glass dosimeters in a water-equivalent phantom. Differences of maximum skin dose between our system and the Particle and Heavy Ion Transport code System (PHITS) were as high as 6.1%. The relative statistical error (2 σ) for the simulated dose obtained using our system was ≤3.5%. Using a GPU, the simulation on the chest CT dataset aiming at the heart was within 3.49 s on average: the GPU is 122 times faster than a CPU (Core i7–7700K; Intel Corp.). Our system (using the GPU, the log file, and the CT dataset) estimated the skin dose more rapidly and more accurately than conventional methods. PMID:29136194
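
    A hedged sketch of the calibration step: a linear fit of measured RPL-dosimeter dose against simulated dose in the phantom, then applied to patient Monte Carlo output (the numbers are illustrative, not the paper's data):

        import numpy as np

        mc = np.array([0.8, 1.6, 3.1, 6.0])        # simulated dose (arbitrary units)
        rpl = np.array([10.2, 20.5, 39.8, 77.0])   # measured RPL dose (mGy)

        a, b = np.polyfit(mc, rpl, 1)              # linear calibration: dose = a*mc + b

        def calibrated(d_mc):
            # Convert a raw Monte Carlo skin-dose value to absorbed dose.
            return a * d_mc + b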

  6. Fast skin dose estimation system for interventional radiology.

    PubMed

    Takata, Takeshi; Kotoku, Jun'ichi; Maejima, Hideyuki; Kumagai, Shinobu; Arai, Norikazu; Kobayashi, Takenori; Shiraishi, Kenshiro; Yamamoto, Masayoshi; Kondo, Hiroshi; Furui, Shigeru

    2018-03-01

    To minimise the radiation dermatitis related to interventional radiology (IR), rapid and accurate dose estimation has been sought for all procedures. We propose a technique for estimating the patient skin dose rapidly and accurately using Monte Carlo (MC) simulation with a graphics processing unit (GPU, GTX 1080; Nvidia Corp.). The skin dose distribution is simulated based on an individual patient's computed tomography (CT) dataset for fluoroscopic conditions after the CT dataset has been segmented into air, water and bone based on pixel values. The skin is assumed to be one layer at the outer surface of the body. Fluoroscopic conditions are obtained from a log file of a fluoroscopic examination. Estimating the absorbed skin dose distribution requires calibration of the dose simulated by our system. For this purpose, a linear function was used to approximate the relation between the simulated dose and the measured dose using radiophotoluminescence (RPL) glass dosimeters in a water-equivalent phantom. Differences of maximum skin dose between our system and the Particle and Heavy Ion Transport code System (PHITS) were as high as 6.1%. The relative statistical error (2 σ) for the simulated dose obtained using our system was ≤3.5%. Using a GPU, the simulation on the chest CT dataset aiming at the heart was within 3.49 s on average: the GPU is 122 times faster than a CPU (Core i7-7700K; Intel Corp.). Our system (using the GPU, the log file, and the CT dataset) estimated the skin dose more rapidly and more accurately than conventional methods.
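
    The calibration step described above reduces to fitting a linear function that maps the raw Monte Carlo dose to the RPL-measured dose. A minimal sketch of such a fit, with made-up paired values:

        import numpy as np

        # Hypothetical paired values: simulated dose (a.u.) vs. RPL-measured dose (mGy).
        simulated = np.array([0.8, 1.6, 3.1, 6.4, 12.5])
        measured = np.array([10.2, 20.8, 39.5, 82.0, 160.3])

        # Least-squares fit of measured = a * simulated + b.
        a, b = np.polyfit(simulated, measured, deg=1)

        def calibrate(sim_dose):
            """Convert a raw simulated dose to an absolute skin dose estimate."""
            return a * sim_dose + b

        print(calibrate(5.0))  # calibrated skin dose for a raw value of 5.0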

  7. Automatic CT simulation optimization for radiation therapy: A general strategy.

    PubMed

    Li, Hua; Yu, Lifeng; Anastasio, Mark A; Chen, Hsin-Chen; Tan, Jun; Gay, Hiram; Michalski, Jeff M; Low, Daniel A; Mutic, Sasa

    2014-03-01

    In radiation therapy, x-ray computed tomography (CT) simulation protocol specifications should be driven by the treatment planning requirements in lieu of duplicating diagnostic CT screening protocols. The purpose of this study was to develop a general strategy that allows for automatically, prospectively, and objectively determining the optimal patient-specific CT simulation protocols based on radiation-therapy goals, namely, maintenance of contouring quality and integrity while minimizing patient CT simulation dose. The authors proposed a general prediction strategy that provides automatic optimal CT simulation protocol selection as a function of patient size and treatment planning task. The optimal protocol is the one that delivers the minimum dose required to provide a CT simulation scan that yields accurate contours. Accurate treatment plans depend on accurate contours in order to conform the dose to actual tumor and normal organ positions. An image quality index, defined to characterize how simulation scan quality affects contour delineation, was developed and used to benchmark the contouring accuracy and treatment plan quality within the prediction strategy. A clinical workflow was developed to select the optimal CT simulation protocols incorporating patient size, target delineation, and radiation dose efficiency. An experimental study using an anthropomorphic pelvis phantom with added bolus layers was used to demonstrate how the proposed prediction strategy could be implemented and how the optimal CT simulation protocols could be selected for prostate cancer patients based on patient size and treatment planning task. Clinical IMRT prostate treatment plans for seven CT scans with varied image quality indices were separately optimized and compared to verify the trace of target and organ dosimetry coverage. Based on the phantom study, the optimal image quality index for accurate manual prostate contouring was 4.4. The optimal tube potentials for patient sizes of 38, 43, 48, 53, and 58 cm were 120, 140, 140, 140, and 140 kVp, respectively, and the corresponding minimum CTDIvol values for achieving the optimal image quality index of 4.4 were 9.8, 32.2, 100.9, 241.4, and 274.1 mGy, respectively. For patients with lateral sizes of 43-58 cm, 120-kVp scan protocols yielded up to 165% greater radiation dose relative to 140-kVp protocols, and 140-kVp protocols always yielded a greater image quality index compared to 120-kVp protocols at the same dose level. The trace of target and organ dosimetry coverage and the γ passing rates of seven IMRT dose distribution pairs indicated the feasibility of the proposed image quality index for the prediction strategy. A general strategy to predict the optimal CT simulation protocols in a flexible and quantitative way was developed that takes into account patient size, treatment planning task, and radiation dose. The experimental study indicated that the optimal CT simulation protocol and the corresponding radiation dose varied significantly for different patient sizes, contouring accuracy requirements, and radiation treatment planning tasks.
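
    The reported optima lend themselves to a simple protocol-selection table keyed on patient lateral size. The sketch below uses the phantom values quoted above; the round-up selection policy and the function name are assumptions for illustration:

        import bisect

        # (lateral size cm, tube potential kVp, minimum CTDIvol mGy) from the phantom study.
        PROTOCOLS = [(38, 120, 9.8), (43, 140, 32.2), (48, 140, 100.9),
                     (53, 140, 241.4), (58, 140, 274.1)]

        def select_protocol(lateral_size_cm):
            """Pick the smallest tabulated size that covers the patient (assumed policy)."""
            sizes = [p[0] for p in PROTOCOLS]
            i = min(bisect.bisect_left(sizes, lateral_size_cm), len(PROTOCOLS) - 1)
            size, kvp, ctdi = PROTOCOLS[i]
            return {"kVp": kvp, "CTDIvol_mGy": ctdi, "table_size_cm": size}

        print(select_protocol(45))  # -> 140 kVp, 100.9 mGy (the 48 cm entry)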

  8. Boosting flood warning schemes with fast emulator of detailed hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Bellos, V.; Carbajal, J. P.; Leitao, J. P.

    2017-12-01

    Floods are among the most destructive catastrophic events, and their frequency has increased over recent decades. To reduce flood impact and risk, flood warning schemes are installed in flood-prone areas. Frequently, these schemes are based on numerical models which quickly provide predictions of water levels and other relevant observables. However, the high complexity of flood wave propagation in the real world and the need for accurate predictions in urban environments or on floodplains hinder the use of detailed simulators: fast predictions are needed that still meet the accuracy requirements. Most physics-based detailed simulators, although accurate, cannot fulfill this speed demand even when High Performance Computing techniques are used (required simulation times are on the order of minutes to hours). As a consequence, most flood warning schemes are based on coarse ad hoc approximations that cannot take advantage of a detailed hydrodynamic simulation. In this work, we present a methodology for developing a flood warning scheme using a Gaussian Process based emulator of a detailed hydrodynamic model. The methodology consists of two main stages: 1) an offline stage to build the emulator; 2) an online stage using the emulator to predict and generate warnings. The offline stage consists of the following steps: a) definition of the critical sites of the area under study and specification of the observables to predict at those sites, e.g. water depth, flow velocity, etc.; b) generation of a detailed simulation dataset to train the emulator; c) calibration of the required parameters (if measurements are available). The online stage is carried out using the emulator to predict the relevant observables quickly, while the detailed simulator runs in parallel to verify key predictions of the emulator. The speed gain of the emulator also allows uncertainty in the predictions to be quantified using ensemble methods. The methodology is applied to a real-world scenario.
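
    The heart of the offline stage is fitting a Gaussian Process regressor that maps simulation inputs to the observables at a critical site. A minimal sketch using scikit-learn, with hypothetical training pairs standing in for the detailed hydrodynamic runs:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        # Hypothetical training set: inflow discharge (m^3/s) -> water depth (m)
        # at one critical site, produced offline by the detailed hydrodynamic model.
        X = np.array([[50.0], [100.0], [200.0], [400.0], [800.0]])
        y = np.array([0.31, 0.55, 0.92, 1.60, 2.75])

        kernel = ConstantKernel(1.0) * RBF(length_scale=100.0)
        emulator = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

        # Online stage: near-instant prediction with an uncertainty estimate.
        depth, sigma = emulator.predict(np.array([[300.0]]), return_std=True)
        print(f"predicted depth {depth[0]:.2f} m +/- {sigma[0]:.2f} m")

    The predictive standard deviation that the GP returns essentially for free is what makes the ensemble-based uncertainty quantification mentioned above cheap.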

  9. Error analysis of numerical gravitational waveforms from coalescing binary black holes

    NASA Astrophysics Data System (ADS)

    Fong, Heather; Chu, Tony; Kumar, Prayush; Pfeiffer, Harald; Boyle, Michael; Hemberger, Daniel; Kidder, Lawrence; Scheel, Mark; Szilagyi, Bela; SXS Collaboration

    2016-03-01

    The Advanced Laser Interferometer Gravitational-wave Observatory (Advanced LIGO) has finished a successful first observation run and will commence its second run this summer. Detection of compact object binaries utilizes matched-filtering, which requires a vast collection of highly accurate gravitational waveforms. This talk will present a set of about 100 new aligned-spin binary black hole simulations. I will discuss their properties, including a detailed error analysis, which demonstrates that the numerical waveforms are sufficiently accurate for gravitational wave detection purposes, as well as for parameter estimation purposes.
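
    Matched filtering, mentioned above, correlates the detector data against each template waveform, weighted by the noise spectrum. The toy sketch below assumes white noise of unit variance (real searches weight by the measured power spectral density); all arrays are synthetic:

        import numpy as np

        def matched_filter_snr(data, template):
            """Toy matched filter for white, unit-variance noise.

            Returns the peak correlation over all (circular) time shifts."""
            template = template / np.sqrt(np.sum(template**2))  # unit-energy template
            corr = np.fft.irfft(np.fft.rfft(data) * np.conj(np.fft.rfft(template)),
                                n=len(data))
            return np.max(np.abs(corr))

        # Synthetic data: a chirp-like template buried in noise.
        t = np.linspace(0.0, 1.0, 4096)
        template = np.sin(2 * np.pi * (30.0 + 60.0 * t) * t) * np.exp(-((t - 0.5) / 0.2)**2)
        rng = np.random.default_rng(0)
        data = 0.5 * template + rng.normal(size=t.size)

        print("peak SNR:", matched_filter_snr(data, template))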

  10. The calibration of a model for simulating the thermal and electrical performance of a 2.8 kW AC solid-oxide fuel cell micro-cogeneration device

    NASA Astrophysics Data System (ADS)

    Beausoleil-Morrison, Ian; Lombardi, Kathleen

    The concurrent production of heat and electricity within residential buildings using solid-oxide fuel cell (SOFC) micro-cogeneration devices has the potential to reduce primary energy consumption, greenhouse gas emissions, and air pollutants. A realistic assessment of this emerging technology requires the accurate simulation of the thermal and electrical production of SOFC micro-cogeneration devices concurrent with the simulation of the building, its occupants, and coupled plant components. The calibration of such a model using empirical data gathered from experiments conducted with a 2.8 kW AC SOFC micro-cogeneration device is demonstrated. The experimental configuration, the types of instrumentation employed, and the operating scenarios examined are treated. The propagation of measurement uncertainty into the derived quantities necessary for model calibration is demonstrated by focusing upon the SOFC micro-cogeneration system's gas-to-water heat exchanger. The calibration coefficients necessary to accurately simulate the thermal and electrical performance of this prototype device are presented, and the types of analyses this enables for studying the potential of the technology are demonstrated.
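
    For a gas-to-water heat exchanger, the recovered heat is typically derived as q = m_dot * cp * deltaT, so measurement uncertainty propagates into q in quadrature. A minimal first-order propagation sketch with illustrative numbers (not the paper's measured values):

        import numpy as np

        # Illustrative measurements (value, absolute uncertainty).
        m_dot, u_m = 0.020, 0.0004   # water mass flow, kg/s
        cp = 4182.0                  # specific heat of water, J/(kg K), taken as exact
        dT, u_dT = 6.5, 0.15         # temperature rise across the exchanger, K

        q = m_dot * cp * dT          # recovered thermal power, W

        # First-order (root-sum-square) propagation for a product of independent terms.
        rel_u = np.sqrt((u_m / m_dot)**2 + (u_dT / dT)**2)
        print(f"q = {q:.0f} W +/- {q * rel_u:.0f} W ({100 * rel_u:.1f}%)")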

  11. Application of CFD to aerothermal heating problems

    NASA Technical Reports Server (NTRS)

    Macaraeg, M. G.

    1986-01-01

    Numerical solutions of the compressible Navier-Stokes equations by an alternating direction implicit scheme, applied to two experimental investigations, are presented. The first is cooling by injection of a gas jet through the nose of an ogive-cone; the second is the aerothermal environment in the gap formed by the wing and elevon section of a test model of the space shuttle. The simulations demonstrate that accurate pressure calculations are easily obtained on a coarse grid, with convergence obtained after the residual is reduced by four orders of magnitude. Accurate heating rates, however, require a fine-grid solution, with convergence requiring a reduction of at least six orders of magnitude in the residual. The effect of artificial dissipation on numerical results is also assessed.

  12. How Many Grid Points are Required for Time Accurate Simulations? Scheme Selection and Scale-Discriminant Stabilization

    DTIC Science & Technology

    2015-11-24

    Slide fragments: the briefing addresses spatial concerns (how well gradients are captured, i.e., the resolution requirement) and combined spatial/temporal concerns (dispersion and dissipation error). Slides titled "Gradient Capture vs. Resolution" show FFT solution/derivative convergence studies for a single mode, f(x) = sin(x) with x ∈ [0, 2π] and its derivative df/dx, and for multiple modes, comparing central-difference schemes of order 2, 4, and 6 (CD02, CD04, CD06).

  13. Challenges to Computational Aerothermodynamic Simulation and Validation for Planetary Entry Vehicle Analysis

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; Johnston, Christopher O.; Kleb, Bil

    2010-01-01

    Challenges to computational aerothermodynamic (CA) simulation and validation of hypersonic flow over planetary entry vehicles are discussed. Entry, descent, and landing (EDL) of high mass to Mars is a significant driver of new simulation requirements. These requirements include simulation of large deployable, flexible structures and interactions with reaction control system (RCS) and retro-thruster jets. Simulation of radiation and ablation coupled to the flow solver continues to be a high priority for planetary entry analyses, especially for return to Earth and outer planet missions. Three research areas addressing these challenges are emphasized. The first addresses the need to obtain accurate heating on unstructured tetrahedral grid systems to take advantage of flexibility in grid generation and grid adaptation. A multi-dimensional inviscid flux reconstruction algorithm is defined that is oriented with local flow topology as opposed to grid. The second addresses coupling of radiation and ablation to the hypersonic flow solver - flight- and ground-based data are used to provide limited validation of these multi-physics simulations. The third addresses the challenges of retro-propulsion simulation and the criticality of grid adaptation in this application. The evolution of CA to become a tool for innovation of EDL systems requires a successful resolution of these challenges.

  14. System dynamics and simulation of LSS

    NASA Technical Reports Server (NTRS)

    Ryan, R. F.

    1978-01-01

    Large Space Structures have many unique problems arising from mission objectives and the resulting configuration. Inherent in these configurations is a strong coupling among several of the designing disciplines. In particular, the coupling between structural dynamics and control is a key design consideration. The solution to these interactive problems requires efficient and accurate analysis, simulation and test techniques, and properly planned and conducted design trade studies. The discussion presented deals with these subjects and concludes with a brief look at some NASA capabilities which can support these technology studies.

  15. 3D liver volume reconstructed for palpation training.

    PubMed

    Tibamoso, Gerardo; Perez-Gutierrez, Byron; Uribe-Quevedo, Alvaro

    2013-01-01

    Virtual Reality systems for medical procedures, such as the palpation of different organs, require fast, robust, accurate, and reliable computational methods to provide realism during interaction with 3D biological models. This paper presents the segmentation, reconstruction, and palpation simulation of a healthy liver volume as a training tool. The chosen method considers the mechanical characteristics and properties of the liver in order to simulate palpation interactions correctly, which makes it appropriate as a complementary tool for familiarizing medical students with liver anatomy.

  16. Contributions of the ARM Program to Radiative Transfer Modeling for Climate and Weather Applications

    NASA Technical Reports Server (NTRS)

    Mlawer, Eli J.; Iacono, Michael J.; Pincus, Robert; Barker, Howard W.; Oreopoulos, Lazaros; Mitchell, David L.

    2016-01-01

    Accurate climate and weather simulations must account for all relevant physical processes and their complex interactions. Each of these atmospheric, ocean, and land processes must be considered on an appropriate spatial and temporal scale, which imposes a substantial computational burden on these simulations. One especially critical physical process is the flow of solar and thermal radiant energy through the atmosphere, which controls planetary heating and cooling and drives the large-scale dynamics that move energy from the tropics toward the poles. Radiation calculations are therefore essential for climate and weather simulations, but are themselves quite complex even without considering the effects of variable and inhomogeneous clouds. Clear-sky radiative transfer calculations have to account for thousands of absorption lines due to water vapor, carbon dioxide, and other gases, which are irregularly distributed across the spectrum and have shapes dependent on pressure and temperature. The line-by-line (LBL) codes that treat these details have a far greater computational cost than can be afforded by global models. Therefore, the crucial requirement for accurate radiation calculations in climate and weather prediction models must be satisfied by fast solar and thermal radiation parameterizations with a high level of accuracy that has been demonstrated through extensive comparisons with LBL codes.

  17. Electromagnetic Extended Finite Elements for High-Fidelity Multimaterial Problems LDRD Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siefert, Christopher; Bochev, Pavel Blagoveston; Kramer, Richard Michael Jack

    Surface effects are critical to the accurate simulation of electromagnetics (EM) as current tends to concentrate near material surfaces. Sandia EM applications, which include exploding bridge wires for detonator design, electromagnetic launch of flyer plates for material testing and gun design, lightning blast-through for weapon safety, electromagnetic armor, and magnetic flux compression generators, all require accurate resolution of surface effects. These applications operate in a large deformation regime, where body-fitted meshes are impractical and multimaterial elements are the only feasible option. State-of-the-art methods use various mixture models to approximate the multi-physics of these elements. The empirical nature of these models can significantly compromise the accuracy of the simulation in this very important surface region. We propose to substantially improve the predictive capability of electromagnetic simulations by removing the need for empirical mixture models at material surfaces. We do this by developing an eXtended Finite Element Method (XFEM) and an associated Conformal Decomposition Finite Element Method (CDFEM) which satisfy the physically required compatibility conditions at material interfaces. We demonstrate the effectiveness of these methods for diffusion and diffusion-like problems on node, edge and face elements in 2D and 3D. We also present preliminary work on h-hierarchical elements and remap algorithms.

  18. Numerical investigations in three-dimensional internal flows

    NASA Astrophysics Data System (ADS)

    Rose, William C.

    1988-08-01

    An investigation into the use of computational fluid dynamics (CFD) was performed to examine the expected heat transfer rates that will occur within the NASA-Ames 100 megawatt arc heater nozzle. This nozzle was tentatively designed and identified to provide research for a directly connected combustion experiment specifically related to the National Aerospace Plane Program (NASP) aircraft, and is expected to simulate the flow field entering the combustor section. It was found that extremely fine grids, that is, very small mesh spacing near the wall, are required to accurately model the heat transfer process and, in fact, must contain a point within the laminar sublayer if results are to be taken directly from a numerical simulation code. In the present study, an alternative to this very fine mesh and its attendant increase in computational time was invoked, based on a wall-function method. It was shown that solutions could be obtained that give accurate indications of surface heat transfer rate throughout the nozzle in approximately 1/100 of the computer time required to do the simulation directly without the wall-function implementation. Finally, a maximum heating value in the throat region of the proposed slit nozzle for the 100 megawatt arc heater was shown to be approximately 6 MW per square meter.
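
    A wall function replaces the unresolved near-wall region with the logarithmic law of the wall, u+ = (1/kappa) ln(y+) + B, solved for the friction velocity that sets the wall shear (and, via analogy, the heat flux). The sketch below solves the incompressible log law for illustration; the constants and sample values are assumed, and the actual nozzle computation would use a compressible variant:

        import numpy as np
        from scipy.optimize import brentq

        KAPPA, B = 0.41, 5.2  # log-law constants

        def friction_velocity(u, y, nu):
            """Solve u/u_tau = (1/kappa) ln(y u_tau / nu) + B for u_tau."""
            f = lambda ut: u / ut - (np.log(y * ut / nu) / KAPPA + B)
            return brentq(f, 1e-6, 10.0 * u)

        # Sample first-cell values: 40 m/s at y = 1 mm, air kinematic viscosity.
        u_tau = friction_velocity(u=40.0, y=1.0e-3, nu=1.5e-5)
        tau_wall = 1.2 * u_tau**2  # wall shear stress, assuming rho = 1.2 kg/m^3
        print(u_tau, tau_wall)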

  19. Gravitational Waveforms in the Early Inspiral of Binary Black Hole Systems

    NASA Astrophysics Data System (ADS)

    Barkett, Kevin; Kumar, Prayush; Bhagwat, Swetha; Brown, Duncan; Scheel, Mark; Szilagyi, Bela; Simulating eXtreme Spacetimes Collaboration

    2015-04-01

    The inspiral, merger and ringdown of compact object binaries are important targets for gravitational wave detection by aLIGO. Detection and parameter estimation will require long, accurate waveforms for comparison. There are a number of analytical models for generating gravitational waveforms for these systems, but the only way to ensure their consistency and correctness is by comparing with numerical relativity simulations that cover many inspiral orbits. We've simulated a number of binary black hole systems with mass ratio 7 and a moderate, aligned spin on the larger black hole. We have attached these numerical waveforms to analytical waveform models to generate long hybrid gravitational waveforms that span the entire aLIGO frequency band. We analyze the robustness of these hybrid waveforms and measure the faithfulness of different hybrids with each other to obtain an estimate on how long future numerical simulations need to be in order to ensure that waveforms are accurate enough for use by aLIGO.

  20. The effects of strain and stress state in hot forming of mg AZ31 sheet

    NASA Astrophysics Data System (ADS)

    Sherek, Paul A.; Carpenter, Alexander J.; Hector, Louis G.; Krajewski, Paul E.; Carter, Jon T.; Lasceski, Joshua; Taleff, Eric M.

    Wrought magnesium alloys, such as AZ31 sheet, are of considerable interest for light-weighting of vehicle structural components. The poor room-temperature ductility of AZ31 sheet has been a hindrance to forming the complex part shapes necessary for practical applications. However, the outstanding formability of AZ31 sheet at elevated temperature provides an opportunity to overcome that problem. Complex demonstration components have already been produced at 450°C using gas-pressure forming. Accurate simulations of such hot, gas-pressure forming will be required for the design and optimization exercises necessary if this technology is to be implemented commercially. We report on experiments and simulations used to construct the accurate material constitutive models necessary for finite-element-method simulations. In particular, the effects of strain and stress state on plastic deformation of AZ31 sheet at 450°C are considered in material constitutive model development. Material models are validated against data from simple forming experiments.
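
    Constitutive models for hot forming are commonly of power-law form, sigma = K * edot^m, where the strain-rate sensitivity m largely controls formability. A sketch of fitting such a model to hypothetical 450 °C flow-stress data (not the paper's measurements):

        import numpy as np

        # Hypothetical flow stress (MPa) vs. strain rate (1/s) for AZ31 at 450 C.
        strain_rate = np.array([1e-4, 3e-4, 1e-3, 3e-3, 1e-2])
        flow_stress = np.array([4.1, 5.9, 8.7, 12.6, 18.3])

        # Fit log(sigma) = log(K) + m * log(edot): a line in log-log space.
        m, logK = np.polyfit(np.log(strain_rate), np.log(flow_stress), 1)
        K = np.exp(logK)
        print(f"m = {m:.2f}, K = {K:.1f} MPa s^m")  # high m favors superplastic flow

        def flow_stress_model(edot):
            """Power-law flow stress (MPa) at the fitted temperature."""
            return K * edot**m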

  1. A novel algorithm for solving the true coincident counting issues in Monte Carlo simulations for radiation spectroscopy.

    PubMed

    Guan, Fada; Johns, Jesse M; Vasudevan, Latha; Zhang, Guoqing; Tang, Xiaobin; Poston, John W; Braby, Leslie A

    2015-06-01

    Coincident counts can be observed in experimental radiation spectroscopy. Accurate quantification of the radiation source requires the detection efficiency of the spectrometer, which is often experimentally determined. However, Monte Carlo analysis can be used to supplement experimental approaches to determine the detection efficiency a priori. The traditional Monte Carlo method overestimates the detection efficiency as a result of omitting coincident counts caused mainly by multiple cascade source particles. In this study, a novel "multi-primary coincident counting" algorithm was developed using the Geant4 Monte Carlo simulation toolkit. A high-purity Germanium detector for ⁶⁰Co gamma-ray spectroscopy problems was accurately modeled to validate the developed algorithm. The simulated pulse height spectrum agreed well qualitatively with the measured spectrum obtained using the high-purity Germanium detector. The developed algorithm can be extended to other applications, with a particular emphasis on challenging radiation fields, such as counting multiple types of coincident radiations released from nuclear fission or used nuclear fuel.
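
    The essence of a multi-primary algorithm is that both cascade gammas of a single ⁶⁰Co decay (1.17 and 1.33 MeV) must be tracked within one event so their energy deposits sum into a single pulse, producing the 2.50 MeV sum peak that a per-particle tally would miss. A toy event loop illustrating the bookkeeping, with a grossly simplified detection model and made-up probabilities:

        import numpy as np

        rng = np.random.default_rng(1)
        CASCADE = (1.17, 1.33)  # MeV, emitted together in one 60Co decay

        def pulse_height(full_peak_prob=0.05, compton_prob=0.15):
            """Summed energy deposited by one decay (both cascade gammas)."""
            deposit = 0.0
            for e in CASCADE:
                r = rng.random()
                if r < full_peak_prob:
                    deposit += e                           # full-energy absorption
                elif r < full_peak_prob + compton_prob:
                    deposit += rng.uniform(0.0, 0.96 * e)  # partial (Compton) deposit
            return deposit  # 2.50 MeV => true-coincidence sum peak

        spectrum = [pulse_height() for _ in range(100000)]
        heights = [h for h in spectrum if h > 0.0]
        print("events with a pulse:", len(heights),
              "sum-peak events:", sum(abs(h - 2.50) < 0.01 for h in heights))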

  2. A stable high-order perturbation of surfaces method for numerical simulation of diffraction problems in triply layered media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Youngjoon, E-mail: hongy@uic.edu; Nicholls, David P., E-mail: davidn@uic.edu

    The accurate numerical simulation of linear waves interacting with periodic layered media is a crucial capability in engineering applications. In this contribution we study the stable and high-order accurate numerical simulation of the interaction of linear, time-harmonic waves with a periodic, triply layered medium with irregular interfaces. In contrast with volumetric approaches, High-Order Perturbation of Surfaces (HOPS) algorithms are inexpensive interfacial methods which rapidly and recursively estimate scattering returns by perturbation of the interface shape. In comparison with Boundary Integral/Element Methods, the stable HOPS algorithm we describe here does not require specialized quadrature rules, periodization strategies, or the solution of dense non-symmetric positive definite linear systems. In addition, the algorithm is provably stable as opposed to other classical HOPS approaches. With numerical experiments we show the remarkable efficiency, fidelity, and accuracy one can achieve with an implementation of this algorithm.

  3. Advanced particle-in-cell simulation techniques for modeling the Lockheed Martin Compact Fusion Reactor

    NASA Astrophysics Data System (ADS)

    Welch, Dale; Font, Gabriel; Mitchell, Robert; Rose, David

    2017-10-01

    We report on particle-in-cell developments in the study of the Compact Fusion Reactor. Millisecond, two- and three-dimensional simulations (cubic-meter volume) of confinement and neutral beam heating of the magnetic confinement device require accurate representation of the complex orbits, near-perfect energy conservation, and significant computational power. In order to determine the initial plasma fill and neutral beam heating, these simulations include ionization, elastic, and charge-exchange hydrogen reactions. To this end, we are pursuing fast electromagnetic kinetic modeling algorithms, including two implicit techniques and a hybrid quasi-neutral algorithm with kinetic ions. The kinetic modeling includes use of Poisson-corrected direct-implicit, magnetic-implicit, and second-order cloud-in-cell techniques. The hybrid algorithm, which ignores electron inertial effects, is two orders of magnitude faster than the kinetic approach but not as accurate with respect to confinement. The advantages and disadvantages of these techniques will be presented. Funded by Lockheed Martin.
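
    Orbit integration in magnetized-plasma PIC codes is typically built on the Boris rotation, which conserves kinetic energy exactly in a pure magnetic field, the property emphasized above. A minimal single-particle sketch (explicit and non-relativistic, not the production code's implicit schemes):

        import numpy as np

        def boris_push(v, E, B, q_over_m, dt):
            """Advance a particle velocity one step with the Boris scheme."""
            t = 0.5 * q_over_m * dt * B               # rotation vector
            s = 2.0 * t / (1.0 + np.dot(t, t))
            v_minus = v + 0.5 * q_over_m * dt * E     # first half electric kick
            v_prime = v_minus + np.cross(v_minus, t)  # magnetic rotation
            v_plus = v_minus + np.cross(v_prime, s)
            return v_plus + 0.5 * q_over_m * dt * E   # second half electric kick

        # Proton gyrating in a 1 T field: speed (and energy) stays constant.
        v = np.array([1.0e5, 0.0, 0.0])
        for _ in range(1000):
            v = boris_push(v, E=np.zeros(3), B=np.array([0.0, 0.0, 1.0]),
                           q_over_m=9.58e7, dt=1.0e-10)
        print(np.linalg.norm(v))  # remains ~1.0e5 m/s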

  4. Numerical modeling of turbulent swirling flow in a multi-inlet vortex nanoprecipitation reactor using dynamic DDES

    NASA Astrophysics Data System (ADS)

    Hill, James C.; Liu, Zhenping; Fox, Rodney O.; Passalacqua, Alberto; Olsen, Michael G.

    2015-11-01

    The multi-inlet vortex reactor (MIVR) has been developed to provide a platform for rapid mixing in the application of flash nanoprecipitation (FNP) for manufacturing functional nanoparticles. Unfortunately, commonly used RANS methods are unable to accurately model this complex swirling flow. Large eddy simulations have also been problematic, as the fine grids required to accurately model the flow are computationally expensive. These dilemmas led to the strategy of applying a Delayed Detached Eddy Simulation (DDES) method to the vortex reactor. In the current work, the turbulent swirling flow inside a scaled-up MIVR has been investigated using a dynamic DDES model. In the DDES model, the eddy viscosity has a form similar to the Smagorinsky sub-grid viscosity in LES and allows the implementation of a dynamic procedure to determine its coefficient. The complex recirculating back flow near the reactor center has been successfully captured using this dynamic DDES model. Moreover, the simulation results are found to agree with experimental data for mean velocity and Reynolds stresses.

  5. Reliable oligonucleotide conformational ensemble generation in explicit solvent for force field assessment using reservoir replica exchange molecular dynamics simulations

    PubMed Central

    Henriksen, Niel M.; Roe, Daniel R.; Cheatham, Thomas E.

    2013-01-01

    Molecular dynamics force field development and assessment requires a reliable means for obtaining a well-converged conformational ensemble of a molecule in both a time-efficient and cost-effective manner. This remains a challenge for RNA because its rugged energy landscape results in slow conformational sampling and accurate results typically require explicit solvent which increases computational cost. To address this, we performed both traditional and modified replica exchange molecular dynamics simulations on a test system (alanine dipeptide) and an RNA tetramer known to populate A-form-like conformations in solution (single-stranded rGACC). A key focus is on providing the means to demonstrate that convergence is obtained, for example by investigating replica RMSD profiles and/or detailed ensemble analysis through clustering. We found that traditional replica exchange simulations still require prohibitive time and resource expenditures, even when using GPU accelerated hardware, and our results are not well converged even at 2 microseconds of simulation time per replica. In contrast, a modified version of replica exchange, reservoir replica exchange in explicit solvent, showed much better convergence and proved to be both a cost-effective and reliable alternative to the traditional approach. We expect this method will be attractive for future research that requires quantitative conformational analysis from explicitly solvated simulations. PMID:23477537

  6. Reliable oligonucleotide conformational ensemble generation in explicit solvent for force field assessment using reservoir replica exchange molecular dynamics simulations.

    PubMed

    Henriksen, Niel M; Roe, Daniel R; Cheatham, Thomas E

    2013-04-18

    Molecular dynamics force field development and assessment requires a reliable means for obtaining a well-converged conformational ensemble of a molecule in both a time-efficient and cost-effective manner. This remains a challenge for RNA because its rugged energy landscape results in slow conformational sampling and accurate results typically require explicit solvent which increases computational cost. To address this, we performed both traditional and modified replica exchange molecular dynamics simulations on a test system (alanine dipeptide) and an RNA tetramer known to populate A-form-like conformations in solution (single-stranded rGACC). A key focus is on providing the means to demonstrate that convergence is obtained, for example, by investigating replica RMSD profiles and/or detailed ensemble analysis through clustering. We found that traditional replica exchange simulations still require prohibitive time and resource expenditures, even when using GPU accelerated hardware, and our results are not well converged even at 2 μs of simulation time per replica. In contrast, a modified version of replica exchange, reservoir replica exchange in explicit solvent, showed much better convergence and proved to be both a cost-effective and reliable alternative to the traditional approach. We expect this method will be attractive for future research that requires quantitative conformational analysis from explicitly solvated simulations.
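
    Whether exchanges are attempted between neighboring replicas or between the top replica and a pre-generated reservoir structure, acceptance follows the standard Metropolis criterion. A minimal sketch (kB in kcal/(mol K); the energies and temperatures are illustrative):

        import math
        import random

        KB = 0.0019872041  # Boltzmann constant, kcal/(mol K)

        def accept_swap(energy_i, temp_i, energy_j, temp_j):
            """Metropolis acceptance for exchanging configurations between two
            replicas (or between the top replica and a reservoir member at temp_j)."""
            beta_i, beta_j = 1.0 / (KB * temp_i), 1.0 / (KB * temp_j)
            delta = (beta_i - beta_j) * (energy_j - energy_i)
            return delta <= 0.0 or random.random() < math.exp(-delta)

        # Example: attempt to swap the 300 K replica with a 420 K reservoir structure.
        print(accept_swap(energy_i=-120.4, temp_i=300.0, energy_j=-115.0, temp_j=420.0))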

  7. Simulation of a Doppler lidar system for autonomous navigation and hazard avoidance during planetary landing

    NASA Astrophysics Data System (ADS)

    Budge, Scott E.; Chester, David B.

    2016-05-01

    The latest mission proposals for exploration of solar system bodies require accurate position and velocity data during the descent phase in order to ensure safe, soft landing at the pre-designated sites. During landing maneuvers, the accuracy of the on-board inertial measurement unit (IMU) may not be reliable due to drift over extended travel times to destinations. NASA has proposed an advanced Doppler lidar system with multiple beams that can be used to accurately determine the attitude and position of the landing vehicle during descent, and to detect hazards that might exist in the landing area. In order to assess the effectiveness of such a Doppler lidar landing system, it is valuable to simulate the system with different beam numbers and configurations. In addition, the effectiveness of the system in detecting and mapping potential landing hazards must be understood. This paper reports the simulated system performance for a proposed multi-beam Doppler lidar using the LadarSIM system simulation software. Details of the simulation methods are given, as well as lidar performance parameters such as range and velocity accuracy, detection and false alarm rates, and examples of the Doppler lidar's ability to detect and characterize simulated hazards in the landing site. The simulation includes modulated pulse generation and coherent detection methods, beam footprint simulation, beam scanning, and interaction with terrain.
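
    The velocity part of such a measurement reduces to converting the detected Doppler shift into line-of-sight velocity, v = lambda * f_d / 2, where the factor of two accounts for the round trip. A minimal sketch with an assumed instrument wavelength; with three or more non-coplanar beams, the per-beam values can then be combined into a full velocity vector:

        WAVELENGTH = 1.55e-6  # m, assumed fiber-laser lidar wavelength

        def los_velocity(doppler_shift_hz):
            """Line-of-sight velocity from the measured Doppler shift (round trip)."""
            return 0.5 * WAVELENGTH * doppler_shift_hz

        # A 2.58 MHz shift corresponds to ~2 m/s along the beam.
        print(los_velocity(2.58e6))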

  8. Production rates for crews using hand tools on firelines

    Treesearch

    Lisa Haven; T. Parkin Hunter; Theodore G. Storey

    1982-01-01

    Reported rates at which hand crews construct firelines can vary widely because of differences in fuels, fire and measurement conditions, and fuel resistance-to-control classification schemes. Real-time fire dispatching and fire simulation planning models, however, require accurate estimates of hand crew productivity. Errors in estimating rate of fireline production...

  9. Multiscale Reactive Molecular Dynamics

    DTIC Science & Technology

    2012-08-15

    … biology cannot be described without considering electronic and nuclear-level dynamics and their coupling to slower, cooperative motions of the system. These inherently multiscale problems require computationally efficient and accurate methods … condensed-phase systems with computational efficiency orders of magnitude greater than currently possible with ab initio simulation methods.

  10. Rocket Fuel R and D at AFRL: Recent Activities and Future Direction

    DTIC Science & Technology

    2017-04-12

    Slide fragments (Clearance Number 17163): rocket cycles and environments, including the SpaceX Merlin 1D (190 klbf, gas generator cycle) and the Russian RD-180 (860 klbf, ox-rich staged combustion); affordability and reusability; and modeling and simulation as key to development, requiring accurate models ("CFD simulations… shorten the test-fail-fix loop", SpaceX).

  11. Using virtual reality for science mission planning: A Mars Pathfinder case

    NASA Technical Reports Server (NTRS)

    Kim, Jacqueline H.; Weidner, Richard J.; Sacks, Allan L.

    1994-01-01

    NASA's Mars Pathfinder Project requires a Ground Data System (GDS) that supports both engineering and scientific payloads with reduced mission operations staffing and short planning schedules. Also, successful surface operation of the lander camera requires efficient mission planning and accurate pointing of the camera. To meet these challenges, a new software strategy was developed that integrates virtual reality technology with existing navigational ancillary information and image processing capabilities. The result is an interactive, workstation-based application that provides a high-resolution, 3-dimensional, stereo display of Mars as if it were viewed through the lander camera. The design, implementation strategy, and parametric specification phases for the development of this software were completed, and the prototype tested. When completed, the software will allow scientists and mission planners to access simulated and actual scenes of Mars' surface. The perspective from the lander camera will enable scientists to plan activities more accurately and completely. The application will also support the sequence and command generation process and will allow testing and verification of camera pointing commands via simulation.

  12. Calibration of 3D ALE finite element model from experiments on friction stir welding of lap joints

    NASA Astrophysics Data System (ADS)

    Fourment, Lionel; Gastebois, Sabrina; Dubourg, Laurent

    2016-10-01

    In order to support the design of such a complex process as Friction Stir Welding (FSW) for the aeronautic industry, numerical simulation software requires (1) an efficient and accurate Finite Element (F.E.) formulation that allows predicting welding defects, (2) proper modeling of the thermo-mechanical complexity of the FSW process, and (3) calibration of the F.E. model from accurate measurements from FSW experiments. This work uses a parallel ALE formulation developed in the Forge® F.E. code to model the different possible defects (flashes and worm holes), while pin and shoulder threads are modeled by a new friction law at the tool/material interface. The FSW experiments require a complex tool with a scroll on the shoulder, which is instrumented to provide sensitive thermal data close to the joint. Calibration of unknown material thermal coefficients, constitutive equation parameters, and the friction model from measured forces, torques, and temperatures is carried out using two F.E. models, Eulerian and ALE, to reach a satisfactory agreement, assessed by the proper sensitivity of the simulation to process parameters.

  13. Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous elastic solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duru, Kenneth, E-mail: kduru@stanford.edu; Dunham, Eric M.; Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA

    Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge–Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.

  14. Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous elastic solids

    NASA Astrophysics Data System (ADS)

    Duru, Kenneth; Dunham, Eric M.

    2016-01-01

    Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.
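
    In the interior, such an SBP operator coincides with the standard sixth-order central first-derivative stencil; the (third-order) boundary closures are what give it the summation-by-parts property. A sketch of the interior stencil alone, to make the quoted accuracy concrete (boundary closures omitted):

        import numpy as np

        def d1_interior_6th(f, h):
            """Sixth-order central first derivative, interior points only."""
            d = np.zeros_like(f)
            d[3:-3] = (-f[:-6] + 9*f[1:-5] - 45*f[2:-4]
                       + 45*f[4:-2] - 9*f[5:-1] + f[6:]) / (60.0 * h)
            return d

        x = np.linspace(0.0, 2*np.pi, 201)
        h = x[1] - x[0]
        err = np.abs(d1_interior_6th(np.sin(x), h)[3:-3] - np.cos(x)[3:-3]).max()
        print(f"max interior error: {err:.2e}")  # converges as O(h^6)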

  15. Track-structure simulations for charged particles.

    PubMed

    Dingfelder, Michael

    2012-11-01

    Monte Carlo track-structure simulations provide a detailed and accurate picture of radiation transport of charged particles through condensed matter of biological interest. Liquid water serves as a surrogate for soft tissue and is used in most Monte Carlo track-structure codes. Basic theories of radiation transport and track-structure simulations are discussed and differences compared to condensed history codes highlighted. Interaction cross sections for electrons, protons, alpha particles, and light and heavy ions are required input data for track-structure simulations. Different calculation methods, including the plane-wave Born approximation, the dielectric theory, and semi-empirical approaches are presented using liquid water as a target. Low-energy electron transport and light ion transport are discussed as areas of special interest.
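
    A basic building block of any track-structure code is sampling the distance to the next interaction from the total interaction cross section: s = -ln(U)/mu, with mu the inverse mean free path and U a uniform random number. A minimal sketch; the numerical value of mu is an assumed placeholder, not tabulated data:

        import numpy as np

        rng = np.random.default_rng(7)

        def next_interaction_distance(inv_mfp_per_nm, n=1):
            """Sample exponential free paths (nm) for a given inverse mean free path."""
            return -np.log(rng.random(n)) / inv_mfp_per_nm

        # Assumed value: ~0.5 interactions/nm for a low-energy electron in liquid water.
        steps = next_interaction_distance(0.5, n=100000)
        print("mean free path ~", steps.mean(), "nm")  # -> ~2 nm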

  16. Using high hydraulic conductivity nodes to simulate seepage lakes

    USGS Publications Warehouse

    Anderson, Mary P.; Hunt, Randall J.; Krohelski, James T.; Chung, Kuopo

    2002-01-01

    In a typical ground water flow model, lakes are represented by specified head nodes requiring that lake levels be known a priori. To remove this limitation, previous researchers assigned high hydraulic conductivity (K) values to nodes that represent a lake, under the assumption that the simulated head at the nodes in the high-K zone accurately reflects lake level. The solution should also produce a constant water level across the lake. We developed a model of a simple hypothetical ground water/lake system to test whether solutions using high-K lake nodes are sensitive to the value of K selected to represent the lake. Results show that the larger the contrast between the K of the aquifer and the K of the lake nodes, the smaller the error tolerance required for the solution to converge. For our test problem, a contrast of three orders of magnitude produced a head difference across the lake of 0.005 m under a regional gradient of the order of 10⁻³ m/m, while a contrast of four orders of magnitude produced a head difference of 0.001 m. The high-K method was then used to simulate lake levels in Pretty Lake, Wisconsin. Results for both the hypothetical system and the application to Pretty Lake compared favorably with results using a lake package developed for MODFLOW (Merritt and Konikow 2000). While our results demonstrate that the high-K method accurately simulates lake levels, this method has more cumbersome postprocessing and longer run times than the same problem simulated using the lake package.

  17. Investigating cognitive ability and self-reported driving performance of post-stroke adults in a driving simulator.

    PubMed

    Blane, Alison; Falkmer, Torbjörn; Lee, Hoe C; Dukic Willstrand, Tania

    2018-01-01

    Background Safe driving is a complex activity that requires calibration. This means the driver can accurately assess the level of task demand required for task completion and can accurately evaluate their driving capability. There is much debate on the calibration ability of post-stroke drivers. Objectives The aim of this study was to assess the cognition, self-rated performance, and estimation of task demand in a driving simulator with post-stroke drivers and controls. Methods A between-groups study design was employed, which included a post-stroke driver group and a group of similarly aged older control drivers. Both groups were observed driving in two simulator-based driving scenarios and asked to complete the NASA Task Load Index (TLX) to assess their perceived task demand and self-rate their driving performance. Participants also completed a battery of psychometric tasks to assess attention and executive function, which was used to determine whether post-stroke cognitive impairment impacted on calibration. Results There was no difference in the amount of perceived task demand required to complete the driving task. Despite impairments in cognition, the post-stroke drivers were not more likely to over-estimate their driving abilities than controls. On average, the post-stroke drivers self-rated themselves more poorly than the controls and this rating was related to cognitive ability. Conclusion This study suggests that post-stroke drivers may be aware of their deficits and adjust their driving behavior. Furthermore, using self-performance measures alongside a driving simulator and cognitive assessments may provide complementary fitness-to-drive assessments, as well as rehabilitation tools during post-stroke recovery.

  18. Computational Challenges of Viscous Incompressible Flows

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, Cetin; Kim, Chang Sung

    2004-01-01

    Over the past thirty years, numerical methods and simulation tools for incompressible flows have been advanced as a subset of the computational fluid dynamics (CFD) discipline. Although incompressible flows are encountered in many areas of engineering, simulation of compressible flow has been the major driver for developing computational algorithms and tools. This is probably due to the rather stringent requirements for predicting aerodynamic performance characteristics of flight vehicles, while flow devices involving low-speed or incompressible flow could be reasonably well designed without resorting to accurate numerical simulations. As flow devices are required to be more sophisticated and highly efficient, CFD tools become increasingly important in fluid engineering for incompressible and low-speed flow. This paper reviews some of the successes made possible by advances in computational technologies during the same period, and discusses some of the current challenges faced in computing incompressible flows.

  19. Mathematical simulations for bioanalytical assay development: the (un-)necessity and (im-)possibility of free drug quantification.

    PubMed

    Staack, Roland F; Jordan, Gregor; Heinrich, Julia

    2012-02-01

    For every drug development program, it must be discussed whether discrimination between free and total drug concentrations is required to accurately describe the drug's pharmacokinetic behavior. This perspective describes the application of mathematical simulation approaches to guide this initial decision based on available knowledge about target biology, binding kinetics, and expected drug concentrations. We provide generic calculations that can be used to estimate the necessity of free drug quantification for different drug molecules. In addition, mathematical approaches are used to simulate various assay conditions in bioanalytical ligand-binding assays: it is demonstrated that, due to the noncovalent interaction between the binding partners and typical assay-related interferences with the equilibrium, correct quantification of the free drug concentration is highly challenging and requires careful design of the different assay procedure steps.
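
    For the simplest 1:1 drug-target equilibrium, the kind of generic calculation described above follows from mass balance and the dissociation constant Kd: the free drug concentration is the positive root of a quadratic. A minimal sketch (illustrative only, not the authors' exact model):

        import math

        def free_drug(total_drug, total_target, kd):
            """Free drug concentration for 1:1 binding, from mass balance and Kd.

            Solves D^2 + D*(Tt - Dt + Kd) - Kd*Dt = 0 for the free drug D
            (all concentrations in the same units, e.g. nM)."""
            b = total_target - total_drug + kd
            return 0.5 * (-b + math.sqrt(b * b + 4.0 * kd * total_drug))

        # Example: 100 nM total drug, 80 nM target, Kd = 1 nM.
        print(free_drug(100.0, 80.0, 1.0))  # ~23.3 nM free vs. 100 nM total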

  20. Science based integrated approach to advanced nuclear fuel development - integrated multi-scale multi-physics hierarchical modeling and simulation framework Part III: cladding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tome, Carlos N; Caro, J A; Lebensohn, R A

    2010-01-01

    Advancing the performance of Light Water Reactors, Advanced Nuclear Fuel Cycles, and Advanced Reactors, such as the Next Generation Nuclear Power Plants, requires enhancing our fundamental understanding of fuel and materials behavior under irradiation. The capability to accurately model the nuclear fuel systems to develop predictive tools is critical. Not only are fabrication and performance models needed to understand specific aspects of the nuclear fuel, fully coupled fuel simulation codes are required to achieve licensing of specific nuclear fuel designs for operation. The backbone of these codes, models, and simulations is a fundamental understanding and predictive capability for simulating the phase and microstructural behavior of the nuclear fuel system materials and matrices. In this paper we review the current status of the advanced modeling and simulation of nuclear reactor cladding, with emphasis on what is available and what is to be developed in each scale of the project, how we propose to pass information from one scale to the next, and what experimental information is required for benchmarking and advancing the modeling at each scale level.

  1. Modeling of profilometry with laser focus sensors

    NASA Astrophysics Data System (ADS)

    Bischoff, Jörg; Manske, Eberhard; Baitinger, Henner

    2011-05-01

    Metrology is of paramount importance in submicron patterning. Particularly, line width and overlay have to be measured very accurately. Appropriate metrology techniques are scanning electron microscopy and optical scatterometry. The latter is non-invasive, highly accurate and enables optical cross sections of layer stacks, but it requires periodic patterns. Scanning laser focus sensors are a viable alternative enabling the measurement of non-periodic features. Severe limitations are imposed by the diffraction limit determining the edge location accuracy. It will be shown that the accuracy can be greatly improved by means of rigorous modeling. To this end, a fully vectorial 2.5-dimensional model has been developed based on rigorous Maxwell solvers and combined with models for the scanning and various autofocus principles. The simulations are compared with experimental results. Moreover, the simulations are directly utilized to improve the edge location accuracy.

  2. Fast simulation of Proton Induced X-Ray Emission Tomography using CUDA

    NASA Astrophysics Data System (ADS)

    Beasley, D. G.; Marques, A. C.; Alves, L. C.; da Silva, R. C.

    2013-07-01

    A new 3D Proton Induced X-Ray Emission Tomography (PIXE-T) and Scanning Transmission Ion Microscopy Tomography (STIM-T) simulation software has been developed in Java; it uses the NVIDIA™ Compute Unified Device Architecture (CUDA) to calculate the X-ray attenuation for large detector areas. A challenge with PIXE-T is to get sufficient counts while retaining a small beam spot size; therefore, a high geometric efficiency is required. However, as the detector solid angle increases, the calculations required for accurate reconstruction of the data increase substantially. To overcome this limitation, the CUDA parallel computing platform was used, which enables general-purpose programming of NVIDIA graphics processing units (GPUs) to perform computations traditionally handled by the central processing unit (CPU). For simulation performance evaluation, the results of a CPU- and a CUDA-based simulation of a phantom are presented. Furthermore, a comparison with the simulation code in the PIXE-Tomography reconstruction software DISRA (A. Sakellariou, D.N. Jamieson, G.J.F. Legge, 2001) is also shown. Compared to a CPU implementation, the CUDA-based simulation is approximately 30× faster.

  3. Perturbations in the initial soil moisture conditions: Impacts on hydrologic simulation in a large river basin

    NASA Astrophysics Data System (ADS)

    Niroula, Sundar; Halder, Subhadeep; Ghosh, Subimal

    2018-06-01

    Real-time hydrologic forecasting requires near-accurate initial soil moisture conditions; however, continuous monitoring of soil moisture is not operational in many regions, such as the Ganga basin, which extends across Nepal, India, and Bangladesh. Here, we examine the impacts of perturbations/errors in the initial soil moisture conditions on simulated soil moisture and streamflow in the Ganga basin, and their propagation, during the summer monsoon season (June to September). This provides information on the minimum duration of model simulation required to attain model stability. We use the Variable Infiltration Capacity model for hydrological simulations after validation. Multiple hydrologic simulations are performed, each of 21 days, initialized on every 5th day of the monsoon season for deficit, surplus, and normal monsoon years. Each of these simulations is performed with the initial soil moisture condition obtained from long-term runs, along with positive and negative perturbations. The time required for the convergence of initial errors is obtained for all the cases. We find quick convergence for years with high rainfall as well as for wet spells within a season. We further find high spatial variation in the time required for convergence; regions with high precipitation, such as the Lower Ganga basin, attain convergence at a faster rate. Furthermore, deeper soil layers need more time for convergence. Our analysis is the first attempt to understand the sensitivity of hydrological simulations of the Ganga basin to initial soil moisture conditions. The results obtained here may be useful in understanding the spin-up requirements for operational hydrologic forecasts.
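
    Operationally, the convergence analysis reduces to finding, for each cell and start date, the first day on which the perturbed and reference soil moisture trajectories agree within a tolerance for the rest of the run. A minimal sketch of that check; the trajectories and tolerance are hypothetical:

        import numpy as np

        def convergence_day(reference, perturbed, tol=0.005):
            """First day index at which |perturbed - reference| stays below tol
            (volumetric soil moisture, m^3/m^3) to the end of the run; -1 if never."""
            diff_ok = np.abs(perturbed - reference) < tol
            for day in range(len(diff_ok)):
                if diff_ok[day:].all():
                    return day
            return -1

        # Hypothetical 21-day trajectories: the perturbation decays under rainfall.
        days = np.arange(21)
        reference = 0.30 + 0.02 * np.sin(days / 3.0)
        perturbed = reference + 0.05 * np.exp(-days / 4.0)
        print("converged on day", convergence_day(reference, perturbed))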

  4. DKIST Adaptive Optics System: Simulation Results

    NASA Astrophysics Data System (ADS)

    Marino, Jose; Schmidt, Dirk

    2016-05-01

    The 4 m class Daniel K. Inouye Solar Telescope (DKIST), currently under construction, will be equipped with an ultra-high-order solar adaptive optics (AO) system. The requirements and capabilities of such a solar AO system are beyond those of any other solar AO system currently in operation, so we must rely on solar AO simulations to estimate and quantify its performance. We present performance estimation results for the DKIST AO system obtained with a new solar AO simulation tool. This simulation tool is a flexible and fast end-to-end solar AO simulator which produces accurate solar AO simulations while taking advantage of current multi-core computer technology. It relies on full imaging simulations of the extended-field Shack-Hartmann wavefront sensor (WFS), which directly include important secondary effects such as field-dependent distortions and varying contrast of the WFS sub-aperture images.

  5. How important is getting the land surface energy exchange correct in WRF for wind energy forecasting?

    NASA Astrophysics Data System (ADS)

    Wharton, S.; Simpson, M.; Osuna, J. L.; Newman, J. F.; Biraud, S.

    2013-12-01

    Wind power forecasting is plagued with difficulties in accurately predicting the occurrence and intensity of atmospheric conditions at the heights spanned by industrial-scale turbines (~40 to 200 m above ground level). Better simulation of the relevant physics would enable operational practices such as integration of large fractions of wind power into power grids, scheduling maintenance at wind energy facilities, and deciding design criteria based on complex loads for next-generation turbines and siting. Accurately simulating surface energy processes in numerical models may be critically important for wind energy forecasting, as energy exchange at the surface strongly drives atmospheric mixing (i.e., stability) in the lower layers of the planetary boundary layer (PBL), which in turn largely determines wind shear and turbulence at heights within the turbine rotor disk. We hypothesize that accurately simulating surface-atmosphere energy coupling should lead to more accurate predictions of wind speed and turbulence at heights within the turbine rotor disk. Here, we tested 10 different land surface model configurations in the Weather Research and Forecasting (WRF) model, including Noah, Noah-MP, SSiB, Pleim-Xiu, RUC, and others, to evaluate (1) the accuracy of simulated surface energy fluxes against flux tower measurements, (2) the accuracy of forecasted wind speeds against observations at rotor-disk heights, and (3) the sensitivity of forecasted hub-height rotor-disk wind speed to the choice of land surface model. WRF was run for four two-week periods covering both summer and winter over the Southern Great Plains ARM site in Oklahoma. Continuous measurements of surface energy fluxes and lidar-based wind speed, direction, and turbulence were also available. The SGP ARM site provided an ideal location for this evaluation, as it is centrally located in the wind-rich Great Plains and multi-MW wind farms are rapidly expanding in the area. We found significant differences in simulated wind speeds at rotor-disk heights from WRF, which indicated, in part, the sensitivity of lower-PBL winds to surface energy exchange. We also found significant differences in energy partitioning between sensible heat and latent energy depending on the choice of land surface model. Overall, the most consistent, accurate results were produced using Noah-MP, which was the most accurate at simulating energy fluxes and wind shear. Hub-height wind speed, however, was predicted most accurately with Pleim-Xiu. This suggests that simulating wind shear in the surface layer is consistent with accurately simulating surface energy exchange, while the exact magnitudes of wind speed may be more strongly influenced by PBL dynamics. As the nation works toward a 20% wind energy goal by 2030, increasing the accuracy of wind forecasting at rotor-disk heights becomes more important, considering that utilities require wind farms to estimate their power generation 24 to 36 hours ahead and face penalties for inaccuracies in those forecasts.

  6. An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates

    PubMed Central

    Khan, Usman; Falconi, Christian

    2014-01-01

    Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation tools for optimization. In order to circumvent these issues, we propose here a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix approach for the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy to accurately take radiation losses into account over the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally easy single-FEM-compensation strategy to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214
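
    The closed-form ingredients named above can be illustrated with a minimal sketch: in an annular membrane region with distributed heat loss, the steady fin-type equation d²T/dr² + (1/r)·dT/dr − m²(T − T_amb) = 0 has the modified Bessel functions I₀(mr) and K₀(mr) as solutions, and two boundary temperatures fix the coefficients. This is a generic illustration with made-up parameter values, not the paper's full matrix formulation (which additionally treats Joule heating, radiation, and external-region segmentation).

```python
# Minimal sketch (illustrative parameters, not the paper's model): steady
# temperature of an annular membrane obeying the fin-type equation
#   T'' + T'/r - m^2 (T - T_amb) = 0,  solved by I0(m r) and K0(m r).
import numpy as np
from scipy.special import iv, kv

T_amb, T_hot = 25.0, 800.0      # ambient and heater-edge temperatures [deg C]
r_in, r_out = 100e-6, 500e-6    # heater radius and membrane rim radius [m]
k, t, g = 30.0, 1e-6, 2e3       # conductivity [W/m/K], thickness [m], loss [W/m^2/K]
m = np.sqrt(g / (k * t))        # inverse thermal decay length [1/m]

# Impose T(r_in) = T_hot and T(r_out) = T_amb on T = T_amb + A*I0 + B*K0
M = np.array([[iv(0, m * r_in),  kv(0, m * r_in)],
              [iv(0, m * r_out), kv(0, m * r_out)]])
A, B = np.linalg.solve(M, [T_hot - T_amb, 0.0])

r = np.linspace(r_in, r_out, 201)
T = T_amb + A * iv(0, m * r) + B * kv(0, m * r)
print(f"temperature at mid-radius: {T[100]:.1f} deg C")
```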

  7. Advancing Nucleosynthesis in Core-Collapse Supernovae Models Using 2D CHIMERA Simulations

    NASA Astrophysics Data System (ADS)

    Harris, J. A.; Hix, W. R.; Chertkow, M. A.; Bruenn, S. W.; Lentz, E. J.; Messer, O. B.; Mezzacappa, A.; Blondin, J. M.; Marronetti, P.; Yakunin, K.

    2014-01-01

    The deaths of massive stars as core-collapse supernovae (CCSN) serve as a crucial link in understanding galactic chemical evolution since the birth of the universe via the Big Bang. We investigate CCSN in polar axisymmetric simulations using the multidimensional radiation hydrodynamics code CHIMERA. Computational costs have traditionally constrained the evolution of the nuclear composition in CCSN models to, at best, a 14-species α-network. However, the limited capacity of the α-network to accurately evolve detailed composition, the neutronization and the nuclear energy generation rate has fettered the ability of prior CCSN simulations to accurately reproduce the chemical abundances and energy distributions as known from observations. These deficits can be partially ameliorated by "post-processing" with a more realistic network. Lagrangian tracer particles placed throughout the star record the temporal evolution of the initial simulation and enable the extension of the nuclear network evolution by incorporating larger systems in post-processing nucleosynthesis calculations. We present post-processing results of the four ab initio axisymmetric CCSN 2D models of Bruenn et al. (2013) evolved with the smaller α-network, and initiated from stellar metallicity, non-rotating progenitors of mass 12, 15, 20, and 25 M⊙ from Woosley & Heger (2007). As a test of the limitations of post-processing, we provide preliminary results from an ongoing simulation of the 15 M⊙ model evolved with a realistic 150 species nuclear reaction network in situ. With more accurate energy generation rates and an improved determination of the thermodynamic trajectories of the tracer particles, we can better unravel the complicated multidimensional "mass-cut" in CCSN simulations and probe for less energetically significant nuclear processes like the νp-process and the r-process, which require still larger networks.

  8. Performance Evaluation and Requirements Assessment for Gravity Gradient Referenced Navigation

    PubMed Central

    Lee, Jisun; Kwon, Jay Hyoun; Yu, Myeongjong

    2015-01-01

    In this study, simulation tests for gravity gradient referenced navigation (GGRN) are conducted to verify the effects of various factors such as database (DB) and sensor errors, flight altitude, DB resolution, initial errors, and measurement update rates on the navigation performance. Based on the simulation results, requirements for GGRN are established for position determination with certain target accuracies. It is found that DB and sensor errors and flight altitude have strong effects on the navigation performance. In particular, a DB and sensor with accuracies of 0.1 E and 0.01 E, respectively, are required to determine the position more accurately than or at a level similar to the navigation performance of terrain referenced navigation (TRN). In most cases, the horizontal position error of GGRN is less than 100 m. However, the navigation performance of GGRN is similar to or worse than that of a pure inertial navigation system when the DB and sensor errors are 3 E or 5 E each and the flight altitude is 3000 m. Considering that the accuracy of currently available gradiometers is about 3 E or 5 E, GGRN does not show much advantage over TRN at present. However, GGRN is expected to exhibit much better performance in the near future when accurate DBs and gravity gradiometers are available. PMID:26184212

  9. Opportunities to Intercalibrate Radiometric Sensors From International Space Station

    NASA Technical Reports Server (NTRS)

    Roithmayr, C. M.; Lukashin, C.; Speth, P. W.; Thome, K. J.; Young, D. F.; Wielicki, B. A.

    2012-01-01

    Highly accurate measurements of Earth's thermal infrared and reflected solar radiation are required for detecting and predicting long-term climate change. We consider the concept of using the International Space Station to test instruments and techniques that would eventually be used on a dedicated mission such as the Climate Absolute Radiance and Refractivity Observatory. In particular, a quantitative investigation is performed to determine whether it is possible to use measurements obtained with a highly accurate reflected solar radiation spectrometer to calibrate similar, less accurate instruments in other low Earth orbits. Estimates of numbers of samples useful for intercalibration are made with the aid of year-long simulations of orbital motion. We conclude that the International Space Station orbit is ideally suited for the purpose of intercalibration.

  10. Design, construction, and evaluation of a 1:8 scale model binaural manikin.

    PubMed

    Robinson, Philip; Xiang, Ning

    2013-03-01

    Many experiments in architectural acoustics require presenting listeners with simulations of different rooms to compare. Acoustic scale modeling is a feasible means to create accurate simulations of many rooms at reasonable cost. A critical component in a scale model room simulation is a receiver that properly emulates a human receiver. For this purpose, a scale model artificial head has been constructed and tested. This paper presents the design and construction methods used, proper equalization procedures, and measurements of its response. A headphone listening experiment examining sound externalization with various reflection conditions is presented that demonstrates its use for psycho-acoustic testing.

  11. High-Voltage, Low-Power BNC Feedthrough Terminator

    NASA Technical Reports Server (NTRS)

    Bearden, Douglas

    2012-01-01

    This innovation is a high-voltage, low-power BNC (Bayonet Neill-Concelman) feedthrough that enables the user to terminate an instrumentation cable properly while connected to a high voltage, without the use of a voltage divider. The feedthrough is low power, so it does not load the source, and it properly terminates the instrumentation cable to the instrumentation even if the cable impedance is not constant. The Space Shuttle Program had a requirement to measure voltage transients on the orbiter bus through the Ground Lightning Measurement System (GLMS). This measurement has a bandwidth requirement of 1 MHz. The GLMS voltage measurement is connected to the orbiter through a DC panel. The DC panel is connected to the bus through a nonuniform cable that is approximately 75 ft (approximately equal to 23 m) long. A 15-ft (approximately equal to 5-m), 50-ohm triaxial cable is connected between the DC panel and the digitizer. Calculations and simulations showed that cable resonances and reflections, caused by the mismatched impedances of the cable connecting the orbiter bus and the digitizer, cause the output not to reflect accurately what is on the bus. A voltage divider at the DC panel, together with proper termination of the 50-ohm cable, would eliminate this issue; due to implementation issues, however, an alternative design was needed to terminate the cable properly without a voltage divider. After a dampening circuit located at the digitizer was simulated, the results showed that the cable resonances were dampened and the accuracy was improved significantly. Test cables were built and verified that the simulations were accurate. Since the dampening circuit is low power, it can be packaged in a BNC feedthrough.

  12. A Flight-Calibrated Methodology for Determination of Cassini Thruster On-Times for Reaction Wheel Biases

    NASA Technical Reports Server (NTRS)

    Sarani, Siamak

    2010-01-01

    This paper describes a methodology for accurate and flight-calibrated determination of the on-times of the Cassini spacecraft Reaction Control System (RCS) thrusters for reaction wheel biases, without any form of dynamic simulation. The hydrazine usage and the delta-V vector in the body frame are also computed from the respective thruster on-times. The Cassini spacecraft, the largest and most complex interplanetary spacecraft ever built, continues to undertake ambitious and unique scientific observations of the planet Saturn, Titan, Enceladus, and other moons of Saturn. In order to maintain a stable attitude during the course of its mission, this three-axis stabilized spacecraft uses two different control systems: the RCS and the reaction wheel assembly control system. The RCS is used to execute commanded spacecraft slews, maintain three-axis attitude control, control the spacecraft's attitude while performing science observations with coarse pointing requirements (e.g., during targeted low-altitude Titan and Enceladus flybys), bias the momentum of the reaction wheels, and perform RCS-based orbit trim maneuvers. The use of the RCS often imparts an undesired delta-V on the spacecraft. The Cassini navigation team requires accurate predictions of the delta-V in spacecraft coordinates and the inertial frame resulting from slews using RCS thrusters and, more importantly, from reaction wheel bias events. It is crucial for the Cassini spacecraft attitude control and navigation teams to be able to predict, quickly but accurately, the hydrazine usage and delta-V for various reaction wheel bias events without having to spend time and resources simulating the event in flight software-based dynamic simulation or hardware-in-the-loop simulation environments. The methodology described in this paper, and the ground software developed from it, are designed to provide just that. The methodology assumes a priori knowledge of the thrust magnitudes and thruster pulse rise and tail-off time constants for the eight individual attitude control thrusters, the spacecraft's wet mass and center-of-mass location, and a few other key parameters.

  13. Dynamic earthquake rupture simulation on nonplanar faults embedded in 3D geometrically complex, heterogeneous Earth models

    NASA Astrophysics Data System (ADS)

    Duru, K.; Dunham, E. M.; Bydlon, S. A.; Radhakrishnan, H.

    2014-12-01

    Dynamic propagation of shear ruptures on a frictional interface is a useful idealization of a natural earthquake. The conditions relating slip rate and fault shear strength are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated far away from fault zones, to seismic stations and remote areas. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a numerical method for: (a) enforcing nonlinear friction laws in a consistent and provably stable manner, suitable for efficient explicit time integration; (b) dynamic propagation of earthquake ruptures along rough faults; (c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts finite differences in space. The finite difference stencils are 6th order accurate in the interior and 3rd order accurate close to the boundaries. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates, we prove numerical stability. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme. We have performed extensive numerical experiments using a slip-weakening friction law on non-planar faults, including recent SCEC benchmark problems. We also show simulations on fractal faults revealing the complexity of rupture dynamics on rough faults. We are presently extending our method to rate-and-state friction laws and off-fault plasticity.
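
    The weak (penalty) enforcement of boundary data mentioned above can be sketched in 1D. The following toy example uses second-order summation-by-parts operators on the advection equation u_t + a·u_x = 0 with a SAT penalty at the inflow and a classic (rather than low-storage) Runge-Kutta integrator; the paper's operators are 6th-order accurate in the interior, and all parameters here are illustrative.

```python
# Hedged 1D sketch of SBP-SAT: summation-by-parts differencing with a
# penalty (SAT) term enforcing the inflow condition weakly. Second-order
# operators for brevity; the paper uses 6th-order interior stencils.
import numpy as np

n, a = 101, 1.0
h = 1.0 / (n - 1)

# Second-order SBP first-derivative operator (central interior, one-sided ends)
D = np.zeros((n, n))
for i in range(1, n - 1):
    D[i, i - 1], D[i, i + 1] = -0.5 / h, 0.5 / h
D[0, 0], D[0, 1] = -1.0 / h, 1.0 / h
D[-1, -2], D[-1, -1] = -1.0 / h, 1.0 / h
Hinv0 = 2.0 / h                       # inverse norm weight at the boundary node

def rhs(t, u):
    du = -a * (D @ u)
    g = np.sin(4 * np.pi * t)         # inflow data u(0, t) = g(t)
    du[0] -= a * Hinv0 * (u[0] - g)   # SAT penalty: weak boundary enforcement
    return du

u, t, dt = np.zeros(n), 0.0, 0.4 * h / a
for _ in range(500):                  # classic 4th-order Runge-Kutta
    k1 = rhs(t, u); k2 = rhs(t + dt / 2, u + dt / 2 * k1)
    k3 = rhs(t + dt / 2, u + dt / 2 * k2); k4 = rhs(t + dt, u + dt * k3)
    u += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4); t += dt
print(f"boundary value {u[0]:+.4f} vs. inflow data {np.sin(4 * np.pi * t):+.4f}")
```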

  14. Effective precipitation duration for runoff peaks based on catchment modelling

    NASA Astrophysics Data System (ADS)

    Sikorska, A. E.; Viviroli, D.; Seibert, J.

    2018-01-01

    Although precipitation intensities may vary greatly during a flood event, detailed information about these intensities may not be required to accurately simulate floods with a hydrological model, which reacts rather to cumulative precipitation sums. This raises two questions: to what extent is it important to preserve sub-daily precipitation intensities, and how long does it effectively rain from the hydrological point of view? Both questions might seem straightforward to answer with a direct analysis of past precipitation events but require some arbitrary choices regarding the length of a precipitation event. To avoid these arbitrary decisions, we present here an alternative approach for characterizing the effective length of a precipitation event which is based on runoff simulations with respect to large floods. More precisely, we quantify the fraction of a day over which the daily precipitation has to be distributed to faithfully reproduce the large annual and seasonal floods which were generated by the hourly precipitation rate time series. New precipitation time series were generated by first aggregating the hourly observed data into daily totals and then evenly distributing them over sub-daily periods (n hours). These simulated time series were used as input to a hydrological bucket-type model, and the resulting runoff flood peaks were compared to those obtained when using the original precipitation time series. We then define the effective daily precipitation duration as the number of hours n for which the largest peaks are simulated best. For nine mesoscale Swiss catchments this effective daily precipitation duration was about half a day, which indicates that detailed information on precipitation intensities is not necessarily required to accurately estimate the peaks of the largest annual and seasonal floods. These findings support the use of simple disaggregation approaches, sketched below, to make use of past daily precipitation observations or daily precipitation simulations (e.g., from climate models) for hydrological modeling at an hourly time step.
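
    The disaggregation step itself is short. The sketch below, on a synthetic hourly series, aggregates to daily totals and spreads each total evenly over the first n hours of the day; the effective duration is then the n whose simulated flood peaks best match those driven by the original hourly forcing (about 12 h for the catchments studied).

```python
# Hedged sketch of the precipitation redistribution (synthetic data;
# names are illustrative): daily totals spread evenly over n hours per day.
import numpy as np

rng = np.random.default_rng(0)
hourly = rng.gamma(0.3, 1.0, size=24 * 365)   # one year of hourly precip [mm]

def redistribute(hourly, n):
    """Aggregate to daily totals, then distribute each evenly over n hours."""
    days = hourly.reshape(-1, 24)
    out = np.zeros_like(days)
    out[:, :n] = (days.sum(axis=1) / n)[:, None]   # constant rate for n hours
    return out.ravel()

series_12h = redistribute(hourly, 12)   # candidate effective duration n = 12
print(hourly.sum(), series_12h.sum())   # daily and annual totals are preserved
```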

  15. Low-cycle fatigue testing methods

    NASA Technical Reports Server (NTRS)

    Lieurade, H. P.

    1978-01-01

    Good design of highly stressed mechanical components requires accurate knowledge of the service behavior of materials. The main methods for solving designers' problems are: determination of the mechanical properties of the material after cyclic stabilization; plotting of resistance-to-plastic-deformation curves; assessment of the effect of temperature on low-cycle fatigue life; and simulation of the behavior of notched parts.

  16. Quantifying watershed surface depression storage: determination and application in a hydrologic model

    Treesearch

    Joseph K. O. Amoah; Devendra M. Amatya; Soronnadi Nnaji

    2012-01-01

    Hydrologic models often require correct estimates of surface macro-depressional storage to accurately simulate rainfall–runoff processes. Traditionally, depression storage is determined through model calibration, lumped with soil storage components, or estimated on an ad hoc basis. This paper investigates a holistic approach for estimating surface depressional storage capacity...

  17. An imputed forest composition map for New England screened by species range boundaries

    Treesearch

    Matthew J. Duveneck; Jonathan R. Thompson; B. Tyler Wilson

    2015-01-01

    Initializing forest landscape models (FLMs) to simulate changes in tree species composition requires accurate fine-scale forest attribute information mapped continuously over large areas. Nearest-neighbor imputation maps, maps developed from multivariate imputation of field plots, have high potential for use as the initial condition within FLMs, but the tendency for...

  18. Parallel Simulation of Unsteady Turbulent Flames

    NASA Technical Reports Server (NTRS)

    Menon, Suresh

    1996-01-01

    Time-accurate simulation of turbulent flames in high Reynolds number flows is a challenging task since both fluid dynamics and combustion must be modeled accurately. To numerically simulate this phenomenon, very large computer resources (both time and memory) are required. Although current vector supercomputers are capable of providing adequate resources for simulations of this nature, the high cost and their limited availability make practical use of such machines less than satisfactory. At the same time, the explicit time integration algorithms used in unsteady flow simulations often possess a very high degree of parallelism, making them very amenable to efficient implementation on large-scale parallel computers. Under these circumstances, distributed memory parallel computers offer an excellent near-term solution for greatly increased computational speed and memory, at a cost that may render the unsteady simulations of the type discussed above more feasible and affordable. This paper discusses the study of unsteady turbulent flames using a simulation algorithm that is capable of retaining high parallel efficiency on distributed memory parallel architectures. Numerical studies are carried out using large-eddy simulation (LES). In LES, the scales larger than the grid are computed using a time- and space-accurate scheme, while the unresolved small scales are modeled using eddy viscosity based subgrid models. This is acceptable for the moment/energy closure since the small scales primarily provide a dissipative mechanism for the energy transferred from the large scales. However, for combustion to occur, the species must first undergo mixing at the small scales and then come into molecular contact. Therefore, global models cannot be used. Recently, a new model for turbulent combustion was developed, in which the combustion is modeled within the subgrid (small scales) using a methodology that simulates the mixing, the molecular transport, and the chemical kinetics within each LES grid cell. Finite-rate kinetics can be included without any closure, and this approach actually provides a means to predict the turbulent rates and the turbulent flame speed. The subgrid combustion model requires resolution of the local time scales associated with small-scale mixing, molecular diffusion, and chemical kinetics and, therefore, within each grid cell, a significant amount of computation must be carried out before the large-scale (LES resolved) effects are incorporated. Therefore, this approach is uniquely suited for parallel processing and has been implemented on various systems, such as the Intel Paragon, IBM SP-2, Cray T3D, and SGI Power Challenge (PC), using the system-independent Message Passing Interface (MPI) library. In this paper, timing data on these machines is reported along with some characteristic results.

  19. Reservoir simulation with the cubic plus (cross-) association equation of state for water, CO2, hydrocarbons, and tracers

    NASA Astrophysics Data System (ADS)

    Moortgat, Joachim

    2018-04-01

    This work presents an efficient reservoir simulation framework for multicomponent, multiphase, compressible flow, based on the cubic-plus-association (CPA) equation of state (EOS). CPA is an accurate EOS for mixtures that contain non-polar hydrocarbons, self-associating polar water, and cross-associating molecules like methane, ethane, unsaturated hydrocarbons, CO2, and H2S. While CPA is accurate, its mathematical formulation is highly non-linear, resulting in excessive computational costs that have made the EOS unfeasible for large scale reservoir simulations. This work presents algorithms that overcome these bottlenecks and achieve an efficiency comparable to the much simpler cubic EOS approach. The main applications that require such accurate phase behavior modeling are 1) the study of methane leakage from high-pressure production wells and its potential impact on groundwater resources, 2) modeling of geological CO2 sequestration in brine aquifers when one is interested in more than the CO2 and H2O components, e.g. methane, other light hydrocarbons, and various tracers, and 3) enhanced oil recovery by CO2 injection in reservoirs that have previously been waterflooded or contain connate water. We present numerical examples of all those scenarios, extensive validation of the CPA EOS with experimental data, and analyses of the efficiency of our proposed numerical schemes. The accuracy, efficiency, and robustness of the presented phase split computations pave the way to more widespread adoption of CPA in reservoir simulators.
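
    While the CPA fugacity expressions are too involved to reproduce here, every EOS-based phase-split computation of the kind the efficiency claims refer to rests on a Rachford-Rice solve at its core. The sketch below shows that step alone, with a hypothetical feed and fixed K-values; a real CPA flash iterates on the K-values using fugacity coefficients from the EOS.

```python
# Hedged sketch: the Rachford-Rice step inside any EOS-based flash.
# Feed z and equilibrium ratios K are hypothetical, not CPA output.
import numpy as np
from scipy.optimize import brentq

z = np.array([0.60, 0.10, 0.30])   # e.g. water, CO2, methane mole fractions
K = np.array([0.01, 1.80, 12.0])   # hypothetical K-values y_i / x_i

def rachford_rice(beta):           # beta is the vapor mole fraction
    return np.sum(z * (K - 1.0) / (1.0 + beta * (K - 1.0)))

# A root in (0, 1) exists because rachford_rice(0) > 0 > rachford_rice(1)
beta = brentq(rachford_rice, 1e-12, 1.0 - 1e-12)
x = z / (1.0 + beta * (K - 1.0))   # liquid-phase composition
y = K * x                          # vapor-phase composition
print(f"vapor fraction {beta:.4f}", x.round(4), y.round(4))
```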

  20. SHARP: A Spatially Higher-order, Relativistic Particle-in-cell Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shalaby, Mohamad; Broderick, Avery E.; Chang, Philip

    Numerical heating in particle-in-cell (PIC) codes currently precludes the accurate simulation of cold, relativistic plasma over long periods, severely limiting their applications in astrophysical environments. We present a spatially higher-order accurate relativistic PIC algorithm in one spatial dimension, which conserves charge and momentum exactly. We utilize the smoothness implied by the usage of higher-order interpolation functions to achieve a spatially higher-order accurate algorithm (up to the fifth order). We validate our algorithm against several test problems—thermal stability of stationary plasma, stability of linear plasma waves, and two-stream instability in the relativistic and non-relativistic regimes. Comparing our simulations to exact solutions of the dispersion relations, we demonstrate that SHARP can quantitatively reproduce important kinetic features of the linear regime. Our simulations have a superior ability to control energy non-conservation and avoid numerical heating in comparison to common second-order schemes. We provide a natural definition for convergence of a general PIC algorithm: the complement of physical modes captured by the simulation, i.e., those that lie above the Poisson noise, must grow commensurately with the resolution. This implies that it is necessary to simultaneously increase the number of particles per cell and decrease the cell size. We demonstrate that traditional ways for testing for convergence fail, leading to plateauing of the energy error. This new PIC code enables us to faithfully study the long-term evolution of plasma problems that require absolute control of the energy and momentum conservation.
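
    The role of the higher-order interpolation functions can be seen in a toy 1D charge-deposition routine using a quadratic B-spline shape function (SHARP goes up to fifth order; this generic sketch is not the SHARP implementation).

```python
# Hedged sketch of higher-order particle-grid interpolation: 1D charge
# deposition with quadratic B-spline weights on a periodic grid. Generic
# illustration, not SHARP's actual deposition routine.
import numpy as np

def deposit_quadratic(x, q, nx, dx):
    """Deposit charges q at positions x onto a periodic grid of nx cells."""
    rho = np.zeros(nx)
    xn = x / dx                          # positions in cell units
    i = np.floor(xn + 0.5).astype(int)   # nearest grid point
    d = xn - i                           # offset in [-0.5, 0.5)
    # Quadratic B-spline weights over three neighboring points (they sum to 1)
    w = [0.5 * (0.5 - d) ** 2, 0.75 - d ** 2, 0.5 * (0.5 + d) ** 2]
    for off, wk in zip((-1, 0, 1), w):
        np.add.at(rho, (i + off) % nx, q * wk)
    return rho / dx

rho = deposit_quadratic(np.array([3.2, 7.7]), np.ones(2), nx=16, dx=1.0)
print(rho.sum())   # total deposited charge is conserved exactly
```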

  1. Neurosurgery simulation using non-linear finite element modeling and haptic interaction

    NASA Astrophysics Data System (ADS)

    Lee, Huai-Ping; Audette, Michel; Joldes, Grand R.; Enquobahrie, Andinet

    2012-02-01

    Real-time surgical simulation is becoming an important component of surgical training. To meet the real-time requirement, however, the accuracy of the biomechanical modeling of soft tissue is often compromised due to computing resource constraints. Furthermore, haptic integration presents an additional challenge with its requirement for a high update rate. As a result, most real-time surgical simulation systems employ a linear elasticity model, simplified numerical methods such as the boundary element method or spring-particle systems, and coarse volumetric meshes. However, these systems are not clinically realistic. We present here an ongoing work aimed at developing an efficient and physically realistic neurosurgery simulator using a non-linear finite element method (FEM) with haptic interaction. Real-time finite element analysis is achieved by utilizing the total Lagrangian explicit dynamic (TLED) formulation and GPU acceleration of per-node and per-element operations. We employ a virtual coupling method for separating deformable body simulation and collision detection from haptic rendering, which needs to be updated at a much higher rate than the visual simulation. The system provides accurate biomechanical modeling of soft tissue while retaining real-time performance with haptic interaction. However, our experiments showed that the stability of the simulator depends heavily on the material properties of the tissue and the speed of colliding objects. Hence, additional efforts, including dynamic relaxation, are required to improve the stability of the system.
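
    The TLED formulation advances nodal displacements with an explicit central-difference update, which the following single-degree-of-freedom sketch illustrates on a nonlinear stiffening spring. The actual simulator assembles per-node and per-element internal forces over an FEM mesh on the GPU; all values below are illustrative.

```python
# Hedged sketch of explicit central-difference time integration of the
# kind TLED solvers use, reduced to one node with a cubic stiffening
# spring. Illustrative parameters only.
m, dt, steps = 1e-3, 1e-4, 2000       # mass [kg], time step [s], step count
k1, k3 = 50.0, 5e4                    # linear and cubic stiffness terms
f_ext = 0.5                           # constant external force [N]

def f_int(u):                         # nonlinear internal (elastic) force
    return k1 * u + k3 * u ** 3

u_prev, u = 0.0, 0.0
for _ in range(steps):
    # central differences: u_next = 2 u - u_prev + dt^2 / m * (f_ext - f_int(u))
    u_next = 2.0 * u - u_prev + dt ** 2 / m * (f_ext - f_int(u))
    u_prev, u = u, u_next
print(f"displacement after {steps * dt:.2f} s: {u:.5f} m")
```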

  2. Breathing metabolic simulator

    NASA Technical Reports Server (NTRS)

    Bartlett, R. G.; Hendricks, C. M.; Morison, W. B.

    1972-01-01

    The development of a breathing metabolic simulator (BMS) is reported. This BMS simulates all of the breathing and metabolic parameters required for complete evaluation and test of life support and resuscitation equipment. It is also useful for calibrating and validating mechanical and gaseous pulmonary function test procedures. Breathing rate, breathing depth, breath velocity contour, oxygen uptake, and carbon dioxide release are all variable over wide ranges simulating conditions from sleep to hard work with respiratory exchange ratios covering the range from hypoventilation. In addition, all of these parameters are remotely controllable to facilitate use of the device in hostile or remote environments. The exhaled breath is also maintained at body temperature and a high humidity. The simulation is accurate to the extent of having a variable functional residual capacity independent of other parameters.

  3. Simulations of binary black hole mergers

    NASA Astrophysics Data System (ADS)

    Lovelace, Geoffrey

    2017-01-01

    Advanced LIGO's observations of merging binary black holes have inaugurated the era of gravitational wave astronomy. Accurate models of binary black holes and the gravitational waves they emit are helping Advanced LIGO to find as many gravitational waves as possible and to learn as much as possible about the waves' sources. These models require numerical-relativity simulations of binary black holes, because near the time when the black holes merge, all analytic approximations break down. Following breakthroughs in 2005, many research groups have built numerical-relativity codes capable of simulating binary black holes. In this talk, I will discuss current challenges in simulating binary black holes for gravitational-wave astronomy, and I will discuss the tremendous progress that has already enabled such simulations to become an essential tool for Advanced LIGO.

  4. Suitability of Synthetic Driving Profiles from Traffic Micro-Simulation for Real-World Energy Analysis: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Yunfei; Wood, Eric; Burton, Evan

    A shift towards increased levels of driving automation is generally expected to result in improved safety and traffic congestion outcomes. However, little empirical data exists to estimate the impact that automated driving could have on energy consumption and greenhouse gas emissions. In the absence of empirical data on differences between drive cycles from present day vehicles (primarily operated by humans) and future vehicles (partially or fully operated by computers) one approach is to model both situations over identical traffic conditions. Such an exercise requires traffic micro-simulation to not only accurately model vehicle operation under high levels of automation, but also (and potentially more challenging) vehicle operation under present day human drivers. This work seeks to quantify the ability of a commercial traffic micro-simulation program to accurately model real-world drive cycles in vehicles operated primarily by humans in terms of driving speed, acceleration, and simulated fuel economy. Synthetic profiles from models of freeway and arterial facilities near Atlanta, Georgia, are compared to empirical data collected from real-world drivers on the same facilities. Empirical and synthetic drive cycles are then simulated in a powertrain efficiency model to enable comparison on the basis of fuel economy. Synthetic profiles from traffic micro-simulation were found to exhibit low levels of transient behavior relative to the empirical data. Even with these differences, the synthetic and empirical data in this study agree well in terms of driving speed and simulated fuel economy. The differences in transient behavior between simulated and empirical data suggest that larger stochastic contributions in traffic micro-simulation (relative to those present in the traffic micro-simulation tool used in this study) are required to fully capture the arbitrary elements of human driving. Interestingly, the lack of stochastic contributions from models of human drivers in this study did not result in a significant discrepancy between fuel economy simulations based on synthetic and empirical data; a finding with implications on the potential energy efficiency gains of automated vehicle technology.

  5. Self-Consistent Model of Magnetospheric Electric Field, Ring Current, Plasmasphere, and Electromagnetic Ion Cyclotron Waves: Initial Results

    NASA Technical Reports Server (NTRS)

    Gamayunov, K. V.; Khazanov, G. V.; Liemohn, M. W.; Fok, M.-C.; Ridley, A. J.

    2009-01-01

    Further development of our self-consistent model of interacting ring current (RC) ions and electromagnetic ion cyclotron (EMIC) waves is presented. This model incorporates large scale magnetosphere-ionosphere coupling and treats self-consistently not only EMIC waves and RC ions, but also the magnetospheric electric field, RC, and plasmasphere. Initial simulations indicate that the region beyond geostationary orbit should be included in the simulation of the magnetosphere-ionosphere coupling. Additionally, a self-consistent description, based on first principles, of the ionospheric conductance is required. These initial simulations further show that in order to model the EMIC wave distribution and wave spectral properties accurately, the plasmasphere should also be simulated self-consistently, since its fine structure requires as much care as that of the RC. Finally, an effect of the finite time needed to reestablish a new potential pattern throughout the ionosphere and to communicate between the ionosphere and the equatorial magnetosphere cannot be ignored.

  6. Assembly-line Simulation Program

    NASA Technical Reports Server (NTRS)

    Chamberlain, Robert G.; Zendejas, Silvino; Malhotra, Shan

    1987-01-01

    Costs and profits estimated for models based on user inputs. Standard Assembly-line Manufacturing Industry Simulation (SAMIS) program generalized for use by production-line manufacturing companies. Provides accurate and reliable means of comparing alternative manufacturing processes. Used to assess impact of changes in financial parameters such as cost of resources and services, inflation rates, interest rates, tax policies, and required rate of return on equity. Most important capability is ability to estimate prices manufacturer would have to receive for its products to recover all costs of production and make specified profit. Written in TURBO PASCAL.

  7. Recommendations on Model Fidelity for Wind Turbine Gearbox Simulations; NREL (National Renewable Energy Laboratory)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, J.; Lacava, W.; Austin, J.

    2015-02-01

    This work investigates the minimum level of fidelity required to accurately simulate wind turbine gearboxes using state-of-the-art design tools. Excessive model fidelity, including drivetrain complexity, gearbox complexity, excitation sources, and imperfections, significantly increases computational time but may not provide a commensurate increase in the value of the results. Essential design parameters are evaluated, including the planetary load-sharing factor, gear tooth load distribution, and sun orbit motion. Based on the sensitivity study results, recommendations for minimum model fidelities are provided.

  8. BioFVM: an efficient, parallelized diffusive transport solver for 3-D biological simulations

    PubMed Central

    Ghaffarizadeh, Ahmadreza; Friedman, Samuel H.; Macklin, Paul

    2016-01-01

    Motivation: Computational models of multicellular systems require solving systems of PDEs for release, uptake, decay and diffusion of multiple substrates in 3D, particularly when incorporating the impact of drugs, growth substrates and signaling factors on cell receptors and subcellular systems biology. Results: We introduce BioFVM, a diffusive transport solver tailored to biological problems. BioFVM can simulate release and uptake of many substrates by cell and bulk sources, as well as diffusion and decay in large 3D domains. It has been parallelized with OpenMP, allowing efficient simulations on desktop workstations or single supercomputer nodes. The code is stable even for large time steps, with linear computational cost scalings. Solutions are first-order accurate in time and second-order accurate in space. The code can be run by itself or as part of a larger simulator. Availability and implementation: BioFVM is written in C++ with parallelization in OpenMP. It is maintained and available for download at http://BioFVM.MathCancer.org and http://BioFVM.sf.net under the Apache License (v2.0). Contact: paul.macklin@usc.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26656933
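
    The flavor of the solver can be conveyed by a single implicit (backward Euler) diffusion-decay update in 1D, the kind of tridiagonal sweep that a locally-one-dimensional 3D scheme applies along each axis. This is a hedged sketch with illustrative coefficients, not BioFVM's actual C++ implementation.

```python
# Hedged 1D sketch: backward-Euler diffusion + decay via one tridiagonal
# solve per step (no-flux ends). Coefficients are illustrative.
import numpy as np
from scipy.linalg import solve_banded

n, dx, dt = 200, 10.0, 0.1            # cells, spacing [um], time step [min]
D, decay = 1.0e3, 0.01                # diffusion [um^2/min], decay [1/min]

u = np.zeros(n); u[n // 2] = 1.0      # initial point release

alpha = dt * D / dx ** 2
ab = np.zeros((3, n))                 # banded form for solve_banded
ab[0, 1:] = -alpha                    # super-diagonal
ab[2, :-1] = -alpha                   # sub-diagonal
ab[1, :] = 1.0 + dt * decay + 2.0 * alpha
ab[1, 0] = ab[1, -1] = 1.0 + dt * decay + alpha   # no-flux boundaries

for _ in range(100):                  # advance 10 minutes
    u = solve_banded((1, 1), ab, u)
print(f"mass remaining: {u.sum():.4f}")   # ~exp(-decay * t) of the initial mass
```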

  9. Simulation Evaluation of Pilot Inputs for Real Time Modeling During Commercial Flight Operations

    NASA Technical Reports Server (NTRS)

    Martos, Borja; Ranaudo, Richard; Oltman, Ryan; Myhre, Nick

    2017-01-01

    Aircraft dynamics characteristics can only be identified from flight data when the aircraft dynamics are excited sufficiently. A preliminary study was conducted into what types and levels of manual piloted control excitation would be required for accurate Real-Time Parameter IDentification (RTPID) results by commercial airline pilots. This includes assessing the practicality for the pilot to provide this excitation when cued, and to further understand if pilot inputs during various phases of flight provide sufficient excitation naturally. An operationally representative task was evaluated by 5 commercial airline pilots using the NASA Ice Contamination Effects Flight Training Device (ICEFTD). Results showed that it is practical to use manual pilot inputs only as a means of achieving good RTPID in all phases of flight and in flight turbulence conditions. All pilots were effective in satisfying excitation requirements when cued. Much of the time, cueing was not even necessary, as just performing the required task provided enough excitation for accurate RTPID estimation. Pilot opinion surveys reported that the additional control inputs required when prompted by the excitation cueing were easy to make, quickly mastered, and required minimal training.

  10. Computational Methods for HSCT-Inlet Controls/CFD Interdisciplinary Research

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Melcher, Kevin J.; Chicatelli, Amy K.; Hartley, Tom T.; Chung, Joongkee

    1994-01-01

    A program aimed at facilitating the use of computational fluid dynamics (CFD) simulations by the controls discipline is presented. The objective is to reduce the development time and cost for propulsion system controls by using CFD simulations to obtain high-fidelity system models for control design and as numerical test beds for control system testing and validation. An interdisciplinary team has been formed to develop analytical and computational tools in three discipline areas: controls, CFD, and computational technology. The controls effort has focused on specifying requirements for an interface between the controls specialist and CFD simulations and a new method for extracting linear, reduced-order control models from CFD simulations. Existing CFD codes are being modified to permit time accurate execution and provide realistic boundary conditions for controls studies. Parallel processing and distributed computing techniques, along with existing system integration software, are being used to reduce CFD execution times and to support the development of an integrated analysis/design system. This paper describes: the initial application for the technology being developed, the high speed civil transport (HSCT) inlet control problem; activities being pursued in each discipline area; and a prototype analysis/design system in place for interactive operation and visualization of a time-accurate HSCT-inlet simulation.

  11. Accuracy of buffered-force QM/MM simulations of silica

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peguiron, Anke; Moras, Gianpietro; Colombi Ciacchi, Lucio

    2015-02-14

    We report comparisons between energy-based quantum mechanics/molecular mechanics (QM/MM) and buffered force-based QM/MM simulations in silica. Local quantities—such as density of states, charges, forces, and geometries—calculated with both QM/MM approaches are compared to the results of full QM simulations. We find the length scale over which forces computed using a finite QM region converge to reference values obtained in full quantum-mechanical calculations is ∼10 Å rather than the ∼5 Å previously reported for covalent materials such as silicon. Electrostatic embedding of the QM region in the surrounding classical point charges gives only a minor contribution to the force convergence. While the energy-based approach provides accurate results in geometry optimizations of point defects, we find that the removal of large force errors at the QM/MM boundary provided by the buffered force-based scheme is necessary for accurate constrained geometry optimizations where Si–O bonds are elongated and for finite-temperature molecular dynamics simulations of crack propagation. Moreover, the buffered approach allows for more flexibility, since special-purpose QM/MM coupling terms that link QM and MM atoms are not required and the region that is treated at the QM level can be adaptively redefined during the course of a dynamical simulation.

  12. Free energies from dynamic weighted histogram analysis using unbiased Markov state model.

    PubMed

    Rosta, Edina; Hummer, Gerhard

    2015-01-13

    The weighted histogram analysis method (WHAM) is widely used to obtain accurate free energies from biased molecular simulations. However, WHAM free energies can exhibit significant errors if some of the biasing windows are not fully equilibrated. To account for the lack of full equilibration, we develop the dynamic histogram analysis method (DHAM). DHAM uses a global Markov state model to obtain the free energy along the reaction coordinate. A maximum likelihood estimate of the Markov transition matrix is constructed by joint unbiasing of the transition counts from multiple umbrella-sampling simulations along discretized reaction coordinates. The free energy profile is the stationary distribution of the resulting Markov matrix. For this matrix, we derive an explicit approximation that does not require the usual iterative solution of WHAM. We apply DHAM to model systems, a chemical reaction in water treated using quantum-mechanics/molecular-mechanics (QM/MM) simulations, and the Na(+) ion passage through the membrane-embedded ion channel GLIC. We find that DHAM gives accurate free energies even in cases where WHAM fails. In addition, DHAM provides kinetic information, which we here use to assess the extent of convergence in each of the simulation windows. DHAM may also prove useful in the construction of Markov state models from biased simulations in phase-space regions with otherwise low population.
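
    Stripped of the umbrella-sampling unbiasing, the core construction is compact: count bin-to-bin transitions at a lag time, row-normalize them into a Markov matrix, and read the free energy from its stationary distribution. The sketch below runs on a synthetic trajectory and illustrates only that pipeline, not DHAM's joint unbiasing of multiple windows.

```python
# Hedged sketch of the Markov-matrix step behind DHAM, on a synthetic
# unbiased trajectory (the method's umbrella unbiasing is omitted).
import numpy as np

rng = np.random.default_rng(1)
traj = np.cumsum(rng.integers(-1, 2, size=200_000)) % 20   # binned trajectory

nbins, lag = 20, 10
counts = np.zeros((nbins, nbins))
for i, j in zip(traj[:-lag], traj[lag:]):    # transition counts at the lag time
    counts[i, j] += 1
T = counts / counts.sum(axis=1, keepdims=True)   # row-stochastic Markov matrix

# Stationary distribution: left eigenvector of T for eigenvalue 1
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()
free_energy = -np.log(pi)                    # units of kT, up to a constant
print(np.round(free_energy - free_energy.min(), 3))
```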

  13. Spectroradiometer Intercomparison and Impact on Characterizing Photovoltaic Device Performance: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, A.; Andreas, A.; Ottoson, L.

    2014-11-01

    Indoor and outdoor testing of photovoltaic (PV) device performance requires the use of solar simulators and natural solar radiation, respectively. This performance characterization requires accurate knowledge of spectral irradiance distribution that is incident on the devices. Spectroradiometers are used to measure the spectral distribution of solar simulators and solar radiation. On September 17, 2013, a global spectral irradiance intercomparison using spectroradiometers was organized by the Solar Radiation Research Laboratory (SRRL) at the National Renewable Energy Laboratory (NREL). This paper presents highlights of the results of this first intercomparison, which will help to decrease systematic inter-laboratory differences in the measurements of the outputs or efficiencies of PV devices and harmonize laboratory experimental procedures.

  14. Earthquake Rupture Dynamics using Adaptive Mesh Refinement and High-Order Accurate Numerical Methods

    NASA Astrophysics Data System (ADS)

    Kozdon, J. E.; Wilcox, L.

    2013-12-01

    Our goal is to develop scalable and adaptive (spatial and temporal) numerical methods for coupled, multiphysics problems using high-order accurate numerical methods. To do so, we are developing an open-source, parallel library known as bfam (available at http://bfam.in). The first application to be developed on top of bfam is an earthquake rupture dynamics solver using high-order discontinuous Galerkin methods and summation-by-parts finite difference methods. In earthquake rupture dynamics, wave propagation in the Earth's crust is coupled to frictional sliding on fault interfaces. This coupling is two-way, requiring the simultaneous simulation of both processes. The use of laboratory-measured friction parameters requires near-fault resolution that is 4-5 orders of magnitude higher than that needed to resolve the frequencies of interest in the volume. This, along with earlier simulations using a low-order, finite volume based adaptive mesh refinement framework, suggests that adaptive mesh refinement is ideally suited for this problem. The use of high-order methods is motivated by the high level of resolution required off the fault in the earlier low-order finite volume simulations; we believe this need for resolution is a result of the excessive numerical dissipation of low-order methods. In bfam, spatial adaptivity is handled using the p4est library and temporal adaptivity will be accomplished through local time stepping. In this presentation we will present the guiding principles behind the library as well as verification of the code against the Southern California Earthquake Center dynamic rupture code validation test problems.

  15. Molecular Simulation of Receptor Occupancy and Tumor Penetration of an Antibody and Smaller Scaffolds: Application to Molecular Imaging.

    PubMed

    Orcutt, Kelly D; Adams, Gregory P; Wu, Anna M; Silva, Matthew D; Harwell, Catey; Hoppin, Jack; Matsumura, Manabu; Kotsuma, Masakatsu; Greenberg, Jonathan; Scott, Andrew M; Beckman, Robert A

    2017-10-01

    Competitive radiolabeled antibody imaging can determine the unlabeled intact antibody dose that fully blocks target binding but may be confounded by heterogeneous tumor penetration. We evaluated the hypothesis that smaller radiolabeled constructs can be used to more accurately evaluate tumor-expressed receptors. The Krogh cylinder distributed model, including bivalent binding and variable intervessel distances, simulated the distribution of smaller constructs in the presence of increasing doses of labeled antibody forms. Smaller constructs <25 kDa accessed binding sites more uniformly at large distances from blood vessels compared with larger constructs and intact antibody. These observations were consistent across different affinity and internalization characteristics of the constructs. As predicted, a higher dose of unlabeled intact antibody was required to block binding to these distant receptor sites. Small radiolabeled constructs provide more accurate information on total receptor expression in tumors and reveal the need for higher antibody doses for target receptor blockade.

  16. Challenges and solutions for realistic room simulation

    NASA Astrophysics Data System (ADS)

    Begault, Durand R.

    2002-05-01

    Virtual room acoustic simulation (auralization) techniques have traditionally focused on answering questions related to speech intelligibility or musical quality, typically in large volumetric spaces. More recently, auralization techniques have been found to be important for the externalization of headphone-reproduced virtual acoustic images. Although externalization can be accomplished using a minimal simulation, data indicate that realistic auralizations need to be responsive to head motion cues for accurate localization. Computational demands increase when providing for the simulation of coupled spaces, small rooms lacking meaningful reverberant decays, or reflective surfaces in outdoor environments. Auditory threshold data for both early reflections and late reverberant energy levels indicate that much of the information captured in acoustical measurements is inaudible, minimizing the intensive computational requirements of real-time auralization systems. Results are presented for early reflection thresholds as a function of azimuth angle, arrival time, and sound-source type, and reverberation thresholds as a function of reverberation time and level within 250-Hz-2-kHz octave bands. Good agreement is found between data obtained in virtual room simulations and those obtained in real rooms, allowing a strategy for minimizing computational requirements of real-time auralization systems.

  17. Predictive Model for Particle Residence Time Distributions in Riser Reactors. Part 1: Model Development and Validation

    DOE PAGES

    Foust, Thomas D.; Ziegler, Jack L.; Pannala, Sreekanth; ...

    2017-02-28

    Here in this computational study, we model the mixing of biomass pyrolysis vapor with solid catalyst in circulating riser reactors with a focus on the determination of solid catalyst residence time distributions (RTDs). A comprehensive set of 2D and 3D simulations were conducted for a pilot-scale riser using the Eulerian-Eulerian two-fluid modeling framework with and without sub-grid-scale models for the gas-solids interaction. A validation test case was also simulated and compared to experiments, showing agreement in the pressure gradient and RTD mean and spread. For simulation cases, it was found that for accurate RTD prediction, the Johnson and Jackson partial slip solids boundary condition was required for all models and a sub-grid model is useful so that ultra high resolutions grids that are very computationally intensive are not required. Finally, we discovered a 2/3 scaling relation for the RTD mean and spread when comparing resolved 2D simulations to validated unresolved 3D sub-grid-scale model simulations.

  18. Development of a Searchable Database of Cryoablation Simulations for Use in Treatment Planning.

    PubMed

    Boas, F Edward; Srimathveeravalli, Govindarajan; Durack, Jeremy C; Kaye, Elena A; Erinjeri, Joseph P; Ziv, Etay; Maybody, Majid; Yarmohammadi, Hooman; Solomon, Stephen B

    2017-05-01

    To create and validate a planning tool for multiple-probe cryoablation, using simulations of ice ball size and shape for various ablation probe configurations, ablation times, and types of tissue ablated. Ice ball size and shape were simulated using the Pennes bioheat equation. Five thousand six hundred and seventy different cryoablation procedures were simulated, using 1-6 cryoablation probes and 1-2 cm spacing between probes. The resulting ice ball was measured along three perpendicular axes and recorded in a database. Simulated ice ball sizes were compared to gel experiments (26 measurements) and clinical cryoablation cases (42 measurements). The clinical cryoablation measurements were obtained from a HIPAA-compliant retrospective review of kidney and liver cryoablation procedures performed between January 2015 and February 2016. Finally, we created a web-based cryoablation planning tool, which uses the cryoablation simulation database to look up the probe spacing and ablation time that produce the desired ice ball shape and dimensions. The average absolute error between the simulated and experimentally measured ice balls was 1 mm in gel experiments and 4 mm in clinical cryoablation cases. The simulations accurately predicted the degree of synergy in multiple-probe ablations. The cryoablation simulation database covers a wide range of ice ball sizes and shapes up to 9.8 cm. Cryoablation simulations accurately predict the ice ball size in multiple-probe ablations. The cryoablation database can be used to plan ablation procedures: given the desired ice ball size and shape, it will find the number and type of probes, probe configuration and spacing, and ablation time required.
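
    The Pennes bioheat equation at the core of the simulations integrates readily with explicit finite differences, as in this 1D sketch with a fixed cold boundary standing in for the cryoprobe. The study's solver additionally handles phase change and 3D probe geometry; the tissue values below are generic textbook numbers, not the paper's.

```python
# Hedged 1D sketch of the Pennes bioheat equation,
#   rho c dT/dt = k d2T/dx2 + w_b rho_b c_b (T_a - T) + Q_m,
# with a cryoprobe modeled as a fixed cold boundary. Illustrative values.
import numpy as np

n, dx, dt = 100, 1e-3, 0.05              # 10 cm domain, 1 mm cells, 50 ms steps
rho, c, k = 1050.0, 3600.0, 0.5          # tissue density, heat capacity, conductivity
w_b, rho_b, c_b = 8e-3, 1060.0, 3800.0   # perfusion [1/s] and blood properties
T_a, Q_m = 37.0, 420.0                   # arterial temperature [C], metabolic heat [W/m^3]

T = np.full(n, 37.0)
for _ in range(12000):                   # ten minutes of freezing
    lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx ** 2
    T += dt / (rho * c) * (k * lap + w_b * rho_b * c_b * (T_a - T) + Q_m)
    T[0], T[-1] = -140.0, 37.0           # probe surface and far-field body temperature
print(f"0 C isotherm about {np.argmax(T > 0) * dx * 1e3:.0f} mm from the probe")
```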

  19. Determining Reduced Order Models for Optimal Stochastic Reduced Order Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonney, Matthew S.; Brake, Matthew R.W.

    2015-08-01

    The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against each other and against the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses hyper-dual numbers to determine the sensitivities, and a meta-model method that uses the hyper-dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared based on the time required for the evaluation of each model, where the meta-model requires the least amount of computation time by a significant margin. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. Analysis of the output distribution is examined by using a large Monte-Carlo simulation along with a reduced simulation using Latin Hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to an exhaustive sampling for the majority of methods.
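
    One of the five reductions, proper orthogonal decomposition of the output, has a particularly compact form: collect output snapshots, take an SVD, and keep the leading modes. The sketch below uses synthetic snapshots and is not tied to the Brake-Reuss beam data.

```python
# Hedged sketch of a POD reduced basis from output snapshots (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 400)
# Snapshot matrix: one column per sampled parameter value
snaps = np.stack([p * np.sin(2 * np.pi * p * x) for p in rng.uniform(1, 3, 50)], axis=1)

U, s, _ = np.linalg.svd(snaps, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.999)) + 1   # modes capturing 99.9% of variance
basis = U[:, :r]

q = basis.T @ snaps[:, 0]                     # reduced coordinates of snapshot 0
err = np.linalg.norm(basis @ q - snaps[:, 0]) / np.linalg.norm(snaps[:, 0])
print(f"{r} modes, relative reconstruction error {err:.2e}")
```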

  20. Numerical modeling of the SNS H− ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veitzer, Seth A.; Beckwith, Kristian R. C.; Kundrapu, Madhusudhan

    Ion source rf antennas that produce H- ions can fail when plasma heating causes ablation of the insulating coating due to small structural defects such as cracks. Reducing antenna failures that reduce the operating capabilities of the Spallation Neutron Source (SNS) accelerator is one of the top priorities of the SNS H- Source Program at ORNL. Numerical modeling of ion sources can provide techniques for optimizing design in order to reduce antenna failures. There are a number of difficulties in developing accurate models of rf inductive plasmas. First, a large range of spatial and temporal scales must be resolved in order to accurately capture the physics of plasma motion, including the Debye length, rf frequencies on the order of tens of MHz, simulation time scales of many hundreds of rf periods, large device sizes on tens of cm, and ion motions that are thousands of times slower than electrons. This results in large simulation domains with many computational cells for solving plasma and electromagnetic equations, short time steps, and long-duration simulations. In order to reduce the computational requirements, one can develop implicit models for both fields and particle motions (e.g. divergence-preserving ADI methods), various electrostatic models, or magnetohydrodynamic models. We have performed simulations using all three of these methods and have found that fluid models have the greatest potential for giving accurate solutions while still being fast enough to perform long timescale simulations in a reasonable amount of time. We have implemented a number of fluid models with electromagnetics using the simulation tool USim and applied them to modeling the SNS H- ion source. We found that a reduced, single-fluid MHD model with an imposed magnetic field due to the rf antenna current and the confining multi-cusp field generated increased bulk plasma velocities of > 200 m/s in the region of the antenna where ablation is often observed in the SNS source. We report here on comparisons of simulated plasma parameters and code performance using more accurate physical models, such as two-temperature extended MHD models, for both a related benchmark system describing an inductively coupled plasma reactor, and for the SNS ion source. We also present results from scaling studies for mesh generation and solvers in the USim simulation code.

  1. A time accurate prediction of the viscous flow in a turbine stage including a rotor in motion

    NASA Astrophysics Data System (ADS)

    Shavalikul, Akamol

    In this study, the flow field in the Pennsylvania State University Axial Flow Turbine Research Facility (AFTRF) was simulated. Four sets of simulations were examined. The first two sets are for an individual NGV and for an individual rotor. The last two sets use a multiple-reference-frames approach for a complete turbine stage with two different interface models: a steady circumferential-average approach called a mixing plane model, and a time accurate flow simulation approach called a sliding mesh model. The NGV passage flow field was simulated using a three-dimensional Reynolds-Averaged Navier-Stokes (RANS) finite volume solver with a standard k-ε turbulence model. The mean flow distributions on the NGV surfaces and endwall surfaces were computed. The numerical solutions indicate that two passage vortices begin to be observed approximately at the mid axial chord of the NGV suction surface. The first vortex is a casing passage vortex which occurs at the corner formed by the NGV suction surface and the casing. This vortex is created by the interaction of the passage flow and the radially inward flow, while the second vortex, the hub passage vortex, is observed near the hub. These two vortices become stronger towards the NGV trailing edge. By comparing the results from the X/Cx = 1.025 plane and the X/Cx = 1.09 plane, it can be concluded that the NGV wake decays rapidly within a short axial distance downstream of the NGV. For the rotor, a set of simulations was carried out to examine the flow fields associated with different pressure side tip extension configurations, which are designed to reduce the tip leakage flow. The simulation results show that significant reductions in tip leakage mass flow rate and aerodynamic loss are possible by using suitable tip platform extensions located near the pressure side corner of the blade tip. The computations used realistic turbine rotor inlet flow conditions in a linear cascade arrangement in the relative frame of reference; the boundary conditions for the computations were obtained from inlet flow measurements performed in the AFTRF. A complete turbine stage, including an NGV and a rotor row, was simulated using the RANS solver with the SST k-ω turbulence model, with two different computational models for the interface between the rotating component and the stationary component. The first interface model, the circumferentially averaged mixing plane model, was solved for a fixed position of the rotor blades relative to the NGV in the stationary frame of reference. The information transferred between the NGV and rotor domains is obtained by averaging across the entire interface. The quasi-steady state flow characteristics of the AFTRF can be obtained from this interface model. After the model was validated with the existing experimental data, it was used to investigate not only the flow characteristics in the turbine stage but also the effects of using pressure side rotor tip extensions. The tip leakage flow fields simulated from this model and from the linear cascade model show similar trends. More detailed understanding of the unsteady characteristics of the turbine flow field can be obtained using the second type of interface model, the time accurate sliding mesh model. The potential flow interactions, wake characteristics, their effects on secondary flow formation, and the wake mixing process in a rotor passage were examined using this model.
Furthermore, turbine stage efficiency and the effects of tip clearance height on stage efficiency were also investigated. A comparison between the results from the circumferential average model and the time accurate flow model is presented. It was found that the circumferential average model cannot accurately simulate the flow interaction characteristics on the interface plane between the NGV trailing edge and the rotor leading edge. However, the circumferential average model does give accurate flow characteristics in the NGV domain and the rotor domain, with lower computational time and memory requirements. In contrast, the time accurate flow simulation can predict all unsteady flow characteristics occurring in the turbine stage, but with high computational resource requirements. (Abstract shortened by UMI.)
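
    The mixing plane coupling described above amounts to circumferentially averaging the interface quantities within radial bands and handing the averaged radial profile to the downstream domain, which is why unsteady rotor-stator interaction is lost there. A minimal sketch of that averaging step in Python/NumPy (hypothetical sample arrays, not the solver's actual implementation):

        import numpy as np

        def mixing_plane_average(r, q, n_bands=32):
            """Circumferentially average an interface quantity q sampled at radii r.

            Returns band-center radii and the averaged radial profile that a
            mixing-plane interface would hand to the downstream (rotor) domain.
            """
            edges = np.linspace(r.min(), r.max(), n_bands + 1)
            centers = 0.5 * (edges[:-1] + edges[1:])
            profile = np.empty(n_bands)
            for k in range(n_bands):
                mask = (r >= edges[k]) & (r < edges[k + 1])
                profile[k] = q[mask].mean() if mask.any() else np.nan
            return centers, profile

        # Example: total pressure samples on the NGV-exit interface plane
        rng = np.random.default_rng(0)
        r = rng.uniform(0.4, 0.6, 5000)             # sample radii [m]
        theta = rng.uniform(0.0, 2 * np.pi, 5000)   # circumferential positions
        p0 = 1.0e5 + 2.0e3 * np.cos(23 * theta)     # wake-like circumferential variation
        rc, p0_avg = mixing_plane_average(r, p0)    # wakes mix out, band by band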

  2. Krylov subspace methods for computing hydrodynamic interactions in Brownian dynamics simulations

    PubMed Central

    Ando, Tadashi; Chow, Edmond; Saad, Yousef; Skolnick, Jeffrey

    2012-01-01

    Hydrodynamic interactions play an important role in the dynamics of macromolecules. The most common way to take into account hydrodynamic effects in molecular simulations is in the context of a Brownian dynamics simulation. However, the calculation of correlated Brownian noise vectors in these simulations is computationally very demanding and alternative methods are desirable. This paper studies methods based on Krylov subspaces for computing Brownian noise vectors. These methods are related to Chebyshev polynomial approximations, but do not require eigenvalue estimates. We show that only low accuracy is required in the Brownian noise vectors to accurately compute values of dynamic and static properties of polymer and monodisperse suspension models. With this level of accuracy, the computational time of Krylov subspace methods scales very nearly as O(N²) for the number of particles N up to 10 000, which was the limit tested. The performance of the Krylov subspace methods, especially the “block” version, is slightly better than that of the Chebyshev method, even without taking into account the additional cost of eigenvalue estimates required by the latter. Furthermore, at N = 10 000, the Krylov subspace method is 13 times faster than the exact Cholesky method. Thus, Krylov subspace methods are recommended for performing large-scale Brownian dynamics simulations with hydrodynamic interactions. PMID:22897254
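
    The kernel these methods accelerate is forming y = D^(1/2) z for the diffusion tensor D and a standard normal vector z, so that y has covariance D. A minimal sketch of the (non-block) Lanczos variant, assuming D is available as a dense symmetric positive-definite NumPy array and omitting the reorthogonalization and convergence control a production code would need:

        import numpy as np

        def lanczos_sqrt_mv(D, z, m=30):
            """Approximate D**(1/2) @ z with m Lanczos iterations (D symmetric PD)."""
            n = len(z)
            V = np.zeros((n, m))
            alpha, beta = np.zeros(m), np.zeros(m - 1)
            beta0 = np.linalg.norm(z)
            V[:, 0] = z / beta0
            for j in range(m):
                w = D @ V[:, j]
                if j > 0:
                    w -= beta[j - 1] * V[:, j - 1]
                alpha[j] = V[:, j] @ w
                w -= alpha[j] * V[:, j]
                if j < m - 1:
                    beta[j] = np.linalg.norm(w)   # assumed nonzero in this sketch
                    V[:, j + 1] = w / beta[j]
            # f(D) z ~ ||z|| * V @ f(T) @ e1, with T the small tridiagonal matrix
            T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
            evals, evecs = np.linalg.eigh(T)
            sqrtT_e1 = evecs @ (np.sqrt(np.maximum(evals, 0)) * evecs[0])
            return beta0 * (V @ sqrtT_e1)

        # Correlated Brownian noise: y ~ N(0, D) for a stand-in SPD "diffusion tensor"
        rng = np.random.default_rng(1)
        n = 300
        A = rng.standard_normal((n, n))
        D = A @ A.T / n + np.eye(n)
        y = lanczos_sqrt_mv(D, rng.standard_normal(n))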

  3. Suited versus unsuited analog astronaut performance using the Aouda.X space suit simulator: the DELTA experiment of MARS2013.

    PubMed

    Soucek, Alexander; Ostkamp, Lutz; Paternesi, Roberta

    2015-04-01

    Space suit simulators are used for extravehicular activities (EVAs) during Mars analog missions. Flight planning and EVA productivity require accurate time estimates of activities to be performed with such simulators, such as experiment execution or traverse walking. We present a benchmarking methodology for the Aouda.X space suit simulator of the Austrian Space Forum. By measuring and comparing the times needed to perform a set of 10 test activities with and without Aouda.X, an average time delay was derived in the form of a multiplicative factor. This statistical value (a second-over-second time ratio) is 1.30 and shows that operations in Aouda.X take on average a third longer than the same operations without the suit. We also show that activities predominantly requiring fine motor skills are associated with larger time delays (between 1.17 and 1.59) than those requiring short-distance locomotion or short-term muscle strain (between 1.10 and 1.16). The results of the DELTA experiment performed during the MARS2013 field mission increase analog mission planning reliability and thus EVA efficiency and productivity when using Aouda.X.
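
    The reported factor is a second-over-second ratio, i.e., total suited seconds divided by total unsuited seconds over the activity set, which weights long activities more heavily than a mean of per-activity ratios would. A toy illustration of the difference (hypothetical timings, not the experiment's data):

        # Hypothetical per-activity durations in seconds (suited vs. unsuited)
        suited   = [420, 95, 310, 150, 60]
        unsuited = [300, 80, 240, 120, 50]

        second_over_second = sum(suited) / sum(unsuited)   # time-weighted factor
        mean_of_ratios = sum(s / u for s, u in zip(suited, unsuited)) / len(suited)

        print(f"second-over-second ratio: {second_over_second:.2f}")
        print(f"mean of per-activity ratios: {mean_of_ratios:.2f}")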

  4. Simulations of eddy kinetic energy transport in barotropic turbulence

    NASA Astrophysics Data System (ADS)

    Grooms, Ian

    2017-11-01

    Eddy energy transport in rotating two-dimensional turbulence is investigated using numerical simulation. Stochastic forcing is used to generate an inhomogeneous field of turbulence and the time-mean energy profile is diagnosed. An advective-diffusive model for the transport is fit to the simulation data by requiring the model to accurately predict the observed time-mean energy distribution. Isotropic harmonic diffusion of energy is found to be an accurate model in the case of uniform, solid-body background rotation (the f plane), with a diffusivity that scales reasonably well with a mixing-length law κ ∝ V ℓ, where V and ℓ are characteristic eddy velocity and length scales. Passive tracer dynamics are added and it is found that the energy diffusivity is 75% of the tracer diffusivity. The addition of a differential background rotation with constant vorticity gradient β leads to significant changes to the energy transport. The eddies generate and interact with a mean flow that advects the eddy energy. Mean advection plus anisotropic diffusion (with reduced diffusivity in the direction of the background vorticity gradient) is moderately accurate for flows with scale separation between the eddies and mean flow, but anisotropic diffusion becomes a much less accurate model of the transport when scale separation breaks down. Finally, it is observed that the time-mean eddy energy does not look like the actual eddy energy distribution at any instant of time. In the future, stochastic models of the eddy energy transport may prove more useful than models of the mean transport for predicting realistic eddy energy distributions.
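
    On the f plane, diagnosing the isotropic diffusivity amounts to fitting the steady balance κ∇²E + F ≈ 0 between the time-mean eddy energy E and the net energy source F. A minimal least-squares version of such a fit on a 1D periodic profile (synthetic data; the study's fit is performed on the full 2D simulation output):

        import numpy as np

        def fit_isotropic_diffusivity(E, F, dy):
            """Least-squares kappa such that kappa * d2E/dy2 + F ~ 0 (periodic)."""
            lap = (np.roll(E, -1) - 2 * E + np.roll(E, 1)) / dy**2
            return -(lap @ F) / (lap @ lap)

        # Synthetic test: build E consistent with a known kappa, then recover it
        n = 256
        y = np.linspace(0, 2 * np.pi, n, endpoint=False)
        dy = y[1] - y[0]
        F = np.exp(-((y - np.pi) ** 2) / 0.1)
        F -= F.mean()                              # net source must integrate to zero
        kappa_true = 0.05
        k2 = (2 * np.pi * np.fft.fftfreq(n, dy)) ** 2
        k2[0] = 1.0                                # dummy value; mean mode zeroed below
        Ehat = np.fft.fft(F) / (kappa_true * k2)
        Ehat[0] = 0.0
        E = np.fft.ifft(Ehat).real
        print(fit_isotropic_diffusivity(E, F, dy))   # ~= 0.05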

  5. Self-consistent core-pedestal transport simulations with neural network accelerated models

    DOE PAGES

    Meneghini, Orso; Smith, Sterling P.; Snyder, Philip B.; ...

    2017-07-12

    Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. Finally, the NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.
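
    The surrogate itself is plain multivariate regression: train a network on a database of (plasma parameters → model outputs) pairs generated by the expensive theory-based code, then evaluate the network in its place. A generic sketch with scikit-learn on synthetic stand-in data (real TGLF/EPED1 inputs and outputs are higher-dimensional):

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(2)
        X = rng.uniform(-1, 1, (5000, 6))                    # stand-in plasma parameters
        y = np.tanh(X[:, 0] * X[:, 1]) + 0.1 * X[:, 2] ** 2  # stand-in transport flux

        scaler = StandardScaler().fit(X)
        surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                                 random_state=0).fit(scaler.transform(X), y)

        # Evaluation costs microseconds per point instead of the seconds-to-hours
        # of the underlying theory-based model.
        x_new = rng.uniform(-1, 1, (1, 6))
        flux = surrogate.predict(scaler.transform(x_new))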

  6. Numerical Simulation of Molten Flow in Directed Energy Deposition Using an Iterative Geometry Technique

    NASA Astrophysics Data System (ADS)

    Vincent, Timothy J.; Rumpfkeil, Markus P.; Chaudhary, Anil

    2018-03-01

    The complex, multi-faceted physics of laser-based additive metals processing tends to demand high-fidelity models and costly simulation tools to provide predictions accurate enough to aid in selecting process parameters. Of particular difficulty is the accurate determination of melt pool shape and size, which are useful for predicting lack-of-fusion, as this typically requires an adequate treatment of thermal and fluid flow. In this article we describe a novel numerical simulation tool which aims to achieve a balance between accuracy and cost. This is accomplished by making simplifying assumptions regarding the behavior of the gas-liquid interface for processes with a moderate energy density, such as Laser Engineered Net Shaping (LENS). The details of the implementation, which is based on the solver simpleFoam of the well-known software suite OpenFOAM, are given here and the tool is verified and validated for a LENS process involving Ti-6Al-4V. The results indicate that the new tool predicts width and height of a deposited track to engineering accuracy levels.

  7. Self-consistent core-pedestal transport simulations with neural network accelerated models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meneghini, Orso; Smith, Sterling P.; Snyder, Philip B.

    Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. Finally, the NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.

  8. Numerical Simulation of Molten Flow in Directed Energy Deposition Using an Iterative Geometry Technique

    NASA Astrophysics Data System (ADS)

    Vincent, Timothy J.; Rumpfkeil, Markus P.; Chaudhary, Anil

    2018-06-01

    The complex, multi-faceted physics of laser-based additive metals processing tends to demand high-fidelity models and costly simulation tools to provide predictions accurate enough to aid in selecting process parameters. Of particular difficulty is the accurate determination of melt pool shape and size, which are useful for predicting lack-of-fusion, as this typically requires an adequate treatment of thermal and fluid flow. In this article we describe a novel numerical simulation tool which aims to achieve a balance between accuracy and cost. This is accomplished by making simplifying assumptions regarding the behavior of the gas-liquid interface for processes with a moderate energy density, such as Laser Engineered Net Shaping (LENS). The details of the implementation, which is based on the solver simpleFoam of the well-known software suite OpenFOAM, are given here and the tool is verified and validated for a LENS process involving Ti-6Al-4V. The results indicate that the new tool predicts width and height of a deposited track to engineering accuracy levels.

  9. Self-consistent core-pedestal transport simulations with neural network accelerated models

    NASA Astrophysics Data System (ADS)

    Meneghini, O.; Smith, S. P.; Snyder, P. B.; Staebler, G. M.; Candy, J.; Belli, E.; Lao, L.; Kostuk, M.; Luce, T.; Luda, T.; Park, J. M.; Poli, F.

    2017-08-01

    Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. The NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.

  10. Identification of a thermo-elasto-viscoplastic behavior law for the simulation of thermoforming of high impact polystyrene

    NASA Astrophysics Data System (ADS)

    Atmani, O.; Abbès, B.; Abbès, F.; Li, Y. M.; Batkam, S.

    2018-05-01

    Thermoforming of high impact polystyrene (HIPS) sheets requires technical knowledge of material behavior, mold type, mold material, and process variables. Accurate thermoforming simulations are needed in the optimization process, and determining the behavior of the material under thermoforming conditions is one of the key requirements for an accurate simulation. The aim of this work is to identify the thermomechanical behavior of HIPS under thermoforming conditions. HIPS behavior is highly dependent on temperature and strain rate. In order to reproduce the behavior of such a material, a thermo-elasto-viscoplastic constitutive law was implemented in the finite element code ABAQUS. The proposed model parameters are considered to be thermo-dependent. The strain-rate dependence is introduced using a Prony series. Tensile tests were carried out at different temperatures and strain rates, and the material parameters were then identified using the NSGA-II algorithm. To validate the rheological model, experimental blowing tests were carried out on a thermoforming pilot machine. To compare the numerical results with the experimental ones, the thickness distribution and the bubble shape were investigated.
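
    A Prony series represents relaxation behavior as a sum of decaying exponentials, G(t) = G_inf + Σ_i g_i exp(-t/τ_i). A minimal evaluation sketch (illustrative coefficients, not the identified HIPS parameters):

        import numpy as np

        def prony_relaxation(t, g_inf, g, tau):
            """Relaxation modulus G(t) = g_inf + sum_i g[i] * exp(-t / tau[i])."""
            t = np.asarray(t, dtype=float)
            return g_inf + sum(gi * np.exp(-t / ti) for gi, ti in zip(g, tau))

        # Illustrative three-term series (NOT the identified HIPS values)
        t = np.logspace(-3, 2, 200)   # time [s]
        G = prony_relaxation(t, g_inf=0.5, g=[2.0, 1.0, 0.5], tau=[0.01, 0.1, 1.0])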

  11. Wind Farm LES Simulations Using an Overset Methodology

    NASA Astrophysics Data System (ADS)

    Ananthan, Shreyas; Yellapantula, Shashank

    2017-11-01

    Accurate simulation of wind farm wakes under realistic atmospheric inflow conditions and complex terrain requires modeling a wide range of length and time scales. The computational domain can span several kilometers while requiring mesh resolutions of O(10⁻⁶) m to adequately resolve the boundary layer on the blade surface. Overset mesh methodology offers an attractive option to address this disparate range of length scales; it allows embedding body-conforming meshes around turbine geometries within nested wake-capturing meshes of the varying resolutions necessary to accurately model the inflow turbulence and the resulting wake structures. Dynamic overset hole-cutting algorithms permit the relative mesh motion that allows this nested mesh structure to track unsteady inflow direction changes, turbine control changes (yaw and pitch), and wake propagation. An LES model with overset mesh for localized mesh refinement is used to analyze wind farm wakes and performance, and is compared with local mesh refinement using non-conformal (hanging node) unstructured meshes. Turbine structures will be modeled using both actuator line approaches and fully resolved structures to test the efficacy of overset methods for wind farm applications. Exascale Computing Project (ECP), Project Number: 17-SC-20-SC, a collaborative effort of two DOE organizations - the Office of Science and the National Nuclear Security Administration.

  12. Piezoresistive Cantilever Performance—Part I: Analytical Model for Sensitivity

    PubMed Central

    Park, Sung-Jin; Doll, Joseph C.; Pruitt, Beth L.

    2010-01-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors. PMID:20336183

  13. Piezoresistive Cantilever Performance-Part I: Analytical Model for Sensitivity.

    PubMed

    Park, Sung-Jin; Doll, Joseph C; Pruitt, Beth L

    2010-02-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors.

  14. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that have resulted from this work. A review of computational aeroacoustics has recently been given by Lele.

  15. Annual Research Briefs

    NASA Technical Reports Server (NTRS)

    Spinks, Debra (Compiler)

    1997-01-01

    This report contains the 1997 annual progress reports of the research fellows and students supported by the Center for Turbulence Research (CTR). Titles include: Invariant modeling in large-eddy simulation of turbulence; Validation of large-eddy simulation in a plane asymmetric diffuser; Progress in large-eddy simulation of trailing-edge turbulence and aeroacoustics; Resolution requirements in large-eddy simulations of shear flows; A general theory of discrete filtering for LES in complex geometry; On the use of discrete filters for large eddy simulation; Wall models in large eddy simulation of separated flow; Perspectives for ensemble average LES; Anisotropic grid-based formulas for subgrid-scale models; Some modeling requirements for wall models in large eddy simulation; Numerical simulation of 3D turbulent boundary layers using the V2F model; Accurate modeling of impinging jet heat transfer; Application of turbulence models to high-lift airfoils; Advances in structure-based turbulence modeling; Incorporating realistic chemistry into direct numerical simulations of turbulent non-premixed combustion; Effects of small-scale structure on turbulent mixing; Turbulent premixed combustion in the laminar flamelet and the thin reaction zone regime; Large eddy simulation of combustion instabilities in turbulent premixed burners; On the generation of vorticity at a free-surface; Active control of turbulent channel flow; A generalized framework for robust control in fluid mechanics; Combined immersed-boundary/B-spline methods for simulations of flow in complex geometries; and DNS of shock boundary-layer interaction - preliminary results for compression ramp flow.

  16. Accurate ensemble molecular dynamics binding free energy ranking of multidrug-resistant HIV-1 proteases.

    PubMed

    Sadiq, S Kashif; Wright, David W; Kenway, Owain A; Coveney, Peter V

    2010-05-24

    Accurate calculation of important thermodynamic properties, such as macromolecular binding free energies, is one of the principal goals of molecular dynamics simulations. However, a single long simulation frequently produces incorrectly converged quantitative results due to inadequate sampling of conformational space in a feasible wall-clock time. Multiple short (ensemble) simulations have been shown to explore conformational space more effectively than single long simulations, but the two methods have not yet been thermodynamically compared. Here we show that, for end-state binding free energy determination methods, ensemble simulations exhibit significantly enhanced thermodynamic sampling over single long simulations and result in accurate and converged relative binding free energies that are reproducible to within 0.5 kcal/mol. Completely correct ranking is obtained for six HIV-1 protease variants bound to lopinavir with a correlation coefficient of 0.89 and a mean relative deviation from experiment of 0.9 kcal/mol. Multidrug resistance to lopinavir is enthalpically driven and increases through a decrease in the protein-ligand van der Waals interaction, principally due to the V82A/I84V mutation, and an increase in net electrostatic repulsion due to water-mediated disruption of protein-ligand interactions in the catalytic region. Furthermore, we correctly rank, to within 1 kcal/mol of experiment, the substantially increased chemical potency of lopinavir binding to the wild-type protease compared to saquinavir and show that lopinavir takes advantage of a decreased net electrostatic repulsion to confer enhanced binding. Our approach depends on the combined use of petascale computing resources and an automated simulation workflow to attain the required level of sampling and turnaround time; results can be obtained in as little as three days. This level of performance promotes the integration of such methodology with clinical decision support systems for the optimization of patient-specific therapy.
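
    The headline statistics (ranking, correlation coefficient, and mean deviation from experiment) are simple to compute for any set of predicted versus experimental binding free energies. A sketch with hypothetical ΔG arrays in kcal/mol:

        import numpy as np

        # Hypothetical predicted vs. experimental binding free energies (kcal/mol)
        dg_sim = np.array([-11.2, -10.1, -9.4, -8.8, -8.1, -7.5])
        dg_exp = np.array([-10.8, -10.3, -9.2, -9.0, -7.9, -7.2])

        r = np.corrcoef(dg_sim, dg_exp)[0, 1]      # Pearson correlation coefficient
        mad = np.abs(dg_sim - dg_exp).mean()       # mean absolute deviation
        ranking_correct = np.array_equal(np.argsort(dg_sim), np.argsort(dg_exp))
        print(f"r = {r:.2f}, mean |dev| = {mad:.2f} kcal/mol, ranking ok: {ranking_correct}")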

  17. Analysis of non-destructive current simulators of flux compression generators.

    PubMed

    O'Connor, K A; Curry, R D

    2014-06-01

    Development and evaluation of power conditioning systems and high power microwave components often used with flux compression generators (FCGs) require repeated testing and characterization. In an effort to minimize the cost and time required for testing with explosive generators, non-destructive simulators of an FCG's output current have been developed. Flux compression generators and simulators of FCGs are unique pulsed power sources in that the rate at which the current rises increases quasi-exponentially. Accurately reproducing the quasi-exponential current waveform of an FCG can be important in designing electroexplosive opening switches and other power conditioning components that are dependent on the integral of current action and the rate of energy dissipation. Three versions of FCG simulators have been developed that include an inductive network with decreasing impedance in time. A primary difference between these simulators is the voltage source driving them. It is shown that a capacitor-inductor-capacitor network driving a constant or decreasing inductive load can produce the desired high-order derivatives of the load current to replicate a quasi-exponential waveform. The operation of the FCG simulators is reviewed and described mathematically for the first time to aid in the design of new simulators. Experimental and calculated results of two recent simulators are reported with recommendations for future designs.
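
    Although the simulators described here use capacitor-inductor-capacitor networks to shape the higher derivatives, the basic quasi-exponential mechanism can be illustrated with a source capacitor discharging into an inductance that decreases in time: from v = d(Li)/dt, a negative dL/dt feeds back positively on the current. A hedged sketch with illustrative component values (not a published simulator design):

        import numpy as np
        from scipy.integrate import solve_ivp

        C = 100e-6                                      # source capacitance [F]
        def L(t):    return 10e-6 * np.exp(-t / 50e-6)  # decreasing inductance [H]
        def Ldot(t): return -L(t) / 50e-6

        def rhs(t, y):
            v, i = y                                    # capacitor voltage, load current
            # KVL: v = L di/dt + (dL/dt) i;  capacitor: C dv/dt = -i
            return [-i / C, (v - Ldot(t) * i) / L(t)]

        sol = solve_ivp(rhs, [0, 100e-6], [10e3, 0.0], max_step=1e-7)
        # sol.y[1] rises quasi-exponentially as the inductance collapses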

  18. Numerical Simulation of a High Mach Number Jet Flow

    NASA Technical Reports Server (NTRS)

    Hayder, M. Ehtesham; Turkel, Eli; Mankbadi, Reda R.

    1993-01-01

    The recent efforts to develop accurate numerical schemes for transitional and turbulent flows are motivated, among other factors, by the need for accurate prediction of flow noise. The success of developing a high speed civil transport (HSCT) plane is contingent upon our understanding and suppression of the jet exhaust noise. The radiated sound can be obtained directly by solving the full (time-dependent) compressible Navier-Stokes equations; however, this requires computational storage that is beyond currently available machines. This difficulty can be overcome by limiting the solution domain to the near field, where the jet is nonlinear, and then using an acoustic analogy (e.g., Lighthill) to relate the far-field noise to the near-field sources. The latter requires obtaining the time-dependent flow field. The other difficulty in aeroacoustic computations is that at high Reynolds numbers the turbulent flow has a large range of scales. Direct numerical simulations (DNS) cannot capture all the scales of motion at the high Reynolds numbers of technological interest. However, it is believed that the large scale structure is more efficient than the small-scale structure in radiating noise. Thus, one can model the small scales and calculate the acoustically active scales. Since the large scale structure in the noise-producing initial region of the jet is wavelike in nature, the net radiated sound is the net cancellation after integration over space. As such, aeroacoustic computations are highly sensitive to errors in computing the sound sources, and it is therefore essential to use a high-order numerical scheme to predict the flow field. The present paper presents the first step in an ongoing effort to predict jet noise; the emphasis here is on accurate prediction of the unsteady flow field. We solve the full time-dependent Navier-Stokes equations by a high order finite difference method. Time accurate spatial simulations of both plane and axisymmetric jets are presented. Jet Mach numbers of 1.5 and 2.1 are considered, and the Reynolds number in the simulations was about one million. Our numerical model is based on the 2-4 scheme of Gottlieb and Turkel. Bayliss et al. applied the 2-4 scheme in boundary layer computations, and the scheme was also used by Ragab and Sheen to study the nonlinear development of supersonic instability waves in a mixing layer. In this study, we present two-dimensional direct simulation results for both plane and axisymmetric jets. These results are compared with linear theory predictions. The computations were made for the near nozzle exit region, and the velocity in the spanwise/azimuthal direction was assumed to be zero.
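
    The 2-4 scheme referenced here combines second-order predictor-corrector time stepping with one-sided four-point spatial sweeps whose forward/backward combination is fourth-order accurate in space. A toy version for linear advection on a periodic domain (our illustration, not the paper's solver):

        import numpy as np

        def step_24(u, c, dx, dt):
            """One 2-4 MacCormack step for u_t + c u_x = 0 (periodic domain).

            Individually the biased sweeps are low order; the predictor-corrector
            pair is 2nd-order in time and 4th-order in space when combined.
            """
            f = c * u
            dfdx_fwd = (-np.roll(f, -2) + 8 * np.roll(f, -1) - 7 * f) / (6 * dx)
            up = u - dt * dfdx_fwd                        # predictor (forward-biased)
            fp = c * up
            dfdx_bwd = (7 * fp - 8 * np.roll(fp, 1) + np.roll(fp, 2)) / (6 * dx)
            return 0.5 * (u + up - dt * dfdx_bwd)         # corrector (backward-biased)

        # Advect a Gaussian pulse once around a periodic domain
        N, c = 200, 1.0
        x = np.linspace(0, 1, N, endpoint=False)
        dx = x[1] - x[0]
        dt = 0.4 * dx / c
        u = np.exp(-200 * (x - 0.5) ** 2)
        for _ in range(int(round(1.0 / (c * dt)))):
            u = step_24(u, c, dx, dt)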

  19. Numerical Simulation of the 2004 Indian Ocean Tsunami: Accurate Flooding and drying in Banda Aceh

    NASA Astrophysics Data System (ADS)

    Cui, Haiyang; Pietrzak, Julie; Stelling, Guus; Androsov, Alexey; Harig, Sven

    2010-05-01

    The Indian Ocean tsunami of December 26, 2004 was one of the largest tsunamis in recent times and led to widespread devastation and loss of life. One of the worst hit regions was Banda Aceh, the capital of the Aceh province, located in the northern part of Sumatra, 150 km from the source of the earthquake. A German-Indonesian Tsunami Early Warning System (GITEWS) (www.gitews.de) is currently under active development, and the work presented here is carried out within the GITEWS framework. One of the aims of this project is the development of accurate models with which to simulate the propagation, flooding and drying, and run-up of a tsunami. In this context, TsunAWI, an explicit finite element model, has been developed by the Alfred Wegener Institute. However, the accurate numerical simulation of flooding and drying requires the conservation of mass and momentum, which is not possible in the current version of TsunAWI. The P1NC-P1 element guarantees mass conservation in a global sense, yet as we show here it is important to guarantee mass conservation at the local level, that is, within each individual cell. Here an unstructured grid, finite volume ocean model is presented. It is derived from the P1NC-P1 element and is shown to be mass and momentum conserving. A number of simulations are then presented, including dam break problems with flooding over both wet and dry beds, for which excellent agreement is found. Finally, we present simulations for Banda Aceh and compare the results to on-site survey data, as well as to results from the original TsunAWI code.
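
    Local mass conservation in a finite volume formulation comes from updating each cell with the difference of the fluxes through its faces, so whatever leaves one cell enters its neighbor exactly. A generic 1D flux-form sketch (not the P1NC-P1-derived scheme itself):

        import numpy as np

        def conservative_update(h, F, dt, dx):
            """h_i^{n+1} = h_i^n - dt/dx * (F_{i+1/2} - F_{i-1/2}).

            F has one entry per cell face (len(h) + 1); the telescoping flux
            differences conserve sum(h)*dx up to the boundary fluxes.
            """
            return h - dt / dx * (F[1:] - F[:-1])

        # Closed domain (zero flux at both ends): total mass is exactly conserved
        h = np.ones(100); h[40:60] = 2.0
        F = np.zeros(101)
        F[1:-1] = 0.3 * (h[:-1] - h[1:])          # simple diffusive face flux
        h_new = conservative_update(h, F, dt=0.1, dx=1.0)
        assert abs(h_new.sum() - h.sum()) < 1e-12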

  20. Modelling approaches: the case of schizophrenia.

    PubMed

    Heeg, Bart M S; Damen, Joep; Buskens, Erik; Caleo, Sue; de Charro, Frank; van Hout, Ben A

    2008-01-01

    Schizophrenia is a chronic disease characterized by periods of relative stability interrupted by acute episodes (or relapses). The course of the disease may vary considerably between patients, and patient histories show considerable inter- and even intra-individual variability. We provide a critical assessment of the advantages and disadvantages of three modelling techniques that have been used in schizophrenia: decision trees, (cohort and micro-simulation) Markov models, and discrete event simulation models. These modelling techniques are compared in terms of building time, data requirements, medico-scientific experience, simulation time, clinical representation, and their ability to deal with patient heterogeneity, the timing of events, prior events, patient interaction, interaction between co-variates, and variability (first-order uncertainty). We note that, depending on the research question, the optimal modelling approach should be selected based on the expected differences between the comparators, the number of co-variates, the number of patient subgroups, the interactions between co-variates, and simulation time. Finally, it is argued that when micro-simulation is required for the cost-effectiveness analysis of schizophrenia treatments, a discrete event simulation model is best suited to accurately capture all of the relevant interdependencies in this chronic, highly heterogeneous disease with limited long-term follow-up data.

  1. THE MIRA–TITAN UNIVERSE: PRECISION PREDICTIONS FOR DARK ENERGY SURVEYS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitmann, Katrin; Habib, Salman; Biswas, Rahul

    2016-04-01

    Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
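
    Emulation in this sense means fitting a smooth statistical interpolator over the finite set of simulated models and querying it at new parameter values. A generic sketch using Gaussian process regression in scikit-learn (synthetic stand-in design and observable; the actual emulator construction is more involved):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(3)
        # Stand-in design: 26 models in an 8D normalized cosmological parameter space
        X = rng.uniform(0, 1, (26, 8))
        y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.5 * X[:, 7]   # stand-in observable

        emu = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                       normalize_y=True).fit(X, y)
        y_pred, y_std = emu.predict(rng.uniform(0, 1, (5, 8)), return_std=True)
        # y_std flags regions where adding new simulation runs would help most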

  2. The mira-titan universe. Precision predictions for dark energy surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitmann, Katrin; Bingham, Derek; Lawrence, Earl

    2016-03-28

    Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.

  3. Fast Simulation of Electromagnetic Showers in the ATLAS Calorimeter: Frozen Showers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barberio, E.; /Melbourne U.; Boudreau, J.

    2011-11-29

    One of the most time consuming processes in simulating pp interactions in the ATLAS detector at the LHC is the simulation of electromagnetic showers in the calorimeter. In order to speed up the event simulation, several parametrisation methods are available in ATLAS. In this paper we present a short description of the frozen shower technique, together with some recent benchmarks and a comparison with full simulation. The expected high rate of proton-proton collisions in the ATLAS detector at the LHC requires large samples of simulated events (Monte Carlo) to study various physics processes. A detailed simulation of particle reactions ('full simulation') in the ATLAS detector is based on GEANT4 and is very accurate. However, due to the complexity of the detector, the high particle multiplicity, and GEANT4 itself, the average CPU time spent to simulate a typical QCD event in a pp collision is 20 or more minutes on modern computers. During detector simulation the largest share of time is spent in the calorimeters (up to 70%), most of which is required for electromagnetic particles in the electromagnetic (EM) part of the calorimeters. This is the motivation for fast simulation approaches which reduce the simulation time without affecting the accuracy. Several of the fast simulation methods available within the ATLAS simulation framework (the standard Athena based simulation program) are discussed here, with the focus on the novel frozen shower library (FS) technique. The results obtained with FS are presented as well.

  4. SimBA: simulation algorithm to fit extant-population distributions.

    PubMed

    Parida, Laxmi; Haiminen, Niina

    2015-03-14

    Simulation of populations with specified characteristics such as allele frequencies, linkage disequilibrium, etc., is an integral component of many studies, including in-silico breeding optimization. Since the accuracy and sensitivity of population simulation are critical to the quality of the output of the applications that use them, accurate algorithms are required to provide a strong foundation to the methods in these studies. In this paper we present SimBA (Simulation using Best-fit Algorithm), a non-generative approach based on a combination of stochastic techniques and discrete methods. We optimize a hill climbing algorithm and extend the framework to include multiple subpopulation structures. Additionally, we show that SimBA is very sensitive to the input specifications, i.e., very similar but distinct input characteristics result in distinct outputs with high fidelity to the specified distributions. This property of the simulation is not explicitly modeled or studied by previous methods. We show that SimBA outperforms the existing population simulation methods, both in terms of accuracy as well as time-efficiency. Not only does it construct populations that meet the input specifications more stringently than other published methods, SimBA is also easy to use. It does not require explicit parameter adaptations or calibrations. Also, it can work with input specified as distributions, without an exemplar matrix or population as required by some methods. SimBA is available at http://researcher.ibm.com/project/5669 .
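
    The hill climbing core is easy to illustrate: starting from a random genotype matrix, accept single-entry flips only when they reduce the distance to the target statistics. A toy version fitting allele frequencies alone (SimBA itself also fits linkage disequilibrium and subpopulation structure):

        import numpy as np

        def hill_climb(target_freq, n_indiv, n_iter=20000, seed=0):
            """Fit a binary population matrix to target per-site allele frequencies."""
            rng = np.random.default_rng(seed)
            n_sites = len(target_freq)
            pop = rng.integers(0, 2, (n_indiv, n_sites))

            def err(p):
                return np.sum((p.mean(axis=0) - target_freq) ** 2)

            best = err(pop)
            for _ in range(n_iter):
                i, j = rng.integers(n_indiv), rng.integers(n_sites)
                pop[i, j] ^= 1                 # propose a single-entry flip
                e = err(pop)
                if e < best:
                    best = e                   # accept the improvement
                else:
                    pop[i, j] ^= 1             # revert
            return pop, best

        target = np.array([0.1, 0.25, 0.5, 0.7, 0.9])
        pop, residual = hill_climb(target, n_indiv=200)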

  5. Computational Aerothermodynamics in Aeroassist Applications

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2001-01-01

    Aeroassisted planetary entry uses atmospheric drag to decelerate spacecraft from super-orbital to orbital or suborbital velocities. Numerical simulation of flow fields surrounding these spacecraft during hypersonic atmospheric entry is required to define aerothermal loads. The severe compression in the shock layer in front of the vehicle and subsequent, rapid expansion into the wake are characterized by high temperature, thermo-chemical nonequilibrium processes. Implicit algorithms required for efficient, stable computation of the governing equations involving disparate time scales of convection, diffusion, chemical reactions, and thermal relaxation are discussed. Robust point-implicit strategies are utilized in the initialization phase; less robust but more efficient line-implicit strategies are applied in the endgame. Applications to ballutes (balloon-like decelerators) in the atmospheres of Venus, Mars, Titan, Saturn, and Neptune and a Mars Sample Return Orbiter (MSRO) are featured. Examples are discussed where time-accurate simulation is required to achieve a steady-state solution.

  6. An experimental method for the assessment of color simulation tools.

    PubMed

    Lillo, Julio; Alvaro, Leticia; Moreira, Humberto

    2014-07-22

    The Simulcheck method for evaluating the accuracy of color simulation tools in relation to dichromats is described and used to test three color simulation tools: Variantor, Coblis, and Vischeck. A total of 10 dichromats (five protanopes, five deuteranopes) and 10 normal trichromats participated in the current study. Simulcheck includes two psychophysical tasks: the Pseudoachromatic Stimuli Identification task and the Minimum Achromatic Contrast task. The Pseudoachromatic Stimuli Identification task allows determination of the two chromatic angles (h(uv) values) that generate a minimum response in the yellow–blue opponent mechanism and, consequently, pseudoachromatic stimuli (greens or reds). The Minimum Achromatic Contrast task requires the selection of the gray background that produces minimum contrast (near zero change in the achromatic mechanism) for each pseudoachromatic stimulus selected in the previous task (L(R) values). Results showed important differences in the colorimetric transformations performed by the three evaluated simulation tools and their accuracy levels. Vischeck simulation accurately implemented the algorithm of Brettel, Viénot, and Mollon (1997). Only Vischeck appeared accurate (similarity in h(uv) and L(R) values between real and simulated dichromats) and, consequently, could render reliable color selections. It is concluded that Simulcheck is a consistent method because it provided an equivalent pattern of results for h(uv) and L(R) values irrespective of the stimulus set used to evaluate a simulation tool. Simulcheck was also considered valid because real dichromats provided the expected h(uv) and L(R) values when performing the two psychophysical tasks included in this method. © 2014 ARVO.

  7. Web interface for Brownian dynamics simulation of ion transport and its applications to beta-barrel pores.

    PubMed

    Lee, Kyu Il; Jo, Sunhwan; Rui, Huan; Egwolf, Bernhard; Roux, Benoît; Pastor, Richard W; Im, Wonpil

    2012-01-30

    Brownian dynamics (BD) based on an accurate potential of mean force is an efficient and accurate method for simulating ion transport through wide ion channels. Here, a web-based graphical user interface (GUI) is presented for carrying out grand canonical Monte Carlo (GCMC) BD simulations of channel proteins: http://www.charmm-gui.org/input/gcmcbd. The webserver is designed to help users avoid most of the technical difficulties and issues encountered in setting up and simulating complex pore systems. GCMC/BD simulation results for three proteins, the voltage dependent anion channel (VDAC), α-Hemolysin (α-HL), and the protective antigen pore of the anthrax toxin (PA), are presented to illustrate the system setup, input preparation, and typical output (conductance, ion density profile, ion selectivity, and ion asymmetry). Two models for the input diffusion constants for potassium and chloride ions in the pore are compared: scaling of the bulk diffusion constants by 0.5, as deduced from previous all-atom molecular dynamics simulations of VDAC, and a hydrodynamics based model (HD) of diffusion through a tube. The HD model yields excellent agreement with experimental conductances for VDAC and α-HL, while scaling bulk diffusion constants by 0.5 leads to underestimates of 10-20%. For PA, simulated ion conduction values overestimate experimental values by a factor of 1.5-7 (depending on His protonation state and the transmembrane potential), implying that the currently available computational model of this protein requires further structural refinement. Copyright © 2011 Wiley Periodicals, Inc.

  8. Assessment of a hybrid finite element and finite volume code for turbulent incompressible flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Yidong, E-mail: yidong.xia@inl.gov; Wang, Chuanjin; Luo, Hong

    Hydra-TH is a hybrid finite-element/finite-volume incompressible/low-Mach flow simulation code based on the Hydra multiphysics toolkit being developed and used for thermal-hydraulics applications. In the present work, a suite of verification and validation (V&V) test problems for Hydra-TH was defined to meet the design requirements of the Consortium for Advanced Simulation of Light Water Reactors (CASL). The intent of this test problem suite is to provide baseline comparison data that demonstrates the performance of the Hydra-TH solution methods. The simulation problems vary in complexity from laminar to turbulent flows. A set of RANS and LES turbulence models were used in the simulation of four classical test problems. Numerical results obtained by Hydra-TH agreed well with either the available analytical solution or experimental data, indicating the verified and validated implementation of these turbulence models in Hydra-TH. Where possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and suggest best practices when using the Hydra-TH code. Highlights:
    • We performed a comprehensive study to verify and validate the turbulence models in Hydra-TH.
    • Hydra-TH delivers 2nd-order grid convergence for the incompressible Navier–Stokes equations.
    • Hydra-TH can accurately simulate laminar boundary layers.
    • Hydra-TH can accurately simulate turbulent boundary layers with RANS turbulence models.
    • Hydra-TH delivers high-fidelity LES capability for simulating turbulent flows in confined spaces.

  9. Modeling of molecular nitrogen collisions and dissociation processes for direct simulation Monte Carlo.

    PubMed

    Parsons, Neal; Levin, Deborah A; van Duin, Adri C T; Zhu, Tong

    2014-12-21

    The Direct Simulation Monte Carlo (DSMC) method typically used for simulating hypersonic Earth re-entry flows requires accurate total collision cross sections and reaction probabilities. However, total cross sections are often determined from extrapolations of relatively low-temperature viscosity data, so their reliability is unknown for the high temperatures observed in hypersonic flows. Existing DSMC reaction models accurately reproduce experimental equilibrium reaction rates, but the applicability of these rates to the strong thermal nonequilibrium observed in hypersonic shocks is unknown. For hypersonic flows, these modeling issues are particularly relevant for nitrogen, the dominant species of air. To rectify this deficiency, the Molecular Dynamics/Quasi-Classical Trajectories (MD/QCT) method is used to accurately compute collision and reaction cross sections for the N2(¹Σg⁺)-N2(¹Σg⁺) collision pair for conditions expected in hypersonic shocks using a new potential energy surface developed using a ReaxFF fit to recent advanced ab initio calculations. The MD/QCT-computed reaction probabilities were found to exhibit better physical behavior and predict less dissociation than the baseline total collision energy reaction model for strong nonequilibrium conditions expected in a shock. The MD/QCT reaction model compared well with computed equilibrium reaction rates and shock-tube data. In addition, the MD/QCT-computed total cross sections were found to agree well with established variable hard sphere total cross sections.

  10. Precipitation of energetic neutral atoms and induced non-thermal escape fluxes from the Martian atmosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewkow, N. R.; Kharchenko, V.

    2014-08-01

    The precipitation of energetic neutral atoms, produced through charge exchange collisions between solar wind ions and thermal atmospheric gases, is investigated for the Martian atmosphere. Connections between the parameters of precipitating fast ions and the resulting escape fluxes, the altitude-dependent energy distributions of fast atoms, and their coefficients of reflection from the Martian atmosphere are established using accurate cross sections in Monte Carlo (MC) simulations. Distributions of secondary hot (SH) atoms and molecules, induced by precipitating particles, have been obtained and applied to computations of the non-thermal escape fluxes. A new collisional database of accurate energy- and angular-dependent cross sections, required for describing the energy-momentum transfer in collisions of precipitating particles and the production of non-thermal atmospheric atoms and molecules, is reported with analytic fitting equations. Three-dimensional MC simulations with accurate energy- and angular-dependent cross sections have been carried out to track large ensembles of energetic atoms in a time-dependent manner as they propagate into the Martian atmosphere and transfer their energy to the ambient atoms and molecules. Results of the MC simulations on the energy-deposition altitude profiles, reflection coefficients, and time-dependent atmospheric heating, obtained for isotropic hard sphere and anisotropic quantum cross sections, are compared. Atmospheric heating rates, thermalization depths, altitude profiles of production rates, energy distributions of SH atoms and molecules, and induced escape fluxes have been determined.

  11. A Systems Approach to Scalable Transportation Network Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S

    2006-01-01

    Emerging needs in transportation network modeling and simulation are raising new challenges with respect to scalability of network size and vehicular traffic intensity, speed of simulation for simulation-based optimization, and fidelity of vehicular behavior for accurate capture of event phenomena. Parallel execution is warranted to sustain the required detail, size and speed. However, few parallel simulators exist for such applications, partly due to the challenges underlying their development. Moreover, many simulators are based on time-stepped models, which can be computationally inefficient for the purposes of modeling evacuation traffic. Here an approach is presented to designing a simulator with memory and speed efficiency as the goals from the outset, and, specifically, scalability via parallel execution. The design makes use of discrete event modeling techniques as well as parallel simulation methods. Our simulator, called SCATTER, is being developed, incorporating such design considerations. Preliminary performance results are presented on benchmark road networks, showing scalability to one million vehicles simulated on one processor.
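
    The discrete event approach advances each vehicle only at the instants when something happens (e.g., entering or leaving a link) rather than sweeping every vehicle at every time step. A minimal event-driven sketch using a priority queue (hypothetical link model, far simpler than SCATTER's):

        import heapq

        # Links: id -> (length_m, free_speed_mps); a toy two-link route
        links = {0: (500.0, 15.0), 1: (800.0, 25.0)}
        route = [0, 1]

        events = []   # (time, vehicle_id, route_position)
        for vid in range(3):
            heapq.heappush(events, (vid * 2.0, vid, 0))   # staggered departures

        while events:
            t, vid, pos = heapq.heappop(events)
            if pos == len(route):
                print(f"vehicle {vid} arrived at t={t:.1f}s")
                continue
            length, speed = links[route[pos]]
            # Schedule the link-exit event; a congestion model would adjust 'speed'
            heapq.heappush(events, (t + length / speed, vid, pos + 1))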

  12. Development and evaluation of packet video schemes

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Y. C.; Hadenfeldt, A. C.

    1990-01-01

    Results reflecting the two tasks proposed for the current year, namely a feasibility study of simulating the NASA network and a study of progressive transmission schemes, are presented. The view of the NASA network, gleaned from the various technical reports made available to us, is provided, along with a brief overview of how the current simulator could be modified to accomplish the goal of simulating the NASA network. As this material would be the basis for the actual simulation, it is important to make sure that it is an accurate reflection of the requirements on the simulator. Brief descriptions of the set of progressive transmission algorithms selected for the study are included. The results available in the literature were obtained under a variety of different assumptions, not all of which are stated; as such, the only way to compare the efficiency and the implementational complexity of the various algorithms is to simulate them.

  13. Sensorless Modeling of Varying Pulse Width Modulator Resolutions in Three-Phase Induction Motors

    PubMed Central

    Marko, Matthew David; Shevach, Glenn

    2017-01-01

    A sensorless algorithm was developed to predict rotor speeds in an electric three-phase induction motor. This sensorless model requires a measurement of the stator currents and voltages, and the rotor speed is predicted accurately without any mechanical measurement of the rotor speed. A model of an electric vehicle undergoing acceleration was built, and the sensorless prediction of the simulation rotor speed was determined to be robust even in the presence of fluctuating motor parameters and significant sensor errors. Studies were conducted for varying pulse width modulator resolutions, and the sensorless model was accurate for all resolutions of sinusoidal voltage functions. PMID:28076418

  14. Sensorless Modeling of Varying Pulse Width Modulator Resolutions in Three-Phase Induction Motors.

    PubMed

    Marko, Matthew David; Shevach, Glenn

    2017-01-01

    A sensorless algorithm was developed to predict rotor speeds in an electric three-phase induction motor. This sensorless model requires a measurement of the stator currents and voltages, and the rotor speed is predicted accurately without any mechanical measurement of the rotor speed. A model of an electric vehicle undergoing acceleration was built, and the sensorless prediction of the simulation rotor speed was determined to be robust even in the presence of fluctuating motor parameters and significant sensor errors. Studies were conducted for varying pulse width modulator resolutions, and the sensorless model was accurate for all resolutions of sinusoidal voltage functions.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Degenhardt, R.; PFH, Private University of Applied Sciences Goettingen, Composite Engineering Campus Stade; Araujo, F. C. de

    European aircraft industry demands reduced development and operating costs. Structural weight reduction by exploitation of structural reserves in composite aerospace structures contributes to this aim; however, it requires accurate and experimentally validated stability analysis of real structures under realistic loading conditions. This paper presents different advances in the area of computational stability analysis of composite aerospace structures which contribute to that field. For stringer stiffened panels, the main results of the finished EU project COCOMAT are given; it investigated the exploitation of reserves in primary fibre composite fuselage structures through an accurate and reliable simulation of postbuckling and collapse. For unstiffened cylindrical composite shells, a proposal for a new design method is presented.

  16. Large Eddy Simulation of Wall-Bounded Turbulent Flows with the Lattice Boltzmann Method: Effect of Collision Model, SGS Model and Grid Resolution

    NASA Astrophysics Data System (ADS)

    Pradhan, Aniruddhe; Akhavan, Rayhaneh

    2017-11-01

    The effect of collision model, subgrid-scale model, and grid resolution in Large Eddy Simulation (LES) of wall-bounded turbulent flows with the Lattice Boltzmann Method (LBM) is investigated in turbulent channel flow. The Single Relaxation Time (SRT) collision model is found to be more accurate than the Multi-Relaxation Time (MRT) collision model in well-resolved LES. Accurate LES requires grid resolutions of Δ+ <= 4 in the near-wall region, which is comparable to the Δ+ <= 2 required in DNS. At coarser resolutions SRT becomes unstable, while MRT remains stable but gives unacceptably large errors. LES with no model gave errors comparable to the Dynamic Smagorinsky Model (DSM) and the Wall-Adapting Local Eddy-viscosity (WALE) model. The resulting errors in the prediction of the friction coefficient in turbulent channel flow at a bulk Reynolds number of 7860 (Reτ = 442) with Δ+ = 4 and no model, DSM, and WALE were 1.7%, 2.6%, and 3.1% with SRT, and 8.3%, 7.5%, and 8.7% with MRT, respectively. These results suggest that LES of wall-bounded turbulent flows with LBM requires either grid-embedding in the near-wall region, with grid resolutions comparable to DNS, or a wall model. Results of LES with grid-embedding and wall models will be discussed.
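
    The SRT (BGK) collision that performs best here relaxes all discrete distributions toward a local equilibrium at a single rate 1/τ: f ← f − (f − f_eq)/τ. A self-contained D2Q9 collision-step sketch (illustrative only, not the study's code):

        import numpy as np

        # D2Q9 lattice: discrete velocities and weights
        c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
        w = np.array([4/9] + [1/9]*4 + [1/36]*4)

        def equilibrium(rho, u):
            """Second-order truncated Maxwell-Boltzmann equilibrium, 9 directions."""
            cu = np.einsum('qd,xyd->qxy', c, u)            # c_q . u at each node
            usq = np.einsum('xyd,xyd->xy', u, u)
            return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

        def srt_collide(f, tau):
            """Single-relaxation-time (BGK) collision: relax f toward equilibrium."""
            rho = f.sum(axis=0)
            u = np.einsum('qd,qxy->xyd', c, f) / rho[..., None]
            return f - (f - equilibrium(rho, u)) / tau

        # 32x32 lattice initialized at rest, then one collision step
        f = equilibrium(np.ones((32, 32)), np.zeros((32, 32, 2)))
        f = srt_collide(f, tau=0.6)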

  17. Joint Spatial-Spectral Reconstruction and k-t Spirals for Accelerated 2D Spatial/1D Spectral Imaging of 13C Dynamics

    PubMed Central

    Gordon, Jeremy W.; Niles, David J.; Fain, Sean B.; Johnson, Kevin M.

    2014-01-01

    Purpose: To develop a novel imaging technique to reduce the number of excitations and required scan time for hyperpolarized 13C imaging. Methods: A least-squares based optimization and reconstruction is developed to simultaneously solve for both spatial and spectral encoding. By jointly solving both domains, spectral imaging can potentially be performed with a spatially oversampled single echo spiral acquisition. Digital simulations, phantom experiments, and initial in vivo hyperpolarized [1-13C]pyruvate experiments were performed to assess the performance of the algorithm as compared to a multi-echo approach. Results: Simulations and phantom data indicate that accurate single echo imaging is possible when coupled with oversampling factors greater than six (corresponding to a worst case of pyruvate to metabolite ratio < 9%), even in situations of substantial T2* decay and B0 heterogeneity. With lower oversampling rates, two echoes are required for similar accuracy. These results were confirmed with in vivo experiments, showing accurate single echo spectral imaging with an oversampling factor of 7 and two echo imaging with an oversampling factor of 4. Conclusion: The proposed k-t approach increases data acquisition efficiency by reducing the number of echoes required to generate spectroscopic images, thereby allowing accelerated acquisition speed, preserved polarization, and/or improved temporal or spatial resolution. PMID:23716402

  18. Resolved motion rate and resolved acceleration servo-control of wheeled mobile robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muir, P.F.; Neuman, C.P.; Carnegie-Mellon Univ., Pittsburgh, PA

    1989-01-01

    Accurate motion control of wheeled mobile robots (WMRs) is required for their application to autonomous, semi-autonomous and teleoperated tasks. The similarities between WMRs and stationary manipulators suggest that current, successful, model-based manipulator control algorithms may be applied to WMRs. Special characteristics of WMRs including higher-pairs, closed-chains, friction and unactuated and unsensed joints require innovative modeling methodologies. The WMR modeling challenge has been recently overcome, thus enabling the application of manipulator control algorithms to WMRs. This realization lays the foundation for significant technology transfer from manipulator control to WMR control. We apply two Cartesian-space manipulator control algorithms: resolved motion rate (kinematics-based) and resolved acceleration (dynamics-based) control to WMR servo-control. We evaluate simulation studies of two exemplary WMRs: Uranus (a three degree-of-freedom WMR constructed at Carnegie Mellon University) and Bicsun-Bicas (a two degree-of-freedom WMR being constructed at Sandia National Laboratories) under the control of these algorithms. Although resolved motion rate servo-control is adequate for the control of Uranus, resolved acceleration servo-control is required for the control of the mechanically simpler Bicsun-Bicas because it exhibits more dynamic coupling and nonlinearities. Successful accurate motion control of these WMRs in simulation is driving current experimental research studies. 18 refs., 7 figs., 5 tabs.
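
    For context, the two control laws named in this record have standard textbook forms (written here in LaTeX notation; J is the task-space Jacobian, J^{+} its pseudoinverse, and the gain matrices K_p, K_v are design choices, not values from the report):

        \dot{q} = J^{+}(q)\,\bigl[\dot{x}_d + K_p\,(x_d - x)\bigr] \quad \text{(resolved motion rate)}

        \tau = M(q)\,J^{+}(q)\,\bigl[\ddot{x}_d + K_v(\dot{x}_d - \dot{x}) + K_p(x_d - x) - \dot{J}(q)\,\dot{q}\bigr] + C(q,\dot{q})\,\dot{q} + g(q) \quad \text{(resolved acceleration)}

    The first law needs only the kinematic model; the second also needs the inertia matrix M, Coriolis terms C and gravity vector g, which is why it copes with the stronger dynamic coupling and nonlinearities reported for Bicsun-Bicas.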

  19. Absorption Refrigeration Cycles with Ammonia-Ionic Liquid Working Pairs Studied by Molecular Simulation.

    PubMed

    Becker, Tim M; Wang, Meng; Kabra, Abhishek; Jamali, Seyed Hossein; Ramdin, Mahinder; Dubbeldam, David; Infante Ferreira, Carlos A; Vlugt, Thijs J H

    2018-04-18

    For absorption refrigeration, it has been shown that ionic liquids have the potential to replace conventional working pairs. Due to the huge number of possibilities, conducting lab experiments to find the optimal ionic liquid is infeasible. Here, we provide a proof-of-principle study of an alternative computational approach. The required thermodynamic properties, i.e., solubility, heat capacity, and heat of absorption, are determined via molecular simulations. These properties are used in a model of the absorption refrigeration cycle to estimate the circulation ratio and the coefficient of performance. We selected two ionic liquids as absorbents: [emim][Tf2N] and [emim][SCN]. NH3 was chosen as the refrigerant due to its favorable operating range. The results are compared to the traditional approach in which parameters of a thermodynamic model are fitted to reproduce experimental data. The work shows that simulations can be used to predict the required thermodynamic properties to estimate the performance of absorption refrigeration cycles. However, high-quality force fields are required to accurately predict the cycle performance.
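
    As a rough guide to the two figures of merit mentioned above, common textbook definitions (with pump work neglected and x denoting the ammonia mass fraction; these are standard forms, not equations quoted from the paper) are:

        f = \frac{\dot{m}_{\mathrm{solution}}}{\dot{m}_{\mathrm{NH_3}}} = \frac{1 - x_{\mathrm{weak}}}{x_{\mathrm{strong}} - x_{\mathrm{weak}}}, \qquad \mathrm{COP} = \frac{\dot{Q}_{\mathrm{evap}}}{\dot{Q}_{\mathrm{gen}}}

    The molecular simulations enter through the solubilities that fix the solution mass fractions and through the heats that fix \dot{Q}_{\mathrm{gen}}, which is why force-field accuracy propagates directly into the predicted cycle performance.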

  20. Absorption Refrigeration Cycles with Ammonia–Ionic Liquid Working Pairs Studied by Molecular Simulation

    PubMed Central

    2018-01-01

    For absorption refrigeration, it has been shown that ionic liquids have the potential to replace conventional working pairs. Due to the huge number of possibilities, conducting lab experiments to find the optimal ionic liquid is infeasible. Here, we provide a proof-of-principle study of an alternative computational approach. The required thermodynamic properties, i.e., solubility, heat capacity, and heat of absorption, are determined via molecular simulations. These properties are used in a model of the absorption refrigeration cycle to estimate the circulation ratio and the coefficient of performance. We selected two ionic liquids as absorbents: [emim][Tf2N] and [emim][SCN]. NH3 was chosen as the refrigerant due to its favorable operating range. The results are compared to the traditional approach in which parameters of a thermodynamic model are fitted to reproduce experimental data. The work shows that simulations can be used to predict the required thermodynamic properties to estimate the performance of absorption refrigeration cycles. However, high-quality force fields are required to accurately predict the cycle performance. PMID:29749996

  1. Implementing a modeling software for animated protein-complex interactions using a physics simulation library.

    PubMed

    Ueno, Yutaka; Ito, Shuntaro; Konagaya, Akihiko

    2014-12-01

    To better understand the behaviors and structural dynamics of proteins within a cell, novel software tools are being developed that can create molecular animations based on the findings of structural biology. This study describes a method, developed from our earlier prototypes, to detect collisions and examine the soft-body dynamics of molecular models. The code was implemented with a software development toolkit for rigid-body dynamics simulation and a three-dimensional graphics library. The essential functions of the target software system included the basic molecular modeling environment, collision detection in the molecular models, and physical simulations of the movement of the model. Taking advantage of recent software technologies such as physics simulation modules and an interpreted scripting language, the functions required for accurate and meaningful molecular animation were implemented efficiently.

  2. Discovering mechanisms relevant for radiation damage evolution

    DOE PAGES

    Uberuaga, Blas Pedro; Martinez, Enrique Saez; Perez, Danny; ...

    2018-02-22

    The response of a material to irradiation is a consequence of the kinetic evolution of defects produced during energetic damage events. Thus, accurate predictions of radiation damage evolution require knowing the atomic scale mechanisms associated with those defects. Atomistic simulations are a key tool in providing insight into the types of mechanisms possible. Further, by extending the time scale beyond what is achievable with conventional molecular dynamics, even greater insight can be obtained. Here, we provide examples in which such simulations have revealed new kinetic mechanisms that were not obvious before performing the simulations. We also demonstrate, through the coupling with higher level models, how those mechanisms impact experimental observables in irradiated materials. Lastly, we discuss the importance of these types of simulations in the context of predicting material behavior.

  3. CatSim: a new computer assisted tomography simulation environment

    NASA Astrophysics Data System (ADS)

    De Man, Bruno; Basu, Samit; Chandra, Naveen; Dunham, Bruce; Edic, Peter; Iatrou, Maria; McOlash, Scott; Sainath, Paavana; Shaughnessy, Charlie; Tower, Brendon; Williams, Eugene

    2007-03-01

    We present a new simulation environment for X-ray computed tomography, called CatSim. CatSim provides a research platform for GE researchers and collaborators to explore new reconstruction algorithms, CT architectures, and X-ray source or detector technologies. The main requirements for this simulator are accurate physics modeling, low computation times, and geometrical flexibility. CatSim allows simulating complex analytic phantoms, such as the FORBILD phantoms, including boxes, ellipsoids, elliptical cylinders, cones, and cut planes. CatSim incorporates polychromaticity, realistic quantum and electronic noise models, finite focal spot size and shape, finite detector cell size, detector cross-talk, detector lag or afterglow, bowtie filtration, finite detector efficiency, non-linear partial volume, scatter (variance-reduced Monte Carlo), and absorbed dose. We present an overview of CatSim along with a number of validation experiments.
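
    As an illustration of the kind of physics such a simulator combines, the sketch below generates one polychromatic detector reading with quantum (Poisson) and electronic (Gaussian) noise. It is a toy model under simple assumptions (a tabulated spectrum and per-material attenuation coefficients), not CatSim code or its API.

        import numpy as np

        def polychromatic_reading(spectrum, mu, lengths, electronic_sigma=5.0,
                                  rng=np.random.default_rng(0)):
            """One noisy detector-cell reading.
            spectrum: expected photon count per energy bin, N0(E)
            mu:       attenuation coefficients mu[material, E] in 1/cm
            lengths:  path length through each material in cm"""
            atten = np.exp(-(lengths @ mu))                # Beer-Lambert per energy bin
            counts = rng.poisson(spectrum * atten).sum()   # quantum noise
            counts += rng.normal(0.0, electronic_sigma)    # electronic noise
            return max(counts, 1e-3)                       # guard against log(0) downstream

        # usage: water-like and bone-like materials, three energy bins
        spectrum = np.array([1.0e4, 2.0e4, 1.5e4])
        mu = np.array([[0.40, 0.25, 0.18],
                       [1.20, 0.70, 0.45]])
        print(polychromatic_reading(spectrum, mu, np.array([10.0, 2.0])))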

  4. Discovering mechanisms relevant for radiation damage evolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uberuaga, Blas Pedro; Martinez, Enrique Saez; Perez, Danny

    The response of a material to irradiation is a consequence of the kinetic evolution of defects produced during energetic damage events. Thus, accurate predictions of radiation damage evolution require knowing the atomic scale mechanisms associated with those defects. Atomistic simulations are a key tool in providing insight into the types of mechanisms possible. Further, by extending the time scale beyond what is achievable with conventional molecular dynamics, even greater insight can be obtained. Here, we provide examples in which such simulations have revealed new kinetic mechanisms that were not obvious before performing the simulations. We also demonstrate, through the coupling with higher level models, how those mechanisms impact experimental observables in irradiated materials. Lastly, we discuss the importance of these types of simulations in the context of predicting material behavior.

  5. An integrated modeling and design tool for advanced optical spacecraft

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    1992-01-01

    Consideration is given to the design and status of the Integrated Modeling of Optical Systems (IMOS) tool and to critical design issues. A multidisciplinary spacecraft design and analysis tool with support for structural dynamics, controls, thermal analysis, and optics, IMOS provides rapid and accurate end-to-end performance analysis, simulations, and optimization of advanced space-based optical systems. The requirements for IMOS-supported numerical arrays, user defined data structures, and a hierarchical data base are outlined, and initial experience with the tool is summarized. A simulation of a flexible telescope illustrates the integrated nature of the tools.

  6. Simulating supersymmetry at the SSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnett, R.M.; Haber, H.E.

    1984-08-01

    Careful study of supersymmetric signatures at the SSC is required in order to distinguish them from Standard Model physics backgrounds. To this end, we have created an efficient, accurate computer program which simulates the production and decay of supersymmetric particles (or other new particles). We have incorporated the full matrix elements, keeping track of the polarizations of all intermediate states. (At this time, hadronization of final-state partons is ignored.) Using Monte Carlo techniques, this program can generate any desired final-state distribution or individual events for Lego plots. Examples of the results of our study of supersymmetry at the SSC are provided.

  7. Subgrid or Reynolds stress-modeling for three-dimensional turbulence computations

    NASA Technical Reports Server (NTRS)

    Rubesin, M. W.

    1975-01-01

    A review is given of recent advances in two distinct computational methods for evaluating turbulence fields, namely, statistical Reynolds stress modeling and turbulence simulation, where large eddies are followed in time. It is shown that evaluation of the mean Reynolds stresses, rather than use of a scalar eddy viscosity, permits an explanation of streamline curvature effects found in several experiments. Turbulence simulation, with a new volume averaging technique and third-order accurate finite-difference computing is shown to predict the decay of isotropic turbulence in incompressible flow with rather modest computer storage requirements, even at Reynolds numbers of aerodynamic interest.

  8. Characterizing rare-event property distributions via replicate molecular dynamics simulations of proteins.

    PubMed

    Krishnan, Ranjani; Walton, Emily B; Van Vliet, Krystyn J

    2009-11-01

    As computational resources increase, molecular dynamics simulations of biomolecules are becoming an increasingly informative complement to experimental studies. In particular, it has now become feasible to use multiple initial molecular configurations to generate an ensemble of replicate production-run simulations that allows for more complete characterization of rare events such as ligand-receptor unbinding. However, there are currently no explicit guidelines for selecting an ensemble of initial configurations for replicate simulations. Here, we use clustering analysis and steered molecular dynamics simulations to demonstrate that the configurational changes accessible in molecular dynamics simulations of biomolecules do not necessarily correlate with observed rare-event properties. This informs selection of a representative set of initial configurations. We also employ statistical analysis to identify the minimum number of replicate simulations required to sufficiently sample a given biomolecular property distribution. Together, these results suggest a general procedure for generating an ensemble of replicate simulations that will maximize accurate characterization of rare-event property distributions in biomolecules.
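
    The abstract does not specify the statistical test used, so the sketch below shows one plausible bootstrap criterion for the minimum ensemble size: the smallest number of replicates whose subsampled property distribution stays, on average, within a Kolmogorov-Smirnov distance `tol` of the distribution over all available replicates. The function name and thresholds are illustrative assumptions, not the paper's procedure.

        import numpy as np
        from scipy.stats import ks_2samp

        def minimum_replicates(samples, tol=0.05, trials=200,
                               rng=np.random.default_rng(1)):
            """samples: one rare-event property value per replicate simulation."""
            n_total = len(samples)
            for n in range(5, n_total):
                # mean KS distance between an n-replicate subsample and the full set
                d = np.mean([ks_2samp(rng.choice(samples, n, replace=False),
                                      samples).statistic
                             for _ in range(trials)])
                if d < tol:
                    return n
            return n_total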

  9. Computational considerations for the simulation of shock-induced sound

    NASA Technical Reports Server (NTRS)

    Casper, Jay; Carpenter, Mark H.

    1996-01-01

    The numerical study of aeroacoustic problems places stringent demands on the choice of a computational algorithm, because it requires the ability to propagate disturbances of small amplitude and short wavelength. The demands are particularly high when shock waves are involved, because the chosen algorithm must also resolve discontinuities in the solution. The extent to which a high-order-accurate shock-capturing method can be relied upon for aeroacoustics applications that involve the interaction of shocks with other waves has not been previously quantified. Such a study is initiated in this work. A fourth-order-accurate essentially nonoscillatory (ENO) method is used to investigate the solutions of inviscid, compressible flows with shocks in a quasi-one-dimensional nozzle flow. The design order of accuracy is achieved in the smooth regions of a steady-state test case. However, in an unsteady test case, only first-order results are obtained downstream of a sound-shock interaction. The difficulty in obtaining a globally high-order-accurate solution in such a case with a shock-capturing method is demonstrated through the study of a simplified, linear model problem. Some of the difficult issues and ramifications for aeroacoustics simulations of flows with shocks that are raised by these results are discussed.

  10. Accounting for receptor flexibility and enhanced sampling methods in computer-aided drug design.

    PubMed

    Sinko, William; Lindert, Steffen; McCammon, J Andrew

    2013-01-01

    Protein flexibility plays a major role in biomolecular recognition. In many cases, it is not obvious how molecular structure will change upon association with other molecules. In proteins, these changes can be major, with large deviations in overall backbone structure, or they can be more subtle, as in a side-chain rotation. Either way, the algorithms that predict the favorability of biomolecular association require relatively accurate predictions of the bound structure to give an accurate assessment of the energy involved in association. Here, we review a number of techniques that have been proposed to accommodate receptor flexibility in the simulation of small molecules binding to protein receptors. We investigate modifications to standard rigid receptor docking algorithms and also explore enhanced sampling techniques, and the combination of free energy calculations and enhanced sampling techniques. The understanding and allowance for receptor flexibility are helping to make computer simulations of ligand-protein binding more accurate. These developments may help improve the efficiency of drug discovery and development. Efficiency will be essential as we begin to see personalized medicine tailored to individual patients, which means specific drugs are needed for each patient's genetic makeup.

  11. Implementation of interconnect simulation tools in spice

    NASA Technical Reports Server (NTRS)

    Satsangi, H.; Schutt-Aine, J. E.

    1993-01-01

    Accurate computer simulation of high-speed digital computer circuits and communication circuits requires a multimode approach to simulate both the devices and the interconnects between devices. Classical circuit analysis algorithms (lumped parameter) are needed for the circuit devices and the network formed by the interconnected devices. The interconnects, however, have to be modeled as transmission lines which incorporate electromagnetic field analysis. An approach to writing a multimode simulator is to take an existing software package which performs either lumped parameter analysis or field analysis and add the missing type of analysis routines to the package. In this work a traditionally lumped-parameter simulator, SPICE, is modified so that it will perform lossy transmission line analysis using a different modeling approach. Modifying SPICE3E2 or any other large software package is not a trivial task. An understanding of the programming conventions used, the simulation software, and the simulation algorithms is required. This thesis was written to clarify the procedure for installing a device into SPICE3E2. The installation of three devices is documented; the installations of the first two provide a foundation for installation of the lossy line, which is the third device. The details of the discussion are specific to SPICE, but the concepts will be helpful when performing installations into other circuit analysis packages.

  12. Development of a Searchable Database of Cryoablation Simulations for Use in Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boas, F. Edward, E-mail: boasf@mskcc.org; Srimathveeravalli, Govindarajan, E-mail: srimaths@mskcc.org; Durack, Jeremy C., E-mail: durackj@mskcc.org

    Purpose: To create and validate a planning tool for multiple-probe cryoablation, using simulations of ice ball size and shape for various ablation probe configurations, ablation times, and types of tissue ablated. Materials and Methods: Ice ball size and shape were simulated using the Pennes bioheat equation. 5670 different cryoablation procedures were simulated, using 1–6 cryoablation probes and 1–2 cm spacing between probes. The resulting ice ball was measured along three perpendicular axes and recorded in a database. Simulated ice ball sizes were compared to gel experiments (26 measurements) and clinical cryoablation cases (42 measurements). The clinical cryoablation measurements were obtained from a HIPAA-compliant retrospective review of kidney and liver cryoablation procedures between January 2015 and February 2016. Finally, we created a web-based cryoablation planning tool, which uses the cryoablation simulation database to look up the probe spacing and ablation time that produce the desired ice ball shape and dimensions. Results: Average absolute error between the simulated and experimentally measured ice balls was 1 mm in gel experiments and 4 mm in clinical cryoablation cases. The simulations accurately predicted the degree of synergy in multiple-probe ablations. The cryoablation simulation database covers a wide range of ice ball sizes and shapes up to 9.8 cm. Conclusion: Cryoablation simulations accurately predict the ice ball size in multiple-probe ablations. The cryoablation database can be used to plan ablation procedures: given the desired ice ball size and shape, it will find the number and type of probes, probe configuration and spacing, and ablation time required.
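
    For reference, the Pennes bioheat equation named above has the standard form (LaTeX notation; symbols as commonly defined, not taken from the paper):

        \rho c\,\frac{\partial T}{\partial t} = \nabla \cdot \left(k\,\nabla T\right) + \rho_b c_b \omega_b\,(T_a - T) + q_m

    where \rho c is the tissue volumetric heat capacity, k the thermal conductivity, \omega_b the blood perfusion rate with arterial temperature T_a, and q_m the metabolic heat source. For cryoablation, the perfusion term is what chiefly resists ice-ball growth in vivo, consistent with the larger errors seen above in clinical cases than in gel.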

  13. Implementation and evaluation of the Level Set method: Towards efficient and accurate simulation of wet etching for microengineering applications

    NASA Astrophysics Data System (ADS)

    Montoliu, C.; Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Colom, R. J.

    2013-10-01

    The use of atomistic methods, such as the Continuous Cellular Automaton (CCA), is currently regarded as a computationally efficient and experimentally accurate approach for the simulation of anisotropic etching of various substrates in the manufacture of Micro-electro-mechanical Systems (MEMS). However, when the features of the chemical process are modified, a time-consuming calibration process needs to be used to transform the new macroscopic etch rates into a corresponding set of atomistic rates. Furthermore, changing the substrate requires a labor-intensive effort to reclassify most atomistic neighborhoods. In this context, the Level Set (LS) method provides an alternative approach where the macroscopic forces affecting the front evolution are directly applied at the discrete level, thus avoiding the need for reclassification and/or calibration. Correspondingly, we present a fully-operational Sparse Field Method (SFM) implementation of the LS approach, discussing in detail the algorithm and providing a thorough characterization of the computational cost and simulation accuracy, including a comparison to the performance of the most recent CCA model. We conclude that the SFM implementation achieves similar accuracy as the CCA method with fewer fluctuations in the etch front while requiring roughly 4 times less memory. Although SFM can be up to 2 times slower than CCA for the simulation of anisotropic etchants, it can also be up to 10 times faster than CCA for isotropic etchants. In addition, we present a parallel, GPU-based implementation (gSFM) and compare it to an optimized, multicore CPU version (cSFM), demonstrating that the SFM algorithm can be successfully parallelized and the simulation times consequently reduced, while keeping the accuracy of the simulations. Although modern multicore CPUs provide an acceptable option, the massively parallel architecture of modern GPUs is more suitable, as reflected by computational times for gSFM up to 7.4 times faster than for cSFM.
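
    For reference, the LS method evolves the etch front implicitly as the zero contour of a function \phi, with the macroscopic etch rate entering directly as the front speed (standard form, in LaTeX notation; not an equation quoted from the paper):

        \frac{\partial \phi}{\partial t} + V(\mathbf{x}, t)\,\lvert \nabla \phi \rvert = 0

    The Sparse Field Method obtains its efficiency by updating \phi only in a thin band of cells around the zero level set rather than over the whole grid, which is consistent with the roughly 4-fold memory saving reported above.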

  14. Unsteady Aerodynamic Interaction in a Closely Coupled Turbine Consistent with Contra-Rotation

    DTIC Science & Technology

    2014-08-01

    Data on the blade required three instrumentation patches due to slip ring channel limitations. TRF blowdowns designated as experiments 280100... provided measurements from sensors on the rotating hardware due to slip ring limitations. The experimental data was compared to time-accurate simulations modeling... (Fragmentary snippets from report AFRL-RQ-WP-TR-2014-0195, "Unsteady Aerodynamic Interaction in a Closely Coupled Turbine Consistent with Contra-Rotation," Michael Kenneth ...)

  15. Analysis of three-phase equilibrium conditions for methane hydrate by isometric-isothermal molecular dynamics simulations.

    PubMed

    Yuhara, Daisuke; Brumby, Paul E; Wu, David T; Sum, Amadeu K; Yasuoka, Kenji

    2018-05-14

    To develop prediction methods of three-phase equilibrium (coexistence) conditions of methane hydrate by molecular simulations, we examined the use of NVT (isometric-isothermal) molecular dynamics (MD) simulations. NVT MD simulations of coexisting solid hydrate, liquid water, and vapor methane phases were performed at four different temperatures, namely, 285, 290, 295, and 300 K. NVT simulations do not require complex pressure control schemes in multi-phase systems, and the growth or dissociation of the hydrate phase can lead to significant pressure changes in the approach toward equilibrium conditions. We found that the calculated equilibrium pressures tended to be higher than those reported by previous NPT (isobaric-isothermal) simulation studies using the same water model. The deviations of equilibrium conditions from previous simulation studies are mainly attributable to the employed calculation methods of pressure and Lennard-Jones interactions. We monitored the pressure in the methane phase, far from the interfaces with other phases, and confirmed that it was higher than the total pressure of the system calculated by previous studies. This fact clearly highlights the difficulties associated with the pressure calculation and control for multi-phase systems. The treatment of Lennard-Jones interactions without tail corrections in MD simulations also contributes to the overestimation of equilibrium pressure. Although improvements are still required to obtain accurate equilibrium conditions, NVT MD simulations exhibit potential for the prediction of equilibrium conditions of multi-phase systems.
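
    For context, the standard homogeneous-fluid tail corrections for a Lennard-Jones potential truncated at r_c are (textbook forms, not equations from the paper):

        \frac{U_{\mathrm{tail}}}{N} = \frac{8\pi\rho\varepsilon\sigma^{3}}{3}\left[\frac{1}{3}\left(\frac{\sigma}{r_c}\right)^{9} - \left(\frac{\sigma}{r_c}\right)^{3}\right], \qquad P_{\mathrm{tail}} = \frac{16\pi\rho^{2}\varepsilon\sigma^{3}}{3}\left[\frac{2}{3}\left(\frac{\sigma}{r_c}\right)^{9} - \left(\frac{\sigma}{r_c}\right)^{3}\right]

    Both assume a uniform density \rho beyond the cutoff, an assumption that breaks down in a multi-phase box with interfaces, which helps explain why the treatment of Lennard-Jones truncation shifts the computed equilibrium pressure.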

  16. Analysis of three-phase equilibrium conditions for methane hydrate by isometric-isothermal molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Yuhara, Daisuke; Brumby, Paul E.; Wu, David T.; Sum, Amadeu K.; Yasuoka, Kenji

    2018-05-01

    To develop prediction methods of three-phase equilibrium (coexistence) conditions of methane hydrate by molecular simulations, we examined the use of NVT (isometric-isothermal) molecular dynamics (MD) simulations. NVT MD simulations of coexisting solid hydrate, liquid water, and vapor methane phases were performed at four different temperatures, namely, 285, 290, 295, and 300 K. NVT simulations do not require complex pressure control schemes in multi-phase systems, and the growth or dissociation of the hydrate phase can lead to significant pressure changes in the approach toward equilibrium conditions. We found that the calculated equilibrium pressures tended to be higher than those reported by previous NPT (isobaric-isothermal) simulation studies using the same water model. The deviations of equilibrium conditions from previous simulation studies are mainly attributable to the employed calculation methods of pressure and Lennard-Jones interactions. We monitored the pressure in the methane phase, far from the interfaces with other phases, and confirmed that it was higher than the total pressure of the system calculated by previous studies. This fact clearly highlights the difficulties associated with the pressure calculation and control for multi-phase systems. The treatment of Lennard-Jones interactions without tail corrections in MD simulations also contributes to the overestimation of equilibrium pressure. Although improvements are still required to obtain accurate equilibrium conditions, NVT MD simulations exhibit potential for the prediction of equilibrium conditions of multi-phase systems.

  17. The Development of a 3D LADAR Simulator Based on a Fast Target Impulse Response Generation Approach

    NASA Astrophysics Data System (ADS)

    Al-Temeemy, Ali Adnan

    2017-09-01

    A new laser detection and ranging (LADAR) simulator has been developed, using MATLAB and its graphical user interface, to simulate direct detection time-of-flight LADAR systems and to produce 3D simulated scanning images under a wide variety of conditions. This simulator models each stage from the laser source to data generation and can be considered an efficient simulation tool to use when developing LADAR systems and their data processing algorithms. The novel approach proposed for this simulator is to generate the actual target impulse response. This approach is fast and able to meet high scanning requirements without the loss of fidelity that usually accompanies increases in speed. This leads to a more efficient LADAR simulator and opens up the possibility of simulating LADAR beam propagation more accurately by using a large number of laser footprint samples. The approach is to select only the parts of the target that lie in the laser beam's angular field by mathematically deriving the required equations and calculating the target angular ranges. The performance of the new simulator has been evaluated under different scanning conditions, the results showing significant increases in processing speed in comparison to the conventional approaches used in this study as a point of comparison. The results also show the simulator's ability to simulate phenomena related to the scanning process, for example, type of noise, scanning resolution and laser beam width.

  18. FERN - a Java framework for stochastic simulation and evaluation of reaction networks.

    PubMed

    Erhard, Florian; Friedel, Caroline C; Zimmer, Ralf

    2008-08-29

    Stochastic simulation can be used to illustrate the development of biological systems over time and the stochastic nature of these processes. Currently available programs for stochastic simulation, however, are limited in that they either (a) do not provide the most efficient simulation algorithms and are difficult to extend, (b) cannot be easily integrated into other applications, or (c) do not allow the user to monitor and intervene during the simulation process in an easy and intuitive way. Thus, in order to use stochastic simulation in innovative high-level modeling and analysis approaches, more flexible tools are necessary. In this article, we present FERN (Framework for Evaluation of Reaction Networks), a Java framework for the efficient simulation of chemical reaction networks. FERN is subdivided into three layers for network representation, simulation and visualization of the simulation results, each of which can be easily extended. It provides efficient and accurate state-of-the-art stochastic simulation algorithms for well-mixed chemical systems and a powerful observer system, which makes it possible to track and control the simulation progress on every level. To illustrate how FERN can be easily integrated into other systems biology applications, plugins to Cytoscape and CellDesigner are included. These plugins make it possible to run simulations and to observe the simulation progress in a reaction network in real time from within the Cytoscape or CellDesigner environment. FERN addresses shortcomings of currently available stochastic simulation programs in several ways. First, it provides a broad range of efficient and accurate algorithms both for exact and approximate stochastic simulation and a simple interface for extending to new algorithms. FERN's implementations are considerably faster than the C implementations of gillespie2 or the Java implementations of ISBJava. Second, it can be used in a straightforward way both as a stand-alone program and within new systems biology applications. Finally, complex scenarios requiring intervention during the simulation progress can be modelled easily with FERN.
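
    As a concrete reminder of what such a simulator computes, here is a minimal sketch of the classic Gillespie direct method, the kind of exact SSA that FERN provides (generic illustration code; FERN's own Java API differs, and the reaction system in the usage line is a made-up example):

        import numpy as np

        def gillespie(x0, stoich, propensity, t_end,
                      rng=np.random.default_rng(2)):
            """x0: initial species counts; stoich: (n_reactions, n_species)
            state-change vectors; propensity(x) -> per-reaction rates a_j(x)."""
            t, x = 0.0, np.asarray(x0, dtype=float)
            times, states = [t], [x.copy()]
            while t < t_end:
                a = propensity(x)
                a0 = a.sum()
                if a0 <= 0.0:
                    break                          # no reaction can fire
                t += rng.exponential(1.0 / a0)     # waiting time to next event
                j = rng.choice(len(a), p=a / a0)   # which reaction fires
                x += stoich[j]
                times.append(t); states.append(x.copy())
            return np.array(times), np.array(states)

        # usage: irreversible decay A -> 0 with rate constant k = 0.1
        t, s = gillespie([100], np.array([[-1]]),
                         lambda x: np.array([0.1 * x[0]]), t_end=50.0)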

  19. Computationally Efficient Multiconfigurational Reactive Molecular Dynamics

    PubMed Central

    Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A.

    2012-01-01

    It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions, also sometimes referred to as “multistate”. These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations. PMID:25100924
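
    In the usual multiconfigurational (e.g., EVB-style) formulation, the ground-state energy at nuclear configuration \mathbf{R} comes from diagonalizing a small Hamiltonian over the bonding topologies (standard form, given here for context; symbols are generic, not the paper's notation):

        \mathbf{H}(\mathbf{R})\,\mathbf{c} = E_g\,\mathbf{c}, \qquad H_{ii} = V_i(\mathbf{R}), \quad H_{ij} = V_{ij}(\mathbf{R}), \qquad \mathbf{F} = -\,\mathbf{c}^{\mathsf{T}}\,\frac{\partial \mathbf{H}}{\partial \mathbf{R}}\,\mathbf{c}

    Each diagonal element V_i requires a full force-field evaluation of one bonding topology, so the long-range Coulomb sums are nearly repeated across states; that redundancy is precisely what the effective long-range approximations and parallelization strategies described above target.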

  20. Computing the total atmospheric refraction for real-time optical imaging sensor simulation

    NASA Astrophysics Data System (ADS)

    Olson, Richard F.

    2015-05-01

    Fast and accurate computation of light path deviation due to atmospheric refraction is an important requirement for real-time simulation of optical imaging sensor systems. A large body of existing literature covers various methods for application of Snell's Law to the light path ray tracing problem. This paper provides a discussion of the adaptation to real-time simulation of atmospheric refraction ray tracing techniques used in mid-1980s LOWTRAN releases. The refraction ray trace algorithm published in a LOWTRAN-6 technical report by Kneizys et al. has been coded in MATLAB for development, and in C for simulation use. To this published algorithm we have added tuning parameters for variable path segment lengths, and extensions for Earth-grazing and exoatmospheric "near Earth" ray paths. Model atmosphere properties used to exercise the refraction algorithm were obtained from tables published in another LOWTRAN-6-related report. The LOWTRAN-6-based refraction model is applicable to atmospheric propagation at wavelengths in the IR and visible bands of the electromagnetic spectrum. It has been used during the past two years by engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) in support of several advanced imaging sensor simulations. Recently, a faster (but sufficiently accurate) method using Gauss-Chebyshev quadrature integration for evaluating the refraction integral was adopted.
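
    A much-simplified illustration of spherically layered refraction ray tracing is sketched below: it applies Snell's law shell by shell via Bouguer's invariant n r sin(theta) = const and accumulates the bending at each interface. This is a generic stand-in for the LOWTRAN-style algorithm discussed above, not that algorithm itself, and the exponential refractivity profile in the usage example is an assumption.

        import numpy as np

        def total_refraction_deg(n_profile, r_profile, zenith_deg):
            """Total angular bending of a ray climbing through concentric
            shells.  n_profile/r_profile: refractive index and radius per
            layer boundary, ordered from the observer upward."""
            inv = n_profile[0] * r_profile[0] * np.sin(np.radians(zenith_deg))
            bend = 0.0
            for k in range(1, len(n_profile)):
                s_in = inv / (n_profile[k - 1] * r_profile[k])   # angle arriving
                s_out = inv / (n_profile[k] * r_profile[k])      # angle leaving
                if s_in >= 1.0 or s_out >= 1.0:
                    break                      # grazing geometry; stop the trace
                bend += np.arcsin(s_in) - np.arcsin(s_out)
            return np.degrees(bend)

        # usage: assumed exponential refractivity decay over 50 one-km shells
        r = 6371e3 + 1e3 * np.arange(51)
        n = 1.0 + 2.8e-4 * np.exp(-(r - r[0]) / 8000.0)
        print(total_refraction_deg(n, r, zenith_deg=80.0))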

  1. The influence of Monte Carlo source parameters on detector design and dose perturbation in small field dosimetry

    NASA Astrophysics Data System (ADS)

    Charles, P. H.; Crowe, S. B.; Kairn, T.; Knight, R.; Hill, B.; Kenny, J.; Langton, C. M.; Trapp, J. V.

    2014-03-01

    To obtain accurate Monte Carlo simulations of small radiation fields, it is important to model the initial source parameters (electron energy and spot size) accurately. However, recent studies have shown that small field dosimetry correction factors are insensitive to these parameters. The aim of this work is to extend this concept to test whether these parameters affect dose perturbations in general, which is important for detector design and for calculating perturbation correction factors. The EGSnrc C++ user code cavity was used for all simulations. Varying amounts of air between 0 and 2 mm were deliberately introduced upstream of a diode and the dose perturbation caused by the air was quantified. These simulations were then repeated using a range of initial electron energies (5.5 to 7.0 MeV) and electron spot sizes (0.7 to 2.2 FWHM). The resultant dose perturbations were large. For example, 2 mm of air caused a dose reduction of up to 31% when simulated with a 6 mm field size. However, these values did not vary by more than 2% when simulated across the full range of source parameters tested. If a detector is modified by the introduction of air, one can be confident that the response of the detector will be the same across all similar linear accelerators, and Monte Carlo modelling of each individual machine is not required.

  2. Accuracy evaluation of fluoroscopy-based 2D and 3D pose reconstruction with unicompartmental knee arthroplasty.

    PubMed

    Van Duren, B H; Pandit, H; Beard, D J; Murray, D W; Gill, H S

    2009-04-01

    Recent developments in Oxford lateral unicompartmental knee arthroplasty (UKA) design require a valid method of assessing its kinematics, in particular the use of single-plane fluoroscopy to reconstruct the 3D kinematics of the implanted knee. The method has been used previously to investigate the kinematics of UKA, but mostly it has been used in conjunction with total knee arthroplasty (TKA). However, no accuracy assessment of the method when used for UKA has previously been reported. In this study we performed computer simulation tests to investigate the effect that the different geometry of the unicompartmental implant has on the accuracy of the method in comparison to total knee implants. A phantom was built to perform in vitro tests to determine the accuracy of the method for UKA. The computer simulations suggested that the use of the method for UKA would prove less accurate than for TKAs. The rotational degrees of freedom for the femur showed the greatest disparity between the UKA and TKA. The phantom tests showed that the in-plane translations were accurate to <0.5 mm RMS, while the out-of-plane translations were less accurate, with 4.1 mm RMS. The rotational accuracies were between 0.6 degrees and 2.3 degrees, which are less accurate than those reported in the literature for TKA; however, the method is sufficient for studying overall knee kinematics.

  3. Probabilistic techniques for obtaining accurate patient counts in Clinical Data Warehouses

    PubMed Central

    Myers, Risa B.; Herskovic, Jorge R.

    2011-01-01

    Proposal and execution of clinical trials, computation of quality measures and discovery of correlation between medical phenomena are all applications where an accurate count of patients is needed. However, existing sources of this type of patient information, including Clinical Data Warehouses (CDW), may be incomplete or inaccurate. This research explores applying probabilistic techniques, supported by the MayBMS probabilistic database, to obtain accurate patient counts from a clinical data warehouse containing synthetic patient data. We present a synthetic clinical data warehouse (CDW), and populate it with simulated data using a custom patient data generation engine. We then implement, evaluate and compare different techniques for obtaining patient counts. We model billing as a test for the presence of a condition. We compute billing’s sensitivity and specificity both by conducting a “Simulated Expert Review”, in which a representative sample of records is reviewed and labeled by experts, and by obtaining the ground truth for every record. We compute the posterior probability of a patient having a condition through a “Bayesian Chain”, using Bayes’ Theorem to calculate the probability of a patient having a condition after each visit. The second method is a “one-shot” approach that computes the probability of a patient having a condition based on whether the patient is ever billed for the condition. Our results demonstrate the utility of probabilistic approaches, which improve on the accuracy of raw counts. In particular, the simulated review paired with a single application of Bayes’ Theorem produces the best results, with an average error rate of 2.1% compared to 43.7% for the straightforward billing counts. Overall, this research demonstrates that Bayesian probabilistic approaches improve patient counts on simulated patient populations. We believe that total patient counts based on billing data are one of the many possible applications of our Bayesian framework. Use of these probabilistic techniques will enable more accurate patient counts and better results for applications requiring this metric. PMID:21986292
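
    The "Bayesian chain" lends itself to a compact sketch: treat each visit's billing flag as a diagnostic test with known sensitivity and specificity and update the posterior visit by visit. The code below is one plausible reading of that idea with made-up numbers, not the paper's implementation.

        def posterior_after_visits(prior, billed_flags, sens, spec):
            """Sequential Bayes update of P(condition) across a patient's visits.
            billed_flags: True/False per visit for billing of the condition."""
            p = prior
            for billed in billed_flags:
                if billed:
                    like_pos, like_neg = sens, 1.0 - spec   # P(billed | +/-)
                else:
                    like_pos, like_neg = 1.0 - sens, spec   # P(not billed | +/-)
                num = like_pos * p
                p = num / (num + like_neg * (1.0 - p))      # Bayes' theorem
            return p

        # expected patient count = sum of per-patient posteriors (toy numbers)
        visits = [[True, True, False], [False, False], [True]]
        count = sum(posterior_after_visits(0.05, v, sens=0.80, spec=0.97)
                    for v in visits)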

  4. Material model validation for laser shock peening process simulation

    NASA Astrophysics Data System (ADS)

    Amarchinta, H. K.; Grandhi, R. V.; Langer, K.; Stargel, D. S.

    2009-01-01

    Advanced mechanical surface enhancement techniques have been used successfully to increase the fatigue life of metallic components. These techniques impart deep compressive residual stresses into the component to counter potentially damage-inducing tensile stresses generated under service loading. Laser shock peening (LSP) is an advanced mechanical surface enhancement technique used predominantly in the aircraft industry. To reduce costs and make the technique available on a large-scale basis for industrial applications, simulation of the LSP process is required. Accurate simulation of the LSP process is a challenging task, because the process has many parameters such as laser spot size, pressure profile and material model that must be precisely determined. This work focuses on investigating the appropriate material model that could be used in simulation and design. In the LSP process, material is subjected to strain rates of 10^6 s^-1, which is very high compared with conventional strain rates. The importance of an accurate material model increases because the material behaves significantly different at such high strain rates. This work investigates the effect of multiple nonlinear material models for representing the elastic-plastic behavior of materials. Elastic perfectly plastic, Johnson-Cook and Zerilli-Armstrong models are used, and the performance of each model is compared with available experimental results.
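
    For reference, the Johnson-Cook flow stress mentioned above has the standard form (textbook expression with generic material constants A, B, C, n, m, not values from this study):

        \sigma = \left(A + B\,\varepsilon_p^{\,n}\right)\left(1 + C \ln \frac{\dot{\varepsilon}}{\dot{\varepsilon}_0}\right)\left(1 - {T^{*}}^{m}\right), \qquad T^{*} = \frac{T - T_{\mathrm{ref}}}{T_{\mathrm{melt}} - T_{\mathrm{ref}}}

    The logarithmic strain-rate term is what separates it from the elastic perfectly plastic model at LSP strain rates, while Zerilli-Armstrong couples strain-rate and temperature effects through a dislocation-mechanics-based form instead.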

  5. Novel Virtual User Models of Mild Cognitive Impairment for Simulating Dementia

    PubMed Central

    Segkouli, Sofia; Tzovaras, Dimitrios; Tsakiris, Thanos; Tsolaki, Magda; Karagiannidis, Charalampos

    2015-01-01

    Virtual user modeling research has attempted to address critical issues of human-computer interaction (HCI), such as usability and utility, through a large number of analytic, usability-oriented approaches such as cognitive models, in order to provide users with experiences fitting their specific needs. However, there is demand for more specific modules embodied in cognitive architectures that will detect abnormal cognitive decline across new synthetic task environments. Also, accessibility evaluation of graphical user interfaces (GUIs) requires considerable effort to enhance the accessibility of ICT products for older adults. The main aim of this study is to develop and test virtual user models (VUM) simulating mild cognitive impairment (MCI) through novel specific modules, embodied in cognitive models and defined by estimations of cognitive parameters. Well-established MCI detection tests assessed users' cognition, elaborated their ability to perform multitasks, and monitored the performance of infotainment-related tasks to provide more accurate simulation results on existing conceptual frameworks and enhanced predictive validity in interface design, supported by increased task complexity to capture a more detailed profile of users' capabilities and limitations. The final outcome is a more robust cognitive prediction model, accurately fitted to human data, to be used for more reliable interface evaluation through simulation on the basis of virtual models of MCI users. PMID:26339282

  6. Bond breaking in epoxy systems: A combined QM/MM approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barr, Stephen A.; Ecker, Allison M.; Berry, Rajiv J., E-mail: Rajiv.Berry@us.af.mil

    2016-06-28

    A novel method to combine quantum mechanics (QM) and molecular mechanics has been developed to accurately and efficiently account for covalent bond breaking in polymer systems under high strain without the use of predetermined break locations. Use of this method will provide a better fundamental understanding of the mechano-chemical origins of fracture in thermosets. Since classical force fields cannot accurately account for bond breaking, and QM is too demanding to simulate large systems, a hybrid approach is required. In the method presented here, strain is applied to the system using a classical force field, and all bond lengths are monitored. When a bond is stretched past a threshold value, a zone surrounding the bond is used in a QM energy minimization to determine which, if any, bonds break. The QM results are then used to reconstitute the system to continue the classical simulation at progressively larger strain until another QM calculation is triggered. In this way, a QM calculation is only computed when and where needed, allowing for efficient simulations. A robust QM method for energy minimization has been determined, as well as appropriate values for the QM zone size and the threshold bond length. Compute times do not differ dramatically from classical molecular mechanical simulations.

  7. Surrogate Reservoir Model

    NASA Astrophysics Data System (ADS)

    Mohaghegh, Shahab

    2010-05-01

    Surrogate Reservoir Model (SRM) is a new solution for fast-track, comprehensive reservoir analysis (solving both direct and inverse problems) using existing reservoir simulation models. SRM is defined as a replica of the full field reservoir simulation model that runs and provides accurate results in real time (one simulation run takes only a fraction of a second). SRM mimics the capabilities of a full field model with high accuracy. Reservoir simulation is the industry standard for reservoir management. It is used in all phases of field development in the oil and gas industry. The routine of simulation studies calls for integration of static and dynamic measurements into the reservoir model. Full field reservoir simulation models have become the major source of information for analysis, prediction and decision making. Large prolific fields usually go through several versions (updates) of their model. Each new version usually is a major improvement over the previous version. The updated model includes the latest available information incorporated along with adjustments that usually are the result of single-well or multi-well history matching. As the number of reservoir layers (thickness of the formations) increases, the number of cells representing the model approaches several millions. As the reservoir models grow in size, so does the time that is required for each run. Schemes such as grid computing and parallel processing help to a certain degree but do not provide the required speed for tasks such as field development strategies using comprehensive reservoir analysis, solving the inverse problem for injection/production optimization, quantifying uncertainties associated with the geological model, and real-time optimization and decision making. These types of analyses require hundreds or thousands of runs. Furthermore, with the new push for smart fields in the oil/gas industry, which is a natural outgrowth of smart completions and smart wells, the need for real-time reservoir modeling becomes more pronounced. SRM is developed using the state of the art in neural computing and fuzzy pattern recognition to address the ever-growing need in the oil and gas industry to perform accurate, but high speed, simulation and modeling. Unlike conventional geo-statistical approaches (response surfaces, proxy models …) that require hundreds of simulation runs for development, SRM is developed with only a few (10 to 30) simulation runs. SRM can be developed regularly (as new versions of the full field model become available) off-line and can be put online for real-time processing to guide important decisions. SRM has proven its value in the field. An SRM was developed for a giant oil field in the Middle East. The model included about one million grid blocks with more than 165 horizontal wells and took ten hours for a single run on 12 parallel CPUs. Using only 10 simulation runs, an SRM was developed that was able to accurately mimic the behavior of the reservoir simulation model. Performing a comprehensive reservoir analysis that included making millions of SRM runs, wells in the field were divided into five clusters. It was predicted that wells in clusters 1 and 2 are the best candidates for rate relaxation with minimal long-term water production, while wells in clusters 4 and 5 are susceptible to high water cuts. Two and a half years and 20 wells later, rate relaxation results from the field proved that all the predictions made by the SRM analysis were correct.
While incremental oil production increased in all wells (wells in cluster 1 produced the most, followed by wells in clusters 2, 3, …), the percent change in average monthly water cut for wells in each cluster clearly demonstrated the analytic power of SRM. As correctly predicted, wells in clusters 1 and 2 actually experienced a reduction in water cut, while a substantial increase in water cut was observed in wells classified into clusters 4 and 5. Performing these analyses would have been impossible using the original full field simulation model.

  8. Decreasing the temporal complexity for nonlinear, implicit reduced-order models by forecasting

    DOE PAGES

    Carlberg, Kevin; Ray, Jaideep; van Bloemen Waanders, Bart

    2015-02-14

    Implicit numerical integration of nonlinear ODEs requires solving a system of nonlinear algebraic equations at each time step. Each of these systems is often solved by a Newton-like method, which incurs a sequence of linear-system solves. Most model-reduction techniques for nonlinear ODEs exploit knowledge of the system's spatial behavior to reduce the computational complexity of each linear-system solve. However, the number of linear-system solves for the reduced-order simulation often remains roughly the same as that for the full-order simulation. We propose exploiting knowledge of the model's temporal behavior to (1) forecast the unknown variable of the reduced-order system of nonlinear equations at future time steps, and (2) use this forecast as an initial guess for the Newton-like solver during the reduced-order-model simulation. To compute the forecast, we propose using the Gappy POD technique. As a result, the goal is to generate an accurate initial guess so that the Newton solver requires many fewer iterations to converge, thereby decreasing the number of linear-system solves in the reduced-order-model simulation.
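
    A simplified stand-in for the forecasting step is sketched below: fit the recent time history onto a POD basis and extrapolate the modal coefficients one step to seed the Newton solver. The actual method uses the Gappy POD machinery described in the paper; the linear extrapolation and mode count here are illustrative assumptions.

        import numpy as np

        def forecast_initial_guess(history, n_modes=3):
            """history: (n_dof, n_steps) array of past solution snapshots,
            n_steps >= 2.  Returns a forecast of the next state."""
            U, _, _ = np.linalg.svd(history, full_matrices=False)
            Phi = U[:, :n_modes]                 # POD spatial basis
            coeffs = Phi.T @ history             # modal coefficients over time
            next_c = 2.0 * coeffs[:, -1] - coeffs[:, -2]   # linear extrapolation
            return Phi @ next_c                  # initial guess for the Newton solve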

  9. WE-C-217BCD-08: Rapid Monte Carlo Simulations of DQE(f) of Scintillator-Based Detectors.

    PubMed

    Star-Lack, J; Abel, E; Constantin, D; Fahrig, R; Sun, M

    2012-06-01

    Monte Carlo simulations of DQE(f) can greatly aid in the design of scintillator-based detectors by helping optimize key parameters including scintillator material and thickness, pixel size, surface finish, and septa reflectivity. However, the additional optical transport significantly increases simulation times, necessitating a large number of parallel processors to adequately explore the parameter space. To address this limitation, we have optimized the DQE(f) algorithm, reducing simulation times per design iteration to 10 minutes on a single CPU. DQE(f) is proportional to the ratio MTF(f)^2/NPS(f). The LSF-MTF simulation uses a slanted line source and is rapidly performed with relatively few gammas launched. However, the conventional NPS simulation for standard radiation exposure levels requires the acquisition of multiple flood fields (nRun), each requiring billions of input gamma photons (nGamma), many of which will scintillate, thereby producing thousands of optical photons (nOpt) per deposited MeV. The resulting execution time is proportional to the product nRun x nGamma x nOpt. In this investigation, we revisit the theoretical derivation of DQE(f) and reveal significant computation-time savings through the optimization of nRun, nGamma, and nOpt. Using GEANT4, we determine optimal values for these three variables for a GOS scintillator-amorphous silicon portal imager. Both isotropic and Mie optical scattering processes were modeled. Simulation results were validated against the literature. We found that, depending on the radiative and optical attenuation properties of the scintillator, the NPS can be accurately computed using values for nGamma below 1000 and values for nOpt below 500/MeV. nRun should remain above 200. Using these parameters, typical computation times for a complete NPS ranged from 2-10 minutes on a single CPU. The number of launched particles and corresponding execution times for a DQE simulation can be dramatically reduced, allowing for accurate computation with modest computer hardware.

  10. Extreme Environment Simulation - Current and New Capabilities to Simulate Venus and Other Planetary Bodies

    NASA Technical Reports Server (NTRS)

    Kremic, Tibor; Vento, Dan; Lalli, Nick; Palinski, Timothy

    2014-01-01

    Science, technology, and planetary mission communities have a growing interest in components and systems that are capable of working in extreme (high) temperature and pressure conditions. Terrestrial applications include scientific research, aerospace, defense, automotive systems, energy storage and power distribution, deep mining, and others. As the target environments get increasingly extreme, capabilities to develop and test the sensors and systems designed to operate in such environments will be required. An application of particular importance to the planetary science community is the ability for a robotic lander to survive on the Venus surface, where pressures are nearly 100 times that of Earth and temperatures approach 500 °C. The scientific importance and relevance of Venus missions are stated in the current Planetary Decadal Survey. Further, several missions to Venus were proposed in the most recent Discovery call. Despite this interest, the ability to accurately simulate Venus conditions at a scale that can test and validate instruments and spacecraft systems and accurately simulate the Venus atmosphere has been lacking. This paper discusses and compares the capabilities that are known to exist within and outside the United States to simulate the extreme environmental conditions found on terrestrial or planetary surfaces, including the Venus atmosphere and surface. The paper then focuses on discussing the recent additional capability found in the NASA Glenn Extreme Environment Rig (GEER). The GEER, located at the NASA Glenn Research Center in Cleveland, Ohio, is designed to simulate not only the temperature and pressure extremes described, but can also accurately reproduce the atmospheric compositions of bodies in the solar system, including those with acidic and hazardous elements. GEER capabilities and characteristics are described along with operational considerations relevant to potential users. The paper presents initial operating results and concludes with a sampling of investigations or tests that have been requested or expected.

  11. Relative performance of empirical and physical models in assessing the seasonal and annual glacier surface mass balance of Saint-Sorlin Glacier (French Alps)

    NASA Astrophysics Data System (ADS)

    Réveillet, Marion; Six, Delphine; Vincent, Christian; Rabatel, Antoine; Dumont, Marie; Lafaysse, Matthieu; Morin, Samuel; Vionnet, Vincent; Litt, Maxime

    2018-04-01

    This study focuses on simulations of the seasonal and annual surface mass balance (SMB) of Saint-Sorlin Glacier (French Alps) for the period 1996-2015 using the detailed SURFEX/ISBA-Crocus snowpack model. The model is forced by SAFRAN meteorological reanalysis data, adjusted with automatic weather station (AWS) measurements to ensure that simulations of all the energy balance components, in particular turbulent fluxes, are accurately represented with respect to the measured energy balance. Results indicate good model performance for the simulation of summer SMB when using meteorological forcing adjusted with in situ measurements. However, model performance strongly decreases without in situ meteorological measurements. The sensitivity of the model to meteorological forcing indicates a strong sensitivity to wind speed, higher than the sensitivity to ice albedo. Compared to an empirical approach, the model exhibited better performance for simulations of snow and firn melting in the accumulation area and similar performance in the ablation area when forced with meteorological data adjusted with nearby AWS measurements. When such measurements were not available close to the glacier, the empirical model performed better. Our results suggest that simulations of the evolution of future mass balance using an energy balance model require very accurate meteorological data. Given the uncertainties in the temporal evolution of the relevant meteorological variables and glacier surface properties in the future, empirical approaches based on temperature and precipitation could be more appropriate for simulations of glaciers in the future.

  12. Enhancing the Simulation Speed of Sensor Network Applications by Asynchronization of Interrupt Service Routines

    PubMed Central

    Joe, Hyunwoo; Woo, Duk-Kyun; Kim, Hyungshin

    2013-01-01

    Sensor network simulations require high fidelity and timing accuracy to be used as an implementation and evaluation tool. The cycle-accurate, instruction-level simulator is the known solution for these purposes. However, this type of simulation incurs a high computation cost, since it must model not only instruction-level behavior but also the synchronization between multiple sensors required for causality. This paper presents a novel technique that exploits asynchronous simulation of interrupt service routines (ISRs). We can avoid the synchronization overheads when the interrupt service routines are simulated without preemption. If causality errors occur, a rollback procedure restores the original synchronized simulation. This concept can be extended to any instruction-level sensor network simulator. Evaluation results show our method can enhance the simulation speed by up to 52% in our experiments. For applications with longer interrupt service routines and a smaller number of preemptions, the speedup becomes greater. In addition, our simulator is 2 to 11 times faster than a well-known sensor network simulator. PMID:23966200
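    The optimistic scheme described above reduces to checkpoint, run, check, and (only if needed) roll back. A schematic sketch of that control flow; every class and method name below (Node, checkpoint, restore, run_isr, ...) is a hypothetical illustration, not the paper's API:

    ```python
    # Sketch of asynchronous ISR simulation with rollback on causality errors.
    def simulate_isr_async(node, isr, peers):
        snapshot = node.checkpoint()        # save state before the optimistic run
        t_start = node.clock
        node.run_isr(isr)                   # run the whole ISR without syncing
        t_end = node.clock
        # Causality check: did any peer event land inside the ISR window?
        if any(p.has_event_between(t_start, t_end) for p in peers):
            node.restore(snapshot)                 # roll back ...
            node.run_isr_synchronized(isr, peers)  # ... and redo it cycle by cycle
    ```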

  13. Test of Shi et al. Method to Infer the Magnetic Reconnection Geometry from Spacecraft Data: MHD Simulation with Guide Field and Antiparallel Kinetic Simulation

    NASA Technical Reports Server (NTRS)

    Denton, R.; Sonnerup, B. U. O.; Swisdak, M.; Birn, J.; Drake, J. F.; Heese, M.

    2012-01-01

    When analyzing data from an array of spacecraft (such as Cluster or MMS) crossing a site of magnetic reconnection, it is desirable to be able to accurately determine the orientation of the reconnection site. If the reconnection is quasi-two-dimensional, there are three key directions: the direction of maximum inhomogeneity (the direction across the reconnection site), the direction of the reconnecting component of the magnetic field, and the direction of rough invariance (the "out of plane" direction). Using simulated spacecraft observations of magnetic reconnection in the geomagnetic tail, we extend our previous tests of the direction-finding method developed by Shi et al. (2005) and the method to determine the structure velocity relative to the spacecraft, Vstr. These methods require data from four proximate spacecraft. We add artificial noise and calibration errors to the simulation fields, and then use the perturbed gradient of the magnetic field B and perturbed time derivative dB/dt, as described by Denton et al. (2010). Three new simulations are examined: a weakly three-dimensional, i.e., quasi-two-dimensional, MHD simulation without a guide field, a quasi-two-dimensional MHD simulation with a guide field, and a two-dimensional full dynamics kinetic simulation with inherent noise so that the apparent minimum gradient was not exactly zero, even without added artificial errors. We also examined variations of the spacecraft trajectory for the kinetic simulation. The accuracy of the directions found varied depending on the simulation and spacecraft trajectory, but all the directions could be found within about 10° for all cases. Various aspects of the method were examined, including how to choose averaging intervals and the best intervals for determining the directions and velocity. For the kinetic simulation, we also investigated in detail how the errors in the inferred gradient directions from the unmodified Shi et al. method (using the unperturbed gradient) depended on the amplitude of the calibration errors. For an accuracy of 3° for the maximum gradient direction, the calibration errors could be as large as 3% of the reconnection magnetic field, while for the same accuracy for the minimum gradient direction, the calibration errors could only be as large as 0.03% of the reconnection magnetic field. These results suggest that the maximum gradient direction can normally be determined by the unmodified Shi et al. method, while the modified method or some other method must be used to accurately determine the minimum gradient direction. The structure velocity was found with magnitude accurate to 2% and direction accurate to within 5°.
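    The core of such four-spacecraft analysis is a least-squares estimate of the magnetic field gradient tensor, whose symmetric square orders the maximum, intermediate, and minimum gradient directions. A minimal barycentric least-squares sketch (the Shi et al. weighting and the error-perturbation steps described above are omitted):

    ```python
    import numpy as np

    def grad_b(positions, b_fields):
        """Estimate the magnetic field gradient tensor from four spacecraft.

        positions : (4, 3) spacecraft positions
        b_fields  : (4, 3) simultaneous B measurements
        Solves db = dr @ G in the least-squares sense, relative to the
        tetrahedron barycenter, so G[i, j] approximates dB_j/dx_i.
        """
        dr = positions - positions.mean(axis=0)
        db = b_fields - b_fields.mean(axis=0)
        G, *_ = np.linalg.lstsq(dr, db, rcond=None)
        return G

    def characteristic_directions(G):
        """Eigenvectors of the symmetric matrix G @ G.T give the directions of
        maximum, intermediate, and minimum gradient (largest eigenvalue first)."""
        w, v = np.linalg.eigh(G @ G.T)   # eigh returns ascending order
        return w[::-1], v[:, ::-1]
    ```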

  14. Simultaneous head tissue conductivity and EEG source location estimation.

    PubMed

    Akalin Acar, Zeynep; Acar, Can E; Makeig, Scott

    2016-01-01

    Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm²-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm²-scale accurate 3-D functional cortical imaging modality.

  15. Simultaneous head tissue conductivity and EEG source location estimation

    PubMed Central

    Acar, Can E.; Makeig, Scott

    2015-01-01

    Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm²-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm²-scale accurate 3-D functional cortical imaging modality. PMID:26302675

  16. A new method of field MRTD test

    NASA Astrophysics Data System (ADS)

    Chen, Zhibin; Song, Yan; Liu, Xianhong; Xiao, Wenjian

    2014-09-01

    MRTD is an important indicator of the imaging performance of an infrared camera. In the traditional laboratory test, a blackbody is used as the simulated heat source; it is not only expensive and bulky but also ill-suited to the field-testing requirements of online, automatic measurement of infrared camera MRTD. To solve this problem, this paper introduces a new MRTD detection device, which uses an LED as the simulated heat source and a four-bar target carved in plated zinc sulfide glass as the simulated target. Using a Cassegrain collimation system adapted to high temperatures, the target is projected to infinite distance so that it can be observed by the human eye to complete the subjective test, or captured and processed by image-processing software to complete the objective measurement. In this method the LED replaces the blackbody: the color temperature of the LED is calibrated with a thermal imager, establishing the relation curve between the LED temperature-controlling current and the simulated blackbody temperature difference and thereby achieving accurate temperature control of the infrared target. Experimental results show that the accuracy of the device in field testing of thermal imager MRTD is within 0.1 K, which greatly reduces cost, meets the project requirements, and gives the method wide applicability.
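    The heart of the method is the calibration curve linking LED drive current to an equivalent blackbody temperature difference. A sketch of how such a curve might be fitted and inverted; all numbers below are hypothetical, not measured values from the paper:

    ```python
    import numpy as np

    # Hypothetical calibration pairs: LED drive current (mA) vs. the equivalent
    # blackbody temperature difference (K) measured with a reference thermal imager.
    current_ma = np.array([10., 20., 30., 40., 50., 60.])
    delta_t_k  = np.array([0.05, 0.12, 0.21, 0.33, 0.47, 0.64])

    # A low-order polynomial is usually enough for a smooth calibration curve.
    coeffs = np.polyfit(current_ma, delta_t_k, deg=2)

    def current_for_delta_t(target_dt, coeffs):
        """Invert the calibration curve: find the current producing target_dt."""
        a, b, c = coeffs
        roots = np.roots([a, b, c - target_dt])
        real = roots[np.isreal(roots)].real
        return real[real > 0].min()      # smallest positive physical root

    print(current_for_delta_t(0.25, coeffs), "mA")
    ```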

  17. Predictive wind turbine simulation with an adaptive lattice Boltzmann method for moving boundaries

    NASA Astrophysics Data System (ADS)

    Deiterding, Ralf; Wood, Stephen L.

    2016-09-01

    Operating horizontal axis wind turbines create large-scale turbulent wake structures that affect the power output of downwind turbines considerably. The computational prediction of this phenomenon is challenging as efficient low dissipation schemes are necessary that represent the vorticity production by the moving structures accurately and that are able to transport wakes without significant artificial decay over distances of several rotor diameters. We have developed a parallel adaptive lattice Boltzmann method for large eddy simulation of turbulent weakly compressible flows with embedded moving structures that considers these requirements rather naturally and enables first principle simulations of wake-turbine interaction phenomena at reasonable computational costs. The paper describes the employed computational techniques and presents validation simulations for the Mexnext benchmark experiments as well as simulations of the wake propagation in the Scaled Wind Farm Technology (SWIFT) array consisting of three Vestas V27 turbines in triangular arrangement.
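    For readers unfamiliar with the approach, the core of a lattice Boltzmann solver is a purely local collide-and-stream update, which is what makes low numerical dissipation and efficient parallelization natural. A minimal single-relaxation-time (BGK) D2Q9 sketch on a periodic grid, without the paper's adaptive refinement, LES closure, or embedded moving boundaries:

    ```python
    import numpy as np

    # D2Q9 lattice: discrete velocities c_i and weights w_i
    c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)

    def lbm_step(f, tau=0.6):
        """One BGK collide-and-stream step; f has shape (9, nx, ny)."""
        rho = f.sum(axis=0)                              # density
        u = np.einsum('ia,ixy->axy', c, f) / rho         # velocity, (2, nx, ny)
        cu = np.einsum('ia,axy->ixy', c, u)              # c_i . u
        usq = (u**2).sum(axis=0)
        feq = w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
        f = f - (f - feq) / tau                          # BGK collision
        for i in range(9):                               # streaming (periodic)
            f[i] = np.roll(f[i], shift=tuple(c[i]), axis=(0, 1))
        return f

    # Example: start from rest with uniform density on a 64x64 grid
    f0 = np.tile(w[:, None, None], (1, 64, 64))
    f1 = lbm_step(f0)
    ```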

  18. Simulation of FIB-SEM images for analysis of porous microstructures.

    PubMed

    Prill, Torben; Schladitz, Katja

    2013-01-01

    Focused ion beam (FIB) nanotomography combines serial sectioning with a focused ion beam and scanning electron microscopy (SEM) to yield high-quality three-dimensional images of material microstructures at the nanometer scale. However, FIB-SEM tomography of highly porous media leads to shine-through artifacts that prevent automatic segmentation of the solid component. We simulate the SEM process in order to generate synthetic FIB-SEM image data for developing and validating segmentation methods. Monte-Carlo techniques yield accurate results, but are too slow for the simulation of FIB-SEM tomography, which requires hundreds of SEM images for one dataset alone. Nevertheless, a quasi-analytic description of the specimen and various acceleration techniques, including a track compression algorithm and an acceleration for the simulation of secondary electrons, cut down the computing time by orders of magnitude, making the simulation of FIB-SEM tomography feasible for the first time.

  19. Real-Time Hardware-in-the-Loop Simulation of Ares I Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Tobbe, Patrick; Matras, Alex; Walker, David; Wilson, Heath; Fulton, Chris; Alday, Nathan; Betts, Kevin; Hughes, Ryan; Turbe, Michael

    2009-01-01

    The Ares Real-Time Environment for Modeling, Integration, and Simulation (ARTEMIS) has been developed for use by the Ares I launch vehicle System Integration Laboratory at the Marshall Space Flight Center. The primary purpose of the Ares System Integration Laboratory is to test the vehicle avionics hardware and software in a hardware-in-the-loop environment to certify that the integrated system is prepared for flight. ARTEMIS has been designed to be the real-time simulation backbone to stimulate all required Ares components for verification testing. ARTEMIS provides high-fidelity dynamics, actuator, and sensor models to simulate an accurate flight trajectory in order to ensure realistic test conditions. ARTEMIS has been designed to take advantage of the advances in underlying computational power now available to support hardware-in-the-loop testing to achieve real-time simulation with unprecedented model fidelity. A modular real-time design relying on a fully distributed computing architecture has been implemented.

  20. Simulation of hypersonic scramjet exhaust. [pressure distribution on afterbody/nozzle sections of vehicle

    NASA Technical Reports Server (NTRS)

    Oman, R. A.; Foreman, K. M.; Leng, J.; Hopkins, H. B.

    1975-01-01

    A plan and some preliminary analysis for the accurate simulation of pressure distributions on the afterbody/nozzle portions of a hypersonic scramjet vehicle are described. The objectives fulfilled were to establish the standards of similitude for a hydrogen/air scramjet exhaust interacting with a vehicle afterbody, determine an experimental technique for validation of the procedures that will be used in conventional wind tunnel facilities, suggest a program of experiments for proof of the concept, and explore any unresolved problems in the proposed simulation procedures. It is shown that true enthalpy, Reynolds number, and nearly exact chemistry can be provided in the exhaust flow for the flight regime from Mach 4 to 10 by a detonation tube simulation. A detailed discussion of the required similarity parameters leads to the conclusion that substitute gases can be used as the simulated exhaust gas in a wind tunnel to achieve the correct interaction forces and moments.

  1. Control of joint motion simulators for biomechanical research

    NASA Technical Reports Server (NTRS)

    Colbaugh, R.; Glass, K.

    1992-01-01

    The authors present a hierarchical adaptive algorithm for controlling upper extremity human joint motion simulators. A joint motion simulator is a computer-controlled, electromechanical system which permits the application of forces to the tendons of a human cadaver specimen in such a way that the cadaver joint under study achieves a desired motion in a physiologic manner. The proposed control scheme does not require knowledge of the cadaver specimen dynamic model, and solves on-line the indeterminate problem which arises because human joints typically possess more actuators than degrees of freedom. Computer simulation results are given for an elbow/forearm system and a wrist/hand system under hierarchical control. The results demonstrate that any desired normal joint motion can be accurately tracked with the proposed algorithm. These simulation results indicate that the controller resolved the actuator redundancy of the indeterminate problem in a physiologic manner, and show that the control scheme was robust to parameter uncertainty and to sensor noise.
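    The indeterminacy arises because the moment-arm matrix has more columns (tendons) than rows (degrees of freedom). One common way to resolve such redundancy, sketched below with made-up moment arms (the paper's hierarchical adaptive scheme is more elaborate), is non-negative least squares, which also enforces the physiologic constraint that tendons can only pull:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def distribute_tendon_forces(J, tau_desired):
        """Resolve actuator redundancy for a tendon-driven joint (illustrative).

        J : (n_dof, n_tendons) moment-arm matrix mapping tendon tensions to
            joint torques; n_tendons > n_dof makes the problem indeterminate.
        tau_desired : (n_dof,) desired joint torques.
        """
        tensions, residual = nnls(J, tau_desired)   # tensions >= 0 by construction
        return tensions

    # Example: a 2-DOF joint driven by 4 tendons (moment arms are made up)
    J = np.array([[0.02, -0.015, 0.01, -0.01],
                  [0.00,  0.005, 0.02, -0.02]])
    print(distribute_tendon_forces(J, np.array([0.5, 0.2])))
    ```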

  2. Evaluating the performance of coupled snow-soil models in SURFEXv8 to simulate the permafrost thermal regime at a high Arctic site

    NASA Astrophysics Data System (ADS)

    Barrere, Mathieu; Domine, Florent; Decharme, Bertrand; Morin, Samuel; Vionnet, Vincent; Lafaysse, Matthieu

    2017-09-01

    Climate change projections still suffer from a limited representation of the permafrost-carbon feedback. Predicting the response of permafrost temperature to climate change requires accurate simulations of Arctic snow and soil properties. This study assesses the capacity of the coupled land surface and snow models ISBA-Crocus and ISBA-ES to simulate snow and soil properties at Bylot Island, a high Arctic site. Field measurements complemented with ERA-Interim reanalyses were used to drive the models and to evaluate simulation outputs. Snow height, density, temperature, thermal conductivity and thermal insulance are examined to determine the critical variables involved in the soil and snow thermal regime. Simulated soil properties are compared to measurements of thermal conductivity, temperature and water content. The simulated snow density profiles are unrealistic, which is most likely caused by the lack of representation in snow models of the upward water vapor fluxes generated by the strong temperature gradients within the snowpack. The resulting vertical profiles of thermal conductivity are inverted compared to observations, with high simulated values at the bottom of the snowpack. Still, ISBA-Crocus manages to successfully simulate the soil temperature in winter. Results are satisfactory in summer, but the temperature of the top soil could be better reproduced by adequately representing surface organic layers, i.e., mosses and litter, and in particular their water retention capacity. Transition periods (soil freezing and thawing) are the least well reproduced because the high basal snow thermal conductivity induces an excessively rapid heat transfer between the soil and the snow in simulations. Hence, global climate models should carefully consider Arctic snow thermal properties, and especially the thermal conductivity of the basal snow layer, to perform accurate predictions of the permafrost evolution under climate change.

  3. Simulation of reactive polydisperse sprays strongly coupled to unsteady flows in solid rocket motors: Efficient strategy using Eulerian Multi-Fluid methods

    NASA Astrophysics Data System (ADS)

    Sibra, A.; Dupays, J.; Murrone, A.; Laurent, F.; Massot, M.

    2017-06-01

    In this paper, we tackle the issue of the accurate simulation of evaporating and reactive polydisperse sprays strongly coupled to unsteady gaseous flows. In solid propulsion, aluminum particles are included in the propellant to improve the global performances, but the distributed combustion of these droplets in the chamber is suspected to be a driving mechanism of hydrodynamic and acoustic instabilities. The faithful prediction of two-phase interactions is a determining step for future solid rocket motor optimization. When looking at saving computational resources as required for industrial applications, performing reliable simulations of two-phase flow instabilities appears as a challenge for both modeling and scientific computing. The size polydispersity, which conditions the droplet dynamics, is a key parameter that has to be accounted for. For moderately dense sprays, a kinetic approach based on a statistical point of view is particularly appropriate. The spray is described by a number density function and its evolution follows a Williams-Boltzmann transport equation. To solve it, we use Eulerian Multi-Fluid methods, based on a continuous discretization of the size phase space into sections, which offer an accurate treatment of the polydispersion. The objective of this paper is threefold: first, to derive a new Two Size Moment Multi-Fluid model that is able to tackle evaporating polydisperse sprays at low cost while accurately describing the main driving mechanisms; second, to develop a dedicated evaporation scheme to simultaneously treat mass, moment, and energy exchanges with the gas and between the sections; and finally, to design a time-splitting operator strategy respecting both reactive two-phase flow physics and the cost/accuracy ratio required for industrial computations. Using a research code, we provide 0D validations of the new scheme before assessing the splitting technique on a reference two-phase flow acoustic case. Implemented in the industry-oriented CEDRE code, these developments allow realistic solid rocket motor configurations to be simulated, featuring the first polydisperse reactive computations with a fully Eulerian method.

  4. Progress of the NASA/USGS Lunar Regolith Simulant Project

    NASA Technical Reports Server (NTRS)

    Rickman, Douglas; McLemore, C.; Stoeser, D.; Schrader, C.; Fikes, J.; Street, K.

    2009-01-01

    Beginning in 2004, personnel at MSFC began serious efforts to develop a new generation of lunar simulants. The first two products were a replication of the previous JSC-1 simulant under a contract to Orbitec and a major workshop in 2005 on future simulant development. It was recognized in early 2006 that there were serious limitations with the standard approach of simply taking a single terrestrial rock and grinding it. To a geologist, even a cursory examination of the Lunar Sourcebook shows that matching lunar heterogeneity, crystal size, relative mineral abundances, lack of H2O, plagioclase chemistry, and glass abundance simply cannot be done with any simple combination of terrestrial rocks. Thus the project refocused its efforts and approached simulant development in a new and more comprehensive manner, examining new approaches in simulant development and ways to more accurately compare simulants to actual lunar materials. This led to a multi-year effort with five major tasks running in parallel: Requirements, Lunar Analysis, Process Development, Feed Stocks, and Standards.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, Benjamin S.; Hamilton, Steven P.; Jarrett, Michael G.

    This report describes the performance improvements made to the VERA Core Simulator (VERA-CS) during FY2016. The development of the VERA Core Simulator has focused on the capability needed to deplete physical reactors and help solve various problems; this capability required the accurate simulation of many operating cycles of a nuclear power plant. The first section of this report introduces two test problems used to assess the run-time performance of VERA-CS using a source dated February 2016. The next section provides a brief overview of the major modifications made to decrease the computational cost. Following the descriptions of the major improvements, the run-time for each improvement is shown. Conclusions on the work are presented, and further follow-on performance improvements are suggested.

  6. End-to-End Beam Simulations for the New Muon G-2 Experiment at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korostelev, Maxim; Bailey, Ian; Herrod, Alexander

    2016-06-01

    The aim of the new muon g-2 experiment at Fermilab is to measure the anomalous magnetic moment of the muon with an unprecedented uncertainty of 140 ppb. A beam of positive muons required for the experiment is created by pion decay. Detailed studies of the beam dynamics and spin polarization of the muons are important to predict systematic uncertainties in the experiment. In this paper, we present the results of beam simulations and spin tracking from the pion production target to the muon storage ring. The end-to-end beam simulations are developed in Bmad and include the processes of particle decay, collimation (with accurate representation of all apertures), and spin tracking.

  7. Accuracy of Binary Black Hole Waveform Models for Advanced LIGO

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Fong, Heather; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Chu, Tony; Brown, Duncan; Lovelace, Geoffrey; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; Simulating Extreme Spacetimes (SXS) Team

    2016-03-01

    Coalescing binaries of compact objects, such as black holes and neutron stars, are the primary targets for gravitational-wave (GW) detection with Advanced LIGO. Accurate modeling of the emitted GWs is required to extract information about the binary source. The most accurate solution to the general relativistic two-body problem is available in numerical relativity (NR), which is however limited in application due to computational cost. Current searches use semi-analytic models that are based in post-Newtonian (PN) theory and calibrated to NR. In this talk, I will present comparisons between contemporary models and high-accuracy numerical simulations performed using the Spectral Einstein Code (SpEC), focusing on two questions: (i) how well do models capture the binary's late inspiral, where they lack accurate a priori information from PN or NR, and (ii) how accurately do they model binaries with parameters outside their range of calibration? These results guide the choice of templates for future GW searches, and motivate future modeling efforts.

  8. Transient Vibration Prediction for Rotors on Ball Bearings Using Load-dependent Non-linear Bearing Stiffness

    NASA Technical Reports Server (NTRS)

    Fleming, David P.; Poplawski, J. V.

    2002-01-01

    Rolling-element bearing forces vary nonlinearly with bearing deflection. Thus an accurate rotordynamic transient analysis requires bearing forces to be determined at each step of the transient solution. Analyses have been carried out to show the effect of accurate bearing transient forces (accounting for non-linear speed and load dependent bearing stiffness) as compared to conventional use of average rolling-element bearing stiffness. Bearing forces were calculated by COBRA-AHS (Computer Optimized Ball and Roller Bearing Analysis - Advanced High Speed) and supplied to the rotordynamics code ARDS (Analysis of Rotor Dynamic Systems) for accurate simulation of rotor transient behavior. COBRA-AHS is a fast-running 5 degree-of-freedom computer code able to calculate high speed rolling-element bearing load-displacement data for radial and angular contact ball bearings and also for cylindrical and tapered roller bearings. Results show that use of nonlinear bearing characteristics is essential for accurate prediction of rotordynamic behavior.
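    The point of the coupling is that the bearing force is re-evaluated from the instantaneous deflection at every time step rather than linearized once. A toy single-DOF illustration with a made-up hardening-stiffness law (COBRA-AHS supplies tabulated load-displacement data instead):

    ```python
    import numpy as np

    def bearing_force(deflection, k0=2.0e7, k2=5.0e13):
        """Illustrative nonlinear bearing force: stiffness grows with deflection,
        as rolling-element bearings stiffen under load; constants are made up."""
        return -(k0 + k2 * deflection**2) * deflection

    def transient(m=50.0, c=200.0, f_unbalance=1000.0, omega=300.0,
                  dt=1e-5, steps=20000):
        """Explicit time marching of a 1-DOF rotor; the bearing force is
        re-evaluated at every step instead of using an average stiffness."""
        x, v = 0.0, 0.0
        for n in range(steps):
            t = n * dt
            f = bearing_force(x) - c * v + f_unbalance * np.sin(omega * t)
            v += dt * f / m
            x += dt * v          # semi-implicit (symplectic) Euler
        return x

    print(transient())
    ```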

  9. High Order Schemes in Bats-R-US for Faster and More Accurate Predictions

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Toth, G.; Gombosi, T. I.

    2014-12-01

    BATS-R-US is a widely used global magnetohydrodynamics model that originally employed second order accurate TVD schemes combined with block-based Adaptive Mesh Refinement (AMR) to achieve high resolution in the regions of interest. In recent years we have implemented the fifth order accurate finite difference schemes CWENO5 and MP5 for uniform Cartesian grids. Now the high order schemes have been extended to generalized coordinates, including spherical grids, and to non-uniform AMR grids with dynamic regridding. We present numerical tests that verify the preservation of the free-stream solution and high-order accuracy, as well as robust oscillation-free behavior near discontinuities. We apply the new high order accurate schemes to both heliospheric and magnetospheric simulations and show that they are robust and can achieve the same accuracy as the second order scheme with much less computational resources. This is especially important for space weather prediction, which requires faster than real time code execution.
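    As an illustration of the kind of scheme involved, below is a minimal fifth-order WENO reconstruction of left interface states on a 1-D periodic grid, using the standard Jiang-Shu smoothness indicators and ideal weights. This is a generic sketch, not the BATS-R-US CWENO5/MP5 implementation:

    ```python
    import numpy as np

    def weno5_reconstruct(v):
        """Fifth-order WENO reconstruction of the left state at each interface
        i+1/2 from cell averages v (1-D, periodic boundaries)."""
        eps = 1e-6
        vm2, vm1, v0 = np.roll(v, 2), np.roll(v, 1), v
        vp1, vp2 = np.roll(v, -1), np.roll(v, -2)
        # Three candidate third-order stencils
        p0 = (2*vm2 - 7*vm1 + 11*v0) / 6
        p1 = (-vm1 + 5*v0 + 2*vp1) / 6
        p2 = (2*v0 + 5*vp1 - vp2) / 6
        # Jiang-Shu smoothness indicators
        b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
        b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
        b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
        # Nonlinear weights from the ideal weights (1/10, 6/10, 3/10)
        a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
        return (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2)
    ```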

  10. Multispectral optical telescope alignment testing for a cryogenic space environment

    NASA Astrophysics Data System (ADS)

    Newswander, Trent; Hooser, Preston; Champagne, James

    2016-09-01

    Multispectral space telescopes with visible to long wave infrared spectral bands present difficult alignment challenges. The visible channels require alignment precision and stability to provide good image quality at short wavelengths. This is most often accomplished by choosing near-zero thermal expansion materials: glass or ceramic mirrors metered with carbon fiber reinforced polymer (CFRP) structures designed to have a matching thermal expansion. The IR channels are less sensitive to alignment, but they often require cryogenic cooling for improved sensitivity with the reduced radiometric background. Efficient solutions to the difficult problem of maintaining good visible image quality at cryogenic temperatures have been explored by building and testing a telescope simulator. The telescope simulator is an on-axis, CFRP-metered set of optics with a ZERODUR® mirror. Testing has been completed to accurately measure telescope optical element alignment and mirror figure changes in a cryogenic space-simulated environment. Measured alignment error and mirror figure error test results are reported with a discussion of their impact on system optical performance.

  11. Numerical Simulation of the Mediterranean Sea Using DieCAST: Interaction Between Basin, Sub-basin and Local Scale Features and Natural Variability.

    NASA Astrophysics Data System (ADS)

    Fernández, V.; Dietrich, D. E.; Haney, R. L.; Tintoré, J.

    In situ and satellite data obtained during the last ten years have shown that the circulation in the Mediterranean Sea is extremely complex in space, with significant features ranging from mesoscale to sub-basin and basin scale, and highly variable in time, with mesoscale to seasonal and interannual signals. Also, the steep bottom topography and the atmospheric conditions that vary from one sub-basin to another make the circulation composed of numerous energetic and narrow coastal currents, density fronts, and mesoscale structures that interact at sub-basin scale with the large scale circulation. To simulate these features numerically and better understand them, besides high grid resolution, an ocean model with low numerical dispersion and low physical dissipation is required. We present the results from a 1/8° horizontal resolution numerical simulation of the Mediterranean Sea using the DieCAST ocean model, which meets the above requirements since it is stable with low general dissipation and uses fourth-order-accurate approximations with low numerical dispersion. The simulations are carried out with climatological surface forcing using monthly mean winds and relaxation towards climatological values of temperature and salinity. The model reproduces the main features of the large basin scale circulation, as well as the seasonal variability of sub-basin scale currents that are well documented by observations in straits and channels. In addition, DieCAST brings out natural fronts and eddies that usually do not appear in numerical simulations of the Mediterranean and that lead to a natural interannual variability. The role of this intrinsic variability in the general circulation will be discussed.

  12. Numerical simulation and characterization of trapping noise in InGaP-GaAs heterojunction devices at high injection

    NASA Astrophysics Data System (ADS)

    Nallatamby, Jean-Christophe; Abdelhadi, Khaled; Jacquet, Jean-Claude; Prigent, Michel; Floriot, Didier; Delage, Sylvain; Obregon, Juan

    2013-03-01

    Commercially available simulators present considerable advantages in performing accurate DC, AC, and transient simulations of semiconductor devices, including many fundamental and parasitic effects that are not generally taken into account in in-house simulators. Nevertheless, while the public-domain TCAD simulators we have tested give accurate results for the simulation of diffusion noise, none of them simulates trap-assisted GR noise accurately. To overcome this problem we propose a robust solution to accurately simulate GR noise due to traps. It is based on numerical processing of the output data of one of the simulators available in the public domain, namely SENTAURUS (from Synopsys). We have linked together, through a dedicated Data Access Component (DAC), the deterministic output data available from SENTAURUS and a powerful, customizable post-processing tool developed on the SCILAB mathematical software package. Thus, robust simulations of GR noise in semiconductor devices can be performed by using GR Langevin sources associated with the scalar Green function responses of the device. Our method takes advantage of the accuracy of the deterministic simulations of electronic devices obtained with SENTAURUS. A comparison between 2-D simulations and measurements of low frequency noise on InGaP-GaAs heterojunctions, at low as well as high injection levels, demonstrates the validity of the proposed simulation tool.

  13. Spatial Resolution Requirements for Accurate Identification of Drivers of Atrial Fibrillation

    PubMed Central

    Roney, Caroline H.; Cantwell, Chris D.; Bayer, Jason D.; Qureshi, Norman A.; Lim, Phang Boon; Tweedy, Jennifer H.; Kanagaratnam, Prapa; Vigmond, Edward J.; Ng, Fu Siong

    2017-01-01

    Background— Recent studies have demonstrated conflicting mechanisms underlying atrial fibrillation (AF), with the spatial resolution of data often cited as a potential reason for the disagreement. The purpose of this study was to investigate whether the variation in spatial resolution of mapping may lead to misinterpretation of the underlying mechanism in persistent AF. Methods and Results— Simulations of rotors and focal sources were performed to estimate the minimum number of recording points required to correctly identify the underlying AF mechanism. The effects of different data types (action potentials and unipolar or bipolar electrograms) and rotor stability on resolution requirements were investigated. We also determined the ability of clinically used endocardial catheters to identify AF mechanisms using clinically recorded and simulated data. The spatial resolution required for correct identification of rotors and focal sources is a linear function of spatial wavelength (the distance between wavefronts) of the arrhythmia. Rotor localization errors are larger for electrogram data than for action potential data. Stationary rotors are more reliably identified compared with meandering trajectories, for any given spatial resolution. All clinical high-resolution multipolar catheters are of sufficient resolution to accurately detect and track rotors when placed over the rotor core although the low-resolution basket catheter is prone to false detections and may incorrectly identify rotors that are not present. Conclusions— The spatial resolution of AF data can significantly affect the interpretation of the underlying AF mechanism. Therefore, the interpretation of human AF data must be taken in the context of the spatial resolution of the recordings. PMID:28500175

  14. Calculation of steady and unsteady transonic flow using a Cartesian mesh and gridless boundary conditions with application to aeroelasticity

    NASA Astrophysics Data System (ADS)

    Kirshman, David

    A numerical method for the solution of inviscid compressible flow using an array of embedded Cartesian meshes in conjunction with gridless surface boundary conditions is developed. The gridless boundary treatment is implemented by means of a least squares fitting of the conserved flux variables using a cloud of nodes in the vicinity of the surface geometry. The method allows for accurate treatment of the surface boundary conditions using a grid resolution an order of magnitude coarser than required of typical Cartesian approaches. Additionally, the method does not suffer from issues associated with thin body geometry or extremely fine cut cells near the body. Unlike some methods that consider a gridless (or "meshless") treatment throughout the entire domain, multi-grid acceleration can be effectively incorporated and issues associated with global conservation are alleviated. The "gridless" surface boundary condition provides for efficient and simple problem set up since definition of the body geometry is generated independently from the field mesh, and automatically incorporated into the field discretization of the domain. The applicability of the method is first demonstrated for steady flow of single and multi-element airfoil configurations. Using this method, comparisons with traditional body-fitted grid simulations reveal that steady flow solutions can be obtained accurately with minimal effort associated with grid generation. The method is then extended to unsteady flow predictions. In this application, flow field simulations for the prescribed oscillation of an airfoil indicate excellent agreement with experimental data. Furthermore, it is shown that the phase lag associated with shock oscillation is accurately predicted without the need for a deformable mesh. Lastly, the method is applied to the prediction of transonic flutter using a two-dimensional wing model, in which comparisons with moving mesh simulations yield nearly identical results. As a result, applicability of the method to transient and vibrating fluid-structure interaction problems is established in which the requirement for a deformable mesh is eliminated.
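    The gridless boundary treatment described above rests on a local least-squares fit over a cloud of nodes near the surface. A minimal sketch of such a fit with a linear basis (the dissertation fits the conserved flux variables inside a full solver; this only illustrates the fitting step):

    ```python
    import numpy as np

    def least_squares_cloud_fit(points, values, x0):
        """Fit a local linear model q(x) ~ a + g . (x - x0) to a cloud of nodes.

        points : (n, 2) node coordinates near the surface
        values : (n,) one flow variable sampled at those nodes
        x0     : (2,) surface point where boundary conditions are imposed
        Returns the fitted value and gradient at x0, from which conditions such
        as zero normal velocity can be enforced.
        """
        A = np.hstack([np.ones((len(points), 1)), points - x0])
        coef, *_ = np.linalg.lstsq(A, values, rcond=None)
        return coef[0], coef[1:]     # value at x0, gradient

    # Example with a synthetic linear field u(x, y) = 1 + 2x - y
    pts = np.array([[0.1, 0.0], [0.2, 0.1], [0.0, 0.2], [0.3, -0.1], [0.15, 0.15]])
    vals = 1 + 2*pts[:, 0] - pts[:, 1]
    print(least_squares_cloud_fit(pts, vals, np.zeros(2)))
    ```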

  15. Predicting the crystalline and porous equations of state for secondary explosives

    NASA Astrophysics Data System (ADS)

    Wixom, Ryan; Damm, David

    2013-06-01

    Accurate simulations of energetic material response necessitate accurate unreacted equations of state at pressures much higher than even the C-J state. Unfortunately, for reactive materials, experimental data at high pressures may be unattainable, and extrapolation from low-pressure data results in unacceptable uncertainty. In addition to being low-pressure, the available data is typically limited to the porous state. The fully-dense, or crystalline, equation of state is required for building mesoscale simulations of the dynamic response of energetic materials. We have used quantum molecular dynamics to predict the Hugoniots and 300 K isotherms of crystalline PETN, HNS, CL-20 and TATB up to pressures not attainable in experiments. The porous Hugoniots for these materials were then analytically obtained and are validated by comparison with available data. Our calculations for TATB confirm the presence of a kink in the Hugoniot, and the softening of the shock response is explained in terms of a change in molecular conformation and the loss of aromaticity.

  16. Seasonal Parameterizations of the Tau-Omega Model Using the ComRAD Ground-Based SMAP Simulator

    NASA Technical Reports Server (NTRS)

    O'Neill, P.; Joseph, A.; Srivastava, P.; Cosh, M.; Lang, R.

    2014-01-01

    NASA's Soil Moisture Active Passive (SMAP) mission is scheduled for launch in November 2014. In the prelaunch time frame, the SMAP team has focused on improving retrieval algorithms for the various SMAP baseline data products. The SMAP passive-only soil moisture product depends on accurate parameterization of the tau-omega model to achieve the required accuracy in soil moisture retrieval. During a field experiment (APEX12) conducted in the summer of 2012 under dry conditions in Maryland, the Combined Radar/Radiometer (ComRAD) truck-based SMAP simulator collected active/passive microwave time series data at the SMAP incident angle of 40 degrees over corn and soybeans throughout the crop growth cycle. A similar experiment was conducted only over corn in 2002 under normal moist conditions. Data from these two experiments will be analyzed and compared to evaluate how changes in vegetation conditions throughout the growing season in both a drought and normal year can affect parameterizations in the tau-omega model for more accurate soil moisture retrieval.
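    For reference, the tau-omega model in its standard zeroth-order form expresses the brightness temperature as attenuated soil emission plus direct and soil-reflected canopy emission; the parameterization work above amounts to making the optical depth tau and albedo omega track seasonal vegetation. A sketch of that standard form, with illustrative parameter values:

    ```python
    import numpy as np

    def tau_omega_tb(t_soil, t_canopy, emissivity, tau, omega, theta_deg=40.0):
        """Zeroth-order tau-omega brightness temperature (K), one polarization.

        tau   : vegetation optical depth at nadir
        omega : single-scattering albedo
        theta_deg : incidence angle (SMAP observes at 40 degrees)
        """
        gamma = np.exp(-tau / np.cos(np.radians(theta_deg)))  # canopy transmissivity
        tb_soil   = emissivity * t_soil * gamma               # attenuated soil emission
        tb_canopy = (1 - omega) * (1 - gamma) * t_canopy      # direct canopy emission
        tb_refl   = tb_canopy * (1 - emissivity) * gamma      # canopy emission reflected
        return tb_soil + tb_canopy + tb_refl                  # off the soil, re-attenuated

    print(tau_omega_tb(t_soil=300., t_canopy=298., emissivity=0.85,
                       tau=0.12, omega=0.05))
    ```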

  17. Autonomous celestial navigation based on Earth ultraviolet radiance and fast gradient statistic feature extraction

    NASA Astrophysics Data System (ADS)

    Lu, Shan; Zhang, Hanmo

    2016-01-01

    To meet the requirement of autonomous orbit determination, this paper proposes a fast curve fitting method based on earth ultraviolet features to obtain an accurate earth vector direction, in order to achieve high precision autonomous navigation. Firstly, combining the stable character of earth ultraviolet radiance with an atmospheric radiation transmission model software package, the paper simulates the earth ultraviolet radiation model at different times and chooses the proper observation band. Then a fast, improved edge extraction method combining the Sobel operator and local binary patterns (LBP) is used, which can both eliminate noise efficiently and extract earth ultraviolet limb features accurately. Earth centroid locations on simulated images are then estimated via least squares fitting using part of the limb edges. Taking advantage of the estimated earth vector direction and earth distance, an Extended Kalman Filter (EKF) is finally applied to realize the autonomous navigation. Experiment results indicate the proposed method can achieve sub-pixel earth centroid location estimation and greatly enhance autonomous celestial navigation precision.

  18. Adaptive Time Stepping for Transient Network Flow Simulation in Rocket Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok K.; Ravindran, S. S.

    2017-01-01

    Fluid and thermal transients found in rocket propulsion systems, such as in propellant feedline systems, are complex processes involving fast phases followed by slow phases. Their time accurate computation therefore requires the use of a short time step initially, followed by a much larger time step. Yet there are instances that involve fast-slow-fast phases. In this paper, we present a feedback control based adaptive time stepping algorithm and discuss its use in network flow simulation of fluid and thermal transients. The time step is automatically controlled during the simulation by monitoring changes in certain key variables and by feedback. In order to demonstrate the viability of time adaptivity for engineering problems, we applied it to simulate water hammer and cryogenic chill down in pipelines. Our comparison and validation demonstrate the accuracy and efficiency of this adaptive strategy.
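    A common form of feedback-controlled step selection is an elementary PI controller acting on the ratio of an error estimate to its tolerance. The sketch below uses generic gains and limits and is illustrative only, not the paper's controller:

    ```python
    def adapt_dt(dt, err, tol=1e-4, err_prev=None, kp=0.075, ki=0.175,
                 dt_min=1e-6, dt_max=1.0):
        """Feedback-controlled time step (elementary PI controller).

        err : current local error estimate; err_prev : estimate from last step.
        Growth during slow phases and shrinkage during fast phases happen
        automatically as err tracks the solution's rate of change.
        """
        if err == 0.0:
            return dt_max
        if err_prev is None:                    # first step: I-control only
            fac = (tol / err) ** ki
        else:                                   # PI: react to error and its trend
            fac = (tol / err) ** ki * (err_prev / err) ** kp
        fac = min(2.0, max(0.5, fac))           # limit step-size jumps
        return min(dt_max, max(dt_min, dt * fac))
    ```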

  19. Orion Ground Test Article Water Impact Tests: Photogrammetric Evaluation of Impact Conditions

    NASA Technical Reports Server (NTRS)

    Vassilakos, Gregory J.; Mark, Stephen D.

    2018-01-01

    The Ground Test Article (GTA) is an early production version of the Orion Crew Module (CM). The structural design of the Orion CM is being developed based on LS-DYNA water landing simulations. As part of the process of confirming the accuracy of LS-DYNA water landing simulations, the GTA water impact test series was conducted at NASA Langley Research Center (LaRC) to gather data for comparison with simulations. The simulation of the GTA water impact tests requires the accurate determination of the impact conditions. To accomplish this, the GTA was outfitted with an array of photogrammetry targets. The photogrammetry system utilizes images from two cameras with a specialized tracking software to determine time histories for the 3-D coordinates of each target. The impact conditions can then be determined from the target location data.

  20. New high order schemes in BATS-R-US

    NASA Astrophysics Data System (ADS)

    Toth, G.; van der Holst, B.; Daldorff, L.; Chen, Y.; Gombosi, T. I.

    2013-12-01

    The University of Michigan global magnetohydrodynamics code BATS-R-US has long relied on block-adaptive mesh refinement (AMR) to increase accuracy in regions of interest, using a second order accurate TVD scheme. While AMR can in principle produce arbitrarily accurate results, there are still practical limitations due to computational resources. To further improve the accuracy of the BATS-R-US code, we have recently implemented a 4th order accurate finite volume scheme (McCorquodale and Colella, 2011), the 5th order accurate Monotonicity Preserving scheme (MP5, Suresh and Huynh, 1997), and the 5th order accurate CWENO5 scheme (Capdeville, 2008). In the first implementation the high order accuracy is achieved in the uniform parts of the Cartesian grids, and we still use the second order TVD scheme at resolution changes. For spherical grids the new schemes are only second order accurate so far, but still much less diffusive than the TVD scheme. We show a few verification tests that demonstrate the order of accuracy, as well as challenging space physics applications. The high order schemes are less robust than the TVD scheme, and it requires some tricks and effort to make the code work. When the high order scheme works, however, we find that in most cases it can obtain similar or better results than the TVD scheme on twice finer grids. For three dimensional time dependent simulations this means that the high order scheme is almost 10 times faster and requires 8 times less storage than the second order method.

  1. Multibody Parachute Flight Simulations for Planetary Entry Trajectories Using "Equilibrium Points"

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Ben

    2003-01-01

    A method has been developed to reduce numerical stiffness and computer CPU requirements of high fidelity multibody flight simulations involving parachutes for planetary entry trajectories. Typical parachute entry configurations consist of entry bodies suspended from a parachute, connected by flexible lines. To accurately calculate line forces and moments, the simulations need to keep track of the point where the flexible lines meet (the confluence point). In previous multibody parachute flight simulations, the confluence point has been modeled as a point mass. Using a point mass for the confluence point tends to make the simulation numerically stiff, because its mass is typically much less than the main rigid body masses. One solution for stiff differential equations is to use a very small integration time step. However, this results in large computer CPU requirements. In the method described in the paper, the need for using a mass as the confluence point has been eliminated. Instead, the confluence point is modeled using an "equilibrium point". This point is calculated at every integration step as the point at which the sum of all line forces is zero (static equilibrium). The use of this "equilibrium point" has the advantage of both reducing the numerical stiffness of the simulations and eliminating the dynamical equations associated with vibration of a lumped mass on a high-tension string.
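    Finding the equilibrium point amounts to a small root-finding problem solved at every integration step. A sketch with tension-only elastic lines and made-up geometry and constants (the paper's line and force models are more detailed):

    ```python
    import numpy as np
    from scipy.optimize import fsolve

    def line_force(p, attach, k, l0):
        """Tension-only elastic line pulling point p toward its attachment."""
        d = attach - p
        length = np.linalg.norm(d)
        stretch = max(length - l0, 0.0)       # lines cannot push
        return k * stretch * d / length

    def confluence_point(attachments, stiffnesses, rest_lengths, guess):
        """Massless 'equilibrium point' where the line forces sum to zero."""
        def residual(p):
            return sum(line_force(p, a, k, l0)
                       for a, k, l0 in zip(attachments, stiffnesses, rest_lengths))
        return fsolve(residual, guess)

    # Two body attachment points and one parachute riser (illustrative numbers)
    attachments = [np.array([0., 0., 0.]), np.array([2., 0., 0.]),
                   np.array([1., 2., 10.])]
    print(confluence_point(attachments, [1e5, 1e5, 5e4], [1.2, 1.2, 9.0],
                           np.array([1.0, 0.5, 1.0])))
    ```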

  2. A Comparison of Forest Survey Data with Forest Dynamics Simulators FORCLIM and ZELIG along Climatic Gradients in the Pacific Northwest

    USGS Publications Warehouse

    Busing, Richard T.; Solomon, Allen M.

    2004-01-01

    Two forest dynamics simulators are compared along climatic gradients in the Pacific Northwest. The ZELIG and FORCLIM models are tested against forest survey data from western Oregon. Their ability to generate accurate patterns of forest basal area and species composition is evaluated for series of sites with contrasting climate. Projections from both models approximate the basal area and composition patterns for three sites along the elevation gradient at H.J. Andrews Experimental Forest in the western Cascade Range. The ZELIG model is somewhat more accurate than FORCLIM at the two low-elevation sites. Attempts to project forest composition along broader climatic gradients reveal limitations of ZELIG, however. For example, ZELIG is less accurate than FORCLIM at projecting the average composition of a west Cascades ecoregion selected for intensive analysis. Also, along a gradient consisting of several sites on an east to west transect at 44.1°N latitude, both the FORCLIM model and the actual data show strong changes in composition and total basal area, but the ZELIG model shows a limited response. ZELIG does not simulate the declines in forest basal area and the diminished dominance of mesic coniferous species east of the Cascade crest. We conclude that ZELIG is suitable for analyses of certain sites for which it has been calibrated. FORCLIM can be applied in analyses involving a range of climatic conditions without requiring calibration for specific sites.

  3. Interstitial distribution of charged macromolecules in the dog lung: a kinetic model.

    PubMed

    Parker, J C; Miniati, M; Pitt, R; Taylor, A E

    1987-01-01

    A mathematical model was constructed to investigate conflicting physiologic data concerning the charge selectivity of continuous capillaries to macromolecules in the lung. We simulated the equilibration kinetics of lactate dehydrogenase (molecular radius 4.2 nm) isozymes LDH 1 (pI = 5.0) and LDH 5 (pI = 7.9) between plasma and lymph using previously measured permeability coefficients, lung tissue distribution volumes (VA), and plasma concentrations (CP). Our hypothesis is that the fixed anionic charges in interstitium, basement membrane, and cell surfaces determine equilibration rather than charged membrane effects at the capillary barrier, so the same capillary permeability coefficients were used for both isozymes. Capillary filtration rates and protein fluxes were calculated using conventional flux equations. Initial conditions at baseline and increased left atrial pressures (PLA) were those measured in animal studies. Simulated equilibration of isozymes over 30 h in the model at baseline capillary pressures accurately predicted the observed differences in lymph/plasma concentration ratios (CL/CP) between isozymes at 4 h and equilibration of these ratios at 24 h. Quantitative prediction of isozyme CL/CP ratios was also obtained at increased PLA. However, an additional cation-selective compartment representing the surface glycocalyx was required to accurately simulate the initially higher transcapillary clearances of cationic LDH 5. Thus experimental data supporting the negative barrier, positive barrier, and no charge barrier hypotheses were accurately reproduced by the model using only the observed differences in interstitial partitioning of isozymes, without differences in capillary selectivity.

  4. Evaluating the Sensitivity of Agricultural Model Performance to Different Climate Inputs: Supplemental Material

    NASA Technical Reports Server (NTRS)

    Glotter, Michael J.; Ruane, Alex C.; Moyer, Elisabeth J.; Elliott, Joshua W.

    2015-01-01

    Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled and observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources (reanalysis, reanalysis bias-corrected with observed climate, and a control dataset) and compared with observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by non-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. Some issues persist for all choices of climate inputs: crop yields appear to be oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves.

  5. Evaluating the sensitivity of agricultural model performance to different climate inputs

    PubMed Central

    Glotter, Michael J.; Moyer, Elisabeth J.; Ruane, Alex C.; Elliott, Joshua W.

    2017-01-01

    Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled to observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections, but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely-used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources – reanalysis, reanalysis bias-corrected with observed climate, and a control dataset – and compared to observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by un-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. However, some issues persist for all choices of climate inputs: crop yields appear oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves. PMID:29097985

  6. Single-phase power distribution system power flow and fault analysis

    NASA Technical Reports Server (NTRS)

    Halpin, S. M.; Grigsby, L. L.

    1992-01-01

    Alternative methods for power flow and fault analysis of single-phase distribution systems are presented. The algorithms for both power flow and fault analysis utilize a generalized approach to network modeling. The generalized admittance matrix, formed using elements of linear graph theory, is an accurate network model for all possible single-phase network configurations. Unlike the standard nodal admittance matrix formulation algorithms, the generalized approach uses generalized component models for the transmission line and transformer. The standard assumption of a common node voltage reference point is not required to construct the generalized admittance matrix. Therefore, truly accurate simulation results can be obtained for networks that cannot be modeled using traditional techniques.
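    In the graph-theoretic formulation, the network admittance matrix follows directly from the branch incidence matrix. A minimal sketch with generic series admittances (the paper's generalized component models for lines and transformers are richer than simple admittances):

    ```python
    import numpy as np

    def admittance_matrix(n_nodes, branches):
        """Assemble Y = A^T diag(y) A from a branch list via the incidence
        matrix A of the network graph.

        branches : list of (from_node, to_node, admittance) tuples.
        Note Y is singular until boundary/reference conditions are applied.
        """
        A = np.zeros((len(branches), n_nodes), dtype=complex)
        y = np.zeros(len(branches), dtype=complex)
        for k, (i, j, yk) in enumerate(branches):
            A[k, i], A[k, j], y[k] = 1.0, -1.0, yk
        return A.T @ np.diag(y) @ A

    # Three-node example: two series line sections (impedances are made up)
    Y = admittance_matrix(3, [(0, 1, 1/(0.01 + 0.03j)),
                              (1, 2, 1/(0.02 + 0.06j))])
    print(Y)
    ```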

  7. PASTA: Ultra-Large Multiple Sequence Alignment for Nucleotide and Amino-Acid Sequences

    PubMed Central

    Mirarab, Siavash; Nguyen, Nam; Guo, Sheng; Wang, Li-San; Kim, Junhyong

    2015-01-01

    We introduce PASTA, a new multiple sequence alignment algorithm. PASTA uses a new technique to produce an alignment given a guide tree that enables it to be both highly scalable and very accurate. We present a study on biological and simulated data with up to 200,000 sequences, showing that PASTA produces highly accurate alignments, improving on the accuracy and scalability of the leading alignment methods (including SATé). We also show that trees estimated on PASTA alignments are highly accurate—slightly better than SATé trees, but with substantial improvements relative to other methods. Finally, PASTA is faster than SATé, highly parallelizable, and requires relatively little memory. PMID:25549288

  8. Pulse Width Modulator Controller Design for a Brushless DC Motor Position Servo.

    DTIC Science & Technology

    1987-06-01

    Accurate modeling of power conditioning and commutation in brushless dc motors requires explicit definition of the... [Record fragment; related work cited: Study of a Brushless DC Motor Power Conditioner for a Cruise Missile Fin Control Actuator, Master's Thesis, Naval Postgraduate School, Monterey, CA.] Pulse Width Modulator Controller Design for a Brushless DC Motor Position Servo, Master's Thesis by Vincent S. Rossitto, June 1987; thesis advisor: Alex Gerba, Jr. Approved for public release.

  9. Modeling and scaleup of steamflood in a heterogeneous reservoir

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dehghani, K.; Basham, W.M.; Durlofsky, L.J.

    1995-11-01

    A series of simulation runs was conducted for different geostatistically derived cross-sectional models to study the degree of heterogeneity required for proper modeling of steamfloods in a thick, heavy-oil reservoir with thin diatomite barriers. Different methods for coarsening the most detailed models were applied, and performance predictions for the coarsened and detailed models were compared. Use of a general scaleup method provided the most accurate coarse-grid models.

  10. Integrals for IBS and beam cooling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burov, A.; /Fermilab

    Simulation of beam cooling usually requires performing certain integral transformations every time step or so, which is a significant burden on the CPU. Examples are the dispersion integrals (Hilbert transforms) in the stochastic cooling, wake fields and IBS integrals. An original method is suggested for fast and sufficiently accurate computation of the integrals. This method is applied for the dispersion integral. Some methodical aspects of the IBS analysis are discussed.
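
    As a minimal illustration of speeding up such a dispersion integral, the sketch below evaluates a discrete Hilbert transform through the FFT instead of by direct summation; the test profile and grid are assumptions, and this is not the specific method proposed in the record.

        import numpy as np
        from scipy.signal import hilbert

        # Fast evaluation of a dispersion (Hilbert-transform) integral via the FFT,
        # the kind of per-time-step integral that burdens the CPU when evaluated
        # naively. scipy.signal.hilbert returns the analytic signal; its imaginary
        # part is the Hilbert transform of the input samples.
        t = np.linspace(0, 1, 4096, endpoint=False)
        f = np.cos(2 * np.pi * 5 * t)            # test profile with known transform

        h = np.imag(hilbert(f))                  # O(N log N) instead of O(N^2)

        # For a periodic cosine the Hilbert transform is the matching sine, so the
        # FFT result can be checked against the analytic answer directly.
        err = np.max(np.abs(h - np.sin(2 * np.pi * 5 * t)))
        print(f"max error vs analytic transform: {err:.2e}")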

  12. Temporal binning of time-correlated single photon counting data improves exponential decay fits and imaging speed

    PubMed Central

    Walsh, Alex J.; Sharick, Joe T.; Skala, Melissa C.; Beier, Hope T.

    2016-01-01

    Time-correlated single photon counting (TCSPC) enables acquisition of fluorescence lifetime decays with high temporal resolution within the fluorescence decay. However, many thousands of photons per pixel are required for accurate lifetime decay curve representation, instrument response deconvolution, and lifetime estimation, particularly for two-component lifetimes. TCSPC imaging speed is inherently limited due to the single photon per laser pulse nature and low fluorescence event efficiencies (<10%) required to reduce bias towards short lifetimes. Here, simulated fluorescence lifetime decays are analyzed by SPCImage and SLIM Curve software to determine the limiting lifetime parameters and photon requirements of fluorescence lifetime decays that can be accurately fit. Data analysis techniques to improve fitting accuracy for low photon count data were evaluated. Temporal binning of the decays from 256 time bins to 42 time bins significantly (p<0.0001) improved fit accuracy in SPCImage and enabled accurate fits with low photon counts (as low as 700 photons/decay), a 6-fold reduction in required photons and therefore improvement in imaging speed. Additionally, reducing the number of free parameters in the fitting algorithm by fixing the lifetimes to known values significantly reduced the lifetime component error from 27.3% to 3.2% in SPCImage (p<0.0001) and from 50.6% to 4.2% in SLIM Curve (p<0.0001). Analysis of nicotinamide adenine dinucleotide–lactate dehydrogenase (NADH-LDH) solutions confirmed temporal binning of TCSPC data and a reduced number of free parameters improves exponential decay fit accuracy in SPCImage. Altogether, temporal binning (in SPCImage) and reduced free parameters are data analysis techniques that enable accurate lifetime estimation from low photon count data and enable TCSPC imaging speeds up to 6x and 300x faster, respectively, than traditional TCSPC analysis. PMID:27446663
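
    Two of the abstract's data-analysis ideas, temporal rebinning of a 256-bin decay to 42 bins and fixing the lifetimes so that only amplitude terms remain free, can be illustrated with a short simulation. All lifetimes, count levels, and the time window below are assumed placeholders, not the study's instrument settings.

        import numpy as np
        from scipy.optimize import curve_fit

        # Simulate a low-count biexponential TCSPC decay on 256 time bins.
        rng = np.random.default_rng(0)
        tau1, tau2, frac1 = 0.4, 2.5, 0.6          # ns; lifetimes assumed known
        t256 = np.linspace(0, 10, 256)             # ns
        ideal = 40.0 * (frac1 * np.exp(-t256 / tau1)
                        + (1 - frac1) * np.exp(-t256 / tau2))
        counts = rng.poisson(ideal)                # Poisson photon noise

        # Temporal binning: sum groups of ~6 original bins into 42 coarse bins,
        # then normalize by group size so the model can be evaluated at bin centers.
        edges = np.linspace(0, 256, 43).astype(int)
        sizes = np.diff(edges)
        binned = np.add.reduceat(counts, edges[:-1]) / sizes
        t42 = np.array([t256[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])

        def decay(t, amp, a1):                     # lifetimes fixed: 2 free parameters
            return amp * (a1 * np.exp(-t / tau1) + (1 - a1) * np.exp(-t / tau2))

        popt, _ = curve_fit(decay, t42, binned, p0=[binned[0], 0.5])
        print(f"fitted fractional amplitude a1 = {popt[1]:.3f} (true {frac1})")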

  13. A crystal plasticity model for slip in hexagonal close packed metals based on discrete dislocation simulations

    NASA Astrophysics Data System (ADS)

    Messner, Mark C.; Rhee, Moono; Arsenlis, Athanasios; Barton, Nathan R.

    2017-06-01

    This work develops a method for calibrating a crystal plasticity model to the results of discrete dislocation (DD) simulations. The crystal model explicitly represents junction formation and annihilation mechanisms and applies these mechanisms to describe hardening in hexagonal close packed metals. The model treats these dislocation mechanisms separately from elastic interactions among populations of dislocations, which the model represents through a conventional strength-interaction matrix. This split between elastic interactions and junction formation mechanisms more accurately reproduces the DD data and results in a multi-scale model that better represents the lower scale physics. The fitting procedure employs concepts of machine learning—feature selection by regularized regression and cross-validation—to develop a robust, physically accurate crystal model. The work also presents a method for ensuring the final, calibrated crystal model respects the physical symmetries of the crystal system. Calibrating the crystal model requires fitting two linear operators: one describing elastic dislocation interactions and another describing junction formation and annihilation dislocation reactions. The structure of these operators in the final, calibrated model reflects the crystal symmetry and slip system geometry of the DD simulations.
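
    A minimal sketch of the feature-selection step named in the abstract, regularized (L1) regression with cross-validation choosing a sparse row of a linear interaction operator, is shown below; the synthetic data are stand-ins for the discrete-dislocation results.

        import numpy as np
        from sklearn.linear_model import LassoCV

        # Choose the entries of one row of a linear interaction operator by
        # L1-regularized regression so that only meaningful couplings survive.
        rng = np.random.default_rng(1)
        n_obs, n_pairs = 200, 12                    # observations x slip-system pairs
        X = rng.lognormal(size=(n_obs, n_pairs))    # dislocation-density features
        true_A = np.zeros(n_pairs)
        true_A[[0, 3, 7]] = [0.3, 0.15, 0.05]       # sparse true interaction row
        y = X @ true_A + 0.01 * rng.normal(size=n_obs)  # hardening response + noise

        # The L1 penalty drives weak couplings to exactly zero; 5-fold
        # cross-validation picks the penalty strength.
        model = LassoCV(cv=5).fit(X, y)
        print("couplings that survive:", np.nonzero(model.coef_)[0])
        print("fitted row of operator:", np.round(model.coef_, 3))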

  14. Incorporating contact angles in the surface tension force with the ACES interface curvature scheme

    NASA Astrophysics Data System (ADS)

    Owkes, Mark

    2017-11-01

    In simulations of gas-liquid flows interacting with solid boundaries, the contact line dynamics affect the interface motion and flow field through the surface tension force. The surface tension force is directly proportional to the interface curvature, and the problem of accurately imposing a contact angle must be incorporated into the interface curvature calculation. Many commonly used algorithms to compute interface curvatures (e.g., the height function method) require extrapolating the interface, with a defined contact angle, into the solid to allow for the calculation of a curvature near a wall. Extrapolating can be an ill-posed problem, especially in three dimensions or when multiple contact lines are near each other. We have developed an accurate methodology to compute interface curvatures that allows for contact angles to be easily incorporated while avoiding extrapolation and the associated challenges. The method, known as Adjustable Curvature Evaluation Scale (ACES), leverages a least squares fit of a polynomial to points computed on the volume-of-fluid (VOF) representation of the gas-liquid interface. The method is tested by simulating canonical test cases and then applied to simulate the injection and motion of water droplets in a channel (relevant to PEM fuel cells).
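
    The core curvature step can be sketched in two dimensions: fit a low-order polynomial by least squares to points sampled on the interface and evaluate the curvature of the fit, with no extrapolation into the solid. The circle test below is illustrative; the actual method operates on points computed from the VOF field and folds the contact angle in as additional information.

        import numpy as np

        # Least-squares polynomial fit to interface points, then curvature from
        # the fit. Test case: points near the top of a circle of radius R.
        R = 0.5
        theta = np.linspace(-0.4, 0.4, 9)
        x, y = R * np.sin(theta), R * np.cos(theta)

        # Quadratic fit y(x) = c2 x^2 + c1 x + c0 (np.polyfit: highest degree first).
        c2, c1, c0 = np.polyfit(x, y, 2)

        # Curvature of a graph y(x): kappa = y'' / (1 + y'^2)^(3/2), here at x = 0.
        kappa = 2 * c2 / (1 + c1**2) ** 1.5
        print(f"fitted curvature {kappa:.4f}, exact {-1/R:.4f}")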

  15. An effective absorbing layer for the boundary condition in acoustic seismic wave simulation

    NASA Astrophysics Data System (ADS)

    Yao, Gang; da Silva, Nuno V.; Wu, Di

    2018-04-01

    Efficient numerical simulation of seismic wavefields generally involves truncating the Earth model in order to keep computing time and memory requirements down. Absorbing boundary conditions, therefore, are applied to remove the boundary reflections caused by this truncation, thereby allowing for accurate modeling of wavefields. In this paper, we derive an effective absorbing boundary condition for both acoustic and elastic wave simulation, through the simplification of the damping term of the split perfectly matched layer (SPML) boundary condition. This new boundary condition is accurate, cost-effective, and easily implemented, especially for high-performance computing. Stability analysis shows that this boundary condition is effectively as stable as normal (non-absorbing) wave equations for explicit time-stepping finite differences. We found that for full-waveform inversion (FWI), the strengths of the effective absorbing layer—a reduction of the computational and memory cost coupled with a simple implementation—significantly outweigh the limitation of incomplete absorption of outgoing waves relative to the SPML. More importantly, we demonstrate that this limitation can easily be overcome through the use of two strategies in FWI, namely variable cell size and model extension, thereby fully compensating for the imperfectness of the proposed absorbing boundary condition.
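
    The idea of absorbing outgoing waves through a damping term can be sketched with a 1-D acoustic finite-difference scheme and a simple sponge strip. The damping profile and coefficients below are assumptions for illustration, not the SPML-derived coefficients of the paper.

        import numpy as np

        # Damped 1-D acoustic wave equation: u_tt + 2*sigma*u_t = c^2 u_xx,
        # with sigma nonzero only in boundary strips (a "sponge" layer).
        nx, nt, c, dx = 400, 900, 1.0, 1.0
        dt = 0.4 * dx / c                        # stable explicit time step
        npml = 40                                # damping strip width (cells)

        # Damping profile: zero in the interior, growing smoothly toward the edges.
        sigma = np.zeros(nx)
        ramp = (np.arange(npml) / npml) ** 2
        sigma[:npml], sigma[-npml:] = 3.0 * ramp[::-1], 3.0 * ramp

        u_prev = np.zeros(nx)
        u = np.zeros(nx)
        u[nx // 2] = 1.0                         # initial pulse
        for _ in range(nt):
            # np.roll wraps periodically, but the strips absorb the pulse first.
            lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)
            u_next = (2 * u - (1 - sigma * dt) * u_prev
                      + (c * dt / dx) ** 2 * lap) / (1 + sigma * dt)
            u_prev, u = u, u_next
        # Most of the outgoing pulse energy has been absorbed at the edges.
        print(f"energy left after run: {np.sum(u**2):.2e}")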

  16. Application of the Ecosystem Assessment Model to Lake Norman: A cooling lake in North Carolina: Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porcella, D.B.; Bowie, G.L.; Campbell, C.L.

    The Ecosystem Assessment Model (EAM) of the Cooling Lake Assessment Methodology was applied to the extensive ecological field data collected at Lake Norman, North Carolina by Duke Power Company to evaluate its capability to simulate lake ecosystems and the ecological effects of steam electric power plants. The EAM provided simulations over a five-year verification period that behaved as expected based on a one-year calibration. Major state variables of interest to utilities and regulatory agencies are: temperature, dissolved oxygen, and fish community variables. In qualitative terms, temperature simulation was very accurate, dissolved oxygen simulation was accurate, and fish prediction was reasonably accurate. The need for more accurate fisheries data collected at monthly intervals and non-destructive sampling techniques was identified.

  17. A mechanistic diagnosis of the simulation of soil CO2 efflux of the ACME Land Model

    NASA Astrophysics Data System (ADS)

    Liang, J.; Ricciuto, D. M.; Wang, G.; Gu, L.; Hanson, P. J.; Mayes, M. A.

    2017-12-01

    Accurate simulation of the CO2 efflux from soils (i.e., soil respiration) to the atmosphere is critical to project global biogeochemical cycles and the magnitude of climate change in Earth system models (ESMs). Currently, the soil respiration simulated by ESMs still has a large uncertainty. In this study, a mechanistic diagnosis of soil respiration in the Accelerated Climate Model for Energy (ACME) Land Model (ALM) was conducted using long-term observations at the Missouri Ozark AmeriFlux (MOFLUX) forest site in the central U.S. The results showed that the ALM default run significantly underestimated annual soil respiration and gross primary production (GPP), while incorrectly estimating soil water potential. Improved simulations of soil water potential with site-specific data significantly improved the modeled annual soil respiration, primarily because annual GPP was simultaneously improved. Therefore, simulations of soil water potential must be carefully calibrated in ESMs. Despite improved annual soil respiration, the ALM continued to underestimate soil respiration during peak growing seasons, and to overestimate soil respiration during non-peak growing seasons. Simulations involving increased GPP during peak growing seasons increased soil respiration, while neither improved plant phenology nor increased temperature sensitivity affected the simulation of soil respiration during non-peak growing seasons. One potential reason for the overestimation of the soil respiration during non-peak growing seasons may be that the current model structure is substrate-limited, while microbial dormancy under stress may cause the system to become decomposer-limited. Further studies with more microbial data are required to provide adequate representation of soil respiration and to understand the underlying reasons for inaccurate model simulations.

  20. Three-dimensional simulations of low foot and high foot implosion experiments on the National Ignition Facility

    DOE PAGES

    Clark, D. S.; Weber, C. R.; Milovich, J. L.; ...

    2016-03-14

    In order to achieve the several hundred Gbar stagnation pressures necessary for inertial confinement fusion ignition, implosion experiments on the National Ignition Facility (NIF) [E. I. Moses et al., Phys. Plasmas 16, 041006 (2009)] require the compression of deuterium-tritium fuel layers by a convergence ratio as high as forty. Such high convergence implosions are subject to degradation by a range of perturbations, including the growth of small-scale defects due to hydrodynamic instabilities, as well as longer scale modulations due to radiation flux asymmetries in the enclosing hohlraum. Due to the broad range of scales involved, and also the genuinely three-dimensional (3D) character of the flow, accurately modeling NIF implosions remains at the edge of current simulation capabilities. This study describes the current state of progress of 3D capsule-only simulations of NIF implosions aimed at accurately describing the performance of specific NIF experiments. Current simulations include the effects of hohlraum radiation asymmetries, capsule surface defects, the capsule support tent and fill tube, and use a grid resolution shown to be converged in companion two-dimensional simulations. The results of detailed simulations of low foot implosions from the National Ignition Campaign are contrasted against results for more recent high foot implosions. While the simulations suggest that low foot performance was dominated by ablation front instability growth, especially the defect seeded by the capsule support tent, high foot implosions appear to be dominated by hohlraum flux asymmetries, although the support tent still plays a significant role. Finally, for both implosion types, the simulations show reasonable, though not perfect, agreement with the data and suggest that a reliable predictive capability is developing to guide future implosions toward ignition.

  1. An efficient two-stage approach for image-based FSI analysis of atherosclerotic arteries

    PubMed Central

    Rayz, Vitaliy L.; Mofrad, Mohammad R. K.; Saloner, David

    2010-01-01

    Patient-specific biomechanical modeling of atherosclerotic arteries has the potential to aid clinicians in characterizing lesions and determining optimal treatment plans. To attain high levels of accuracy, recent models use medical imaging data to determine plaque component boundaries in three dimensions, and fluid–structure interaction is used to capture mechanical loading of the diseased vessel. As the plaque components and vessel wall are often highly complex in shape, constructing a suitable structured computational mesh is very challenging and can require a great deal of time. Models based on unstructured computational meshes require relatively less time to construct and are capable of accurately representing plaque components in three dimensions. These models unfortunately require additional computational resources and computing time for accurate and meaningful results. A two-stage modeling strategy based on unstructured computational meshes is proposed to achieve a reasonable balance between meshing difficulty and computational resource and time demand. In this method, a coarse-grained simulation of the full arterial domain is used to guide and constrain a fine-scale simulation of a smaller region of interest within the full domain. Results for a patient-specific carotid bifurcation model demonstrate that the two-stage approach can afford large savings in both time for mesh generation and time and resources needed for computation. The effects of solid and fluid domain truncation were explored, and were shown to minimally affect accuracy of the stress fields predicted with the two-stage approach. PMID:19756798

  2. Numerical Prediction of CCV in a PFI Engine using a Parallel LES Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ameen, Muhsin M; Mirzaeian, Mohsen; Millo, Federico

    Cycle-to-cycle variability (CCV) is detrimental to IC engine operation and can lead to partial burn, misfire, and knock. Predicting CCV numerically is extremely challenging for two key reasons. Firstly, high-fidelity methods such as large eddy simulation (LES) are required to accurately resolve the in-cylinder turbulent flowfield both spatially and temporally. Secondly, CCV is experienced over long timescales and hence the simulations need to be performed for hundreds of consecutive cycles. Ameen et al. (Int. J. Eng. Res., 2017) developed a parallel perturbation model (PPM) approach to dissociate this long time-scale problem into several shorter timescale problems. The strategy is to perform multiple single-cycle simulations in parallel by effectively perturbing the initial velocity field based on the intensity of the in-cylinder turbulence. This strategy was demonstrated for a motored engine and it was shown that the mean and variance of the in-cylinder flowfield were captured reasonably well by this approach. In the present study, this PPM approach is extended to simulate the CCV in a fired port-fuel injected (PFI) SI engine. Two operating conditions are considered – a medium CCV operating case corresponding to 2500 rpm and 16 bar BMEP and a low CCV case corresponding to 4000 rpm and 12 bar BMEP. The predictions from this approach are also shown to be similar to the consecutive LES cycles. Both the consecutive and PPM LES cycles are observed to under-predict the variability in the early stage of combustion. The parallel approach slightly underpredicts the cyclic variability at all stages of combustion as compared to the consecutive LES cycles. However, it is shown that the parallel approach is able to predict the coefficient of variation (COV) of the in-cylinder pressure and burn rate related parameters with sufficient accuracy, and is also able to predict the qualitative trends in CCV with changing operating conditions. The convergence of the statistics predicted by the PPM approach with respect to the number of consecutive cycles required for each parallel simulation is also investigated. It is shown that this new approach is able to give accurate predictions of the CCV in fired engines in less than one-tenth of the time required for the conventional approach of simulating consecutive engine cycles.

  3. Accurate and facile determination of the index of refraction of organic thin films near the carbon 1s absorption edge.

    PubMed

    Yan, Hongping; Wang, Cheng; McCarn, Allison R; Ade, Harald

    2013-04-26

    A practical and accurate method to obtain the index of refraction, especially the decrement δ, across the carbon 1s absorption edge is demonstrated. The combination of absorption spectra scaled to the Henke atomic scattering factor database, the use of the doubly subtractive Kramers-Kronig relations, and high precision specular reflectivity measurements from thin films allow the notoriously difficult-to-measure δ to be determined with high accuracy. No independent knowledge of the film thickness or density is required. High confidence interpolation between relatively sparse measurements of δ across an absorption edge is achieved. Accurate optical constants determined by this method are expected to greatly improve the simulation and interpretation of resonant soft x-ray scattering and reflectivity data. The method is demonstrated using poly(methyl methacrylate) and should be extendable to all organic materials.
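
    A numerical sketch of the underlying Kramers-Kronig step, recovering the decrement delta(E) from an absorption spectrum beta(E), is shown below with a toy spectrum. The plain single-integral form is used for clarity; the paper's doubly subtractive variant and Henke-database anchoring, which control the truncation error this simple version suffers from, are omitted.

        import numpy as np

        # Kramers-Kronig relation for the x-ray index n = 1 - delta + i*beta:
        #   delta(E) = (2/pi) P-integral of E' beta(E') / (E'^2 - E^2) dE'.
        # The toy absorption peak and the truncated energy window are assumptions.
        E = np.linspace(270.0, 330.0, 1200)               # eV, around the C 1s edge
        beta = 1e-4 * np.exp(-((E - 290.0) / 3.0) ** 2)   # toy absorption spectrum

        def kk_delta(E_grid, beta_grid):
            delta = np.empty_like(beta_grid)
            for i, E0 in enumerate(E_grid):
                # Principal value: drop the singular sample at E' = E0.
                mask = np.arange(len(E_grid)) != i
                integrand = (E_grid[mask] * beta_grid[mask]
                             / (E_grid[mask] ** 2 - E0 ** 2))
                delta[i] = (2 / np.pi) * np.trapz(integrand, E_grid[mask])
            return delta

        delta = kk_delta(E, beta)
        for probe in (285.0, 295.0):
            print(f"delta at {probe} eV: {delta[np.argmin(np.abs(E - probe))]:.3e}")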

  4. FASTSIM2: a second-order accurate frictional rolling contact algorithm

    NASA Astrophysics Data System (ADS)

    Vollebregt, E. A. H.; Wilders, P.

    2011-01-01

    In this paper we consider the frictional (tangential) steady rolling contact problem. We confine ourselves to the simplified theory, instead of using full elastostatic theory, in order to be able to compute results fast, as needed for on-line application in vehicle system dynamics simulation packages. The FASTSIM algorithm is the leading technology in this field and is employed in all dominant railway vehicle system dynamics packages (VSD) in the world. The main contribution of this paper is a new version "FASTSIM2" of the FASTSIM algorithm, which is second-order accurate. This is relevant for VSD, because with the new algorithm 16 times fewer grid points are required for sufficiently accurate computations of the contact forces. The approach is based on new insights in the characteristics of the rolling contact problem when using the simplified theory, and on taking precise care of the contact conditions in the numerical integration scheme employed.

  5. Verruciform xanthoma of the penis: A rare benign lesion that simulates carcinoma.

    PubMed

    De Rose, Aldo Franco; Tosi, Mattia; Mantica, Guglielmo; Piol, Nataniele; Toncini, Carlo; Terrone, Carlo

    2016-12-30

    Verruciform xanthoma is a rare and benign condition predominantly affecting the oral cavity, but also the skin and female anogenital mucosa. It can be flat, papular-warty, or crateriform-cystic. Furthermore, it can simulate HPV-related lesions such as condyloma, and malignant neoplasms such as verrucous squamous cell carcinoma. An accurate diagnosis is important to avoid overtreatment, considering it is a benign lesion that does not require any radical treatment. We present an extremely rare case of a 64-year-old man with a small, slightly raised, gray reddish-dotted lesion on the left portion of the ventral side of his glans.

  6. Development and application of a local linearization algorithm for the integration of quaternion rate equations in real-time flight simulation problems

    NASA Technical Reports Server (NTRS)

    Barker, L. E., Jr.; Bowles, R. L.; Williams, L. H.

    1973-01-01

    High angular rates encountered in real-time flight simulation problems may require a more stable and accurate integration method than the classical methods normally used. A study was made to develop a general local linearization procedure for integrating dynamic system equations on a digital computer in real time. The procedure is specifically applied to the integration of the quaternion rate equations. For this application, results are compared to those of a classical second-order method. The local linearization approach is shown to have desirable stability characteristics and gives significant improvement in accuracy over the classical second-order integration methods.
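
    The essence of the local linearization is that, with the body rate held constant over one frame, the quaternion rate equation has an exact closed-form solution: multiplication by an incremental rotation quaternion. A minimal sketch with an illustrative step size and rate:

        import numpy as np

        # Exact one-step propagation of the quaternion rate equation for a body
        # rate held constant over the frame; stays stable at angular rates where
        # a classical explicit step degrades.
        def quat_mul(a, b):
            w1, x1, y1, z1 = a
            w2, x2, y2, z2 = b
            return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                             w1*x2 + x1*w2 + y1*z2 - z1*y2,
                             w1*y2 - x1*z2 + y1*w2 + z1*x2,
                             w1*z2 + x1*y2 - y1*x2 + z1*w2])

        def step(q, omega, dt):
            th = np.linalg.norm(omega) * dt           # rotation angle this frame
            if th < 1e-12:
                return q
            axis = omega / np.linalg.norm(omega)
            dq = np.concatenate(([np.cos(th / 2)], np.sin(th / 2) * axis))
            q_new = quat_mul(q, dq)
            return q_new / np.linalg.norm(q_new)      # guard against drift

        # Four 90-degree steps about z complete one revolution; norm stays 1.
        q = np.array([1.0, 0.0, 0.0, 0.0])
        for _ in range(4):
            q = step(q, np.array([0.0, 0.0, np.pi / 2]), 1.0)
        print(np.round(q, 6))                         # identity, up to quaternion sign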

  7. Simulation, measurement, and emulation of photovoltaic modules using high frequency and high power density power electronic circuits

    NASA Astrophysics Data System (ADS)

    Erkaya, Yunus

    The number of solar photovoltaic (PV) installations is growing exponentially, and to improve the energy yield and the efficiency of PV systems, it is necessary to have correct methods for simulation, measurement, and emulation. PV systems can be simulated using PV models for different configurations and technologies of PV modules. Additionally, different environmental conditions of solar irradiance, temperature, and partial shading can be incorporated in the model to accurately simulate PV systems for any given condition. The electrical measurement of PV systems both prior to and after making electrical connections is important for attaining high efficiency and reliability. Measuring PV modules using a current-voltage (I-V) curve tracer allows the installer to know whether the PV modules are 100% operational. The installed modules can be properly matched to maximize performance. Once installed, the whole system needs to be characterized similarly to detect mismatches, partial shading, or installation damage before energizing the system. This will prevent any reliability issues from the onset and ensure the system efficiency will remain high. A capacitive load is implemented in making I-V curve measurements with the goal of minimizing the curve tracer volume and cost. Additionally, the increase of measurement resolution and accuracy is possible via the use of accurate voltage and current measurement methods and accurate PV models to translate the curves to standard testing conditions. A move from mechanical relays to solid-state MOSFETs improved system reliability while significantly reducing device volume and costs. Finally, emulating PV modules is necessary for testing electrical components of a PV system. PV emulation simplifies and standardizes the tests allowing for different irradiance, temperature and partial shading levels to be easily tested. Proper emulation of PV modules requires an accurate and mathematically simple PV model that incorporates all known system variables so that any PV module can be emulated as the design requires. A non-synchronous buck converter is proposed for the emulation of a single, high-power PV module using traditional silicon devices. With the proof-of-concept working and improvements in efficiency, power density and steady-state errors made, dynamic tests were performed using an inverter connected to the PV emulator. In order to improve the dynamic characteristics, a synchronous buck converter topology is proposed along with the use of advanced GaNFET devices which resulted in very high power efficiency and improved dynamic response characteristics when emulating PV modules.

  8. Modeling unstable alcohol flooding of DNAPL-contaminated columns

    NASA Astrophysics Data System (ADS)

    Roeder, Eberhard; Falta, Ronald W.

    Alcohol flooding, consisting of injection of a mixture of alcohol and water, is one source removal technology for dense non-aqueous phase liquids (DNAPLs) currently under investigation. An existing compositional multiphase flow simulator (UTCHEM) was adapted to accurately represent the equilibrium phase behavior of ternary and quaternary alcohol/DNAPL systems. Simulator predictions were compared to laboratory column experiments and the results are presented here. It was found that several experiments involved unstable displacements of the NAPL bank by the alcohol flood or of the alcohol flood by the following water flood. Unstable displacement led to additional mixing compared to ideal displacement. This mixing was approximated by a large dispersion in one-dimensional simulations and/or by including permeability heterogeneities on a very small scale in three-dimensional simulations. Three-dimensional simulations provided the best match. Simulations of unstable displacements require either high-resolution grids or a different treatment of fluid mixing to capture the resulting effects on NAPL recovery.

  9. An adaptive inverse kinematics algorithm for robot manipulators

    NASA Technical Reports Server (NTRS)

    Colbaugh, R.; Glass, K.; Seraji, H.

    1990-01-01

    An adaptive algorithm for solving the inverse kinematics problem for robot manipulators is presented. The algorithm is derived using model reference adaptive control (MRAC) theory and is computationally efficient for online applications. The scheme requires no a priori knowledge of the kinematics of the robot if Cartesian end-effector sensing is available, and it requires knowledge of only the forward kinematics if joint position sensing is used. Computer simulation results are given for the redundant seven-DOF robotics research arm, demonstrating that the proposed algorithm yields accurate joint angle trajectories for a given end-effector position/orientation trajectory.
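
    The record's algorithm is adaptive (MRAC-based) and can work without a kinematic model. As a simpler, non-adaptive stand-in for the same problem, the sketch below solves inverse kinematics for a planar two-link arm by damped least squares, which does require the forward kinematics, precisely the knowledge the adaptive scheme avoids.

        import numpy as np

        # Damped least-squares inverse kinematics for a planar two-link arm.
        # Link lengths and target are illustrative.
        L1, L2 = 1.0, 0.8

        def fk(q):
            return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                             L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

        def jacobian(q):
            s1, c1 = np.sin(q[0]), np.cos(q[0])
            s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
            return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                             [ L1 * c1 + L2 * c12,  L2 * c12]])

        q = np.array([0.3, 0.6])                   # initial joint angles
        target = np.array([1.2, 0.9])              # reachable Cartesian goal
        lam = 0.01                                 # damping factor
        for _ in range(100):
            e = target - fk(q)                     # Cartesian error
            J = jacobian(q)
            # dq = J^T (J J^T + lam^2 I)^-1 e
            q += J.T @ np.linalg.solve(J @ J.T + lam**2 * np.eye(2), e)
        print("final Cartesian error:", np.linalg.norm(target - fk(q)))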

  10. Efficient design of nanoplasmonic waveguide devices using the space mapping algorithm.

    PubMed

    Dastmalchi, Pouya; Veronis, Georgios

    2013-12-30

    We show that the space mapping algorithm, originally developed for microwave circuit optimization, can enable the efficient design of nanoplasmonic waveguide devices which satisfy a set of desired specifications. Space mapping utilizes a physics-based coarse model to approximate a fine model accurately describing a device. Here the fine model is a full-wave finite-difference frequency-domain (FDFD) simulation of the device, while the coarse model is based on transmission line theory. We demonstrate that simply optimizing the transmission line model of the device is not enough to obtain a device which satisfies all the required design specifications. On the other hand, when the iterative space mapping algorithm is used, it converges fast to a design which meets all the specifications. In addition, full-wave FDFD simulations of only a few candidate structures are required before the iterative process is terminated. Use of the space mapping algorithm therefore results in large reductions in the required computation time when compared to any direct optimization method of the fine FDFD model.
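
    A toy one-variable space-mapping loop conveys the structure: optimize the cheap coarse model once, then repeatedly extract the coarse input that reproduces the latest fine-model response and shift the design by the mismatch. Both models below are analytic stand-ins (the fine model playing the role of the FDFD solve, the coarse model of transmission-line theory).

        import numpy as np
        from scipy.optimize import minimize_scalar

        # Responses over a frequency sweep; the fine model behaves like the
        # coarse one fed a shifted design variable (an assumed toy misalignment).
        w = np.linspace(0.0, 4.0, 81)
        def coarse(x): return (w - x) ** 2            # cheap model response
        def fine(x):   return (w - x - 0.1) ** 2      # "expensive" model response

        target = coarse(2.0)                          # design spec: notch at w = 2
        x_c = 2.0                                     # coarse-model optimal design

        x = x_c                                       # start from the coarse optimum
        for it in range(4):
            resp = fine(x)                            # one costly fine simulation
            # Parameter extraction: coarse input whose response matches resp.
            z = minimize_scalar(lambda zc: np.sum((coarse(zc) - resp) ** 2),
                                bounds=(0.0, 4.0), method="bounded").x
            x += x_c - z                              # aggressive space-mapping step
            print(f"iter {it}: design x = {x:.4f}, misfit = "
                  f"{np.sum((fine(x) - target) ** 2):.2e}")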

  11. Upper Atmosphere Research Satellite (UARS) onboard attitude determination using a Kalman filter

    NASA Technical Reports Server (NTRS)

    Garrick, Joseph

    1993-01-01

    The Upper Atmospheric Research Satellite (UARS) requires highly accurate knowledge of its attitude to accomplish its mission. Propagation of the attitude state using gyro measurements is not sufficient to meet the accuracy requirements, and must be supplemented by an observer/compensation process to correct for dynamics and observation anomalies. The process of amending the attitude state utilizes a well-known method, the discrete Kalman filter. This study is a sensitivity analysis of the discrete Kalman filter as implemented in the UARS Onboard Computer (OBC). The stability of the Kalman filter used in the normal on-orbit control mode within the OBC is investigated for the effects of corrupted observations and nonlinear errors. Also, a statistical analysis of the residuals of the Kalman filter is performed. These analyses are based on simulations using the UARS Dynamics Simulator (UARSDSIM) and compared against attitude requirements as defined by General Electric (GE). An independent verification of expected accuracies is performed using the Attitude Determination Error Analysis System (ADEAS).
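
    A minimal single-axis sketch of the kind of filter involved, gyro propagation corrected by attitude fixes with the gyro bias as a second state, is shown below; all noise levels and rates are illustrative, not UARS values.

        import numpy as np

        # Discrete Kalman filter, states = [attitude angle, gyro bias].
        rng = np.random.default_rng(2)
        dt = 0.1
        F = np.array([[1.0, -dt], [0.0, 1.0]])    # state transition
        Q = np.diag([1e-6, 1e-8])                 # process noise (assumed)
        H = np.array([[1.0, 0.0]])                # only the angle is measured
        R = np.array([[1e-4]])                    # measurement noise (assumed)

        x = np.zeros(2)                           # filter state estimate
        P = np.eye(2)                             # state covariance
        truth, bias = 0.0, 0.02                   # true angle (rad), gyro bias (rad/s)
        for k in range(200):
            rate = 0.05                           # true body rate
            truth += rate * dt
            gyro = rate + bias + 1e-3 * rng.normal()    # biased, noisy gyro
            # Propagate: integrate the gyro, subtracting the current bias estimate.
            x = np.array([x[0] + (gyro - x[1]) * dt, x[1]])
            P = F @ P @ F.T + Q
            # Update with an attitude fix (e.g., from a star sensor).
            z = truth + 1e-2 * rng.normal()
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + K @ (z - H @ x)
            P = (np.eye(2) - K @ H) @ P
        print(f"bias estimate {x[1]:.4f} (true {bias})")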

  12. Modeling of Tool-Tissue Interactions for Computer-Based Surgical Simulation: A Literature Review

    PubMed Central

    Misra, Sarthak; Ramesh, K. T.; Okamura, Allison M.

    2009-01-01

    Surgical simulators present a safe and potentially effective method for surgical training, and can also be used in robot-assisted surgery for pre- and intra-operative planning. Accurate modeling of the interaction between surgical instruments and organs has been recognized as a key requirement in the development of high-fidelity surgical simulators. Researchers have attempted to model tool-tissue interactions in a wide variety of ways, which can be broadly classified as (1) linear elasticity-based methods, (2) nonlinear (hyperelastic) elasticity-based finite element (FE) methods, and (3) other techniques that are not based on FE methods or continuum mechanics. Realistic modeling of organ deformation requires populating the model with real tissue data (which are difficult to acquire in vivo) and simulating organ response in real time (which is computationally expensive). Further, it is challenging to account for connective tissue supporting the organ, friction, and topological changes resulting from tool-tissue interactions during invasive surgical procedures. Overcoming such obstacles will not only help us to model tool-tissue interactions in real time, but also enable realistic force feedback to the user during surgical simulation. This review paper classifies the existing research on tool-tissue interactions for surgical simulators specifically based on the modeling techniques employed and the kind of surgical operation being simulated, in order to inform and motivate future research on improved tool-tissue interaction models. PMID:20119508

  13. Hydrocode simulations of air and water shocks for facility vulnerability assessments.

    PubMed

    Clutter, J Keith; Stahl, Michael

    2004-01-02

    Hydrocodes are widely used in the study of explosive systems, but their use in routine facility vulnerability assessments has been limited due to the computational resources typically required. These requirements are due to the fact that the majority of hydrocodes have been developed primarily for the simulation of weapon-scale phenomena. It is not practical to use these same numerical frameworks on the large domains found in facility vulnerability studies. Here, a hydrocode formulated specifically for facility vulnerability assessments is reviewed. Techniques used to accurately represent the explosive source while maintaining computational efficiency are described. Submodels for addressing other issues found in typical terrorist attack scenarios are presented. In terrorist attack scenarios, loads produced by shocks play an important role in vulnerability. Due to the differences in the material properties of water and air and in interface phenomena, there exists a significant contrast in wave propagation between these two media. These physical variations also require special attention to be paid to the mathematical and numerical models used in the hydrocodes. Simulations for a variety of air and water shock scenarios are presented to validate the computational models used in the hydrocode and highlight the phenomenological issues.

  14. High Temperature Life Testing of 80Ni-20Cr Wire in a Simulated Mars Atmosphere for the Sample Analysis at Mars (SAM) Instrument Suite Gas Processing System (GPS) Carbon Dioxide Scrubber

    NASA Technical Reports Server (NTRS)

    Gundersen, Cynthia; Hoffman, Christopher; Munoz, Bruno; Steohenson, Timothy; Thomas, Walter

    2008-01-01

    In support of the GPS for the SAM instrument suite built by GSFC, a life test facility was developed to test the suitability of 80Ni-20Cr wire, 0.0056 inches in diameter, for use as a heater element for the carbon dioxide scrubber. The wire would be required to operate at 1000 C in order to attain the 800 C required for regeneration of the getter. The wire also would need to operate in the Mars atmosphere, which consists mostly of CO2 at pressures between 4 and 12 torr. Data on the high temperature degradation mechanism of 80Ni-20Cr in low pressure CO2, together with the effects of thermal cycling, were unknown. In addition, the influence of work hardening of the wire during assembly and the potential for catastrophic grain growth also were unknown. Verification of the wire reliability as defined by the mission goals required the construction of a test facility that would accurately simulate the duty cycles in a simulated Mars atmosphere. The experimental set-up, along with the test protocol and results will be described.

  15. Design and Application of the Exploration Maintainability Analysis Tool

    NASA Technical Reports Server (NTRS)

    Stromgren, Chel; Terry, Michelle; Crillo, William; Goodliff, Kandyce; Maxwell, Andrew

    2012-01-01

    Conducting human exploration missions beyond Low Earth Orbit (LEO) will present unique challenges in the areas of supportability and maintainability. The durations of proposed missions can be relatively long and re-supply of logistics, including maintenance and repair items, will be limited or non-existent. In addition, mass and volume constraints in the transportation system will limit the total amount of logistics that can be flown along with the crew. These constraints will require that new strategies be developed with regards to how spacecraft systems are designed and maintained. NASA is currently developing Design Reference Missions (DRMs) as an initial step in defining future human missions. These DRMs establish destinations and concepts of operation for future missions, and begin to define technology and capability requirements. Because of the unique supportability challenges, historical supportability data and models are not directly applicable for establishing requirements for beyond LEO missions. However, supportability requirements could have a major impact on the development of the DRMs. The mass, volume, and crew resources required to support the mission could all be first order drivers in the design of missions, elements, and operations. Therefore, there is a need for enhanced analysis capabilities to more accurately establish mass, volume, and time requirements for supporting beyond LEO missions. Additionally, as new technologies and operations are proposed to reduce these requirements, it is necessary to have accurate tools to evaluate the efficacy of those approaches. In order to improve the analysis of supportability requirements for beyond LEO missions, the Space Missions Analysis Branch at the NASA Langley Research Center is developing the Exploration Maintainability Analysis Tool (EMAT). This tool is a probabilistic simulator that evaluates the need for repair and maintenance activities during space missions and the logistics and crew requirements to support those activities. Using a Monte Carlo approach, the tool simulates potential failures in defined systems, based on established component reliabilities, and then evaluates the capability of the crew to repair those failures given a defined store of spares and maintenance items. Statistical analysis of Monte Carlo runs provides probabilistic estimates of overall mission safety and reliability. This paper will describe the operation of the EMAT, including historical data sources used to populate the model, simulation processes, and outputs. Analysis results are provided for a candidate exploration system, including baseline estimates of required sparing mass and volume. Sensitivity analysis regarding the effectiveness of proposed strategies to reduce mass and volume requirements and improve mission reliability is included in these results.
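
    The core Monte Carlo loop the tool implements can be sketched compactly: sample random failure counts from component reliability models over the mission and test them against the spares manifest. Component names, MTBFs, and spare counts below are hypothetical placeholders, not EMAT data.

        import numpy as np

        # Monte Carlo sparing analysis: is the spares manifest sufficient?
        rng = np.random.default_rng(3)
        mission_days = 900.0
        # (component, mean time between failures in days, spares carried)
        components = [("pump", 400.0, 2), ("valve", 900.0, 1), ("fan", 250.0, 3)]

        n_trials = 100_000
        ok = 0
        for _ in range(n_trials):
            success = True
            for _, mtbf, spares in components:
                # Failures over the mission for a Poisson failure process.
                failures = rng.poisson(mission_days / mtbf)
                if failures > spares:        # ran out of spares for this item
                    success = False
                    break
            ok += success
        print(f"P(mission supportable) ~ {ok / n_trials:.3f}")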

  16. [Evaluating the performance of species distribution models Biomod2 and MaxEnt using the giant panda distribution data].

    PubMed

    Luo, Mei; Wang, Hao; Lyu, Zhi

    2017-12-01

    Species distribution models (SDMs) are widely used by researchers and conservationists. Predictions from different models vary significantly, which makes model selection difficult for users. In this study, we evaluated the performance of two commonly used SDMs, Biomod2 and Maximum Entropy (MaxEnt), with real presence/absence data for the giant panda, and used three indicators, i.e., area under the ROC curve (AUC), true skill statistic (TSS), and Cohen's Kappa, to evaluate the accuracy of the two models' predictions. The results showed that both models could produce accurate predictions with adequate occurrence inputs and simulation repeats. Compared to MaxEnt, Biomod2 made more accurate predictions, especially when occurrence inputs were few. However, Biomod2 was more difficult to apply, required longer running times, and had less data-processing capability. To choose the right model, users should refer to the error requirements of their objectives. MaxEnt should be considered if the error requirement is clear and both models can achieve it; otherwise, we recommend using Biomod2 as much as possible.
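
    The three evaluation indicators are straightforward to compute from a model's predictions; the sketch below does so for hypothetical presence/absence data and suitability scores, with a 0.5 threshold assumed for the categorical metrics.

        import numpy as np
        from sklearn.metrics import roc_auc_score, cohen_kappa_score, confusion_matrix

        # Toy presence/absence labels and overlapping suitability scores.
        rng = np.random.default_rng(4)
        y_true = rng.integers(0, 2, 500)
        y_prob = np.clip(0.4 * y_true + rng.normal(0.3, 0.2, 500), 0, 1)
        y_pred = (y_prob >= 0.5).astype(int)

        auc = roc_auc_score(y_true, y_prob)
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
        sensitivity, specificity = tp / (tp + fn), tn / (tn + fp)
        tss = sensitivity + specificity - 1           # true skill statistic
        kappa = cohen_kappa_score(y_true, y_pred)
        print(f"AUC={auc:.3f}  TSS={tss:.3f}  Kappa={kappa:.3f}")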

  17. CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Slotnick, Jeffrey; Khodadoust, Abdollah; Alonso, Juan; Darmofal, David; Gropp, William; Lurie, Elizabeth; Mavriplis, Dimitri

    2014-01-01

    This report documents the results of a study to address the long range, strategic planning required by NASA's Revolutionary Computational Aerosciences (RCA) program in the area of computational fluid dynamics (CFD), including future software and hardware requirements for High Performance Computing (HPC). Specifically, the "Vision 2030" CFD study is to provide a knowledge-based forecast of the future computational capabilities required for turbulent, transitional, and reacting flow simulations across a broad Mach number regime, and to lay the foundation for the development of a future framework and/or environment where physics-based, accurate predictions of complex turbulent flows, including flow separation, can be accomplished routinely and efficiently in cooperation with other physics-based simulations to enable multi-physics analysis and design. Specific technical requirements from the aerospace industrial and scientific communities were obtained to determine critical capability gaps, anticipated technical challenges, and impediments to achieving the target CFD capability in 2030. A preliminary development plan and roadmap were created to help focus investments in technology development to help achieve the CFD vision in 2030.

  18. Mars Tumbleweed Simulation Using Singular Perturbation Theory

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Behzad; Calhoun, Phillip

    2005-01-01

    The Mars Tumbleweed is a new surface rover concept that utilizes Martian winds as the primary source of mobility. Several designs have been proposed for the Mars Tumbleweed, all using aerodynamic drag to generate force for traveling about the surface. The Mars Tumbleweed, in its deployed configuration, must be large and lightweight to provide the ratio of drag force to rolling resistance necessary to initiate motion from the Martian surface. This paper discusses the dynamic simulation details of a candidate Tumbleweed design. The dynamic simulation model must properly evaluate and characterize the motion of the tumbleweed rover to support proper selection of system design parameters. Several factors, such as model flexibility, simulation run times, and model accuracy needed to be considered in modeling assumptions. The simulation was required to address the flexibility of the rover and its interaction with the ground, and properly evaluate its mobility. Proper assumptions needed to be made such that the simulated dynamic motion is accurate and realistic while not overly burdened by long simulation run times. This paper also shows results that provided reasonable correlation between the simulation and a drop/roll test of a tumbleweed prototype.
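
    The drag-versus-rolling-resistance driver mentioned above reduces to a one-line threshold estimate, sketched below; every physical value is assumed for illustration and none is taken from the paper.

        import numpy as np

        # Wind speed at which aerodynamic drag first overcomes rolling resistance.
        rho = 0.020        # kg/m^3, Mars surface air density (assumed)
        g = 3.71           # m/s^2, Mars gravity
        Cd = 0.5           # drag coefficient of the deployed rover (assumed)
        Crr = 0.2          # rolling-resistance coefficient on regolith (assumed)
        m, R = 10.0, 3.0   # kg, m: large and lightweight, per the design driver

        A = np.pi * R**2   # frontal area of the spherical rover
        # Motion starts when drag exceeds rolling resistance:
        #   0.5*rho*Cd*A*v^2 > Crr*m*g  =>  v > sqrt(2*Crr*m*g / (rho*Cd*A))
        v_min = np.sqrt(2 * Crr * m * g / (rho * Cd * A))
        print(f"threshold wind speed ~ {v_min:.1f} m/s")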

  19. Simulating cosmologies beyond ΛCDM with PINOCCHIO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rizzo, Luca A.; Villaescusa-Navarro, Francisco; Monaco, Pierluigi

    2017-01-01

    We present a method that extends the capabilities of the PINpointing Orbit-Crossing Collapsed HIerarchical Objects (PINOCCHIO) code, allowing it to generate accurate dark matter halo mock catalogues in cosmological models where the linear growth factor and the growth rate depend on scale. Such cosmologies comprise, among others, models with massive neutrinos and some classes of modified gravity theories. We validate the code by comparing the halo properties from PINOCCHIO against N-body simulations, focusing on cosmologies with massive neutrinos: νΛCDM. We analyse the halo mass function, halo two-point correlation function and halo power spectrum, showing that PINOCCHIO reproduces the results from simulations with the same level of precision as the original code (∼ 5–10%). We demonstrate that the abundance of halos in cosmologies with massless and massive neutrinos from PINOCCHIO matches very well the outcome of simulations, and point out that PINOCCHIO can reproduce the Ω_ν–σ_8 degeneracy that affects the halo mass function. We finally show that the clustering properties of the halos from PINOCCHIO match accurately those from simulations both in real and redshift-space, in the latter case up to k = 0.3 h Mpc^−1. We emphasize that the computational time required by PINOCCHIO to generate mock halo catalogues is orders of magnitude lower than the one needed for N-body simulations. This makes this tool ideal for applications like covariance matrix studies within the standard ΛCDM model but also in cosmologies with massive neutrinos or some modified gravity theories.

  20. Enhanced Sampling in Free Energy Calculations: Combining SGLD with the Bennett's Acceptance Ratio and Enveloping Distribution Sampling Methods.

    PubMed

    König, Gerhard; Miller, Benjamin T; Boresch, Stefan; Wu, Xiongwu; Brooks, Bernard R

    2012-10-09

    One of the key requirements for the accurate calculation of free energy differences is proper sampling of conformational space. Especially in biological applications, molecular dynamics simulations are often confronted with rugged energy surfaces and high energy barriers, leading to insufficient sampling and, in turn, poor convergence of the free energy results. In this work, we address this problem by employing enhanced sampling methods. We explore the possibility of using self-guided Langevin dynamics (SGLD) to speed up the exploration process in free energy simulations. To obtain improved free energy differences from such simulations, it is necessary to account for the effects of the bias due to the guiding forces. We demonstrate how this can be accomplished for the Bennett's acceptance ratio (BAR) and the enveloping distribution sampling (EDS) methods. While BAR is considered among the most efficient methods available for free energy calculations, the EDS method developed by Christ and van Gunsteren is a promising development that reduces the computational costs of free energy calculations by simulating a single reference state. To evaluate the accuracy of both approaches in connection with enhanced sampling, EDS was implemented in CHARMM. For testing, we employ benchmark systems with analytical reference results and the mutation of alanine to serine. We find that SGLD with reweighting can provide accurate results for BAR and EDS where conventional molecular dynamics simulations fail. In addition, we compare the performance of EDS with other free energy methods. We briefly discuss the implications of our results and provide practical guidelines for conducting free energy simulations with SGLD.
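
    A minimal sketch of the BAR estimator itself (without the SGLD reweighting layer): solve the self-consistent BAR equation for the free energy difference from forward and reverse work samples, here toy Gaussians that satisfy the Crooks relation.

        import numpy as np
        from scipy.optimize import brentq

        # Bennett acceptance ratio with equal sample sizes, in units of kT.
        rng = np.random.default_rng(5)
        dF_true, sigma = 2.0, 1.5
        n = 20_000
        w_f = rng.normal(dF_true + sigma**2 / 2, sigma, n)   # work, 0 -> 1
        w_r = rng.normal(-dF_true + sigma**2 / 2, sigma, n)  # work, 1 -> 0

        def fermi(x):
            return 1.0 / (1.0 + np.exp(x))

        # BAR condition: sum fermi(w_f - dF) = sum fermi(w_r + dF).
        def bar_residual(dF):
            return np.sum(fermi(w_f - dF)) - np.sum(fermi(w_r + dF))

        dF_est = brentq(bar_residual, -10, 10)
        print(f"BAR estimate {dF_est:.3f} kT (true {dF_true})")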

  1. Development of hardware accelerator for molecular dynamics simulations: a computation board that calculates nonbonded interactions in cooperation with fast multipole method.

    PubMed

    Amisaki, Takashi; Toyoda, Shinjiro; Miyagawa, Hiroh; Kitamura, Kunihiro

    2003-04-15

    Evaluation of long-range Coulombic interactions still represents a bottleneck in the molecular dynamics (MD) simulations of biological macromolecules. Despite the advent of sophisticated fast algorithms, such as the fast multipole method (FMM), accurate simulations still demand a great amount of computation time due to the accuracy/speed trade-off inherently involved in these algorithms. Unless higher order multipole expansions, which are extremely expensive to evaluate, are employed, a large amount of the execution time is still spent in directly calculating particle-particle interactions within the nearby region of each particle. To reduce this execution time for pair interactions, we developed a computation unit (board), called MD-Engine II, that calculates nonbonded pairwise interactions using a specially designed hardware. Four custom arithmetic-processors and a processor for memory manipulation ("particle processor") are mounted on the computation board. The arithmetic processors are responsible for calculation of the pair interactions. The particle processor plays a central role in realizing efficient cooperation with the FMM. The results of a series of 50-ps MD simulations of a protein-water system (50,764 atoms) indicated that a more stringent setting of accuracy in FMM computation, compared with those previously reported, was required for accurate simulations over long time periods. Such a level of accuracy was efficiently achieved using the cooperative calculations of the FMM and MD-Engine II. On an Alpha 21264 PC, the FMM computation at a moderate but tolerable level of accuracy was accelerated by a factor of 16.0 using three boards. At a high level of accuracy, the cooperative calculation achieved a 22.7-fold acceleration over the corresponding conventional FMM calculation. In the cooperative calculations of the FMM and MD-Engine II, it was possible to achieve more accurate computation at a comparable execution time by incorporating larger nearby regions. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 582-592, 2003
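
    The work the board offloads, the direct particle-particle sum over each particle's nearby region with everything beyond the cutoff left to the FMM, looks like the following in plain NumPy; random charges and positions, Gaussian units, no periodic boundaries, purely illustrative of the kernel rather than of the hardware.

        import numpy as np

        # Direct near-field Coulomb energy and forces within a cutoff.
        rng = np.random.default_rng(6)
        n, box, rcut = 2000, 50.0, 9.0
        pos = rng.uniform(0, box, (n, 3))           # angstroms
        q = rng.choice([-1.0, 1.0], n)              # point charges

        energy = 0.0
        forces = np.zeros((n, 3))
        for i in range(n - 1):
            d = pos[i + 1:] - pos[i]                # vectors to all later particles
            r = np.linalg.norm(d, axis=1)
            near = r < rcut                         # the "nearby region" of particle i
            rn, dn, qn = r[near], d[near], q[i + 1:][near]
            energy += np.sum(q[i] * qn / rn)        # pair energy (Gaussian units)
            f = (q[i] * qn / rn**3)[:, None] * dn   # force on each neighbor
            forces[i + 1:][near] += f
            forces[i] -= f.sum(axis=0)              # Newton's third law
        print(f"near-field energy: {energy:.2f}")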

  2. SU-D-218-05: Material Quantification in Spectral X-Ray Imaging: Optimization and Validation.

    PubMed

    Nik, S J; Thing, R S; Watts, R; Meyer, J

    2012-06-01

    To develop and validate a multivariate statistical method to optimize scanning parameters for material quantification in spectral x-ray imaging. An optimization metric was constructed by extensively sampling the thickness space for the expected number of counts for m (two or three) materials. This resulted in an m-dimensional confidence region of material quantities, e.g. thicknesses. Minimization of the ellipsoidal confidence region leads to the optimization of energy bins. For the given spectrum, the minimum counts required for effective material separation can be determined by predicting the signal-to-noise ratio (SNR) of the quantification. A Monte Carlo (MC) simulation framework using BEAM was developed to validate the metric. Projection data of the m materials were generated and material decomposition was performed for combinations of iodine, calcium and water by minimizing the z-score between the expected spectrum and binned measurements. The mean square error (MSE) and variance were calculated to measure the accuracy and precision of this approach, respectively. The minimum MSE corresponds to the optimal energy bins in the BEAM simulations. In the optimization metric, this is equivalent to the smallest confidence region. The SNR of the simulated images was also compared to the predictions from the metric. The MSE was dominated by the variance for the given material combinations, which demonstrates accurate material quantifications. The BEAM simulations revealed that the optimization of energy bins was accurate to within 1 keV. The SNRs predicted by the optimization metric yielded satisfactory agreement but were expectedly higher for the BEAM simulations due to the inclusion of scattered radiation. The validation showed that the multivariate statistical method provides accurate material quantification, correct location of optimal energy bins and adequate prediction of image SNR. The BEAM code system is suitable for generating spectral x-ray imaging simulations. © 2012 American Association of Physicists in Medicine.
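
    In its simplest log-linear form, two-material decomposition from two energy bins is a small linear solve; the sketch below uses assumed attenuation coefficients and count levels, and stands in for the paper's z-score-minimization variant.

        import numpy as np

        # Beer-Lambert per bin: N_b = N0_b * exp(-mu_b1*t1 - mu_b2*t2); taking
        # logs gives a linear system in the material thicknesses.
        mu = np.array([[0.35, 0.80],      # bin 1: [water, iodine] (1/cm), assumed
                       [0.25, 2.10]])     # bin 2: [water, iodine], assumed
        N0 = np.array([5e4, 3e4])         # open-beam counts per bin

        t_true = np.array([10.0, 0.05])   # cm of water, cm of iodine
        N = np.random.default_rng(7).poisson(N0 * np.exp(-mu @ t_true))

        # Log-transform the measurements and solve the 2x2 linear system.
        t_est = np.linalg.solve(mu, np.log(N0 / N))
        print("estimated thicknesses:", t_est)   # near t_true, noise permitting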

  3. Advanced Methodology for Simulation of Complex Flows Using Structured Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Modiano, David

    1995-01-01

    Detailed simulations of viscous flows in complicated geometries pose a significant challenge to current capabilities of Computational Fluid Dynamics (CFD). To enable routine application of CFD to this class of problems, advanced methodologies are required that employ (a) automated grid generation, (b) adaptivity, (c) accurate discretizations and efficient solvers, and (d) advanced software techniques. Each of these ingredients contributes to increased accuracy, efficiency (in terms of human effort and computer time), and/or reliability of CFD software. In the long run, methodologies employing structured grid systems will remain a viable choice for routine simulation of flows in complex geometries only if genuinely automatic grid generation techniques for structured grids can be developed and if adaptivity is employed more routinely. More research in both these areas is urgently needed.

  4. VEDA: a web-based virtual environment for dynamic atomic force microscopy.

    PubMed

    Melcher, John; Hu, Shuiqing; Raman, Arvind

    2008-06-01

    We describe here the theory and applications of virtual environment dynamic atomic force microscopy (VEDA), a suite of state-of-the-art simulation tools deployed on nanoHUB (www.nanohub.org) for the accurate simulation of tip motion in dynamic atomic force microscopy (dAFM) over organic and inorganic samples. VEDA takes advantage of nanoHUB's cyberinfrastructure to run high-fidelity dAFM tip dynamics computations on local clusters and the TeraGrid. Consequently, these tools are freely accessible and the dAFM simulations are run using standard web-based browsers without requiring additional software. A wide range of issues in dAFM, ranging from optimal probe choice, probe stability, and tip-sample interaction forces to power dissipation, material property extraction, and scanning dynamics over heterogeneous samples, can be addressed.

  5. Invited Article: VEDA: A web-based virtual environment for dynamic atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Melcher, John; Hu, Shuiqing; Raman, Arvind

    2008-06-01

    We describe here the theory and applications of virtual environment dynamic atomic force microscopy (VEDA), a suite of state-of-the-art simulation tools deployed on nanoHUB (www.nanohub.org) for the accurate simulation of tip motion in dynamic atomic force microscopy (dAFM) over organic and inorganic samples. VEDA takes advantage of nanoHUB's cyberinfrastructure to run high-fidelity dAFM tip dynamics computations on local clusters and the TeraGrid. Consequently, these tools are freely accessible and the dAFM simulations are run using standard web-based browsers without requiring additional software. A wide range of issues in dAFM, ranging from optimal probe choice, probe stability, tip-sample interaction forces, and power dissipation to material property extraction and scanning dynamics over heterogeneous samples, can be addressed.

  6. Using Real and Simulated TNOs to Constrain the Outer Solar System

    NASA Astrophysics Data System (ADS)

    Kaib, Nathan

    2018-04-01

    Over the past 2-3 decades our understanding of the outer solar system’s history and current state has evolved dramatically. An explosion in the number of detected trans-Neptunian objects (TNOs) coupled with simultaneous advances in numerical models of orbital dynamics has driven this rapid evolution. However, successfully constraining the orbital architecture and evolution of the outer solar system requires accurately comparing simulation results with observational datasets. This process is challenging because observed datasets are influenced by orbital discovery biases as well as TNO size and albedo distributions. Meanwhile, such influences are generally absent from numerical results. Here I will review recent work I and others have undertaken using numerical simulations in concert with catalogs of observed TNOs to constrain the outer solar system’s current orbital architecture and past evolution.

  7. Three-Dimensional Modeling of Low-Mode Asymmetries in OMEGA Cryogenic Implosions

    NASA Astrophysics Data System (ADS)

    Anderson, K. S.; McKenty, P. W.; Shvydky, A.; Collins, T. J. B.; Forrest, C. J.; Knauer, J. P.; Marozas, J. A.; Marshall, F. J.; Radha, P. B.; Sefkow, A. B.; Marinak, M. M.

    2017-10-01

    In direct-drive inertial confinement fusion implosions, long-wavelength asymmetries resulting from target offset, laser power imbalance, beam mispointing, etc., can be highly detrimental to target performance. Characterizing the effects of these asymmetry sources requires 3-D simulations performed in full-sphere geometry to accurately capture the evolution of shell perturbations and hot-spot flow. This paper will present 3-D HYDRA simulations characterizing the impact of these perturbation sources on yield and shell modulation. Various simulated observables are generated, and trends are analyzed and compared with experimental data. This material is based on work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944 and performed under the auspices of LLNL under Contract No. DE-AC52-07NA27344.

  8. Effectiveness of “Thin-Layer” and “Effective Medium” Approximations in Numerical Simulation of Dielectric Spectra of Biological Cell Suspensions

    NASA Astrophysics Data System (ADS)

    Asami, Koji

    2010-12-01

    There are a few concerns in dielectric modeling of biological cells by the finite-element method (FEM) to simulate their dielectric spectra. Cells possess thin plasma membranes and membrane-bound intracellular organelles, requiring extra-fine meshes and a considerable computational burden in the simulation. To address these problems, the “thin-layer” approximation (TLA) and the “effective medium” approximation (EMA) were adopted. TLA treats the membrane as an interface with a specific membrane impedance, so it is not necessary to mesh the membrane region. EMA regards the composite cytoplasm as an effective homogeneous phase whose dielectric properties are calculated separately. Both TLA and EMA proved useful for greatly reducing the computational burden while agreeing accurately with analytical solutions.

  9. A Quantitative Investigation of Entrainment and Detrainment in Numerically Simulated Convective Clouds. Pt. 1; Model Development

    NASA Technical Reports Server (NTRS)

    Cohen, Charles

    1998-01-01

    A method is developed which uses numerical tracers to make accurate diagnoses of entrainment and detrainment rates and of the properties of the entrained and detrained air in numerically simulated clouds. The numerical advection scheme is modified to make it nondispersive, as required by the use of the tracers. Tests of the new method are made, and an appropriate definition of clouds is selected. Distributions of mixing fractions in the model consistently show maxima at the end points, for nearly undilute environmental air or nearly undilute cloud air, with a uniform distribution between. The cumulonimbus clouds simulated here entrain air that has been substantially changed by the clouds and detrain air that is not necessarily representative of the cloud air at the same level.

  10. A geometric initial guess for localized electronic orbitals in modular biological systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckman, P. G.; Fattebert, J. L.; Lau, E. Y.

    Recent first-principles molecular dynamics algorithms using localized electronic orbitals have achieved O(N) complexity and controlled accuracy in simulating systems with finite band gaps. However, accurately determining the centers of these localized orbitals during simulation setup may require O(N^3) operations, which is computationally infeasible for many biological systems. We present an O(N) approach for approximating orbital centers in proteins, DNA, and RNA which uses non-localized solutions for a set of fixed-size subproblems to create a set of geometric maps applicable to larger systems. This scalable approach, used as an initial guess in the O(N) first-principles molecular dynamics code MGmol, facilitates first-principles simulations in biological systems of sizes which were previously impossible.

  11. Simulation of Satellite, Airborne and Terrestrial LiDAR with DART (I): Waveform Simulation with Quasi-Monte Carlo Ray Tracing

    NASA Technical Reports Server (NTRS)

    Gastellu-Etchegorry, Jean-Philippe; Yin, Tiangang; Lauret, Nicolas; Grau, Eloi; Rubio, Jeremy; Cook, Bruce D.; Morton, Douglas C.; Sun, Guoqing

    2016-01-01

    Light Detection And Ranging (LiDAR) provides unique data on the 3-D structure of atmosphere constituents and the Earth's surface. Simulating LiDAR returns for different laser technologies and Earth scenes is fundamental for evaluating and interpreting signal and noise in LiDAR data. Different types of models are capable of simulating LiDAR waveforms of Earth surfaces. Semi-empirical and geometric models can be imprecise because they rely on simplified simulations of Earth surfaces and light interaction mechanisms. On the other hand, Monte Carlo ray tracing (MCRT) models are potentially accurate but require long computational time. Here, we present a new LiDAR waveform simulation tool that is based on the introduction of a quasi-Monte Carlo ray tracing approach in the Discrete Anisotropic Radiative Transfer (DART) model. Two new approaches, the so-called "box method" and "Ray Carlo method", are implemented to provide robust and accurate simulations of LiDAR waveforms for any landscape, atmosphere and LiDAR sensor configuration (view direction, footprint size, pulse characteristics, etc.). The box method accelerates the selection of the scattering direction of a photon in the presence of scatterers with non-invertible phase function. The Ray Carlo method brings traditional ray-tracking into MCRT simulation, which makes computational time independent of LiDAR field of view (FOV) and reception solid angle. Both methods are fast enough for simulating multi-pulse acquisition. Sensitivity studies with various landscapes and atmosphere constituents are presented, and the simulated LiDAR signals compare favorably with their associated reflectance images and Laser Vegetation Imaging Sensor (LVIS) waveforms. The LiDAR module is fully integrated into DART, enabling more detailed simulations of LiDAR sensitivity to specific scene elements (e.g., atmospheric aerosols, leaf area, branches, or topography) and sensor configuration for airborne or satellite LiDAR sensors.
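    The sampling problem the box method addresses can be illustrated with a generic rejection sampler: when a phase function cannot be inverted analytically, scattering cosines can still be drawn by accept/reject against a bounding value. This sketch uses a Henyey-Greenstein phase function as a stand-in; the box method itself (precomputed bounding boxes) is DART-specific and not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def phase(cos_theta, g=0.6):
        """Henyey-Greenstein phase function, normalized over cos_theta in [-1, 1]."""
        return 0.5 * (1 - g**2) / (1 + g**2 - 2 * g * cos_theta) ** 1.5

    def sample_cosines(pf, pf_max, n):
        """Accept/reject sampling of scattering cosines from a phase function
        that has no analytic inverse CDF."""
        out = []
        while len(out) < n:
            c = rng.uniform(-1.0, 1.0)            # propose a direction cosine
            if rng.uniform(0.0, pf_max) < pf(c):  # accept with prob pf(c)/pf_max
                out.append(c)
        return np.array(out)

    pf_max = phase(1.0)  # HG peaks at cos_theta = 1 for g > 0
    cosines = sample_cosines(phase, pf_max, n=10000)
    print("mean scattering cosine (should be ~g = 0.6):", cosines.mean())
    ```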

  12. Generating an Aerodynamic Model for Projectile Flight Simulation Using Unsteady, Time Accurate Computational Fluid Dynamic Results

    DTIC Science & Technology

    2006-09-01

    Figure 17. Station line center of Magnus force vs. Mach number for spin-stabilized projectile...forces and moments on the projectile. It is also relatively easy to change the wind tunnel model to allow detailed parametric effects to be...such as pitch and roll damping, as well as Magnus force and moment coefficients, are difficult to obtain in a wind tunnel and require a complex

  13. LES Investigation of Wake Development in a Transonic Fan Stage for Aeroacoustic Analysis

    NASA Technical Reports Server (NTRS)

    Hah, Chunill; Romeo, Michael

    2017-01-01

    Detailed development of the rotor wake and its interaction with the stator are investigated with a large eddy simulation (LES). Typical steady and unsteady Navier-Stokes approaches (RANS and URANS) do not calculate wake development accurately and do not provide all the necessary information for an aeroacoustic analysis. It is generally believed that higher fidelity analysis tools are required for an aeroacoustic investigation of transonic fan stages.

  14. Distributed Mobile Device Based Shooter Detection Simulation

    DTIC Science & Technology

    2013-09-01

    three signatures of a gunshot (muzzle flash [optical], muzzle blast [auditory], and shock wave [auditory]), we focus only on information from the...bullet, while this proximity is important when using information from the shock wave. Detecting and using the muzzle flash would require accurate...Additionally, the mobile device would need to be aimed towards the blast to even have a chance to detect the muzzle flash. 2.1 Single Microphone When a sound is

  15. Predicting dense nonaqueous phase liquid dissolution using a simplified source depletion model parameterized with partitioning tracers

    NASA Astrophysics Data System (ADS)

    Basu, Nandita B.; Fure, Adrian D.; Jawitz, James W.

    2008-07-01

    Simulations of nonpartitioning and partitioning tracer tests were used to parameterize the equilibrium stream tube model (ESM) that predicts the dissolution dynamics of dense nonaqueous phase liquids (DNAPLs) as a function of the Lagrangian properties of DNAPL source zones. Lagrangian, or stream-tube-based, approaches characterize source zones with as few as two trajectory-integrated parameters, in contrast to the potentially thousands of parameters required to describe the point-by-point variability in permeability and DNAPL in traditional Eulerian modeling approaches. The spill and subsequent dissolution of DNAPLs were simulated in two-dimensional domains having different hydrologic characteristics (variance of the log conductivity field = 0.2, 1, and 3) using the multiphase flow and transport simulator UTCHEM. Nonpartitioning and partitioning tracers were used to characterize the Lagrangian properties (travel time and trajectory-integrated DNAPL content statistics) of DNAPL source zones, which were in turn shown to be sufficient for accurate prediction of source dissolution behavior using the ESM throughout the relatively broad range of hydraulic conductivity variances tested here. The results were found to be relatively insensitive to travel time variability, suggesting that dissolution could be accurately predicted even if the travel time variance was only coarsely estimated. Estimation of the ESM parameters was also demonstrated using an approximate technique based on Eulerian data in the absence of tracer data; however, determining the minimum amount of such data required remains for future work. Finally, the stream tube model was shown to be a more unique predictor of dissolution behavior than approaches based on the ganglia-to-pool model for source zone characterization.

  16. Simulation of nanoparticle-mediated near-infrared thermal therapy using GATE

    PubMed Central

    Cuplov, Vesna; Pain, Frédéric; Jan, Sébastien

    2017-01-01

    Application of nanotechnology for biomedicine in cancer therapy allows for direct delivery of anticancer agents to tumors. An example of such therapies is the nanoparticle-mediated near-infrared hyperthermia treatment. In order to investigate the influence of nanoparticle properties on the spatial distribution of heat in the tumor and healthy tissues, accurate simulations are required. The Geant4 Application for Emission Tomography (GATE) open-source simulation platform, based on the Geant4 toolkit, is widely used by the research community involved in molecular imaging, radiotherapy and optical imaging. We present an extension of GATE that can model nanoparticle-mediated hyperthermal therapy as well as simple heat diffusion in biological tissues. This new feature of GATE combined with optical imaging allows for the simulation of a theranostic scenario in which the patient is injected with theranostic nanosystems that can simultaneously deliver therapeutic (i.e. hyperthermia therapy) and imaging agents (i.e. fluorescence imaging). PMID:28663855

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan S; Bugbee, Bruce; Gotseff, Peter

    Capturing technical and economic impacts of solar photovoltaics (PV) and other distributed energy resources (DERs) on electric distribution systems can require high-time-resolution (e.g. 1 minute), long-duration (e.g. 1 year) simulations. However, such simulations can be computationally prohibitive, particularly when including complex control schemes in quasi-steady-state time series (QSTS) simulation. Various approaches have been used in the literature to down-select representative time segments (e.g. days), but typically these are best suited for lower time resolutions or consider only a single data stream (e.g. PV production) for selection. We present a statistical approach that combines stratified sampling and bootstrapping to select representative days while also providing a simple method to reassemble annual results. We describe the approach in the context of a recent study with a utility partner. This approach enables much faster QSTS analysis by simulating only a subset of days, while maintaining accurate annual estimates.
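    A minimal sketch of the selection-and-reassembly idea, assuming a year of daily PV energy values as the single stratification feature (the actual study stratifies on richer data): days are binned into quantile strata, a few representative days are sampled from each stratum, and stratum sample means are scaled by stratum sizes to rebuild an annual estimate.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    daily_energy = rng.gamma(shape=2.0, scale=10.0, size=365)  # stand-in PV data

    # Stratify days into quartiles of daily energy, sample a few days per
    # stratum, then scale each stratum's sample mean by the stratum size.
    edges = np.quantile(daily_energy, [0.25, 0.5, 0.75])
    strata = np.digitize(daily_energy, edges)  # stratum index 0..3 for each day

    annual_estimate, chosen = 0.0, []
    for s in range(4):
        days = np.flatnonzero(strata == s)
        pick = rng.choice(days, size=3, replace=False)  # 3 representative days
        chosen.extend(pick)
        annual_estimate += daily_energy[pick].mean() * len(days)

    print("true annual total:", daily_energy.sum())
    print(f"estimate from {len(chosen)} days:", annual_estimate)
    ```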

  18. A mathematical analysis of the Janus combat simulation weather effects models and sensitivity analysis of sky-to-ground brightness ratio on target detection

    NASA Astrophysics Data System (ADS)

    Shorts, Vincient F.

    1994-09-01

    The Janus combat simulation offers the user a wide variety of weather effects options to employ during the execution of any simulation run, which can directly influence detection of opposing forces. Realistic weather effects are required if the simulation is to accurately reproduce 'real world' results. This thesis examines the mathematics of the Janus weather effects models. One weather effect option in Janus is the sky-to-ground brightness ratio (SGR), a measure of the sun angle in relation to the horizon, which affects an optical sensor's ability to detect targets. The derivation of SGR is reviewed, and SGR's effect on the number of optical detections and on detection ranges is analyzed using an unmanned aerial vehicle (UAV) search scenario. For comparison, the UAVs are equipped with a combination of optical and thermal sensors.

  19. Explicit symplectic algorithms based on generating functions for relativistic charged particle dynamics in time-dependent electromagnetic field

    NASA Astrophysics Data System (ADS)

    Zhang, Ruili; Wang, Yulei; He, Yang; Xiao, Jianyuan; Liu, Jian; Qin, Hong; Tang, Yifa

    2018-02-01

    Relativistic dynamics of a charged particle in time-dependent electromagnetic fields has theoretical significance and a wide range of applications. The numerical simulation of relativistic dynamics is often multi-scale and requires accurate long-term integration. Explicit symplectic algorithms are therefore far preferable to non-symplectic methods and implicit symplectic algorithms. In this paper, we employ the proper time and express the Hamiltonian as the sum of exactly solvable terms and product-separable terms in space-time coordinates. We then give explicit symplectic algorithms based on the generating functions of orders 2 and 3 for the relativistic dynamics of a charged particle. The methodology is not new; it has previously been applied to the non-relativistic dynamics of charged particles. The algorithm for relativistic dynamics, however, has much significance in practical simulations, such as the secular simulation of runaway electrons in tokamaks.
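    The generating-function schemes of the paper are not reproduced here, but the underlying explicit-symplectic idea, composing exact flows of exactly solvable Hamiltonian pieces, can be sketched with second-order Strang splitting for a separable Hamiltonian H(q, p) = T(p) + V(q):

    ```python
    def grad_V(q):
        return q  # harmonic potential V(q) = q^2/2, exactly solvable kick

    def strang_step(q, p, dt):
        """Half kick, full drift, half kick: each sub-flow is the exact flow
        of one Hamiltonian piece, so the composition is explicit and
        symplectic (second order in dt)."""
        p = p - 0.5 * dt * grad_V(q)
        q = q + dt * p
        p = p - 0.5 * dt * grad_V(q)
        return q, p

    q, p, dt = 1.0, 0.0, 0.1
    for _ in range(100_000):
        q, p = strang_step(q, p, dt)
    # Energy error stays bounded over long runs instead of drifting, which is
    # why symplectic schemes are preferred for secular (long-term) simulations.
    print("energy after 10^5 steps:", 0.5 * p**2 + 0.5 * q**2)
    ```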

  20. A new lumped-parameter approach to simulating flow processes in unsaturated dual-porosity media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimmerman, R.W.; Hadgu, T.; Bodvarsson, G.S.

    We have developed a new lumped-parameter dual-porosity approach to simulating unsaturated flow processes in fractured rocks. Fluid flow between the fracture network and the matrix blocks is described by a nonlinear equation that relates the imbibition rate to the local difference in liquid-phase pressure between the fractures and the matrix blocks. This equation is a generalization of the Warren-Root equation, but unlike the Warren-Root equation, is accurate in both the early and late time regimes. The fracture/matrix interflow equation has been incorporated into a computational module, compatible with the TOUGH simulator, to serve as a source/sink term for fracture elements. The new approach achieves accuracy comparable to simulations in which the matrix blocks are discretized, but typically requires an order of magnitude less computational time.
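    A toy version of the lumped-parameter interflow idea: the matrix pressure evolves under a source/sink rate driven by the fracture/matrix pressure difference. The linear rate is the classic Warren-Root form; the nonlinear variant shown is a hypothetical stand-in for the paper's generalized equation, with illustrative coefficients.

    ```python
    import numpy as np

    def warren_root_rate(p_f, p_m, alpha=1.0):
        """Classic linear interflow: rate proportional to pressure difference."""
        return alpha * (p_f - p_m)

    def generalized_rate(p_f, p_m, alpha=1.0, beta=0.5):
        """Hypothetical nonlinear variant, steeper at early times."""
        dp = p_f - p_m
        return alpha * np.sign(dp) * np.abs(dp) ** (1 + beta)

    # Matrix pressure relaxing toward a fixed fracture pressure; a rate like
    # this is the kind of source/sink term attached to each fracture element.
    p_f, p_m, dt = 1.0, 0.0, 0.01
    for _ in range(500):
        p_m += dt * generalized_rate(p_f, p_m)
    print("matrix pressure after relaxation:", p_m)
    ```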

  1. Synchronization Algorithms for Co-Simulation of Power Grid and Communication Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciraci, Selim; Daily, Jeffrey A.; Agarwal, Khushbu

    2014-09-11

    The ongoing modernization of power grids consists of integrating them with communication networks in order to achieve robust and resilient control of grid operations. To understand the operation of the new smart grid, one approach is to use simulation software. Unfortunately, current power grid simulators at best utilize inadequate approximations to simulate communication networks, if at all. Cooperative simulation of specialized power grid and communication network simulators promises to more accurately reproduce the interactions of real smart grid deployments. However, co-simulation is a challenging problem. A co-simulation must manage the exchange of information, including the synchronization of simulator clocks, between all simulators while maintaining adequate computational performance. This paper describes two new conservative algorithms for reducing the overhead of time synchronization, namely Active Set Conservative and Reactive Conservative. We provide a detailed analysis of their performance characteristics with respect to the current state of the art including both conservative and optimistic synchronization algorithms. In addition, we provide guidelines for selecting the appropriate synchronization algorithm based on the requirements of the co-simulation. The newly proposed algorithms are shown to achieve as much as 14% and 63% improvement, respectively, over the existing conservative algorithm.
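    The conservative synchronization idea can be sketched as follows: each simulator is granted advancement only up to the earliest time any peer could still send it a message (peer clock plus lookahead). The two toy federates, their steps, and lookaheads are invented for illustration and do not reflect the Active Set or Reactive algorithms' internals.

    ```python
    # Toy conservative synchronization loop; names and constants are made up.
    class Federate:
        def __init__(self, name, step, lookahead):
            self.name, self.time = name, 0.0
            self.step, self.lookahead = step, lookahead

        def advance(self, peers, horizon):
            # Lower-bound time stamp: no peer can deliver a message earlier
            # than its clock plus its lookahead, so it is safe to run to here.
            grant = min(p.time + p.lookahead for p in peers)
            while self.time + self.step <= min(grant, horizon):
                self.time += self.step  # process local events up to the grant

    grid = Federate("power-grid", step=0.01, lookahead=0.05)
    net = Federate("comm-net", step=0.02, lookahead=0.10)
    for _ in range(10):  # a few synchronization rounds
        grid.advance([net], horizon=1.0)
        net.advance([grid], horizon=1.0)
    print(f"{grid.name}: t={grid.time:.2f}  {net.name}: t={net.time:.2f}")
    ```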

  2. The diffusive finite state projection algorithm for efficient simulation of the stochastic reaction-diffusion master equation.

    PubMed

    Drawert, Brian; Lawson, Michael J; Petzold, Linda; Khammash, Mustafa

    2010-02-21

    We have developed a computational framework for accurate and efficient simulation of stochastic spatially inhomogeneous biochemical systems. The new computational method employs a fractional step hybrid strategy. A novel formulation of the finite state projection (FSP) method, called the diffusive FSP method, is introduced for the efficient and accurate simulation of diffusive transport. Reactions are handled by the stochastic simulation algorithm.
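    A toy fractional-step split in the same spirit, assuming a single species on a 1D voxel grid: each time step applies a reaction sub-step (here first-order decay, sampled exactly via binomial survival, a special case of SSA) followed by a diffusion sub-step (a simple jump approximation, standing in for the diffusive FSP step). Rates and grid size are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    n_vox, k, d, dt = 20, 0.5, 5.0, 0.01   # voxels, decay rate, hop rate, step
    state = np.zeros(n_vox, dtype=np.int64)
    state[n_vox // 2] = 1000               # all molecules start mid-domain

    for _ in range(100):
        # Reaction sub-step: each molecule survives A -> 0 over dt with
        # probability exp(-k*dt) (exact for a first-order reaction).
        state = rng.binomial(state, np.exp(-k * dt))
        # Diffusion sub-step: hop to each neighbor with probability d*dt
        # (molecules hopping off the ends are simply lost here).
        left = rng.binomial(state, d * dt)
        right = rng.binomial(state - left, d * dt / (1 - d * dt))
        state = state - left - right
        state[:-1] += left[1:]
        state[1:] += right[:-1]

    print("molecules left:", state.sum(),
          "(expected ~", int(1000 * np.exp(-0.5)), ")")
    ```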

  3. BFEE: A User-Friendly Graphical Interface Facilitating Absolute Binding Free-Energy Calculations.

    PubMed

    Fu, Haohao; Gumbart, James C; Chen, Haochuan; Shao, Xueguang; Cai, Wensheng; Chipot, Christophe

    2018-03-26

    Quantifying protein-ligand binding has attracted the attention of both theorists and experimentalists for decades. Many methods for estimating binding free energies in silico have been reported in recent years. Proper use of the proposed strategies requires, however, adequate knowledge of the protein-ligand complex, the mathematical background for deriving the underlying theory, and time for setting up the simulations, bookkeeping, and postprocessing. Here, to minimize human intervention, we propose a toolkit aimed at facilitating the accurate estimation of standard binding free energies using a geometrical route, coined the binding free-energy estimator (BFEE), and introduce it as a plug-in of the popular visualization program VMD. Benefitting from recent developments in new collective variables, BFEE can be used to generate the simulation input files based solely on the structure of the complex. Once the simulations are completed, BFEE can also be utilized to perform the post-treatment of the free-energy calculations, allowing the absolute binding free energy to be estimated directly from the one-dimensional potentials of mean force in simulation outputs. The minimal amount of human intervention required during the whole process, combined with the ergonomic graphical interface, makes BFEE a very effective and practical tool for the end-user.

  4. Biomechanics Simulations Using Cubic Hermite Meshes with Extraordinary Nodes for Isogeometric Cardiac Modeling

    PubMed Central

    Gonzales, Matthew J.; Sturgeon, Gregory; Segars, W. Paul; McCulloch, Andrew D.

    2016-01-01

    Cubic Hermite hexahedral finite element meshes have some well-known advantages over linear tetrahedral finite element meshes in biomechanical and anatomic modeling using isogeometric analysis. These include faster convergence rates as well as the ability to easily model rule-based anatomic features such as cardiac fiber directions. However, it is not possible to create closed complex objects with only regular nodes; these objects require the presence of extraordinary nodes (nodes with 3 or >= 5 adjacent elements in 2D) in the mesh. The presence of extraordinary nodes requires new constraints on the derivatives of adjacent elements to maintain continuity. We have developed a new method that uses an ensemble coordinate frame at the nodes and a local-to-global mapping to maintain continuity. In this paper, we make use of this mapping to create cubic Hermite models of the human ventricles and a four-chamber heart. We also extend the methods to the finite element equations to perform biomechanics simulations using these meshes. The new methods are validated using simple test models and applied to anatomically accurate ventricular meshes with valve annuli to perform complete cardiac-cycle simulations. PMID:27182096

  5. A texture-based framework for improving CFD data visualization in a virtual environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bivins, Gerrick O'Ron

    2005-01-01

    In the field of computational fluid dynamics (CFD) accurate representations of fluid phenomena can be simulated but require large amounts of data to represent the flow domain. Most datasets generated from a CFD simulation can be coarse, ~10,000 nodes or cells, or very fine with node counts on the order of 1,000,000. A typical dataset solution can also contain multiple solutions for each node, pertaining to various properties of the flow at a particular node. Scalar properties such as density, temperature, pressure, and velocity magnitude are properties that are typically calculated and stored in a dataset solution. Solutions are not limited to just scalar properties. Vector quantities, such as velocity, are also often calculated and stored for a CFD simulation. Accessing all of this data efficiently during runtime is a key problem for visualization in an interactive application. Understanding simulation solutions requires a post-processing tool to convert the data into something more meaningful. Ideally, the application would present an interactive visual representation of the numerical data for any dataset that was simulated while maintaining the accuracy of the calculated solution. Most CFD applications currently sacrifice interactivity for accuracy, yielding highly detailed flow descriptions but limiting interaction for investigating the field.

  6. A texture-based framework for improving CFD data visualization in a virtual environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bivins, Gerrick O'Ron

    2005-01-01

    In the field of computational fluid dynamics (CFD) accurate representations of fluid phenomena can be simulated but require large amounts of data to represent the flow domain. Most datasets generated from a CFD simulation can be coarse, ~10,000 nodes or cells, or very fine with node counts on the order of 1,000,000. A typical dataset solution can also contain multiple solutions for each node, pertaining to various properties of the flow at a particular node. Scalar properties such as density, temperature, pressure, and velocity magnitude are properties that are typically calculated and stored in a dataset solution. Solutions are not limited to just scalar properties. Vector quantities, such as velocity, are also often calculated and stored for a CFD simulation. Accessing all of this data efficiently during runtime is a key problem for visualization in an interactive application. Understanding simulation solutions requires a post-processing tool to convert the data into something more meaningful. Ideally, the application would present an interactive visual representation of the numerical data for any dataset that was simulated while maintaining the accuracy of the calculated solution. Most CFD applications currently sacrifice interactivity for accuracy, yielding highly detailed flow descriptions but limiting interaction for investigating the field.

  7. A framework for WRF to WRF-IBM grid nesting to enable multiscale simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiersema, David John; Lundquist, Katherine A.; Chow, Fotini Katapodes

    With advances in computational power, mesoscale models, such as the Weather Research and Forecasting (WRF) model, are often pushed to higher resolutions. As the model’s horizontal resolution is refined, the maximum resolved terrain slope will increase. Because WRF uses a terrain-following coordinate, this increase in resolved terrain slopes introduces additional grid skewness. At high resolutions and over complex terrain, this grid skewness can introduce large numerical errors that require methods, such as the immersed boundary method, to keep the model accurate and stable. Our implementation of the immersed boundary method in the WRF model, WRF-IBM, has proven effective at microscale simulations over complex terrain. WRF-IBM uses a non-conforming grid that extends beneath the model’s terrain. Boundary conditions at the immersed boundary, the terrain, are enforced by introducing a body force term to the governing equations at points directly beneath the immersed boundary. Nesting between a WRF parent grid and a WRF-IBM child grid requires a new framework for initialization and forcing of the child WRF-IBM grid. This framework will enable concurrent multi-scale simulations within the WRF model, improving the accuracy of high-resolution simulations and enabling simulations across a wide range of scales.

  8. Figures of Merit for Control Verification

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2008-01-01

    This paper proposes a methodology for evaluating a controller's ability to satisfy a set of closed-loop specifications when the plant has an arbitrary functional dependency on uncertain parameters. Control verification metrics applicable to deterministic and probabilistic uncertainty models are proposed. These metrics, which result from sizing the largest uncertainty set of a given class for which the specifications are satisfied, enable systematic assessment of competing control alternatives regardless of the methods used to derive them. A particularly attractive feature of the tools derived is that their efficiency and accuracy do not depend on the robustness of the controller. This is in sharp contrast to Monte Carlo based methods where the number of simulations required to accurately approximate the failure probability grows exponentially with its closeness to zero. This framework allows for the integration of complex, high-fidelity simulations of the integrated system and only requires standard optimization algorithms for its implementation.

  9. LigParGen web server: an automatic OPLS-AA parameter generator for organic ligands

    PubMed Central

    Dodda, Leela S.

    2017-01-01

    The accurate calculation of protein/nucleic acid–ligand interactions or condensed phase properties by force field-based methods requires a precise description of the energetics of intermolecular interactions. Despite the progress made in force fields, small molecule parameterization remains an open problem due to the magnitude of the chemical space; the most critical issue is the estimation of a balanced set of atomic charges with the ability to reproduce experimental properties. The LigParGen web server provides an intuitive interface for generating OPLS-AA/1.14*CM1A(-LBCC) force field parameters for organic ligands, in the formats of commonly used molecular dynamics and Monte Carlo simulation packages. This server has high value for researchers interested in studying any phenomena based on intermolecular interactions with ligands via molecular mechanics simulations. It is free and open to all at jorgensenresearch.com/ligpargen, and has no login requirements. PMID:28444340

  10. A summary of computational experience at GE Aircraft Engines for complex turbulent flows in gas turbines

    NASA Astrophysics Data System (ADS)

    Zerkle, Ronald D.; Prakash, Chander

    1995-03-01

    This viewgraph presentation summarizes some CFD experience at GE Aircraft Engines for flows in the primary gaspath of a gas turbine engine and in turbine blade cooling passages. It is concluded that application of the standard k-epsilon turbulence model with wall functions is not adequate for accurate CFD simulation of aerodynamic performance and heat transfer in the primary gas path of a gas turbine engine. New models are required in the near-wall region which include more physics than wall functions. The two-layer modeling approach appears attractive because of its relatively low computational complexity. In addition, improved CFD simulation of film cooling and turbine blade internal cooling passages will require anisotropic turbulence models. New turbulence models must be practical in order to have a significant impact on the engine design process. A coordinated turbulence modeling effort between NASA centers would be beneficial to the gas turbine industry.

  11. A summary of computational experience at GE Aircraft Engines for complex turbulent flows in gas turbines

    NASA Technical Reports Server (NTRS)

    Zerkle, Ronald D.; Prakash, Chander

    1995-01-01

    This viewgraph presentation summarizes some CFD experience at GE Aircraft Engines for flows in the primary gaspath of a gas turbine engine and in turbine blade cooling passages. It is concluded that application of the standard k-epsilon turbulence model with wall functions is not adequate for accurate CFD simulation of aerodynamic performance and heat transfer in the primary gas path of a gas turbine engine. New models are required in the near-wall region which include more physics than wall functions. The two-layer modeling approach appears attractive because of its relatively low computational complexity. In addition, improved CFD simulation of film cooling and turbine blade internal cooling passages will require anisotropic turbulence models. New turbulence models must be practical in order to have a significant impact on the engine design process. A coordinated turbulence modeling effort between NASA centers would be beneficial to the gas turbine industry.

  12. Procedures for using signals from one sensor as substitutes for signals of another

    NASA Technical Reports Server (NTRS)

    Suits, G.; Malila, W.; Weller, T.

    1988-01-01

    Long-term monitoring of surface conditions may require a transfer from using data from one satellite sensor to data from a different sensor having different spectral characteristics. Two general procedures for spectral signal substitution are described in this paper, a principal-components procedure and a complete multivariate regression procedure. They are evaluated through a simulation study of five satellite sensors (MSS, TM, AVHRR, CZCS, and HRV). For illustration, they are compared to another recently described procedure for relating AVHRR and MSS signals. The multivariate regression procedure is shown to be best. TM can accurately emulate the other sensors, but they, on the other hand, have difficulty in accurately emulating its shortwave infrared bands (TM5 and TM7).
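    The complete multivariate regression procedure can be sketched as a single least-squares fit mapping all source-sensor bands to all target-sensor bands at once. The band counts and synthetic data below are placeholders; a real application would use coincident calibrated observations.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic coincident observations: 6 source bands, 4 target bands.
    n_scenes = 500
    src = rng.uniform(0.0, 1.0, size=(n_scenes, 6))
    true_map = rng.normal(size=(6, 4))                 # unknown cross-sensor map
    dst = src @ true_map + 0.01 * rng.normal(size=(n_scenes, 4))

    # Complete multivariate regression: one least-squares fit for all bands.
    W, *_ = np.linalg.lstsq(src, dst, rcond=None)
    emulated = src @ W
    rmse = np.sqrt(np.mean((emulated - dst) ** 2, axis=0))
    print("per-band emulation RMSE:", rmse)
    ```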

  13. Using Combined Marine Spatial Planning Tools and Observing System Experiments to define Gaps in the Emerging European Ocean Observing System.

    NASA Astrophysics Data System (ADS)

    Nolan, G.; Pinardi, N.; Vukicevic, T.; Le Traon, P. Y.; Fernandez, V.

    2016-02-01

    Ocean observations are critical to providing accurate ocean forecasts that support operational decision making in European open and coastal seas. Observations are available in many forms: fixed platforms (e.g. moored buoys and tide gauges), underway measurements from Ferrybox systems, High Frequency radars, and, more recently, underwater gliders and profiling floats. Observing System Simulation Experiments have been conducted to examine the relative contribution of each type of platform to an improvement in our ability to accurately forecast the future state of the ocean, with HF radar and gliders showing particular promise in improving model skill. There is considerable demand for ecosystem products and services from today's ocean observing system, and biogeochemical observations are still relatively sparse, particularly in coastal and shelf seas. There is a need to widen the techniques used to assess the fitness for purpose of, and gaps in, the ocean observing system. As well as Observing System Simulation Experiments that quantify the effect of observations on the overall model skill, we present a gap analysis based on (1) examining where high model skill is required, based on a marine spatial planning analysis of European seas (i.e., where does activity take place that requires more accurate forecasts?), and (2) assessing gaps based on the capacity of the observing system to answer key societal challenges, e.g. site suitability for aquaculture and ocean energy, oil spill response, and contextual oceanographic products for fisheries and ecosystems. This broad-based analysis will inform the development of the proposed European Ocean Observing System as a contribution to the Global Ocean Observing System (GOOS).

  14. Development of the Next Generation of Biogeochemistry Simulations Using EMSL's NWChem Molecular Modeling Software

    NASA Astrophysics Data System (ADS)

    Bylaska, E. J.; Kowalski, K.; Apra, E.; Govind, N.; Valiev, M.

    2017-12-01

    Methods of directly simulating the behavior of complex strongly interacting atomic systems (molecular dynamics, Monte Carlo) have provided important insight into the behavior of nanoparticles, biogeochemical systems, mineral/fluid systems, actinide systems, and geofluids. The limitation of these methods to even wider applications is the difficulty of developing accurate potential interactions in these systems at the molecular level that capture their complex chemistry. The well-developed tools of quantum chemistry and physics have been shown to approach the accuracy required. However, despite the continuous effort being put into improving their accuracy and efficiency, these tools will be of little value to condensed matter problems without continued improvements in techniques to traverse and sample the high-dimensional phase space needed to span the ~10^12 time scale differences between molecular simulation and chemical events. In recent years, we have made considerable progress in developing electronic structure and AIMD methods tailored to treat biochemical and geochemical problems, including very efficient implementations of many-body methods, fast exact exchange methods, electron-transfer methods, excited state methods, QM/MM, and new parallel algorithms that scale to 100,000+ cores. The poster will focus on the fundamentals of these methods and the realities in terms of system size, computational requirements and simulation times that are required for their application to complex biogeochemical systems.

  15. Estimation of in-situ bioremediation system cost using a hybrid Extreme Learning Machine (ELM)-particle swarm optimization approach

    NASA Astrophysics Data System (ADS)

    Yadav, Basant; Ch, Sudheer; Mathur, Shashi; Adamowski, Jan

    2016-12-01

    In-situ bioremediation is the most common groundwater remediation procedure used for treating organically contaminated sites. A simulation-optimization approach, which incorporates a simulation model for groundwater flow and transport processes within an optimization program, can help engineers design a remediation system that best satisfies management objectives as well as regulatory constraints. In-situ bioremediation is a highly complex, non-linear process, and the modelling of such a complex system requires significant computational effort. Soft computing techniques have a flexible mathematical structure which can generalize complex nonlinear processes. In in-situ bioremediation management, a physically-based model is used for the simulation and the simulated data is utilized by the optimization model to optimize the remediation cost. Repeatedly calling the simulator to satisfy the constraints is an extremely tedious and time-consuming process, and thus there is a need for a surrogate that can reduce the computational burden. This study presents a simulation-optimization approach to achieve an accurate and cost-effective in-situ bioremediation system design for groundwater contaminated with BTEX (benzene, toluene, ethylbenzene, and xylenes) compounds. In this study, the Extreme Learning Machine (ELM) is used as a proxy simulator to replace BIOPLUME III for the simulation. The selection of ELM is based on a comparative analysis with Artificial Neural Network (ANN) and Support Vector Machine (SVM) models, which were successfully used in previous studies of in-situ bioremediation system design. Further, a single-objective optimization problem is solved by a coupled Extreme Learning Machine (ELM)-Particle Swarm Optimization (PSO) technique to achieve the minimum cost for the in-situ bioremediation system design. The results indicate that ELM is a faster and more accurate proxy simulator than ANN and SVM. The total cost obtained by the ELM-PSO approach is held to a minimum while successfully satisfying all the regulatory constraints of the contaminated site.
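    The ELM-as-surrogate plus PSO loop can be sketched compactly: an ELM is just a random hidden layer with least-squares output weights, and PSO then searches the cheap surrogate instead of the expensive simulator. The cost surface, hidden-layer size, and swarm constants below are illustrative, and no regulatory constraints are modeled.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # --- Extreme Learning Machine: random hidden layer, least-squares output
    def elm_train(X, y, n_hidden=50):
        W = rng.normal(size=(X.shape[1], n_hidden))
        b = rng.normal(size=n_hidden)
        beta, *_ = np.linalg.lstsq(np.tanh(X @ W + b), y, rcond=None)
        return W, b, beta

    def elm_predict(model, X):
        W, b, beta = model
        return np.tanh(X @ W + b) @ beta

    def expensive_simulator(x):      # hypothetical stand-in cost surface
        return np.sum((x - 0.3) ** 2, axis=-1)

    X = rng.uniform(0, 1, size=(400, 2))
    model = elm_train(X, expensive_simulator(X))

    # --- Particle Swarm Optimization over the cheap surrogate --------------
    n_p, w, c1, c2 = 30, 0.7, 1.5, 1.5
    pos = rng.uniform(0, 1, size=(n_p, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), elm_predict(model, pos)
    for _ in range(100):
        gbest = pbest[np.argmin(pbest_f)]
        vel = (w * vel + c1 * rng.random((n_p, 1)) * (pbest - pos)
                       + c2 * rng.random((n_p, 1)) * (gbest - pos))
        pos = np.clip(pos + vel, 0, 1)
        f = elm_predict(model, pos)
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
    print("surrogate optimum (true optimum at [0.3, 0.3]):",
          pbest[np.argmin(pbest_f)])
    ```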

  16. TOPAS Tool for Particle Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perl, Joseph

    2013-05-30

    TOPAS lets users simulate the passage of subatomic particles moving through any kind of radiation therapy treatment system, can import a patient geometry, can record dose and other quantities, has advanced graphics, and is fully four-dimensional (3D plus time) to handle the most challenging time-dependent aspects of modern cancer treatments. TOPAS unlocks the power of the most accurate particle transport simulation technique, the Monte Carlo (MC) method, while removing the painstaking coding work such methods used to require. Research physicists can use TOPAS to improve delivery systems towards safer and more effective radiation therapy treatments, easily setting up and running complex simulations that previously used to take months of preparation. Clinical physicists can use TOPAS to increase accuracy while reducing side effects, simulating patient-specific treatment plans at the touch of a button. TOPAS is designed as a “user code” layered on top of the Geant4 Simulation Toolkit. TOPAS includes the standard Geant4 toolkit, plus additional code to make Geant4 easier to control and to extend Geant4 functionality. TOPAS aims to make proton simulation both “reliable” and “repeatable.” “Reliable” means both accurate physics and a high likelihood to simulate precisely what the user intended to simulate, reducing issues of wrong units, wrong materials, wrong scoring locations, etc. “Repeatable” means not just getting the same result from one simulation to another, but being able to easily restore a previously used setup and reducing sources of error when a setup is passed from one user to another. The TOPAS control system incorporates key lessons from safety management, proactively removing possible sources of user error such as line-ordering mistakes in control files. TOPAS has been used to model proton therapy treatment examples including the UCSF eye treatment head, the MGH stereotactic alignment in radiosurgery treatment head and the MGH gantry treatment heads in passive scattering and scanning modes, and has demonstrated dose calculation based on patient-specific CT data.

  17. Ensemble MD simulations restrained via crystallographic data: Accurate structure leads to accurate dynamics

    PubMed Central

    Xue, Yi; Skrynnikov, Nikolai R

    2014-01-01

    Currently, the best existing molecular dynamics (MD) force fields cannot accurately reproduce the global free-energy minimum which realizes the experimental protein structure. As a result, long MD trajectories tend to drift away from the starting coordinates (e.g., crystallographic structures). To address this problem, we have devised a new simulation strategy aimed at protein crystals. An MD simulation of protein crystal is essentially an ensemble simulation involving multiple protein molecules in a crystal unit cell (or a block of unit cells). To ensure that average protein coordinates remain correct during the simulation, we introduced crystallography-based restraints into the MD protocol. Because these restraints are aimed at the ensemble-average structure, they have only minimal impact on conformational dynamics of the individual protein molecules. So long as the average structure remains reasonable, the proteins move in a native-like fashion as dictated by the original force field. To validate this approach, we have used the data from solid-state NMR spectroscopy, which is the orthogonal experimental technique uniquely sensitive to protein local dynamics. The new method has been tested on the well-established model protein, ubiquitin. The ensemble-restrained MD simulations produced lower crystallographic R factors than conventional simulations; they also led to more accurate predictions for crystallographic temperature factors, solid-state chemical shifts, and backbone order parameters. The predictions for 15N R1 relaxation rates are at least as accurate as those obtained from conventional simulations. Taken together, these results suggest that the presented trajectories may be among the most realistic protein MD simulations ever reported. In this context, the ensemble restraints based on high-resolution crystallographic data can be viewed as protein-specific empirical corrections to the standard force fields. PMID:24452989
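    The key trick, restraining the ensemble average rather than each molecule, can be sketched in a few lines: a harmonic penalty on the mean structure distributes only 1/M of the restoring force to each of the M replicas, leaving individual dynamics nearly untouched. The force constant and synthetic reference coordinates below are illustrative placeholders.

    ```python
    import numpy as np

    def ensemble_restraint(coords, ref, k_rest=10.0):
        """coords: (M, N, 3) replica coordinates; ref: (N, 3) target structure.
        E = 0.5 * k * sum |mean - ref|^2, so dE/dx_i = (k/M) * (mean - ref):
        every replica feels the same, M-fold diluted restoring force."""
        M = coords.shape[0]
        mean = coords.mean(axis=0)              # ensemble-average structure
        f_each = -(k_rest / M) * (mean - ref)   # force on each replica's atoms
        energy = 0.5 * k_rest * np.sum((mean - ref) ** 2)
        return np.broadcast_to(f_each, coords.shape), energy

    rng = np.random.default_rng(6)
    ref = rng.normal(size=(100, 3))                     # stand-in "crystal" target
    coords = ref + 0.5 * rng.normal(size=(8, 100, 3))   # 8 fluctuating replicas
    forces, energy = ensemble_restraint(coords, ref)
    print("restraint energy:", energy,
          "per-replica force scale:", np.abs(forces).mean())
    ```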

  18. 3D-printed tracheoesophageal puncture and prosthesis placement simulator.

    PubMed

    Barber, Samuel R; Kozin, Elliott D; Naunheim, Matthew R; Sethi, Rosh; Remenschneider, Aaron K; Deschler, Daniel G

    A tracheoesophageal prosthesis (TEP) allows for speech after total laryngectomy. However, TEP placement is technically challenging, requiring a coordinated series of steps. Surgical simulators improve technical skills and reduce operative time. We hypothesize that a reusable 3-dimensional (3D)-printed TEP simulator will facilitate comprehension and rehearsal prior to actual procedures. The simulator was designed using Fusion360 (Autodesk, San Rafael, CA). Components were 3D-printed in-house using an Ultimaker 2+ (Ultimaker, Netherlands). Squid simulated the common tracheoesophageal wall. A Blom-Singer TEP (InHealth Technologies, Carpinteria, CA) replicated placement. Subjects watched an instructional video and completed pre- and post-simulation surveys. The simulator comprised 3D-printed parts: the esophageal lumen and superficial stoma. Squid was placed between components. Ten trainees participated. Significant differences existed between junior and senior residents on surveys regarding anatomy knowledge (p<0.05), technical details (p<0.01), and equipment setup (p<0.01). Subjects agreed that simulation felt accurate, and rehearsal raised confidence in future procedures. A 3D-printed TEP simulator is feasible for surgical training. Simulation involving multiple steps may accelerate technical skills and improve education. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Simulation of the National Aerospace System for Safety Analysis

    NASA Technical Reports Server (NTRS)

    Pritchett, Amy; Goldsman, Dave; Statler, Irv (Technical Monitor)

    2002-01-01

    Work started on this project on January 1, 1999, the first year of the grant. Following the outline of the grant proposal, a simulator architecture has been established which can incorporate the variety of types of models needed to accurately simulate national airspace dynamics. For the sake of efficiency, this architecture was based on an established single-aircraft flight simulator, the Reconfigurable Flight Simulator (RFS), already developed at Georgia Tech. Likewise, in the first year substantive changes and additions were made to the RFS to convert it into a simulation of the National Airspace System, with the flexibility to incorporate many types of models: aircraft models; controller models; airspace configuration generators; discrete event generators; embedded statistical functions; and display and data outputs. The architecture has been developed with the capability to accept any models of these types; due to its object-oriented structure, individual simulator components can be added and removed during run-time, and can be compiled separately. Simulation objects from other projects should be easy to convert to meet architecture requirements, with the intent that both this project may now be able to incorporate established simulation components from other projects, and that other projects may easily use this simulation without significant time investment.
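    The run-time composability described above can be sketched generically: components implement a common update interface and can be registered or removed while the loop is running. The classes below are an invented analogue, not RFS code.

    ```python
    # Generic sketch of a run-time pluggable simulation architecture.
    class SimObject:
        def update(self, dt):
            raise NotImplementedError

    class Aircraft(SimObject):
        def __init__(self, x=0.0, v=120.0):
            self.x, self.v = x, v
        def update(self, dt):
            self.x += self.v * dt        # trivial point-mass dynamics

    class Simulator:
        def __init__(self):
            self.objects = []
        def add(self, obj):              # components can join mid-run...
            self.objects.append(obj)
        def remove(self, obj):           # ...or leave mid-run
            self.objects.remove(obj)
        def step(self, dt):
            for obj in list(self.objects):
                obj.update(dt)

    sim = Simulator()
    a = Aircraft()
    sim.add(a)
    for i in range(100):
        if i == 50:
            sim.add(Aircraft(x=1000.0))  # hot-plug a second aircraft
        sim.step(0.1)
    print(len(sim.objects), "objects; first aircraft at x =", a.x)
    ```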

  20. The Need for High Fidelity Lunar Regolith Simulants

    NASA Technical Reports Server (NTRS)

    Gaier, James R.

    2008-01-01

    The case is made for the need to have high fidelity lunar regolith simulants to verify the performance of structures, mechanisms, and processes to be used on the lunar surface. Minor constituents will in some cases have major consequences. Small amounts of sulfur in the regolith can poison catalysts, and metallic iron on the surface of nano-sized dust particles may cause a dramatic increase in its toxicity. So the definition of a high fidelity simulant is application-dependent. For example, in situ resource utilization will require high fidelity in chemistry, meaning careful attention to the minor components and phases; but some other applications, such as the abrasive effects on suit fabrics, might be relatively insensitive to minor component chemistry while abrasion of some metal components may be highly dependent on trace components. The lunar environment itself will change the surface chemistry of the simulant, so to have a high fidelity simulant it must be used in a high fidelity simulated environment to get an accurate simulation. Research must be conducted to determine how sensitive technologies will be to minor components and environmental factors before they can be dismissed as unimportant.

  1. Stochastic Partial Differential Equation Solver for Hydroacoustic Modeling: Improvements to Paracousti Sound Propagation Solver

    NASA Astrophysics Data System (ADS)

    Preston, L. A.

    2017-12-01

    Marine hydrokinetic (MHK) devices offer a clean, renewable alternative energy source for the future. Responsible utilization of MHK devices, however, requires that the effects of acoustic noise produced by these devices on marine life and marine-related human activities be well understood. Paracousti is a 3-D full waveform acoustic modeling suite that can accurately propagate MHK noise signals in the complex bathymetry found in the near-shore to open ocean environment and considers real properties of the seabed, water column, and air-surface interface. However, this is a deterministic simulation that assumes the environment and source are exactly known. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected noise levels within the marine environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. One method is to use Monte Carlo (MC) techniques where simulation results from a large number of deterministic solutions are aggregated to provide statistical properties of the output signal. However, MC methods can be computationally prohibitive since they can require tens of thousands or more simulations to build up an accurate representation of those statistical properties. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a small fraction of the computational cost of MC. We are developing a SPDE solver for the 3-D acoustic wave propagation problem called Paracousti-UQ to help regulators and operators assess the statistical properties of environmental noise produced by MHK devices. In this presentation, we present the SPDE method and compare statistical distributions of simulated acoustic signals in simple models to MC simulations to show the accuracy and efficiency of the SPDE method. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
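    For contrast with the SPDE approach, the MC baseline it aims to replace is simple to state: run a deterministic model many times over sampled inputs and aggregate the outputs. The toy travel-time model below stands in for a full Paracousti run; only the aggregation pattern is the point.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def travel_time(distance_m, sound_speed):
        """Trivial deterministic 'simulation' standing in for a full solve."""
        return distance_m / sound_speed

    speeds = rng.normal(1500.0, 20.0, size=50000)  # uncertain sound speed (m/s)
    times = travel_time(3000.0, speeds)            # one model run per sample
    print("mean arrival:", times.mean(), "s,  std:", times.std(), "s")
    print("95% interval:", np.percentile(times, [2.5, 97.5]))
    ```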

  2. Embedded ensemble propagation for improving performance, portability, and scalability of uncertainty quantification on emerging computational architectures

    DOE PAGES

    Phipps, Eric T.; D'Elia, Marta; Edwards, Harold C.; ...

    2017-04-18

    In this study, quantifying simulation uncertainties is a critical component of rigorous predictive simulation. A key component of this is forward propagation of uncertainties in simulation input data to output quantities of interest. Typical approaches involve repeated sampling of the simulation over the uncertain input data, and can require numerous samples when accurately propagating uncertainties from large numbers of sources. Often simulation processes from sample to sample are similar and much of the data generated from each sample evaluation could be reused. We explore a new method for implementing sampling methods that simultaneously propagates groups of samples together in an embedded fashion, which we call embedded ensemble propagation. We show how this approach takes advantage of properties of modern computer architectures to improve performance by enabling reuse between samples, reducing memory bandwidth requirements, improving memory access patterns, improving opportunities for fine-grained parallelization, and reducing communication costs. We describe a software technique for implementing embedded ensemble propagation based on the use of C++ templates and describe its integration with various scientific computing libraries within Trilinos. We demonstrate improved performance, portability and scalability for the approach applied to the simulation of partial differential equations on a variety of CPU, GPU, and accelerator architectures, including up to 131,072 cores on a Cray XK7 (Titan).
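    The paper's implementation uses C++ templates within Trilinos; the concept can be conveyed in a numpy analogue in which the working "scalar" becomes a length-E ensemble array, so a single pass of the solver propagates all E samples and reuses the shared grid traversal. The PDE, grid, and uncertain diffusivity below are illustrative.

    ```python
    import numpy as np

    E, n, dx, dt = 32, 100, 0.01, 2e-5     # dt chosen for explicit stability
    rng = np.random.default_rng(7)
    kappa = rng.uniform(0.5, 1.5, size=E)  # one uncertain diffusivity per sample

    u = np.zeros((n, E))
    u[n // 2, :] = 1.0 / dx                # identical initial spike in all samples
    for _ in range(500):
        lap = (np.roll(u, 1, axis=0) - 2 * u + np.roll(u, -1, axis=0)) / dx**2
        u = u + dt * kappa * lap           # one "scalar" update, E samples at once
        u[0, :] = u[-1, :] = 0.0           # Dirichlet ends (overwrite wrap-around)

    print("center value mean/std across ensemble:",
          u[n // 2].mean(), u[n // 2].std())
    ```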

  3. CFD Fuel Slosh Modeling of Fluid-Structure Interaction in Spacecraft Propellant Tanks with Diaphragms

    NASA Technical Reports Server (NTRS)

    Sances, Dillon J.; Gangadharan, Sathya N.; Sudermann, James E.; Marsell, Brandon

    2010-01-01

    Liquid sloshing within spacecraft propellant tanks causes rapid energy dissipation at resonant modes, which can result in attitude destabilization of the vehicle. Identifying resonant slosh modes currently requires experimental testing and mechanical pendulum analogs to characterize the slosh dynamics. Computational Fluid Dynamics (CFD) techniques have recently been validated as an effective tool for simulating fuel slosh within free-surface propellant tanks. Propellant tanks often incorporate an internal flexible diaphragm to separate ullage and propellant which increases modeling complexity. A coupled fluid-structure CFD model is required to capture the damping effects of a flexible diaphragm on the propellant. ANSYS multidisciplinary engineering software employs a coupled solver for analyzing two-way Fluid Structure Interaction (FSI) cases such as the diaphragm propellant tank system. Slosh models generated by ANSYS software are validated by experimental lateral slosh test results. Accurate data correlation would produce an innovative technique for modeling fuel slosh within diaphragm tanks and provide an accurate and efficient tool for identifying resonant modes and the slosh dynamic response.

  4. Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm

    PubMed Central

    Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong

    2016-01-01

    In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis. PMID:27959895

  5. Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm.

    PubMed

    Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong

    2016-01-01

    In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis.
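
    The general idea of size-penalized K-means assignment can be sketched as below; the simple size penalty used here is a stand-in for illustration, not the adaptive constraint term derived in the paper.

    ```python
    import numpy as np

    def size_constrained_kmeans(X, k, lam=0.1, n_iter=50, seed=0):
        """K-means variant with a cluster-size penalty in the assignment cost.

        Illustrative sketch only: `lam` weights a hypothetical size-balance
        term (to be tuned against the squared-distance scale), not the exact
        adaptive constraint of the paper.
        """
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)].astype(float)
        labels = np.zeros(len(X), dtype=int)
        for _ in range(n_iter):
            sizes = np.bincount(labels, minlength=k)
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            # Penalize joining already-large clusters to curb size imbalance.
            labels = np.argmin(d2 + lam * sizes[None, :], axis=1)
            for j in range(k):
                if (labels == j).any():
                    centers[j] = X[labels == j].mean(axis=0)
        return labels, centers
    ```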

  6. Refined Zigzag Theory for Laminated Composite and Sandwich Plates

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; DiSciuva, Marco; Gherlone, Marco

    2009-01-01

    A refined zigzag theory is presented for laminated-composite and sandwich plates that includes the kinematics of first-order shear deformation theory as its baseline. The theory is variationally consistent and is derived from the virtual work principle. Novel piecewise-linear zigzag functions that provide a more realistic representation of the deformation states of transverse-shear-flexible plates than other similar theories are used. The formulation does not enforce full continuity of the transverse shear stresses across the plate's thickness, yet is robust. Transverse-shear correction factors are not required to yield accurate results. The theory is devoid of the shortcomings inherent in previous zigzag theories, including shear-force inconsistency and difficulties in simulating clamped boundary conditions, which have greatly limited the accuracy of these theories. This new theory requires only C(sup 0)-continuous kinematic approximations and is perfectly suited for developing computationally efficient finite elements. The theory should be useful for obtaining relatively efficient, accurate estimates of structural response needed to design high-performance load-bearing aerospace structures.
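
    Schematically, zigzag kinematics of this kind enrich the first-order shear deformation field with a piecewise-linear through-thickness function on each lamina k (one in-plane direction shown; this is the generic zigzag form, not necessarily the paper's exact notation):

    ```latex
    u^{(k)}(x,y,z) = u_0(x,y) + z\,\theta(x,y) + \phi^{(k)}(z)\,\psi(x,y),
    \qquad
    w(x,y,z) = w_0(x,y)
    ```

    Here \phi^{(k)}(z) is the C^0-continuous, piecewise-linear zigzag function and \psi(x,y) its amplitude; setting \psi = 0 recovers first-order shear deformation theory as the baseline.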

  7. MATLAB/Simulink Pulse-Echo Ultrasound System Simulator Based on Experimentally Validated Models.

    PubMed

    Kim, Taehoon; Shin, Sangmin; Lee, Hyongmin; Lee, Hyunsook; Kim, Heewon; Shin, Eunhee; Kim, Suhwan

    2016-02-01

    A flexible clinical ultrasound system must operate with different transducers, which have characteristic impulse responses and widely varying impedances. The impulse response determines the shape of the high-voltage pulse that is transmitted and the specifications of the front-end electronics that receive the echo; the impedance determines the specification of the matching network through which the transducer is connected. System-level optimization of these subsystems requires accurate modeling of pulse-echo (two-way) response, which in turn demands a unified simulation of the ultrasonics and electronics. In this paper, this is realized by combining MATLAB/Simulink models of the high-voltage transmitter, the transmission interface, the acoustic subsystem which includes wave propagation and reflection, the receiving interface, and the front-end receiver. To demonstrate the effectiveness of our simulator, the models are experimentally validated by comparing the simulation results with the measured data from a commercial ultrasound system. This simulator could be used to quickly provide system-level feedback for an optimized tuning of electronic design parameters.
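
    The two-way response at the heart of such a simulator can be sketched as two convolutions with the transducer impulse response, one on transmit and one on receive, around a delayed, attenuated reflection. All waveforms and parameter values below are illustrative, not the experimentally validated Simulink models of the paper.

    ```python
    import numpy as np

    fs = 100e6                                  # sample rate (Hz); values illustrative
    t = np.arange(0, 4e-6, 1 / fs)

    # Hypothetical transducer impulse response: Gaussian-windowed 5 MHz tone.
    f0, t0, tau = 5e6, 0.5e-6, 0.2e-6
    h = np.exp(-((t - t0) / tau) ** 2) * np.sin(2 * np.pi * f0 * t)

    drive = np.zeros_like(t)
    drive[: int(0.1e-6 * fs)] = 1.0             # short high-voltage transmit pulse

    tx = np.convolve(drive, h)[: len(t)]        # one-way (transmit) pressure pulse
    delay, atten = int(2.0e-6 * fs), 0.05       # point reflector: delay plus loss
    rx_in = atten * np.concatenate([np.zeros(delay), tx])[: len(t)]
    echo = np.convolve(rx_in, h)[: len(t)]      # two-way (pulse-echo) response
    ```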

  8. Occupational exposure of personnel operating military radio equipment: measurements and simulation.

    PubMed

    Paljanos, Annamaria; Miclaus, Simona; Munteanu, Calin

    2015-09-01

    Technical literature provides numerous studies concerning radiofrequency exposure measurements for various radio communication devices, but there are few studies related to the exposure of personnel operating military radio equipment. In order to evaluate exposure and identify cases when safety requirements are not entirely met, both measurements and simulations are needed for accurate results. Moreover, given the technical characteristics of the radio devices used in the military, personnel mainly operate in the near-field region, so both measurement and simulation become more complex. Measurements were made in situ using a broadband personal exposimeter equipped with two isotropic probes for both the electric and magnetic components of the field. The experiment was designed for three different operating frequencies of the same radio equipment, while simulations were made in FEKO software using hybrid numerical methods to solve complex electromagnetic field problems. The paper discusses the comparative results of the measurements and simulations, as well as comparing them to the reference levels specified in military and civilian radiofrequency exposure standards.

  9. RANS Simulation (Virtual Blade Model [VBM]) of Single Full Scale DOE RM1 MHK Turbine

    DOE Data Explorer

    Javaherchi, Teymour; Aliseda, Alberto

    2013-04-10

    Attached are the .cas and .dat files, along with the required User Defined Functions (UDFs) and a look-up table of lift and drag coefficients, for the Reynolds Averaged Navier-Stokes (RANS) simulation of a single full scale DOE RM1 turbine implemented in the ANSYS FLUENT CFD package. In this case study the flow field around and in the wake of the full scale DOE RM1 turbine is simulated using the Blade Element Model (a.k.a. Virtual Blade Model) by solving the RANS equations coupled with the k-ω turbulence closure model. It should be highlighted that in this simulation the actual geometry of the rotor blade is not modeled. The effect of the turbine's rotating blades is modeled using Blade Element Theory. This simulation provides an accurate estimate of the performance of the device and the structure of its turbulent far wake. Due to the simplifications implemented for modeling the rotating blades, VBM is limited in capturing details of the flow field in the near wake region of the device.
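
    The core of a blade-element (virtual blade) source-term computation can be sketched as below; the table values, function name, and the simplified inflow-angle treatment (no twist or induction correction) are illustrative stand-ins, not the UDFs attached to this record.

    ```python
    import numpy as np

    # Hypothetical lift/drag look-up table vs. angle of attack (deg), standing in
    # for the table a blade-element model uses instead of resolved rotor geometry.
    aoa_tab = np.array([-10., -5., 0., 5., 10., 15.])
    cl_tab  = np.array([-0.8, -0.3, 0.2, 0.8, 1.2, 1.1])
    cd_tab  = np.array([0.05, 0.02, 0.01, 0.02, 0.05, 0.12])

    def blade_element_force(u_axial, u_tang, chord, dr, rho=1025.0):
        """Lift and drag on one blade element from the local flow (sketch; SI units,
        seawater density by default)."""
        w = np.hypot(u_axial, u_tang)                   # local relative speed
        aoa = np.degrees(np.arctan2(u_axial, u_tang))   # inflow angle ~ AoA here
        cl = np.interp(aoa, aoa_tab, cl_tab)
        cd = np.interp(aoa, aoa_tab, cd_tab)
        q = 0.5 * rho * w**2 * chord * dr               # dynamic pressure x element area
        return q * cl, q * cd
    ```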

  10. Increasing the sampling efficiency of protein conformational transition using velocity-scaling optimized hybrid explicit/implicit solvent REMD simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Yuqi; Wang, Jinan; Shao, Qiang, E-mail: qshao@mail.shcnc.ac.cn, E-mail: Jiye.Shi@ucb.com, E-mail: wlzhu@mail.shcnc.ac.cn

    2015-03-28

    The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge requirement of computational resources, particularly when an explicit solvent model is employed. In a previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the aim of reducing the number of temperatures (replicas) while maintaining high sampling efficiency. In this study, we utilized this method to characterize and energetically identify the conformational transition pathway of a protein model, the N-terminal domain of calmodulin. In comparison to the standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive but, meanwhile, gives accurate evaluation of the structural and thermodynamic properties of the conformational transition, in good agreement with the standard REMD simulation. Therefore, the hybrid REMD could greatly increase computational efficiency and thus expand the application of REMD simulation to larger protein systems.

  11. An Implicit Algorithm for the Numerical Simulation of Shape-Memory Alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, R; Stolken, J; Jannetti, C

    Shape-memory alloys (SMA) have the potential to be used in a variety of interesting applications due to their unique properties of pseudoelasticity and the shape-memory effect. However, in order to design SMA devices efficiently, a physics-based constitutive model is required to accurately simulate the behavior of shape-memory alloys. The scope of this work is to extend the numerical capabilities of the SMA constitutive model developed by Jannetti et al. (2003) to handle large-scale polycrystalline simulations. The constitutive model is implemented within the finite-element software ABAQUS/Standard using a user-defined material subroutine, or UMAT. To improve the efficiency of the numerical simulations, so that polycrystalline specimens of shape-memory alloys can be modeled, a fully implicit algorithm has been implemented to integrate the constitutive equations. Using an implicit integration scheme increases the efficiency of the UMAT over the previously implemented explicit integration method by a factor of more than 100 for single crystal simulations.

  12. Investigation of in-body path loss in different human subjects for localization of capsule endoscope.

    PubMed

    Ara, Perzila; Cheng, Shaokoon; Heimlich, Michael; Dutkiewicz, Eryk

    2015-01-01

    Recent developments in capsule endoscopy have highlighted the need for accurate techniques to estimate the location of a capsule endoscope. Highly accurate location estimation of a capsule endoscope in the gastrointestinal (GI) tract, to within several millimeters, is a challenging task, mainly because the radio-frequency signals encounter high loss and a highly dynamic channel propagation environment. Therefore, an accurate path-loss model is required for the development of accurate localization algorithms. This paper presents an in-body path-loss model for the human abdomen region at 2.4 GHz. To develop the path-loss model, electromagnetic simulations using the Finite-Difference Time-Domain (FDTD) method were carried out on two different anatomical human models. A mathematical expression for the path-loss model was proposed based on analysis of the measured loss at different capsule locations inside the small intestine. The proposed path-loss model is a good approximation for modeling in-body RF propagation, since direct measurements are largely infeasible in capsule endoscopy subjects.
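
    Such models are commonly expressed in log-distance form; the sketch below shows that generic form with placeholder coefficients, not the fitted expression or parameter values from the paper.

    ```python
    import numpy as np

    def in_body_path_loss_db(d_mm, pl0_db=35.0, n=4.5, d0_mm=10.0, sigma_db=0.0):
        """Generic log-distance path-loss form often fitted to in-body FDTD data:

            PL(d) = PL0 + 10 n log10(d / d0) + X_sigma

        All coefficient values here are illustrative placeholders, not the
        parameters fitted in the paper.
        """
        shadowing = np.random.normal(0.0, sigma_db) if sigma_db > 0 else 0.0
        return pl0_db + 10.0 * n * np.log10(d_mm / d0_mm) + shadowing
    ```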

  13. Investigation of Advanced Dose Verification Techniques for External Beam Radiation Treatment

    NASA Astrophysics Data System (ADS)

    Asuni, Ganiyu Adeniyi

    Intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) have been introduced in radiation therapy to achieve highly conformal dose distributions around the tumour while minimizing dose to surrounding normal tissues. These techniques have increased the need for comprehensive quality assurance tests, to verify that customized patient treatment plans are accurately delivered during treatment. In vivo dose verification, performed during treatment delivery, confirms that the actual dose delivered is the same as the prescribed dose, helping to reduce treatment delivery errors. In vivo measurements may be accomplished using entrance or exit detectors. The objective of this project is to investigate a novel entrance detector designed for in vivo dose verification. This thesis is separated into three main investigations, focusing on a prototype entrance transmission detector (TRD) developed by IBA Dosimetry, Germany. First, contaminant electrons generated by the TRD in a 6 MV photon beam were investigated using Monte Carlo (MC) simulation. This study demonstrates that modification of the contaminant electron model in the treatment planning system is required for accurate patient dose calculation in buildup regions when using the device. Second, the ability of the TRD to accurately measure dose from IMRT and VMAT was investigated by characterising the spatial resolution of the device. This was accomplished by measuring the point spread function, with further validation provided by MC simulation. Comparisons of measured and calculated doses show that the spatial resolution of the TRD allows for measurement of clinical IMRT fields within acceptable tolerance. Finally, a new general research tool was developed to perform MC simulations for VMAT and IMRT treatments, simultaneously tracking dose deposition in both the patient CT geometry and an arbitrary planar detector system, generalized to handle either entrance or exit orientations. It was demonstrated that the tool accurately simulates dose to the patient CT and planar detector geometries. The tool has been made freely available to the medical physics research community to help advance the development of in vivo planar detectors. In conclusion, this thesis presents several investigations that improve the understanding of a novel entrance detector designed for patient in vivo dosimetry.

  14. Simulation of geothermal water extraction in heterogeneous reservoirs using dynamic unstructured mesh optimisation

    NASA Astrophysics Data System (ADS)

    Salinas, P.; Pavlidis, D.; Jacquemyn, C.; Lei, Q.; Xie, Z.; Pain, C.; Jackson, M.

    2017-12-01

    It is well known that the pressure gradient into a production well increases with decreasing distance to the well. To properly capture the local pressure drawdown into the well, a high grid or mesh resolution is required; moreover, the location of the well must be captured accurately. In conventional simulation models, the user must interact with the model to modify grid resolution around wells of interest, and the well location is approximated on a grid defined early in the modelling process. We report a new approach for improved simulation of near-wellbore flow in reservoir-scale models through the use of dynamic mesh optimisation and the recently presented double control volume finite element method. Time is discretized using an adaptive, implicit approach. Heterogeneous geologic features are represented as volumes bounded by surfaces. Within these volumes, termed geologic domains, the material properties are constant. Up-, cross- or down-scaling of material properties during dynamic mesh optimisation is not required, as the properties are uniform within each geologic domain. A given model typically contains numerous such geologic domains. Wells are implicitly coupled with the domain, and the fluid flow is modelled inside the wells. The method is novel for two reasons. First, a fully unstructured tetrahedral mesh is used to discretize space, and the spatial location of the well is specified via a line vector, preserving its location even if the mesh is modified during the simulation. The well location is therefore accurately captured, and the approach allows complex well trajectories and wells with many laterals to be modelled. Second, computational efficiency is increased by the use of dynamic mesh optimisation, in which an unstructured mesh adapts in space and time to key solution fields (preserving the geometry of the geologic domains), such as pressure, velocity or temperature. This also increases the quality of the solutions by placing higher resolution where required to reduce an error metric based on the Hessian of the field, and allows the local pressure drawdown to be captured without user-driven modification of the mesh. We demonstrate that the method has wide application in reservoir-scale models of geothermal fields and regional models of groundwater resources.

  15. LBM-EP: Lattice-Boltzmann method for fast cardiac electrophysiology simulation from 3D images.

    PubMed

    Rapaka, S; Mansi, T; Georgescu, B; Pop, M; Wright, G A; Kamen, A; Comaniciu, Dorin

    2012-01-01

    Current treatments of heart rhythm disorders require careful planning and guidance for optimal outcomes. Computational models of cardiac electrophysiology are being proposed for therapy planning, but current approaches are either too simplified or too computationally intensive for patient-specific simulations in clinical practice. This paper presents a novel approach, LBM-EP, to solve any type of mono-domain cardiac electrophysiology model in near real time that is especially tailored for patient-specific simulations. The domain is discretized on a Cartesian grid with a level-set representation of the patient's heart geometry, previously estimated from images automatically. The cell model is calculated node-wise, while the transmembrane potential is diffused using the Lattice-Boltzmann method within the domain defined by the level-set. Experiments on synthetic cases, on a data set from CESC'10 and on one patient with myocardial scar showed that LBM-EP provides results comparable to an FEM implementation, while being 10-45 times faster. Fast, accurate, scalable and requiring no specific meshing, LBM-EP paves the way to efficient and detailed models of cardiac electrophysiology for therapy planning.
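
    The diffusion half of such a scheme can be sketched as a standard D2Q5 lattice-Boltzmann update restricted to a masked Cartesian domain. This is a toy illustration: the cell (reaction) model is omitted and the level-set boundary treatment of LBM-EP is crudely simplified.

    ```python
    import numpy as np

    nx = ny = 64
    mask = np.zeros((ny, nx), bool); mask[16:48, 16:48] = True  # toy "heart" domain
    w  = np.array([1/3, 1/6, 1/6, 1/6, 1/6])                    # D2Q5 weights
    cx = np.array([0, 1, -1, 0, 0]); cy = np.array([0, 0, 0, 1, -1])
    tau = 0.8                                                   # relaxation time sets diffusivity

    u0 = np.zeros((ny, nx)); u0[30:34, 30:34] = 1.0             # initial activation patch
    f = w[:, None, None] * u0[None]                             # start at equilibrium

    for _ in range(200):
        u = f.sum(axis=0)                                       # transmembrane potential
        feq = w[:, None, None] * u[None]
        f += -(f - feq) / tau                                   # BGK collision
        for i in range(5):                                      # streaming step
            f[i] = np.roll(f[i], (cy[i], cx[i]), axis=(0, 1))
        f *= mask[None]  # crude Dirichlet boundary; real codes bounce back off the level set
    ```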

  16. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spentzouris, Panagiotis; /Fermilab; Cary, John

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  17. Tetrahedral-Mesh Simulation of Turbulent Flows with the Space-Time Conservative Schemes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji; Cheng, Gary C.

    2015-01-01

    Direct numerical simulations of turbulent flows are predominantly carried out using structured, hexahedral meshes despite decades of development in unstructured mesh methods. Tetrahedral meshes offer ease of mesh generation around complex geometries and the potential of an orientation-free grid that would provide unbiased small-scale dissipation and more accurate intermediate-scale solutions. However, due to the lack of consistent multi-dimensional numerical formulations in conventional schemes for triangular and tetrahedral meshes at the cell interfaces, numerical issues exist when flow discontinuities or stagnation regions are present. The space-time conservative conservation element solution element (CESE) method - due to its Riemann-solver-free shock capturing capabilities, non-dissipative baseline schemes, and flux conservation in time as well as space - has the potential to more accurately simulate turbulent flows using unstructured tetrahedral meshes. To pave the way towards accurate simulation of shock/turbulent boundary-layer interaction, a series of wave and shock interaction benchmark problems that increase in complexity are computed in this paper with triangular/tetrahedral meshes. Preliminary computations for the normal shock/turbulence interactions are carried out with a relatively coarse mesh, by direct numerical simulation standards, in order to assess other effects such as boundary conditions and the necessity of a buffer domain. The results indicate that qualitative agreement with previous studies can be obtained for flows where strong shocks co-exist with unsteady waves displaying a broad range of scales, using a relatively compact computational domain and less stringent requirements for grid clustering near the shock. With the space-time conservation properties, stable solutions without any spurious wave reflections can be obtained without a need for buffer domains near the outflow/farfield boundaries. Computational results for the isotropic turbulent flow decay, at a relatively high turbulent Mach number, show a well-behaved spectral decay rate for medium to high wave numbers. The high-order CESE schemes offer very robust solutions even in the presence of strong shocks or widespread shocklets. The explicit formulation, in conjunction with a close-to-unity theoretical upper Courant number bound, has the potential to offer an efficient numerical framework for general compressible turbulent flow simulations with unstructured meshes.

  18. Modelling and Manufacturing of a 3D Printed Trachea for Cricothyroidotomy Simulation.

    PubMed

    Doucet, Gregory; Ryan, Stephen; Bartellas, Michael; Parsons, Michael; Dubrowski, Adam; Renouf, Tia

    2017-08-18

    Cricothyroidotomy is a life-saving medical procedure that allows for tracheal intubation. Most current cricothyroidotomy simulation models are either expensive or not anatomically accurate and provide the learner with an unrealistic simulation experience. The goal of this project is to improve current simulation techniques by utilizing rapid prototyping using 3D printing technology and expert opinions to develop inexpensive and anatomically accurate trachea simulators. In doing so, emergency cricothyroidotomy simulation can be made accessible, accurate, cost-effective and reproducible. Three-dimensional modelling software was used in conjunction with a desktop three-dimensional (3D) printer to design and manufacture an anatomically accurate model of the cartilage within the trachea (thyroid cartilage, cricoid cartilage, and the tracheal rings). The initial design was based on dimensions found in studies of tracheal anatomical configuration. This ensured that the landmarking necessary for emergency cricothyroidotomies was designed appropriately. Several revisions of the original model were made based on informal opinion from medical professionals to establish appropriate anatomical accuracy of the model for use in rural/remote cricothyroidotomy simulation. Using an entry-level desktop 3D printer, a low-cost tracheal model was successfully designed that can be printed in less than three hours for only 1.70 Canadian dollars (CAD). Due to its anatomical accuracy, flexibility and durability, this model is well suited for use in emergency medicine simulation training. Additionally, the model can be assembled in conjunction with a membrane to simulate tracheal ligaments. Skin has been simulated as well to enhance the realism of the model. The result is an accurate simulation that will provide users with an anatomically correct model to practice important skills used in emergency airway surgery, specifically landmarking, incision and intubation. This design is a novel, easy-to-manufacture and reproducible, high-fidelity trachea model that can be used by educators with limited resources.

  19. Modelling and Manufacturing of a 3D Printed Trachea for Cricothyroidotomy Simulation

    PubMed Central

    Ryan, Stephen; Bartellas, Michael; Parsons, Michael; Dubrowski, Adam; Renouf, Tia

    2017-01-01

    Cricothyroidotomy is a life-saving medical procedure that allows for tracheal intubation. Most current cricothyroidotomy simulation models are either expensive or not anatomically accurate and provide the learner with an unrealistic simulation experience. The goal of this project is to improve current simulation techniques by utilizing rapid prototyping using 3D printing technology and expert opinions to develop inexpensive and anatomically accurate trachea simulators. In doing so, emergency cricothyroidotomy simulation can be made accessible, accurate, cost-effective and reproducible. Three-dimensional modelling software was used in conjunction with a desktop three-dimensional (3D) printer to design and manufacture an anatomically accurate model of the cartilage within the trachea (thyroid cartilage, cricoid cartilage, and the tracheal rings). The initial design was based on dimensions found in studies of tracheal anatomical configuration. This ensured that the landmarking necessary for emergency cricothyroidotomies was designed appropriately. Several revisions of the original model were made based on informal opinion from medical professionals to establish appropriate anatomical accuracy of the model for use in rural/remote cricothyroidotomy simulation. Using an entry-level desktop 3D printer, a low-cost tracheal model was successfully designed that can be printed in less than three hours for only 1.70 Canadian dollars (CAD). Due to its anatomical accuracy, flexibility and durability, this model is well suited for use in emergency medicine simulation training. Additionally, the model can be assembled in conjunction with a membrane to simulate tracheal ligaments. Skin has been simulated as well to enhance the realism of the model. The result is an accurate simulation that will provide users with an anatomically correct model to practice important skills used in emergency airway surgery, specifically landmarking, incision and intubation. This design is a novel, easy-to-manufacture and reproducible, high-fidelity trachea model that can be used by educators with limited resources. PMID:29057187

  20. Lattice Boltzmann model for simulation of magnetohydrodynamics

    NASA Technical Reports Server (NTRS)

    Chen, Shiyi; Chen, Hudong; Martinez, Daniel; Matthaeus, William

    1991-01-01

    A numerical method, based on a discrete Boltzmann equation, is presented for solving the equations of magnetohydrodynamics (MHD). The algorithm provides advantages similar to the cellular automaton method in that it is local and easily adapted to parallel computing environments. Because of much lower noise levels and less stringent requirements on lattice size, the method appears to be more competitive with traditional solution methods. Examples show that the model accurately reproduces both linear and nonlinear MHD phenomena.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, Madison Theresa; Bates, Cameron Russell; Mckigney, Edward Allen

    Accurate detector modeling is a requirement for designing systems in many non-proliferation scenarios; by determining a detector's response function (DRF) to incident radiation, it is possible to characterize measurements of unknown sources. DRiFT is intended to post-process MCNP® output and create realistic detector spectra. Capabilities currently under development include the simulation of semiconductor, gas, and (as is discussed in this work) scintillator detector physics. Energy spectra and pulse shape discrimination (PSD) trends for incident photon and neutron radiation have been reproduced by DRiFT.
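
    The spectral part of a detector response calculation can be sketched as Gaussian broadening of an ideal energy-deposition spectrum; the resolution model and its coefficients below are placeholders for illustration, not DRiFT's calibrated detector physics.

    ```python
    import numpy as np

    def apply_scintillator_drf(energies_mev, counts, a=0.05, b=0.02):
        """Broaden an ideal (e.g., MCNP tally-style) spectrum with a Gaussian
        energy-resolution model, sigma(E) = a*sqrt(E) + b*E.

        The functional form and coefficients are illustrative placeholders.
        """
        out = np.zeros_like(counts, dtype=float)
        for e0, n in zip(energies_mev, counts):
            sigma = max(a * np.sqrt(e0) + b * e0, 1e-6)   # guard zero-energy bins
            g = np.exp(-0.5 * ((energies_mev - e0) / sigma) ** 2)
            out += n * g / g.sum()                        # redistribute, conserving counts
        return out
    ```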

  2. Microlensing for extrasolar planets : improving the photometry

    NASA Astrophysics Data System (ADS)

    Bajek, David J.

    2013-08-01

    Gravitational Microlensing, as a technique for detecting Extrasolar Planets, is recognised for its potential in discovering small-mass planets similar to Earth, at a distance of a few Astronomical Units from their host stars. However, analysing the data from microlensing events (which statistically rarely reveal planets) is complex and requires continued and intensive use of various networks of telescopes working together in order to observe the phenomenon. As such the techniques are constantly being developed and refined; this project outlines some steps of the careful analysis required to model an event and ensure the best quality data is used in the fitting. A quantitative investigation into increasing the quality of the original photometric data available from any microlensing event demonstrates that 'lucky imaging' can lead to a marked improvement in the signal to noise ratio of images over standard imaging techniques, which could result in more accurate models and thus the calculation of more accurate planetary parameters. In addition, a simulation illustrating the effects of atmospheric turbulence on exposures was created, and expanded upon to give an approximation of the lucky imaging technique. This further demonstrated the advantages of lucky images which are shown to potentially approach the quality of those expected from diffraction limited photometry. The simulation may be further developed for potential future use as a 'theoretical lucky imager' in our research group, capable of producing and analysing synthetic exposures through customisable conditions.
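
    Lucky imaging itself reduces to a select-and-stack operation over many short exposures, as in this sketch; the sharpness metric and keep fraction are typical choices, not necessarily those used in this project.

    ```python
    import numpy as np

    def lucky_stack(frames, keep_fraction=0.1):
        """Select and co-add the sharpest short exposures (lucky imaging sketch).

        `frames` is a (n_frames, ny, nx) stack; sharpness is scored here by peak
        intensity above the background, a common simple criterion.
        """
        scores = np.array([f.max() - np.median(f) for f in frames])
        n_keep = max(1, int(keep_fraction * len(frames)))
        best = np.argsort(scores)[-n_keep:]
        # Real pipelines also re-register each frame on its brightest speckle
        # before co-adding; that shift-and-add step is omitted in this sketch.
        return frames[best].mean(axis=0)
    ```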

  3. Optical eigenmodes for illumination & imaging

    NASA Astrophysics Data System (ADS)

    Kosmeier, Sebastian

  4. Direction-dependent Corrections in Polarimetric Radio Imaging. I. Characterizing the Effects of the Primary Beam on Full-Stokes Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jagannathan, P.; Bhatnagar, S.; Rau, U.

    Next generation radio telescope arrays are being designed and commissioned to accurately measure polarized intensity and rotation measures (RMs) across the entire sky through deep, wide-field radio interferometric surveys. Radio interferometer dish antenna arrays are affected by direction-dependent (DD) gains due to both instrumental and atmospheric effects. In this paper, we demonstrate the effect of DD errors of the parabolic dish antenna array on the measured polarized intensities of radio sources in interferometric images. We characterize the extent of polarimetric image degradation due to the DD gains through wide-band VLA simulations of representative point sources in the radio sky at L band (1–2 GHz). We show that at the 0.5 gain level of the primary beam there is significant flux leakage from Stokes I to Q, U amounting to 10% of the total intensity. We further demonstrate that while the instrumental response averages down for observations over large parallactic angle intervals, full-polarization DD correction is required to remove the effects of DD leakage. We also explore the effect of the DD beam on the RM signals and show that while the instrumental effect is primarily centered around 0 rad m^-2, the effect is significant over a broad range of RM, requiring full-polarization DD correction to accurately reconstruct the RM synthesis signal.

  5. Implicit time accurate simulation of unsteady flow

    NASA Astrophysics Data System (ADS)

    van Buuren, René; Kuerten, Hans; Geurts, Bernard J.

    2001-03-01

    Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution to compare with the implicit second-order Crank-Nicolson scheme was determined. The time step in the explicit scheme is restricted by both temporal accuracy as well as stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems that are closely related to a highly complex structure of the basins of attraction of the iterative method may occur.
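
    The pseudo-time device described here can be sketched on a scalar model problem: each Crank-Nicolson step is converged by marching a pseudo-time derivative of the residual to zero. Plain explicit pseudo-stepping below stands in for the paper's quasi-Newton/block Gauss-Seidel solver, and the stiff right-hand side is an illustrative toy.

    ```python
    import numpy as np

    def f(u):
        # Model stiff scalar RHS (illustrative stand-in for the flow residual).
        return -50.0 * (u - np.cos(u))

    def dual_time_step(u_n, dt, dtau=0.01, tol=1e-10, max_iter=1000):
        """Advance one Crank-Nicolson step by pseudo-time iteration.

        The physical residual R(u) = (u - u_n)/dt - 0.5*(f(u) + f(u_n)) is
        driven to zero by marching du/dtau = -R(u).
        """
        u = u_n
        for _ in range(max_iter):
            r = (u - u_n) / dt - 0.5 * (f(u) + f(u_n))
            u -= dtau * r
            if abs(r) < tol:
                break
        return u

    u, dt = 1.0, 0.05          # physical step far above the explicit limit ~1/50
    for _ in range(20):
        u = dual_time_step(u, dt)
    ```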

  6. Benchmarking of Neutron Production of Heavy-Ion Transport Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  7. Benchmarking of Heavy Ion Transport Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in the design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  8. SYSTID - A flexible tool for the analysis of communication systems.

    NASA Technical Reports Server (NTRS)

    Dawson, C. T.; Tranter, W. H.

    1972-01-01

    Description of the System Time Domain Simulation (SYSTID) computer-aided analysis program, which is specifically structured for communication systems analysis. The SYSTID program is user-oriented, so that very little knowledge of computer techniques and very little programming ability are required for proper application. The program is designed so that the user can go from a system block diagram to an accurate simulation by simply programming a single English-language statement for each block in the system. The mathematical and functional models available in the SYSTID library are presented. An example problem is given which illustrates the ease of modeling communication systems. Examples of the outputs available are presented, and proposed improvements are summarized.

  9. Numerical grid generation in computational field simulations. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soni, B.K.; Thompson, J.F.; Haeuser, J.

    1996-12-31

    To enhance CFS technology to its next level of applicability (i.e., to create acceptance of CFS in integrated product and process development involving multidisciplinary optimization), the basic requirements are: rapid turn-around time, reliable and accurate simulation, affordability, and appropriate linkage to other engineering disciplines. In response to this demand, there has been considerable growth in grid-generation-related research activities involving automation, parallel processing, linkage with CAD-CAM systems, CFS with dynamic motion and moving boundaries, and strategies and algorithms associated with multi-block structured, unstructured, hybrid, hexahedral, and Cartesian grids, along with their applicability to various disciplines including biomedical, semiconductor, geophysical, ocean modeling, and multidisciplinary optimization.

  10. Development of High-Resolution UV-VIS Diagnostics for Space Plasma Simulation

    NASA Astrophysics Data System (ADS)

    Taylor, Andrew; Batishchev, Oleg

    2012-10-01

    Non-invasive far-UV-VIS plasma emission allows remote diagnostics of plasma, which is particularly important for space applications. Accurate vacuum tank space plasma simulations require monochromators with high spectral resolution (better than 0.01 Å) to capture important details of atomic and ionic lines, such as Ly-alpha, etc. We are building a new system based on the previous work [1], and will discuss the development of a spectrometry system that combines a single-pass vacuum far-UV-NIR spectrometer and a tunable Fabry-Perot etalon. [1] O. Batishchev and J.L. Cambier, Experimental Study of the Mini-Helicon Thruster, Air Force Research Laboratory Report, AFRL-RZ-ED-TR-2009-0020, 2009.
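
    The etalon stage of such an instrument is governed by the Airy transmission function; the sketch below evaluates it for illustrative gap and reflectivity values, not the design parameters of the instrument under development.

    ```python
    import numpy as np

    def etalon_transmission(wavelength_nm, gap_um=50.0, reflectivity=0.95, n=1.0):
        """Airy transmission of an ideal Fabry-Perot etalon at normal incidence:

            T = 1 / (1 + F sin^2(delta/2)),  F = 4R / (1 - R)^2,
            delta = 4*pi*n*d / lambda.

        Gap and reflectivity values are illustrative placeholders.
        """
        delta = 4 * np.pi * n * (gap_um * 1e3) / wavelength_nm   # both lengths in nm
        coeff = 4 * reflectivity / (1 - reflectivity) ** 2       # coefficient of finesse
        return 1.0 / (1.0 + coeff * np.sin(delta / 2) ** 2)
    ```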

  11. Appropriate Mechanochemical Conditions for Corrosion-Fatigue Testing of Magnesium Alloys for Temporary Bioimplant Applications

    NASA Astrophysics Data System (ADS)

    Harandi, Shervin Eslami; Singh Raman, R. K.

    2015-05-01

    Magnesium (Mg) alloys possess great potential as bioimplants. A temporary implant employed as support for the repair of a fractured bone must possess sufficient strength to maintain its mechanical integrity for the required duration of healing. However, Mg alloys are susceptible to sudden cracking or fracture under the simultaneous action of cyclic loading and the corrosive physiological environment, i.e., corrosion fatigue (CF). Investigations of such fracture should be performed under mechanochemical conditions that appropriately simulate actual human body conditions. This article reviews the existing knowledge on CF of Mg alloys in simulated body fluid and describes a relatively more accurate testing procedure developed in the authors' laboratory.

  12. Intra-arterial pressure measurement in neonates: dynamic response requirements.

    PubMed

    van Genderingen, H R; Gevers, M; Hack, W W

    1995-02-01

    A computer simulation of a catheter manometer system was used to quantify measurement errors in neonatal blood pressure parameters. Accurate intra-arterial pressure recordings of 21 critically ill newborns were fed into this simulated system. The dynamic characteristics, natural frequency and damping coefficient, were varied from 2.5 to 60 Hz and from 0.1 to 1.4, respectively. As a result, errors in systolic, diastolic and pulse arterial pressure were obtained as a function of natural frequency and damping coefficient. Iso-error curves for 2%, 5% and 10% were constructed. Using these curves, the maximum inaccuracy of any neonatal catheter manometer system can be determined and used in the clinical setting.
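
    A catheter-manometer system is conventionally modeled as a second-order system parameterized by natural frequency and damping coefficient, so the study's simulation can be sketched as below; the toy pressure waveform and parameter values are illustrative, not the clinical recordings used in the paper.

    ```python
    import numpy as np
    from scipy import signal

    def measured_pressure(t, p_true, fn_hz=15.0, zeta=0.3):
        """Pass a true arterial waveform through a second-order
        catheter-manometer model (natural frequency fn, damping ratio zeta)."""
        wn = 2 * np.pi * fn_hz
        sys = signal.TransferFunction([wn**2], [1.0, 2 * zeta * wn, wn**2])
        _, p_meas, _ = signal.lsim(sys, p_true, t)
        return p_meas

    t = np.linspace(0, 2, 2000)                  # 2 s of a toy neonatal pulse
    p_true = 45 + 15 * np.clip(np.sin(2 * np.pi * 2.5 * t), 0, None)  # ~150 bpm
    p_meas = measured_pressure(t, p_true)
    err_systolic = p_meas.max() - p_true.max()   # overshoot-type systolic error
    ```

    Sweeping fn_hz and zeta over grids and recording such errors is what produces iso-error curves of the kind described in the abstract.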

  13. Ice Accretion Modeling using an Eulerian Approach for Droplet Impingement

    NASA Technical Reports Server (NTRS)

    Kim, Joe Woong; Garza, Dennis P.; Sankar, Lakshmi N.; Kreeger, Richard E.

    2012-01-01

    A three-dimensional Eulerian analysis has been developed for modeling droplet impingement on lifting bodies. The Eulerian model solves the conservation equations of mass and momentum to obtain the droplet flow field properties on the same mesh used in CFD simulations. For complex configurations such as a full rotorcraft, the Eulerian approach is more efficient because the Lagrangian approach would require a significant amount of seeding for accurate estimates of collection efficiency. Simulations are done for various benchmark cases such as the NACA0012 airfoil, the MS317 airfoil and an oscillating SC2110 airfoil to illustrate its use. The present results are compared with results from the Lagrangian approach used in an industry-standard analysis called LEWICE.

  14. Numerical solution of boundary-integral equations for molecular electrostatics.

    PubMed

    Bardhan, Jaydeep P

    2009-03-07

    Numerous molecular processes, such as ion permeation through channel proteins, are governed by relatively small changes in energetics. As a result, theoretical investigations of these processes require accurate numerical methods. In the present paper, we evaluate the accuracy of two approaches to simulating boundary-integral equations for continuum models of the electrostatics of solvation. The analysis emphasizes boundary-element method simulations of the integral-equation formulation known as the apparent-surface-charge (ASC) method or polarizable-continuum model (PCM). In many numerical implementations of the ASC/PCM model, one forces the integral equation to be satisfied exactly at a set of discrete points on the boundary. We demonstrate in this paper that this approach to discretization, known as point collocation, is significantly less accurate than an alternative approach known as qualocation. Furthermore, the qualocation method offers this improvement in accuracy without increasing simulation time. Numerical examples demonstrate that the electrostatic part of the solvation free energy, when calculated using the collocation and qualocation methods, can differ significantly; for a polypeptide, the answers can differ by as much as 10 kcal/mol (approximately 4% of the total electrostatic contribution to solvation). The applicability of the qualocation discretization to other integral-equation formulations is also discussed, and two equivalences between integral-equation methods are derived.
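
    Point collocation, the discretization the paper finds less accurate, can be illustrated on a toy second-kind Fredholm equation: enforce the equation exactly at panel midpoints with one-point quadrature. The kernel and right-hand side here are toys, not the ASC/PCM electrostatics kernel.

    ```python
    import numpy as np

    # Point-collocation sketch for  u(x) + \int_0^1 K(x,y) u(y) dy = g(x):
    # satisfy the equation exactly at panel midpoints, midpoint quadrature.
    n = 64
    x = (np.arange(n) + 0.5) / n                        # collocation points
    h = 1.0 / n
    K = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))   # smooth toy kernel
    g = np.sin(np.pi * x)                               # toy right-hand side

    A = np.eye(n) + h * K                               # collocation matrix
    u = np.linalg.solve(A, g)                           # nodal solution values
    ```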

  15. A system to simulate and reproduce audio-visual environments for spatial hearing research.

    PubMed

    Seeber, Bernhard U; Kerber, Stefan; Hafter, Ervin R

    2010-02-01

    The article reports the experience gained from two implementations of the "Simulated Open-Field Environment" (SOFE), a setup that allows sounds to be played at calibrated levels over a wide frequency range from multiple loudspeakers in an anechoic chamber. Playing sounds from loudspeakers in the free-field has the advantage that each participant listens with their own ears, and individual characteristics of the ears are captured in the sound they hear. This makes an easy and accurate comparison between various listeners with and without hearing devices possible. The SOFE uses custom calibration software to assure individual equalization of each loudspeaker. Room simulation software creates the spatio-temporal reflection pattern of sound sources in rooms which is played via the SOFE loudspeakers. The sound playback system is complemented by a video projection facility which can be used to collect or give feedback or to study auditory-visual interaction. The article discusses acoustical and technical requirements for accurate sound playback against the specific needs in hearing research. An introduction to software concepts is given which allow easy, high-level control of the setup and thus fast experimental development, turning the SOFE into a "Swiss army knife" tool for auditory, spatial hearing and audio-visual research.

  16. A System to Simulate and Reproduce Audio-Visual Environments for Spatial Hearing Research

    PubMed Central

    Seeber, Bernhard U.; Kerber, Stefan; Hafter, Ervin R.

    2009-01-01

    The article reports the experience gained from two implementations of the “Simulated Open-Field Environment” (SOFE), a setup that allows sounds to be played at calibrated levels over a wide frequency range from multiple loudspeakers in an anechoic chamber. Playing sounds from loudspeakers in the free-field has the advantage that each participant listens with their own ears, and individual characteristics of the ears are captured in the sound they hear. This makes an easy and accurate comparison between various listeners with and without hearing devices possible. The SOFE uses custom calibration software to assure individual equalization of each loudspeaker. Room simulation software creates the spatio-temporal reflection pattern of sound sources in rooms which is played via the SOFE loudspeakers. The sound playback system is complemented by a video projection facility which can be used to collect or give feedback or to study auditory-visual interaction. The article discusses acoustical and technical requirements for accurate sound playback against the specific needs in hearing research. An introduction to software concepts is given which allow easy, high-level control of the setup and thus fast experimental development, turning the SOFE into a “Swiss army knife” tool for auditory, spatial hearing and audio-visual research. PMID:19909802

  17. AccuRT: A versatile tool for radiative transfer simulations in the coupled atmosphere-ocean system

    NASA Astrophysics Data System (ADS)

    Hamre, Børge; Stamnes, Snorre; Stamnes, Knut; Stamnes, Jakob

    2017-02-01

    Reliable, accurate, and efficient modeling of the transport of electromagnetic radiation in turbid media has important applications in the study of the Earth's climate by remote sensing. For example, such modeling is needed to develop forward-inverse methods used to quantify types and concentrations of aerosol and cloud particles in the atmosphere, the dissolved organic and particulate biogeochemical matter in lakes, rivers, coastal, and open-ocean waters. It is also needed to simulate the performance of remote sensing detectors deployed on aircraft, balloons, and satellites as well as radiometric detectors deployed on buoys, gliders and other aquatic observing systems. Accurate radiative transfer modeling is also required to compute irradiances and scalar irradiances that are used to compute warming/cooling and photolysis rates in the atmosphere and primary production and warming/cooling rates in the water column. AccuRT is a radiative transfer model for the coupled atmosphere-water system that is designed to be a versatile tool for researchers in the ocean optics and remote sensing communities. It addresses the needs of researchers interested in analyzing irradiance and radiance measurements in the field and laboratory as well as those interested in making simulations of the top-of-the-atmosphere radiance in support of remote sensing algorithm development.

  18. High-fidelity real-time maritime scene rendering

    NASA Astrophysics Data System (ADS)

    Shyu, Hawjye; Taczak, Thomas M.; Cox, Kevin; Gover, Robert; Maraviglia, Carlos; Cahill, Colin

    2011-06-01

    The ability to simulate authentic engagements using real-world hardware is increasingly important. For rendering maritime environments, scene generators must be capable of rendering radiometrically accurate scenes with correct temporal and spatial characteristics. When the simulation is used as input to real-world hardware or human observers, the scene generator must operate in real time. This paper introduces a novel, real-time scene generation capability for rendering radiometrically accurate scenes of backgrounds and targets in maritime environments. The new model is an optimized and parallelized version of the US Navy CRUISE_Missiles rendering engine. It was designed to accept environmental descriptions and engagement geometry data from external sources, render a scene, transform the radiometric scene using the electro-optical response functions of a sensor under test, and output the resulting signal to real-world hardware. This paper reviews components of the scene rendering algorithm and details the modifications required to run this code in real time. A description of the simulation architecture and interfaces to external hardware and models is presented. Performance assessments of the frame rate and radiometric accuracy of the new code are summarized. This work was completed in FY10 under Office of the Secretary of Defense (OSD) Central Test and Evaluation Investment Program (CTEIP) funding and will undergo a validation process in FY11.

  19. Hip dysplasia, pelvic obliquity, and scoliosis in cerebral palsy: a qualitative analysis using 3D CT reconstruction

    NASA Astrophysics Data System (ADS)

    Russ, Mark D.; Abel, Mark F.

    1998-06-01

    Five patients with cerebral palsy, hip dysplasia, pelvic obliquity, and scoliosis were evaluated retrospectively using three dimensional computed tomography (3DCT) scans of the proximal femur, pelvis, and lumbar spine to qualitatively evaluate their individual deformities by measuring a number of anatomical landmarks. Three dimensional reconstructions of the data were visualized, analyzed, and then manipulated interactively to perform simulated osteotomies of the proximal femur and pelvis to achieve surgical correction of the hip dysplasia. Severe deformity can occur in spastic cerebral palsy, with serious consequences for the quality of life of the affected individuals and their families. Controversy exists regarding the type, timing and efficacy of surgical intervention for correction of hip dysplasia in this population. Other authors have suggested 3DCT studies are required to accurately analyze acetabular deficiency, and that this data allows for more accurate planning of reconstructive surgery. It is suggested here that interactive manipulation of the data to simulate the proposed surgery is a clinically useful extension of the analysis process and should also be considered as an essential part of the pre-operative planning to assure that the appropriate procedure is chosen. The surgical simulation may reduce operative time and improve surgical correction of the deformity.

  20. Ultra-dense magnetoresistive mass memory

    NASA Technical Reports Server (NTRS)

    Daughton, J. M.; Sinclair, R.; Dupuis, T.; Brown, J.

    1992-01-01

    This report details the progress and accomplishments of Nonvolatile Electronics (NVE), Inc., on the design of the wafer-scale MRAM mass memory system during the fifth quarter of the project. NVE has made significant progress this quarter on the one-megabit design in several different areas. A test chip, which will verify a working GMR bit with the dimensions required by the 1 Meg chip, has been designed, laid out, and is currently being processed in the NVE labs. This test chip will allow electrical specifications, tolerances, and processing issues to be finalized before construction of the actual chip, thus providing a greater assurance of success of the final 1 Meg design. A model has been developed to accurately simulate the parasitic effects of unselected sense lines. This model gives NVE the ability to perform accurate simulations of the array electronics and to test different design concepts. Much of the circuit design for the 1 Meg chip has been completed and simulated, and these designs are included. Progress has been made in the wafer-scale design area to verify the reliable operation of the 16 K macrocell. This is currently being accomplished with the design and construction of two stand-alone test systems which will perform life tests and gather data on reliability and wearout mechanisms for analysis.

  1. Surfactant enhanced recovery of tetrachloroethylene from a porous medium containing low permeability lenses. 2. Numerical simulation.

    PubMed

    Rathfelder, K M; Abriola, L M; Taylor, T P; Pennell, K D

    2001-04-01

    A numerical model of surfactant enhanced solubilization was developed and applied to the simulation of nonaqueous phase liquid recovery in two-dimensional heterogeneous laboratory sand tank systems. Model parameters were derived from independent, small-scale, batch and column experiments. These parameters included viscosity, density, solubilization capacity, surfactant sorption, interfacial tension, permeability, capillary retention functions, and interphase mass transfer correlations. Model predictive capability was assessed for the evaluation of the micellar solubilization of tetrachloroethylene (PCE) in the two-dimensional systems. Predicted effluent concentrations and mass recovery agreed reasonably well with measured values. Accurate prediction of enhanced solubilization behavior in the sand tanks was found to require the incorporation of pore-scale, system-dependent, interphase mass transfer limitations, including an explicit representation of specific interfacial contact area. Predicted effluent concentrations and mass recovery were also found to depend strongly upon the initial NAPL entrapment configuration. Numerical results collectively indicate that enhanced solubilization processes in heterogeneous, laboratory sand tank systems can be successfully simulated using independently measured soil parameters and column-measured mass transfer coefficients, provided that permeability and NAPL distributions are accurately known. This implies that the accuracy of model predictions at the field scale will be constrained by our ability to quantify soil heterogeneity and NAPL distribution.
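
    The rate-limited (linear-driving-force) interphase mass transfer at the center of such models can be sketched in batch form as below; the symbols, units, and the simple depletion bookkeeping are schematic stand-ins for the correlation-based mass transfer coefficients and interfacial-area terms of the paper.

    ```python
    def solubilization_step(c, m_napl, c_s, kla, dt, v_w):
        """One explicit step of linear-driving-force interphase mass transfer,

            dC/dt = kla * (C_s - C),

        with the NAPL mass depleted by what dissolves. Schematic batch version;
        units: c, c_s in g/m^3, m_napl in g, v_w (aqueous volume) in m^3,
        kla in 1/s, dt in s.
        """
        rate = kla * (c_s - c) * v_w          # g/s dissolving from the NAPL
        rate = min(rate, m_napl / dt)         # cannot dissolve more than remains
        return c + dt * rate / v_w, m_napl - dt * rate
    ```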

  2. Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation.

    PubMed

    Ziegenhein, Peter; Pirner, Sven; Ph Kamerling, Cornelis; Oelfke, Uwe

    2015-08-07

    Monte-Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes conventional implementations of MC algorithms require to deliver sufficiently accurate results on high-resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well-verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well between different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on a NVIDIA Tesla C2050. Since CPUs can work with several hundred GB of RAM, the typical GPU memory limitation does not apply to our implementation, and high-resolution clinical plans can be calculated.
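
    The performance argument rests on batch, cache-friendly sampling on the CPU. The toy vectorized Monte Carlo sketch below scores first-interaction depths under simple exponential attenuation; it is nothing like DPM's full transport physics, and the attenuation coefficient is an illustrative placeholder.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_phot, mu = 1_000_000, 0.007            # photons; attenuation coeff (1/mm), placeholder
    depth_edges = np.linspace(0, 300, 301)   # 1 mm depth bins

    # Sampling whole batches at once keeps SIMD units and caches busy -- the
    # same locality argument made for the multi-core CPU implementation above.
    d = rng.exponential(1.0 / mu, n_phot)    # free path to first interaction (mm)
    hist, _ = np.histogram(d, bins=depth_edges)
    ```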

  3. Performance evaluation using SYSTID time domain simulation. [computer-aid design and analysis for communication systems

    NASA Technical Reports Server (NTRS)

    Tranter, W. H.; Ziemer, R. E.; Fashano, M. J.

    1975-01-01

    This paper reviews the SYSTID technique for performance evaluation of communication systems using time-domain computer simulation. An example program illustrates the language. The inclusion of both Gaussian and impulse noise models makes accurate simulation possible in a wide variety of environments. A very flexible postprocessor makes accurate and efficient performance evaluation possible.
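
    As an illustration of the two noise models mentioned above (a generic Python sketch, not the SYSTID language itself): additive Gaussian noise represents the thermal background, while sparse, high-amplitude impulses model impulsive interference.

      import numpy as np

      rng = np.random.default_rng(0)
      t = np.arange(0, 1, 1e-4)                     # 10 kHz sampling over 1 s
      signal = np.sin(2 * np.pi * 50 * t)           # 50 Hz test tone

      gaussian = rng.normal(0.0, 0.1, t.size)       # thermal-like background noise
      hits = rng.random(t.size) < 1e-3              # ~0.1% of samples receive an impulse
      impulses = hits * rng.normal(0.0, 5.0, t.size)

      received = signal + gaussian + impulses
      snr_db = 10 * np.log10(np.var(signal) / np.var(received - signal))
      print(f"composite SNR: {snr_db:.1f} dB")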

  4. Implementation of quantum key distribution network simulation module in the network simulator NS-3

    NASA Astrophysics Data System (ADS)

    Mehic, Miralem; Maurhart, Oliver; Rass, Stefan; Voznak, Miroslav

    2017-10-01

    As research in quantum key distribution (QKD) technology grows and becomes more complex, highly accurate and scalable simulation technologies become important for assessing the practical feasibility of theoretical achievements and foreseeing difficulties in their practical implementation. Due to the specificity of the QKD link, which requires optical and Internet connections between the network nodes, deploying a complete testbed containing multiple network hosts and links to validate and verify a certain network algorithm or protocol would be very costly. Network simulators in these circumstances save vast amounts of money and time in accomplishing such a task. The simulation environment offers the creation of complex network topologies, a high degree of control and repeatable experiments, which in turn allows researchers to conduct experiments and confirm their results. In this paper, we describe the design of the QKD network simulation module which was developed in the network simulator version 3 (NS-3). The module supports simulation of the QKD network in an overlay mode or in a single TCP/IP mode. Therefore, it can be used to simulate other network technologies regardless of QKD.

  5. Wave propagation in equivalent continuums representing truss lattice materials

    DOE PAGES

    Messner, Mark C.; Barham, Matthew I.; Kumar, Mukul; ...

    2015-07-29

    Stiffness scales linearly with density in stretch-dominated lattice meta-materials, offering the possibility of very light yet very stiff structures. Current additive manufacturing techniques can assemble structures from lattice materials, but the design of such structures will require accurate, efficient simulation methods. Equivalent continuum models have several advantages over discrete truss models of stretch-dominated lattices, including computational efficiency and ease of model construction. However, the development of an equivalent model suitable for representing the dynamic response of a periodic truss in the small deformation regime is complicated by microinertial effects. This study derives a dynamic equivalent continuum model for periodic truss structures suitable for representing long-wavelength wave propagation and verifies it against the full Bloch wave theory and detailed finite element simulations. The model must incorporate microinertial effects to accurately reproduce long-wavelength characteristics of the response such as anisotropic elastic soundspeeds. Finally, the formulation presented here also improves upon previous work by preserving equilibrium at truss joints for simple lattices and by improving numerical stability by eliminating vertices in the effective yield surface.

  6. Utilizing Direct Numerical Simulations of Transition and Turbulence in Design Optimization

    NASA Technical Reports Server (NTRS)

    Rai, Man M.

    2015-01-01

    Design optimization methods that use the Reynolds-averaged Navier-Stokes equations with the associated turbulence and transition models, or other model-based forms of the governing equations, may result in aerodynamic designs with actual performance levels that are noticeably different from the expected values because of the complexity of modeling turbulence/transition accurately in certain flows. Flow phenomena such as wake-blade interaction and trailing edge vortex shedding in turbines and compressors (examples of such flows) may require a computational approach that is free of transition/turbulence models, such as direct numerical simulations (DNS), for the underlying physics to be computed accurately. Here we explore the possibility of utilizing DNS data in designing a turbine blade section. The ultimate objective is to substantially reduce differences between predicted performance metrics and those obtained in reality. The redesign of a typical low-pressure turbine blade section with the goal of reducing total pressure loss in the row is provided as an example. The basic ideas presented here are of course just as applicable elsewhere in aerodynamic shape optimization as long as the computational costs are not excessive.

  7. Characterization of Sound Radiation by Unresolved Scales of Motion in Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert; Zhou, Ye

    1999-01-01

    Evaluation of the sound sources in a high Reynolds number turbulent flow requires time-accurate resolution of an extremely large number of scales of motion. Direct numerical simulations will therefore remain infeasible for the foreseeable future: although current large eddy simulation methods can resolve the largest scales of motion accurately, they must leave some scales of motion unresolved. A priori studies show that acoustic power can be underestimated significantly if the contribution of these unresolved scales is simply neglected. In this paper, the problem of evaluating the sound radiation properties of the unresolved, subgrid-scale motions is approached in the spirit of the simplest subgrid stress models: the unresolved velocity field is treated as isotropic turbulence with statistical descriptors evaluated from the resolved field. The theory of isotropic turbulence is applied to derive formulas for the total power and the power spectral density of the sound radiated by a filtered velocity field. These quantities are compared with the corresponding quantities for the unfiltered field for a range of filter widths and Reynolds numbers.
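
    For orientation, a classical result of this kind is Proudman's estimate for the acoustic power radiated per unit volume by isotropic turbulence (quoted here as standard background, not as a formula derived in the paper):

      \mathcal{P}_A \;\approx\; \alpha_\varepsilon \, \rho_0 \, \varepsilon \, M_t^5,
      \qquad
      M_t = \frac{\sqrt{2k}}{c_0}

    where k is the turbulent kinetic energy, \varepsilon its dissipation rate, c_0 the ambient sound speed, and \alpha_\varepsilon a constant of order 0.1. Filtering the velocity field removes part of k and \varepsilon, which is why neglecting the subgrid contribution systematically underestimates the radiated power.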

  8. Star tracking method based on multiexposure imaging for intensified star trackers.

    PubMed

    Yu, Wenbo; Jiang, Jie; Zhang, Guangjun

    2017-07-20

    The requirements for the dynamic performance of star trackers are rapidly increasing with the development of space exploration technologies. However, insufficient knowledge of the angular acceleration has largely decreased the performance of existing star tracking methods, and star trackers may even fail to track under highly dynamic conditions. This study proposes a star tracking method based on multiexposure imaging for intensified star trackers. An accurate estimation model of the complete motion parameters, including the angular velocity and angular acceleration, is established according to the working characteristics of multiexposure imaging. The estimated motion parameters are used to generate the predictive star image accurately. Therefore, the correct matching and tracking between stars in the real and predictive star images can be reliably accomplished under highly dynamic conditions. Simulations with specific dynamic conditions are conducted to verify the feasibility and effectiveness of the proposed method. Experiments with real starry night sky observation are also conducted for further verification. Simulations and experiments demonstrate that the proposed method is effective and shows excellent performance under highly dynamic conditions.
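
    A simplified sketch of the prediction step implied above (hypothetical interface; real trackers handle full three-axis motion, whereas this toy example treats roll about the boresight only): the rotation angle accumulated over the frame interval is propagated with both the angular velocity and the angular acceleration before shifting the previous star centroids.

      import numpy as np

      def predict_centroids(centroids, omega, alpha, dt):
          """Rotate image-plane star centroids about the boresight by the angle
          accumulated over dt, using theta = omega*dt + 0.5*alpha*dt**2."""
          theta = omega * dt + 0.5 * alpha * dt**2
          c, s = np.cos(theta), np.sin(theta)
          rot = np.array([[c, -s], [s, c]])
          return centroids @ rot.T

      stars = np.array([[120.0, 64.0], [-30.0, 200.0]])   # pixels, boresight at origin
      print(predict_centroids(stars, omega=0.05, alpha=0.2, dt=0.02))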

  9. Field measurement of moisture-buffering model inputs for residential buildings

    DOE PAGES

    Woods, Jason; Winkler, Jon

    2016-02-05

    Moisture adsorption and desorption in building materials impact indoor humidity. This effect should be included in building-energy simulations, particularly when humidity is being investigated or controlled. Several models can calculate this moisture-buffering effect, but accurate ones require model inputs that are not always known to the user of the building-energy simulation. This research developed an empirical method to extract whole-house model inputs for the effective moisture penetration depth (EMPD) model. The experimental approach was to subject the materials in the house to a square-wave relative-humidity profile, measure all of the moisture-transfer terms (e.g., infiltration, air-conditioner condensate), and calculate the only unmeasured term: the moisture sorption into the materials. We validated this method with laboratory measurements, which we used to measure the EMPD model inputs of two houses. After deriving these inputs, we measured the humidity of the same houses during tests with realistic latent and sensible loads and demonstrated the accuracy of this approach. Furthermore, these results show that the EMPD model, when given reasonable inputs, is an accurate moisture-buffering model.
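
    The residual calculation described above is a simple moisture balance; a sketch with hypothetical hourly values (none of these numbers are from the study):

      # Whole-house moisture balance: every measured transfer term is summed, and
      # whatever moisture change the house air cannot account for is attributed
      # to sorption into the materials. All values are illustrative, in kg/h.
      humidification = 0.80      # injected by the humidifier
      infiltration   = -0.15     # net loss to outdoor air exchange
      condensate     = -0.20     # removed by the air conditioner
      air_storage    = 0.05      # change in moisture held in the house air

      sorption = humidification + infiltration + condensate - air_storage
      print(f"moisture sorbed into materials: {sorption:.2f} kg/h")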

  10. A Smoluchowski model of crystallization dynamics of small colloidal clusters

    NASA Astrophysics Data System (ADS)

    Beltran-Villegas, Daniel J.; Sehgal, Ray M.; Maroudas, Dimitrios; Ford, David M.; Bevan, Michael A.

    2011-10-01

    We investigate the dynamics of colloidal crystallization in a 32-particle system at a fixed value of interparticle depletion attraction that produces coexisting fluid and solid phases. Free energy landscapes (FELs) and diffusivity landscapes (DLs) are obtained as coefficients of 1D Smoluchowski equations using as order parameters either the radius of gyration or the average crystallinity. FELs and DLs are estimated by fitting the Smoluchowski equations to Brownian dynamics (BD) simulations using either linear fits to locally initiated trajectories or global fits to unbiased trajectories using Bayesian inference. The resulting FELs are compared to Monte Carlo Umbrella Sampling results. The accuracy of the FELs and DLs for modeling colloidal crystallization dynamics is evaluated by comparing mean first-passage times from BD simulations with analytical predictions using the FEL and DL models. While the 1D models accurately capture dynamics near the free energy minimum fluid and crystal configurations, predictions near the transition region are not quantitatively accurate. A preliminary investigation of ensemble averaged 2D order parameter trajectories suggests that 2D models are required to capture crystallization dynamics in the transition region.
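
    The mean first-passage times used for the comparison above can be computed for any 1D Smoluchowski model by direct quadrature (a generic sketch, not the authors' code; the toy landscape below is hypothetical). For a reflecting boundary at x[0] and an absorbing boundary at x[-1], tau = int_a^b dy exp(+F(y)/kT)/D(y) * int_a^y dz exp(-F(z)/kT).

      import numpy as np

      def mfpt(x, F_kT, D):
          """Mean first-passage time from x[0] (reflecting) to x[-1] (absorbing)
          for a 1D Smoluchowski equation with free energy landscape F (in kT)
          and diffusivity landscape D, via trapezoidal quadrature."""
          dx = np.diff(x)
          w = np.exp(-F_kT)                                     # Boltzmann weight
          inner = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * dx)))
          g = np.exp(F_kT) / D * inner
          return float(np.sum(0.5 * (g[1:] + g[:-1]) * dx))

      x = np.linspace(0.0, 1.0, 400)
      F_kT = 5.0 * np.exp(-((x - 0.5) / 0.1) ** 2)   # toy 5 kT barrier at x = 0.5
      D = np.full_like(x, 1.0)                       # constant diffusivity landscape
      print(f"predicted MFPT: {mfpt(x, F_kT, D):.2f}")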

  11. Autonomic Closure for Turbulent Flows Using Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Doronina, Olga; Christopher, Jason; Hamlington, Peter; Dahm, Werner

    2017-11-01

    Autonomic closure is a new technique for achieving fully adaptive and physically accurate closure of coarse-grained turbulent flow governing equations, such as those solved in large eddy simulations (LES). Although autonomic closure has been shown in recent a priori tests to more accurately represent unclosed terms than do dynamic versions of traditional LES models, the computational cost of the approach makes it challenging to implement for simulations of practical turbulent flows at realistically high Reynolds numbers. The optimization step used in the approach introduces large matrices that must be inverted and is highly memory intensive. In order to reduce memory requirements, here we propose to use approximate Bayesian computation (ABC) in place of the optimization step, thereby yielding a computationally-efficient implementation of autonomic closure that trades memory-intensive for processor-intensive computations. The latter challenge can be overcome as co-processors such as general purpose graphical processing units become increasingly available on current generation petascale and exascale supercomputers. In this work, we outline the formulation of ABC-enabled autonomic closure and present initial results demonstrating the accuracy and computational cost of the approach.
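
    A bare-bones sketch of the ABC rejection idea named above (generic illustration; the prior, surrogate model, and tolerance are hypothetical stand-ins for the closure-coefficient estimation): draw candidate parameters from a prior, run a cheap simulation, and keep candidates whose summary statistic falls within a tolerance of the reference value.

      import numpy as np

      rng = np.random.default_rng(1)
      reference = 0.7                              # reference summary statistic

      def surrogate(theta, rng):
          # stand-in for evaluating the unclosed term with coefficient theta
          return theta + rng.normal(0.0, 0.05)

      accepted = []
      for _ in range(10_000):
          theta = rng.uniform(0.0, 2.0)            # draw from the prior
          if abs(surrogate(theta, rng) - reference) < 0.02:   # tolerance epsilon
              accepted.append(theta)

      print(f"{len(accepted)} accepted, posterior mean ~ {np.mean(accepted):.3f}")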

  12. DoctorEye: A clinically driven multifunctional platform, for accurate processing of tumors in medical images.

    PubMed

    Skounakis, Emmanouil; Farmaki, Christina; Sakkalis, Vangelis; Roniotis, Alexandros; Banitsas, Konstantinos; Graf, Norbert; Marias, Konstantinos

    2010-01-01

    This paper presents a novel, open-access interactive platform for 3D medical image analysis, simulation and visualization, focusing on oncology images. The platform was developed through constant interaction and feedback from expert clinicians, integrating a thorough analysis of their requirements with the ultimate goal of assisting in accurately delineating tumors. It allows clinicians not only to work with a large number of 3D tomographic datasets but also to efficiently annotate multiple regions of interest in the same session. Manual and semi-automatic segmentation techniques combined with integrated correction tools assist in the quick and refined delineation of tumors, while different users can add different components related to oncology, such as tumor growth and simulation algorithms for improving therapy planning. The platform has been tested by different users and over a large number of heterogeneous tomographic datasets to ensure stability, usability, extensibility and robustness, with promising results. The platform, a manual and tutorial videos are available at http://biomodeling.ics.forth.gr. It is free to use under the GNU General Public License.

  13. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.; Wu, Chris K.; Lin, Y. H.

    1991-01-01

    A system was developed for displaying computer graphics images of space objects, and its use was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense are involved in building accurate physical models of space objects. Also, precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite were created in which the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise and shadows for use in demonstrating and testing image processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.

  14. A comparative study between experimental results and numerical predictions of multi-wall structural response to hypervelocity impact

    NASA Technical Reports Server (NTRS)

    Schonberg, William P.; Peck, Jeffrey A.

    1992-01-01

    Over the last three decades, multiwall structures have been analyzed extensively, primarily through experiment, as a means of increasing the protection afforded to spacecraft structure. However, as structural configurations become more varied, the number of tests required to characterize their response increases dramatically. As an alternative, numerical modeling of high-speed impact phenomena is often being used to predict the response of a variety of structural systems under impact loading conditions. This paper presents the results of a preliminary numerical/experimental investigation of the hypervelocity impact response of multiwall structures. The results of experimental high-speed impact tests are compared against the predictions of the HULL hydrodynamic computer code. It is shown that the hypervelocity impact response characteristics of a specific system cannot be accurately predicted from a limited number of HULL code impact simulations. However, if a wide range of impact loading conditions is considered, then the ballistic limit curve of the system based on the entire series of numerical simulations can be used as a relatively accurate indication of actual system response.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, Jason; Winkler, Jon

    Moisture adsorption and desorption in building materials impact indoor humidity. This effect should be included in building-energy simulations, particularly when humidity is being investigated or controlled. Several models can calculate this moisture-buffering effect, but accurate ones require model inputs that are not always known to the user of the building-energy simulation. This research developed an empirical method to extract whole-house model inputs for the effective moisture penetration depth (EMPD) model. The experimental approach was to subject the materials in the house to a square-wave relative-humidity profile, measure all of the moisture-transfer terms (e.g., infiltration, air-conditioner condensate), and calculate the only unmeasured term: the moisture sorption into the materials. We validated this method with laboratory measurements, which we used to measure the EMPD model inputs of two houses. After deriving these inputs, we measured the humidity of the same houses during tests with realistic latent and sensible loads and demonstrated the accuracy of this approach. Furthermore, these results show that the EMPD model, when given reasonable inputs, is an accurate moisture-buffering model.

  16. Combination of ray-tracing and the method of moments for electromagnetic radiation analysis using reduced meshes

    NASA Astrophysics Data System (ADS)

    Delgado, Carlos; Cátedra, Manuel Felipe

    2018-05-01

    This work presents a technique that substantially relaxes the computational requirements of full-wave electromagnetic simulations based on the Method of Moments. A ray-tracing analysis of the geometry is performed in order to extract the critical points with significant contributions. These points are then used to generate a reduced mesh, considering the regions of the geometry that surround each critical point and taking into account the electrical path followed from the source. The electromagnetic analysis of the reduced mesh produces very accurate results while requiring a fraction of the resources that the conventional analysis would utilize.

  17. High precision active nutation control for a flexible momentum biased spacecraft

    NASA Technical Reports Server (NTRS)

    Laskin, R. A.; Kopf, E. H.

    1984-01-01

    The controller design for the Solar Dynamics Observatory (SDO) is presented. SDO is a momentum biased spacecraft with three flexible appendages. Its primary scientific instrument, the solar oscillations imager (SOI), is rigidly attached to the spacecraft bus and has arc-second pointing requirements. Meeting these requirements necessitates the use of an active nutation controller (ANC) which is here mechanized with a small reaction wheel oriented along a bus transverse axis. The ANC does its job by orchestrating the transfer of angular momentum out of the bus transverse axes and into the momentum wheel. A simulation study verifies that the controller provides quick, stable, and accurate response.

  18. A physical-based gas-surface interaction model for rarefied gas flow simulation

    NASA Astrophysics Data System (ADS)

    Liang, Tengfei; Li, Qi; Ye, Wenjing

    2018-01-01

    Empirical gas-surface interaction models, such as the Maxwell model and the Cercignani-Lampis model, are widely used as boundary conditions in rarefied gas flow simulations. The accuracy of these models in predicting the macroscopic behavior of rarefied gas flows is less satisfactory in some cases, especially highly non-equilibrium ones. Molecular dynamics simulations can accurately resolve the gas-surface interaction process at the atomic scale, and hence can predict macroscopic behavior accurately. They are, however, too computationally expensive to be applied to real problems. In this work, a statistical physical-based gas-surface interaction model, which complies with the basic relations of a boundary condition, is developed based on the framework of the washboard model. By virtue of its physical basis, this new model is capable of capturing some important relations/trends that the classic empirical models fail to model correctly. As such, the new model is much more accurate than the classic models and, in the meantime, more efficient than MD simulations. Therefore, it can serve as a more accurate and efficient boundary condition for rarefied gas flow simulations.
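
    For concreteness, a sketch of the classic Maxwell boundary condition named above (the empirical baseline the new model improves upon, not the authors' washboard-based model; the accommodation coefficient and gas values are illustrative): with probability alpha the molecule is re-emitted diffusely from a wall Maxwellian, otherwise it reflects specularly.

      import numpy as np

      KB = 1.380649e-23   # Boltzmann constant, J/K

      def maxwell_reflect(v, T_wall, mass, alpha, rng):
          """v = [vx, vy, vz] with the wall normal along +z (gas above the wall)."""
          if rng.random() < alpha:                         # diffuse re-emission
              sigma = np.sqrt(KB * T_wall / mass)
              vx, vy = rng.normal(0.0, sigma, 2)           # tangential Maxwellian
              # flux-weighted normal speed: f(vz) ~ vz * exp(-vz^2 / (2 sigma^2))
              vz = sigma * np.sqrt(-2.0 * np.log(1.0 - rng.random()))
              return np.array([vx, vy, vz])
          v_out = v.copy()                                 # specular reflection
          v_out[2] = -v_out[2]
          return v_out

      rng = np.random.default_rng(2)
      m_ar = 6.63e-26                                      # argon atom mass, kg
      print(maxwell_reflect(np.array([300.0, 0.0, -400.0]), 300.0, m_ar, 0.8, rng))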

  19. Patient-specific coronary artery blood flow simulation using myocardial volume partitioning

    NASA Astrophysics Data System (ADS)

    Kim, Kyung Hwan; Kang, Dongwoo; Kang, Nahyup; Kim, Ji-Yeon; Lee, Hyong-Euk; Kim, James D. K.

    2013-03-01

    Using computational simulation, we can analyze cardiovascular disease in a non-invasive and quantitative manner. More specifically, computational modeling and simulation technology has enabled us to analyze functional aspects such as blood flow, as well as anatomical aspects such as stenosis, from medical images without invasive measurements. Note that the simplest way to perform blood flow simulation is to apply patient-specific coronary anatomy with otherwise average-valued properties; such conditions, however, cannot fully reflect the accurate physiological properties of patients. To resolve this limitation, we present a new patient-specific coronary blood flow simulation method based on myocardial volume partitioning that considers the artery/myocardium structural correspondence. We focus on the fact that blood supply is closely related to the mass of each myocardial segment corresponding to an artery. We therefore applied this concept to set up simulation conditions in a way that considers as many patient-specific features as possible from the medical image: First, we segmented the coronary arteries and myocardium separately from cardiac CT; the myocardium was then partitioned into multiple regions based on the coronary vasculature. The myocardial mass and required blood mass for each artery were estimated by converting the myocardial volume fraction. Finally, the required blood mass was used as the boundary condition for each artery outlet, with a given average aortic blood flow rate and pressure. To show the effectiveness of the proposed method, fractional flow reserve (FFR) from simulation using CT images was compared with invasive FFR measurements of real patient data, and as a result, an accuracy of 77% was obtained.
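
    A minimal sketch of the outlet boundary-condition setup described above (all numbers hypothetical): the total coronary inflow is divided among artery outlets in proportion to the myocardial mass of the segment each artery perfuses, with segment masses obtained from the partitioned volumes.

      import numpy as np

      segment_volumes_ml = np.array([38.0, 25.0, 17.0])    # LAD, LCX, RCA territories
      MYOCARDIUM_DENSITY = 1.055                           # g/ml, commonly used value
      q_total = 4.0                                        # total coronary inflow, ml/s

      masses = segment_volumes_ml * MYOCARDIUM_DENSITY
      q_outlets = q_total * masses / masses.sum()          # per-outlet flow targets
      for name, q in zip(["LAD", "LCX", "RCA"], q_outlets):
          print(f"{name}: {q:.2f} ml/s")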

  20. Design and evaluation of a DAMQ multiprocessor network with self-compacting buffers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, J.; O`Krafka, B.W.O.; Vassiliadis, S.

    1994-12-31

    This paper describes a new approach to implementing Dynamically Allocated Multi-Queue (DAMQ) switching elements using a technique called "self-compacting buffers". This technique is efficient in that the amount of hardware required to manage the buffers is relatively small; it offers high performance since it is an implementation of a DAMQ. The first part of this paper describes the self-compacting buffer architecture in detail and compares it against a competing DAMQ switch design. The second part presents extensive simulation results comparing the performance of a self-compacting buffer switch against an ideal switch, including several examples of k-ary n-cubes and delta networks. In addition, simulation results show how the performance of an entire network can be quickly and accurately approximated by simulating just a single switching element.
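
    A toy software model of the self-compacting idea (conceptual only; the paper's design is hardware): all queues share one linear buffer, each queue occupies a contiguous region, and removing an entry shifts later entries down so that free space stays compacted at one end.

      class SelfCompactingBuffer:
          def __init__(self, capacity):
              self.capacity = capacity
              self.slots = []                     # shared buffer of (queue_id, payload)

          def enqueue(self, qid, payload):
              if len(self.slots) >= self.capacity:
                  return False                    # buffer full
              # insert after this queue's last entry to keep its region contiguous;
              # a new queue is allocated at the end of the used space
              last = max((i for i, (q, _) in enumerate(self.slots) if q == qid),
                         default=None)
              idx = len(self.slots) if last is None else last + 1
              self.slots.insert(idx, (qid, payload))   # later entries shift up
              return True

          def dequeue(self, qid):
              for i, (q, payload) in enumerate(self.slots):
                  if q == qid:
                      del self.slots[i]           # later entries shift down (compaction)
                      return payload
              return None

      buf = SelfCompactingBuffer(8)
      for port, pkt in [(0, "a"), (1, "b"), (0, "c")]:
          buf.enqueue(port, pkt)
      print(buf.dequeue(0), buf.dequeue(1))       # -> a b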

  1. Integration of multiple theories for the simulation of laser interference lithography processes

    NASA Astrophysics Data System (ADS)

    Lin, Te-Hsun; Yang, Yin-Kuang; Fu, Chien-Chung

    2017-11-01

    The periodic structure of laser interference lithography (LIL) fabrication is superior to other lithography technologies. In contrast to traditional lithography, LIL has the advantages of being a simple optical system with no mask requirements, low cost, high depth of focus, and large patterning area in a single exposure. Generally, a simulation pattern for the periodic structure is obtained through optical interference prior to its fabrication through LIL. However, the LIL process is complex and combines the fields of optics and polymer materials; thus, a single simulation theory cannot reflect the real situation. Therefore, this research integrates multiple theories, including those of optical interference, standing waves, and photoresist characteristics, to create a mathematical model for the LIL process. The mathematical model can accurately estimate the exposure time and thereby shorten the trial-and-error phase of the LIL process.
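
    The optical-interference ingredient of such a model is standard textbook material (quoted here as background; the authors' full model adds the standing-wave and photoresist terms): for two coherent plane waves of wavelength \lambda intersecting at half-angle \theta, the fringe intensity and period are

      I(x) = 2 I_0 \left[ 1 + \cos\!\left( \frac{4\pi \sin\theta}{\lambda}\, x \right) \right],
      \qquad
      \Lambda = \frac{\lambda}{2 \sin\theta}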

  2. Predicting drug hydrolysis based on moisture uptake in various packaging designs.

    PubMed

    Naversnik, Klemen; Bohanec, Simona

    2008-12-18

    An attempt was made to predict the stability of a moisture sensitive drug product based on the knowledge of the dependence of the degradation rate on tablet moisture. The moisture increase inside a HDPE bottle with the drug formulation was simulated with the sorption-desorption moisture transfer model, which, in turn, allowed an accurate prediction of the drug degradation kinetics. The stability prediction, obtained by computer simulation, was made in a considerably shorter time frame and required little resources compared to a conventional stability study. The prediction was finally upgraded to a stochastic Monte Carlo simulation, which allowed quantitative incorporation of uncertainty, stemming from various sources. The resulting distribution of the outcome of interest (amount of degradation product at expiry) is a comprehensive way of communicating the result along with its uncertainty, superior to single-value results or confidence intervals.
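
    A generic sketch of the Monte Carlo step described above (all parameter values hypothetical): the moisture-dependent degradation-rate parameters are sampled from their uncertainty distributions, degradation is accumulated over a simulated tablet-moisture trajectory, and the distribution of degradant at expiry is reported.

      import numpy as np

      rng = np.random.default_rng(3)
      months = np.arange(36)                                # 3-year shelf life
      moisture = 2.0 + 1.5 * (1 - np.exp(-months / 12.0))   # simulated uptake, % w/w

      n_trials = 10_000
      k0 = rng.normal(0.004, 0.0005, n_trials)       # rate at reference moisture, %/month
      slope = rng.normal(0.5, 0.05, n_trials)        # sensitivity of rate to moisture

      # degradation rate k(m) = k0 * (1 + slope*(m - 2.0)), accumulated month by month
      rates = k0[:, None] * (1.0 + slope[:, None] * (moisture[None, :] - 2.0))
      degradant = rates.sum(axis=1)                  # % degradation product at expiry

      print(f"mean {degradant.mean():.2f}%, "
            f"95th percentile {np.percentile(degradant, 95):.2f}%")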

  3. Integration of multiple theories for the simulation of laser interference lithography processes.

    PubMed

    Lin, Te-Hsun; Yang, Yin-Kuang; Fu, Chien-Chung

    2017-11-24

    The periodic structure of laser interference lithography (LIL) fabrication is superior to other lithography technologies. In contrast to traditional lithography, LIL has the advantages of being a simple optical system with no mask requirements, low cost, high depth of focus, and large patterning area in a single exposure. Generally, a simulation pattern for the periodic structure is obtained through optical interference prior to its fabrication through LIL. However, the LIL process is complex and combines the fields of optics and polymer materials; thus, a single simulation theory cannot reflect the real situation. Therefore, this research integrates multiple theories, including those of optical interference, standing waves, and photoresist characteristics, to create a mathematical model for the LIL process. The mathematical model can accurately estimate the exposure time and thereby shorten the trial-and-error phase of the LIL process.

  4. Simulation study of a new inverse-pinch high Coulomb transfer switch

    NASA Technical Reports Server (NTRS)

    Choi, S. H.

    1984-01-01

    A simulation study of a simplified model of a high coulomb transfer switch is performed. The switch operates in an inverse pinch geometry formed by an all-metal chamber, which greatly reduces hot spot formation on the electrode surfaces. Advantages of the switch over conventional switches are longer useful life, higher current capability and lower inductance, which improve the characteristics required for a high repetition rate switch. The simulation determines the design parameters by analytical computations and comparison with the experimentally measured risetime, current handling capability, electrode damage, and hold-off voltages. The parameters of the initial switch design can be determined for the anticipated switch performance. Results are in agreement with the experimental results. Although the model is simplified, switch characteristics such as risetime, current handling capability, electrode damage, and hold-off voltage are accurately determined.

  5. Constructing simple yet accurate potentials for describing the solvation of HCl/water clusters in bulk helium and nanodroplets.

    PubMed

    Boese, A Daniel; Forbert, Harald; Masia, Marco; Tekin, Adem; Marx, Dominik; Jansen, Georg

    2011-08-28

    The infrared spectroscopy of molecules, complexes, and molecular aggregates dissolved in superfluid helium clusters, commonly called HElium NanoDroplet Isolation (HENDI) spectroscopy, is an established, powerful experimental technique for extracting high resolution ro-vibrational spectra at ultra-low temperatures. Realistic quantum simulations of such systems, in particular in cases where the solute is undergoing a chemical reaction, require accurate solute-helium potentials which are also simple enough to be efficiently evaluated over the vast number of steps required in typical Monte Carlo or molecular dynamics sampling. This precludes using global potential energy surfaces as often parameterized for small complexes in the realm of high-resolution spectroscopic investigations that, in view of the computational effort imposed, are focused on the intermolecular interaction of rigid molecules with helium. Simple Lennard-Jones-like pair potentials, on the other hand, fall short in providing the required flexibility and accuracy in order to account for chemical reactions of the solute molecule. Here, a general scheme of constructing sufficiently accurate site-site potentials for use in typical quantum simulations is presented. This scheme employs atom-based grids, accounts for local and global minima, and is applied to the special case of a HCl(H(2)O)(4) cluster solvated by helium. As a first step, accurate interaction energies of a helium atom with a set of representative configurations sampled from a trajectory following the dissociation of the HCl(H(2)O)(4) cluster were computed using an efficient combination of density functional theory and symmetry-adapted perturbation theory, i.e. the DFT-SAPT approach. For each of the sampled cluster configurations, a helium atom was placed at several hundred positions distributed in space, leading to an overall number of about 400,000 such quantum chemical calculations. The resulting total interaction energies, decomposed into several energetic contributions, served to fit a site-site potential, where the sites are located at the atomic positions and, additionally, pseudo-sites are distributed along the lines joining pairs of atom sites within the molecular cluster. This approach ensures that this solute-helium potential is able to describe both undissociated molecular and dissociated (zwitter-) ionic configurations, as well as the interconnecting reaction pathway without re-adjusting partial charges or other parameters depending on the particular configuration. Test calculations of the larger HCl(H(2)O)(5) cluster interacting with helium demonstrate the transferability of the derived site-site potential. This specific potential can be readily used in quantum simulations of such HCl/water clusters in bulk helium or helium nanodroplets, whereas the underlying construction procedure can be generalized to other molecular solutes in other atomic solvents such as those encountered in rare gas matrix isolation spectroscopy.

  6. Velocity field calculation for non-orthogonal numerical grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flach, G. P.

    2015-03-01

    Computational grids containing cell faces that do not align with an orthogonal (e.g. Cartesian, cylindrical) coordinate system are routinely encountered in porous-medium numerical simulations. Such grids are referred to in this study as non-orthogonal grids because some cell faces are not orthogonal to a coordinate system plane (e.g. xy, yz or xz plane in Cartesian coordinates). Non-orthogonal grids are routinely encountered at the Savannah River Site in porous-medium flow simulations for Performance Assessments and groundwater flow modeling. Examples include grid lines that conform to the sloping roof of a waste tank or disposal unit in a 2D Performance Assessment simulation, and grid surfaces that conform to undulating stratigraphic surfaces in a 3D groundwater flow model. Particle tracking is routinely performed after a porous-medium numerical flow simulation to better understand the dynamics of the flow field and/or as an approximate indication of the trajectory and timing of advective solute transport. Particle tracks are computed by integrating the velocity field from cell to cell starting from designated seed (starting) positions. An accurate velocity field is required to attain accurate particle tracks. However, many numerical simulation codes report only the volumetric flowrate (e.g. PORFLOW) and/or flux (flowrate divided by area) crossing cell faces. For an orthogonal grid, the normal flux at a cell face is a component of the Darcy velocity vector in the coordinate system, and the pore velocity for particle tracking is attained by dividing by water content. For a non-orthogonal grid, the flux normal to a cell face that lies outside a coordinate plane is not a true component of velocity with respect to the coordinate system. Nonetheless, normal fluxes are often taken as Darcy velocity components, either naively or with accepted approximation. To enable accurate particle tracking or otherwise present an accurate depiction of the velocity field for a non-orthogonal grid, Darcy velocity components are rigorously derived in this study from the normal fluxes at cell faces, which are assumed to be provided by or readily computed from porous-medium simulation code output. The normal fluxes are presumed to satisfy mass balances for every computational cell, and if so, the derived velocity fields are consistent with these mass balances. Derivations are provided for general two-dimensional quadrilateral and three-dimensional hexahedral systems, and for the commonly encountered special cases of perfectly vertical side faces in 2D and 3D and a rectangular footprint in 3D.
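
    One generic way to carry out this kind of reconstruction (a least-squares sketch consistent with the idea above, not the report's exact derivation; the cell geometry below is hypothetical): each face supplies one equation n_i · v = u_i, and the velocity vector that best satisfies all faces is recovered.

      import numpy as np

      def cell_velocity(normals, fluxes):
          """normals: (n_faces, 3) unit outward face normals; fluxes: (n_faces,)
          normal fluxes (flowrate divided by face area). Returns the vector v
          that best satisfies n_i . v = u_i in the least-squares sense."""
          v, *_ = np.linalg.lstsq(np.asarray(normals, dtype=float),
                                  np.asarray(fluxes, dtype=float), rcond=None)
          return v

      # skewed hexahedral cell: the top face is tilted out of the xy-plane
      normals = [[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0],
                 [0.0, 0.196, 0.981], [0, 0, -1]]
      fluxes = [2.0, -2.0, 0.5, -0.5, 0.196, -0.1]
      print(cell_velocity(normals, fluxes))   # recovers v ~ (2.0, 0.5, 0.1)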

  7. The Aliso Canyon Natural Gas Leak : Large Eddy Simulations for Modeling Atmospheric Dynamics and Interpretation of Observations.

    NASA Astrophysics Data System (ADS)

    Prasad, K.; Thorpe, A. K.; Duren, R. M.; Thompson, D. R.; Whetstone, J. R.

    2016-12-01

    The National Institute of Standards and Technology (NIST) has supported the development and demonstration of a measurement capability to accurately locate greenhouse gas sources and measure their flux to the atmosphere over urban domains. However, uncertainties in transport models, which form the basis of all top-down approaches, can significantly affect our capability to attribute sources and predict their flux to the atmosphere. Reducing uncertainties between bottom-up and top-down models will require high resolution transport models as well as validation and verification of dispersion models over an urban domain. Tracer experiments involving the release of Perfluorocarbon Tracers (PFTs) at known flow rates offer the best approach for validating dispersion/transport models. However, tracer experiments are limited by cost, the ability to make continuous measurements, and environmental concerns. Natural tracer experiments, such as the leak from the Aliso Canyon underground storage facility, offer a unique opportunity to improve and validate high resolution transport models, test leak hypotheses, and estimate the amount of methane released. High spatial resolution (10 m) Large Eddy Simulations (LES) coupled with WRF atmospheric transport models were performed to simulate the dynamics of the Aliso Canyon methane plume and to quantify the source. High resolution forward simulation results were combined with aircraft- and tower-based in-situ measurements as well as data from NASA airborne imaging spectrometers. Comparison of simulation results with measurement data demonstrates the capability of the LES models to accurately model transport and dispersion of methane plumes over urban domains.

  8. Experimental and Numerical Examination of the Thermal Transmittance of High Performance Window Frames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gustavsen, Arild; Goudey, Howdy; Kohler, Christian

    2010-06-17

    While window frames typically represent 20-30 percent of the overall window area, their impact on the total window heat transfer rates may be much larger. This effect is even greater in low-conductance (highly insulating) windows which incorporate very low conductance glazings. Developing low-conductance window frames requires accurate simulation tools for product research and development. The Passivhaus Institute in Germany states that windows (glazing and frames, combined) should have U-values not exceeding 0.80 W/(m²·K). This has created a niche market for highly insulating frames, with frame U-values typically around 0.7-1.0 W/(m²·K). The U-values reported are often based on numerical simulations according to international simulation standards. It is prudent to check the accuracy of these calculation standards, especially for high performance products, before more manufacturers begin to use them to improve other product offerings. In this paper the thermal transmittance of five highly insulating window frames (three wooden frames, one aluminum frame and one PVC frame), found from numerical simulations and experiments, is compared. Hot box calorimeter results are compared with numerical simulations according to ISO 10077-2 and ISO 15099. In addition, CFD simulations have been carried out in order to use the most accurate tool available to investigate the convection and radiation effects inside the frame cavities. Our results show that available tools commonly used to evaluate window performance, based on ISO standards, give good overall agreement, but specific areas need improvement.

  9. Dosimetry in small-animal CT using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Lee, C.-L.; Park, S.-J.; Jeon, P.-H.; Jo, B.-D.; Kim, H.-J.

    2016-01-01

    Small-animal computed tomography (micro-CT) imaging devices are increasingly being used in biological research. While investigators are mainly interested in high-contrast, low-noise, and high-resolution anatomical images, relatively large radiation doses are required, and there is also growing concern over the radiological risk from preclinical experiments. This study was conducted to determine the radiation dose in a mouse model for dosimetric estimates using the GEANT4 application for tomographic emission simulations (GATE) and to extend its techniques to various small-animal CT applications. Radiation dose simulations were performed with the same parameters as those for the measured micro-CT data, using the MOBY phantom, a pencil ion chamber and an electrometer with a CT detector. For physical validation of the radiation dose, the absorbed doses of the brain and liver in the mouse were evaluated to compare simulated results with physically measured data using thermoluminescent dosimeters (TLDs). The mean difference between simulated and measured data was less than 2.9% for a 50 kVp X-ray source. The absorbed doses of 37 brain tissues and major organs of the mouse were evaluated as a function of kVp. The absorbed dose over all of the measurements in the brain (37 types of tissues) consistently increased and ranged from 42.4 to 104.0 mGy. Among the brain tissues, the absorbed dose of the hypothalamus (157.8-414.3 mGy) was the highest for beams at 50-80 kVp, and that of the corpus callosum (11.2-26.6 mGy) was the lowest. These results can be used as a dosimetric database for controlling mouse doses in preclinical targeted radiotherapy experiments. In addition, to accurately calculate the mouse-absorbed dose, the X-ray spectrum, detector alignment, and uncertainty in the elemental composition of the simulated materials must be accurately modeled.

  10. Coarse-grained versus atomistic simulations: realistic interaction free energies for real proteins.

    PubMed

    May, Ali; Pool, René; van Dijk, Erik; Bijlard, Jochem; Abeln, Sanne; Heringa, Jaap; Feenstra, K Anton

    2014-02-01

    To assess whether two proteins will interact under physiological conditions, information on the interaction free energy is needed. Statistical learning techniques and docking methods for predicting protein-protein interactions cannot quantitatively estimate binding free energies. Full atomistic molecular simulation methods do have this potential, but are completely unfeasible for large-scale applications in terms of computational cost required. Here we investigate whether applying coarse-grained (CG) molecular dynamics simulations is a viable alternative for complexes of known structure. We calculate the free energy barrier with respect to the bound state based on molecular dynamics simulations using both a full atomistic and a CG force field for the TCR-pMHC complex and the MP1-p14 scaffolding complex. We find that the free energy barriers from the CG simulations are of similar accuracy as those from the full atomistic ones, while achieving a speedup of >500-fold. We also observe that extensive sampling is extremely important to obtain accurate free energy barriers, which is only within reach for the CG models. Finally, we show that the CG model preserves biological relevance of the interactions: (i) we observe a strong correlation between evolutionary likelihood of mutations and the impact on the free energy barrier with respect to the bound state; and (ii) we confirm the dominant role of the interface core in these interactions. Therefore, our results suggest that CG molecular simulations can realistically be used for the accurate prediction of protein-protein interaction strength. The python analysis framework and data files are available for download at http://www.ibi.vu.nl/downloads/bioinformatics-2013-btt675.tgz.

  11. Simulation and experimental verification of prompt gamma-ray emissions during proton irradiation.

    PubMed

    Schumann, A; Petzoldt, J; Dendooven, P; Enghardt, W; Golnik, C; Hueso-González, F; Kormoll, T; Pausch, G; Roemer, K; Fiedler, F

    2015-05-21

    Irradiation with protons and light ions offers new possibilities for tumor therapy but has a strong need for novel imaging modalities for treatment verification. The development of new detector systems, which can provide an in vivo range assessment or dosimetry, requires an accurate knowledge of the secondary radiation field and reliable Monte Carlo simulations. This paper presents multiple measurements to characterize the prompt γ-ray emissions during proton irradiation and benchmarks the latest Geant4 code against the experimental findings. Within the scope of this work, the total photon yield for different target materials, the energy spectra, as well as the γ-ray depth profile were assessed. Experiments were performed at the superconducting AGOR cyclotron at KVI-CART, University of Groningen. Properties of the γ-ray emissions were experimentally determined. The prompt γ-ray emissions were measured utilizing a conventional HPGe detector system (Clover) and quantitatively compared to simulations. With the selected physics list QGSP_BIC_HP, Geant4 strongly overestimates the photon yield in most cases, sometimes by up to 50%. The shape of the spectrum and the qualitative occurrence of discrete γ lines are reproduced accurately. A sliced phantom was designed to determine the depth profile of the photons. The position of the distal fall-off in the simulations agrees with the measurements, albeit the peak height is also overestimated. Hence, Geant4 simulations of prompt γ-ray emissions from irradiation with protons are currently far less reliable than simulations of the electromagnetic processes. Deviations from experimental findings were observed and quantified. Although there has been constant improvement of Geant4 in the hadronic sector, there is still a gap to close.

  12. Comparing Molecular Dynamics Force Fields in the Essential Subspace

    PubMed Central

    Gomez-Puertas, Paulino; Boomsma, Wouter; Lindorff-Larsen, Kresten

    2015-01-01

    The continued development and utility of molecular dynamics simulations requires improvements in both the physical models used (force fields) and in our ability to sample the Boltzmann distribution of these models. Recent developments in both areas have made available multi-microsecond simulations of two proteins, ubiquitin and Protein G, using a number of different force fields. Although these force fields mostly share a common mathematical form, they differ in their parameters and in the philosophy by which these were derived, and previous analyses showed varying levels of agreement with experimental NMR data. To complement the comparison to experiments, we have performed a structural analysis of and comparison between these simulations, thereby providing insight into the relationship between force-field parameterization, the resulting ensemble of conformations and the agreement with experiments. In particular, our results show that, at a coarse level, many of the motional properties are preserved across several, though not all, force fields. At a finer level of detail, however, there are distinct differences in both the structure and dynamics of the two proteins, which can, together with comparison with experimental data, help to select force fields for simulations of proteins. A noteworthy observation is that force fields that have been reparameterized and improved to provide a more accurate energetic description of the balance between helical and coil structures are difficult to distinguish from their “unbalanced” counterparts in these simulations. This observation implies that simulations of stable, folded proteins, even those reaching 10 microseconds in length, may provide relatively little information that can be used to modify torsion parameters to achieve an accurate balance between different secondary structural elements. PMID:25811178

  13. Unfitted Two-Phase Flow Simulations in Pore-Geometries with Accurate

    NASA Astrophysics Data System (ADS)

    Heimann, Felix; Engwer, Christian; Ippisch, Olaf; Bastian, Peter

    2013-04-01

    The development of better macro-scale models for multi-phase flow in porous media is still impeded by the lack of suitable methods for the simulation of such flow regimes on the pore scale. The highly complicated geometry of natural porous media imposes requirements with regard to stability and computational efficiency which current numerical methods fail to meet. Therefore, current simulation environments are still unable to provide a thorough understanding of porous media in multi-phase regimes and still fail to reproduce well known effects like hysteresis or the more peculiar dynamics of the capillary fringe with satisfying accuracy. Although flow simulations in pore geometries were initially the domain of Lattice-Boltzmann and other particle methods, the development of Galerkin methods for such applications is important as they complement the range of feasible flow and parameter regimes. In the recent past, it has been shown that unfitted Galerkin methods can be applied efficiently to topologically demanding geometries. However, in the context of two-phase flows, the interface of the two immiscible fluids effectively separates the domain into two sub-domains. The exact representation of such setups with multiple independent and time-dependent geometries exceeds the functionality of common unfitted methods. We present a new approach to pore scale simulations with an unfitted discontinuous Galerkin (UDG) method. Utilizing a recursive sub-triangulation algorithm, we extend the UDG method to setups with multiple independent geometries. This approach allows an accurate representation of the moving contact line and the interface conditions, i.e. the pressure jump across the interface. Example simulations in two and three dimensions illustrate and verify the stability and accuracy of this approach.

  14. Massively parallel simulator of optical coherence tomography of inhomogeneous turbid media.

    PubMed

    Malektaji, Siavash; Lima, Ivan T; Escobar I, Mauricio R; Sherif, Sherif S

    2017-10-01

    An accurate and practical simulator for Optical Coherence Tomography (OCT) could be an important tool to study the underlying physical phenomena in OCT such as multiple light scattering. Recently, many researchers have investigated simulation of OCT of turbid media, e.g., tissue, using Monte Carlo methods. The main drawback of these earlier simulators is the long computational time required to produce accurate results. We developed a massively parallel simulator of OCT of inhomogeneous turbid media that obtains both Class I diffusive reflectivity, due to ballistic and quasi-ballistic scattered photons, and Class II diffusive reflectivity, due to multiply scattered photons. This Monte Carlo-based simulator is implemented on graphics processing units (GPUs), using the Compute Unified Device Architecture (CUDA) platform and programming model, to exploit the parallel nature of propagation of photons in tissue. It models an arbitrarily shaped sample medium as a tetrahedron-based mesh and uses an advanced importance sampling scheme. This new simulator speeds up simulations of OCT of inhomogeneous turbid media by about two orders of magnitude. To demonstrate this result, we have compared the computation times of our new parallel simulator and its serial counterpart using two samples of inhomogeneous turbid media. We have shown that our parallel implementation reduced the simulation time of OCT of the first sample medium from 407 min to 92 min by using a single GPU card, to 12 min by using 8 GPU cards and to 7 min by using 16 GPU cards. For the second sample medium, the OCT simulation time was reduced from 209 h to 35.6 h by using a single GPU card, to 4.65 h by using 8 GPU cards, and to only 2 h by using 16 GPU cards. Therefore our new parallel simulator is considerably more practical to use than its central processing unit (CPU)-based counterpart. Our new parallel OCT simulator could be a practical tool to study the different physical phenomena underlying OCT, or to design OCT systems with improved performance.

  15. High Temperature Life Testing of 80Ni-20Cr Wire in a Simulated Mars Atmosphere for the Sample Analysis at Mars (SAM) Instrument Suite Gas Processing System (GPS) Carbon Dioxide Scrubber

    NASA Technical Reports Server (NTRS)

    Hoffman, Christopher; Munoz, Bruno; Gundersen, Cynthia; Thomas, Walter, III; Stephenson, Timothy

    2008-01-01

    In support of the GPS for the SAM instrument suite built by NASA/GSFC, a life test facility was developed to test the suitability of 80Ni-20Cr alloy wire, 0.0142 cm diameter, for use as a heater element for the carbon dioxide scrubber. The element would be required to operate at 1000 C in order to attain the 800 C required for regeneration of the getter. The element also would need to operate in the Mars atmosphere, which consists mostly of CO2 at pressures between 4 and 12 torr. Data on the high temperature degradation mechanism of 80Ni-20Cr in low pressure CO2, coupled with the effects of thermal cycling, were unknown. In addition, the influence of work hardening of the wire during assembly and the potential for catastrophic grain growth also were unknown. Verification of the element reliability as defined by the mission goals required the construction of a test facility that would accurately simulate the duty cycles in a simulated Mars atmosphere. The experimental set-up, along with the test protocol and results will be described.

  16. Minimum requirements for predictive pore-network modeling of solute transport in micromodels

    NASA Astrophysics Data System (ADS)

    Mehmani, Yashar; Tchelepi, Hamdi A.

    2017-10-01

    Pore-scale models are now an integral part of analyzing fluid dynamics in porous materials (e.g., rocks, soils, fuel cells). Pore network models (PNM) are particularly attractive due to their computational efficiency. However, quantitative predictions with PNM have not always been successful. We focus on single-phase transport of a passive tracer under advection-dominated regimes and compare PNM with high-fidelity direct numerical simulations (DNS) for a range of micromodel heterogeneities. We identify the minimum requirements for predictive PNM of transport. They are: (a) flow-based network extraction, i.e., discretizing the pore space based on the underlying velocity field, (b) a Lagrangian (particle tracking) simulation framework, and (c) accurate transfer of particles from one pore throat to the next. We develop novel network extraction and particle tracking PNM methods that meet these requirements. Moreover, we show that certain established PNM practices in the literature can result in first-order errors in modeling advection-dominated transport. They include: all Eulerian PNMs, networks extracted based on geometric metrics only, and flux-based nodal transfer probabilities. Preliminary results for a 3D sphere pack are also presented. The simulation inputs for this work are made public to serve as a benchmark for the research community.

  17. Water Hammer Simulations of Monomethylhydrazine Propellant

    NASA Technical Reports Server (NTRS)

    Burkhardt, Zachary; Ramachandran, N.; Majumdar, A.

    2017-01-01

    Fluid transient analysis is important in the design of spacecraft propulsion systems to ensure structural stability in the event of sudden valve opening or closing. The Generalized Fluid System Simulation Program (GFSSP), a general-purpose flow network code developed at NASA/MSFC, is capable of simulating the pressure surge due to sudden opening or closing of a valve when thermodynamic properties of the real fluid are available for the entire range of the simulation. Specifically, GFSSP needs an accurate representation of the pressure-density relationship in order to predict the pressure surge during a fluid transient. Unfortunately, the available thermodynamic property programs such as REFPROP, GASP or GASPAK do not provide the thermodynamic properties of monomethylhydrazine (MMH). This work illustrates the process used for building a customized table of state-variable properties from the available properties and speed of sound, as required by GFSSP for simulation. Good agreement was found between the simulations and measured data. This method can be adopted for modeling flow networks and systems with other fluids whose properties are not known in detail, in order to obtain general technical insight.
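
    A sketch of the kind of table-building step described above (generic illustration; the speed of sound and reference state below are assumed placeholder values, and the GFSSP table format is not reproduced): the pressure-density relation follows from d(rho)/dp = 1/a², which has a closed form for constant a; a tabulated a(p) would be integrated numerically instead.

      import numpy as np

      A_SOUND = 1120.0          # m/s, assumed constant speed of sound (illustrative)
      RHO0, P0 = 875.0, 1.0e5   # kg/m^3 and Pa at an assumed reference state

      p = np.linspace(P0, 5.0e6, 200)          # table pressures, Pa
      rho = RHO0 + (p - P0) / A_SOUND**2       # from drho/dp = 1/a^2 with constant a
      table = np.column_stack((p, rho))        # pressure-density table for the solver
      print(table[:3])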

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Som, Sibendu; Wang, Zihan; Pei, Yuanjiang

    A state-of-the-art spray modeling methodology, recently presented by Senecal et al., is applied to Large Eddy Simulations (LES) of vaporizing gasoline sprays. Simulations of the non-combusting Spray G (gasoline fuel) from the Engine Combustion Network are performed. Adaptive mesh refinement (AMR) with cell sizes from 0.09 mm to 0.5 mm is utilized to further demonstrate grid convergence of the dynamic structure LES model for the gasoline sprays. Grid settings are recommended to optimize the accuracy/runtime tradeoff for LES-based spray simulations at the different injection pressure conditions typically encountered in gasoline direct injection (GDI) applications. The influence of LES sub-grid scale (SGS) models is explored by comparing the results from dynamic structure and Smagorinsky based models against simulations without any SGS model. Twenty different realizations are simulated by changing the random number seed used in the spray sub-models. It is shown that for global quantities such as spray penetration, comparing a single LES simulation to experimental data is reasonable. Through a detailed analysis using the relevance index (RI) criteria, recommendations are made regarding the minimum number of LES realizations required for accurate prediction of the gasoline sprays.

  19. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit.

    PubMed

    Badal, Andreu; Badano, Aldo

    2009-11-01

    Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed-up factor was obtained using a GPU compared to a single-core CPU. The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
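
    The step such a code parallelizes over millions of photon histories is the sampling of free-path lengths between interactions, s = −ln(ξ)/μ for ξ uniform in (0, 1) and attenuation coefficient μ. The toy CPU version below only makes that sampling step concrete; it contains none of the PENELOPE physics or CUDA machinery of the paper.

      import numpy as np

      def free_paths(mu, n, rng):
          # Sample exponential free paths s = -ln(xi)/mu (mu in 1/cm).
          return -np.log(1.0 - rng.random(n)) / mu

      rng = np.random.default_rng(1)
      s = free_paths(mu=0.2, n=1_000_000, rng=rng)
      print(s.mean())  # approaches the mean free path 1/mu = 5 cm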

  20. Projected Irrigation Requirement Under Climate Change in the Korean Peninsula by Applying a Global Hydrologic Model at the Local Scale.

    NASA Astrophysics Data System (ADS)

    Yang, B.; Lee, D. K.

    2016-12-01

    Understanding the spatial distribution of irrigation requirements is critically important for agricultural water management. However, many studies of future agricultural water management in Korea have assessed irrigation requirements at the watershed or administrative-district scale and have not accounted for the spatial distribution. Lumped hydrologic models have typically been used in Korea to simulate watershed-scale irrigation requirements, whereas a distributed hydrologic model can simulate the spatial distribution grid by grid. To overcome this shortcoming, we applied a grid-based global hydrologic model (H08) at the local scale to estimate the spatial distribution of future irrigation requirements for the Korean Peninsula. Korea is one of the world's most densely populated countries, with high production and demand of rice, a crop that requires more soil moisture than most others. However, most of the precipitation is concentrated in a particular season that does not coincide with the crop growth season. This precipitation pattern makes the management of agricultural water, which accounts for approximately 60% of total water usage, a critical issue in Korea. Furthermore, under climate change, precipitation is projected to become even more concentrated, necessitating changes to future water management plans. To apply the global hydrological model at the local scale, we selected the major crops appropriate to the social and local climate conditions of Korea to estimate cropping area and yield, and revised the cropping area map for greater accuracy. The estimated future irrigation requirement varies among projections but decreases slightly in most cases. The simulations reveal that evapotranspiration increases slightly while effective precipitation also increases, balancing the irrigation requirement. These findings provide a practical guideline for decision makers in further agricultural water management planning, including the development of water supply plans to resolve water scarcity.

  1. Structured Overlapping Grid Simulations of Contra-rotating Open Rotor Noise

    NASA Technical Reports Server (NTRS)

    Housman, Jeffrey A.; Kiris, Cetin C.

    2015-01-01

    Computational simulations using structured overlapping grids with the Launch Ascent and Vehicle Aerodynamics (LAVA) solver framework are presented for predicting tonal noise generated by a contra-rotating open rotor (CROR) propulsion system. A coupled Computational Fluid Dynamics (CFD) and Computational AeroAcoustics (CAA) numerical approach is applied. Three-dimensional time-accurate hybrid Reynolds Averaged Navier-Stokes/Large Eddy Simulation (RANS/LES) CFD simulations are performed in the inertial frame, including dynamic moving grids, using a higher-order accurate finite difference discretization on structured overlapping grids. A higher-order accurate free-stream preserving metric discretization with discrete enforcement of the Geometric Conservation Law (GCL) on moving curvilinear grids is used to create an accurate, efficient, and stable numerical scheme. The aeroacoustic analysis is based on a permeable surface Ffowcs Williams-Hawkings (FW-H) approach, evaluated in the frequency domain. A time-step sensitivity study was performed using only the forward row of blades to determine an adequate time-step. The numerical approach is validated against existing wind tunnel measurements.

  2. Optimization of metabolite basis sets prior to quantitation in magnetic resonance spectroscopy: an approach based on quantum mechanics

    NASA Astrophysics Data System (ADS)

    Lazariev, A.; Allouche, A.-R.; Aubert-Frécon, M.; Fauvelle, F.; Piotto, M.; Elbayed, K.; Namer, I.-J.; van Ormondt, D.; Graveron-Demilly, D.

    2011-11-01

    High-resolution magic angle spinning (HRMAS) nuclear magnetic resonance (NMR) is playing an increasingly important role for diagnosis. This technique enables setting up metabolite profiles of ex vivo pathological and healthy tissue. The need to monitor diseases and pharmaceutical follow-up requires an automatic quantitation of HRMAS 1H signals. However, for several metabolites, the values of chemical shifts of proton groups may slightly differ according to the micro-environment in the tissue or cells, in particular to its pH. This hampers the accurate estimation of the metabolite concentrations mainly when using quantitation algorithms based on a metabolite basis set: the metabolite fingerprints are not correct anymore. In this work, we propose an accurate method coupling quantum mechanical simulations and quantitation algorithms to handle basis-set changes. The proposed algorithm automatically corrects mismatches between the signals of the simulated basis set and the signal under analysis by maximizing the normalized cross-correlation between the mentioned signals. Optimized chemical shift values of the metabolites are obtained. This method, QM-QUEST, provides more robust fitting while limiting user involvement and respects the correct fingerprints of metabolites. Its efficiency is demonstrated by accurately quantitating 33 signals from tissue samples of human brains with oligodendroglioma, obtained at 11.7 tesla. The corresponding chemical shift changes of several metabolites within the series are also analyzed.
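
    The matching criterion named here, maximizing the normalized cross-correlation between a simulated basis signal and the signal under analysis, can be illustrated with a one-dimensional discrete sketch. The function below finds a single global lag and is an assumed simplification; QM-QUEST itself optimizes the chemical shifts of individual proton groups.

      import numpy as np

      def best_shift(simulated, measured):
          # Lag (in points) that maximizes the normalized cross-correlation.
          a = (simulated - simulated.mean()) / simulated.std()
          b = (measured - measured.mean()) / measured.std()
          xc = np.correlate(b, a, mode="full")
          return int(np.argmax(xc)) - (len(a) - 1)

      # A Gaussian 'resonance' shifted by 7 points is recovered.
      x = np.arange(256)
      peak = np.exp(-0.5 * ((x - 100) / 4.0) ** 2)
      print(best_shift(peak, np.roll(peak, 7)))  # -> 7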

  3. Development of Dimensionless Surge Response Functions for Hazard Assessment at Panama City, Florida

    NASA Astrophysics Data System (ADS)

    Taylor, N. R.; Irish, J. L.; Hagen, S. C.; Kaihatu, J. M.; McLaughlin, P. W.

    2013-12-01

    Reliable and robust methods of extreme value analysis in hurricane surge forecasting are of high importance in the coastal engineering profession. The Joint Probability Method (JPM) has become the preferred statistical method over the Historical Surge Population (HSP) method, due to its ability to give more accurate surge predictions, as demonstrated by Irish et al. in 2011 (J. Geophys. Res.). One disadvantage of this method is its high computational cost; a single location can require hundreds of simulated storms, each needing one thousand computational hours or more to complete. One way of overcoming this issue is to use an interpolating function, called a surge response function, to reduce the required number of simulations to a manageable number. These sampling methods, which use physical scaling laws, have been shown to significantly reduce the number of simulated storms needed for application of the JPM. In 2008, Irish et al. (J. Phys. Oceanogr.) demonstrated that hurricane surge scales primarily as a function of storm size and intensity. Additionally, Song et al. in 2012 (Nat. Hazards) showed that surge response functions incorporating bathymetric variations yield highly accurate surge estimates along the Texas coastline. This study applies the Song et al. model to 73 stations along the open coast, and 273 stations within the bays, in Panama City, Florida. The model performs well for the open coast and bay areas; surge levels at most stations along the open coast were predicted with RMS errors below 0.40 meters, and R2 values at or above 0.80. The R2 values for surge response functions within bays were consistently at or above 0.75. Surge levels at most stations within the North Bay and East Bay were predicted with RMS errors below 0.40 meters; within the West Bay, surge was predicted with RMS errors below 0.52 meters. Accurately interpolating surge values along the Panama City coast and bays enables efficient use of the JPM in order to develop reliable probabilistic surge estimates for use in planning and design for hurricane mitigation.

  4. Optimized Finite Difference Method for the Full-Potential XANES Simulations: Application to Molecular Adsorption Geometries in MOFs and Metal-Ligand Intersystem Crossing Transients.

    PubMed

    Guda, Sergey A; Guda, Alexander A; Soldatov, Mikhail A; Lomachenko, Kirill A; Bugaev, Aram L; Lamberti, Carlo; Gawelda, Wojciech; Bressler, Christian; Smolentsev, Grigory; Soldatov, Alexander V; Joly, Yves

    2015-09-08

    Accurate modeling of the X-ray absorption near-edge spectra (XANES) is required to unravel the local structure of metal sites in complex systems and their structural changes upon chemical or light stimuli. Two relevant examples are reported here concerning the following: (i) the effect of molecular adsorption on 3d metals hosted inside metal-organic frameworks and (ii) light induced dynamics of spin crossover in metal-organic complexes. In both cases, the number of structural models for simulation can reach a hundred, depending on the number of structural parameters. Thus, the choice of an accurate but computationally demanding finite difference method for the ab initio X-ray absorption simulations severely restricts the range of molecular systems that can be analyzed by personal computers. Employing the FDMNES code [Phys. Rev. B, 2001, 63, 125120] we show that this problem can be handled if a proper diagonalization scheme is applied. Due to the use of dedicated solvers for sparse matrices, the calculation time was reduced by more than 1 order of magnitude compared to the standard Gaussian method, while the amount of required RAM was halved. Ni K-edge XANES simulations performed by the accelerated version of the code allowed analysis of the coordination geometry of CO and NO on the Ni active sites in the CPO-27-Ni MOF. The Ni-CO configuration was found to be linear, while Ni-NO was bent by almost 90°. By modeling the Fe K-edge XANES of photoexcited aqueous [Fe(bpy)3](2+) at a 100 ps delay, we identified the Fe-N distance elongation and bipyridine rotation upon transition from the initial low-spin to the final high-spin state. Subsequently, the X-ray absorption spectrum for the intermediate triplet state with an expected 100 fs lifetime was theoretically predicted.

  5. The Spatial Resolution in the Computer Modelling of Atmospheric Flow over a Double-Hill Forested Region

    NASA Astrophysics Data System (ADS)

    Palma, J. L.; Rodrigues, C. V.; Lopes, A. S.; Carneiro, A. M. C.; Coelho, R. P. C.; Gomes, V. C.

    2017-12-01

    With the ever-increasing accuracy required from numerical weather forecasts, there is pressure to increase the resolution and fidelity employed in computational micro-scale flow models. However, numerical studies of complex terrain flows are fundamentally bound by the digital representation of the terrain and land cover. This work assesses the impact of the surface description on micro-scale simulation results at a highly complex site in Perdigão, Portugal, characterized by a twin parallel ridge topography, densely forested areas and an operating wind turbine. Although Coriolis and stratification effects cannot be ignored, the study is done under a neutrally stratified atmosphere and static inflow conditions. The understanding gained here will later carry over to WRF-coupled simulations, where those conditions do not apply and the flow physics is more accurately modelled. With access to very fine digital mappings (<1 m horizontal resolution) of both topography and land cover (roughness and canopy cover, both obtained through aerial LIDAR scanning of the surface), the impact of each element of the surface description on simulation results can be individualized, in order to estimate the resolution required to satisfactorily resolve them. Starting from the bare topographic description, in its coarsest form, these include: a) the surface roughness mapping, b) the operating wind turbine, c) the canopy cover, as either body forces or added surface roughness (akin to meso-scale modelling), d) high resolution topography and surface cover mapping. Each of these will individually have an impact near the surface, including within the rotor-swept area of modern wind turbines. Combined, they will considerably change the flow up to boundary layer heights. Sensitivity to these elements cannot be generalized and should be assessed case by case. This type of in-depth study, unfeasible using WRF-coupled simulations, should provide considerable insight when spatially allocating mesh resolution for accurate resolution of complex flows.

  6. Rapid Monte Carlo simulation of detector DQE(f)

    PubMed Central

    Star-Lack, Josh; Sun, Mingshan; Meyer, Andre; Morf, Daniel; Constantin, Dragos; Fahrig, Rebecca; Abel, Eric

    2014-01-01

    Purpose: Performance optimization of indirect x-ray detectors requires proper characterization of both ionizing (gamma) and optical photon transport in a heterogeneous medium. As the tool of choice for modeling detector physics, Monte Carlo methods have failed to gain traction as a design utility, due mostly to excessive simulation times and a lack of convenient simulation packages. The most important figure-of-merit in assessing detector performance is the detective quantum efficiency (DQE), for which most of the computational burden has traditionally been associated with the determination of the noise power spectrum (NPS) from an ensemble of flood images, each conventionally having 10^7 − 10^9 detected gamma photons. In this work, the authors show that the idealized conditions inherent in a numerical simulation allow for a dramatic reduction in the number of gamma and optical photons required to accurately predict the NPS. Methods: The authors derived an expression for the mean squared error (MSE) of a simulated NPS when computed using the International Electrotechnical Commission-recommended technique based on taking the 2D Fourier transform of flood images. It is shown that the MSE is inversely proportional to the number of flood images, and is independent of the input fluence provided that the input fluence is above a minimal value that avoids biasing the estimate. The authors then propose to further lower the input fluence so that each event creates a point-spread function rather than a flood field. The authors use this finding as the foundation for a novel algorithm in which the characteristic MTF(f), NPS(f), and DQE(f) curves are simultaneously generated from the results of a single run. The authors also investigate lowering the number of optical photons used in a scintillator simulation to further increase efficiency. Simulation results are compared with measurements performed on a Varian AS1000 portal imager, and with a previously published simulation performed using clinical fluence levels. Results: On the order of only 10–100 gamma photons per flood image were required to be detected to avoid biasing the NPS estimate. This allowed for a factor of 10^7 reduction in fluence compared to clinical levels with no loss of accuracy. An optimal signal-to-noise ratio (SNR) was achieved by increasing the number of flood images from a typical value of 100 up to 500, thereby illustrating the importance of flood image quantity over the number of gammas per flood. For the point-spread ensemble technique, an additional 2× reduction in the number of incident gammas was realized. As a result, when modeling gamma transport in a thick pixelated array, the simulation time was reduced from 2.5 × 10^6 CPU min if using clinical fluence levels to 3.1 CPU min if using optimized fluence levels while also producing a higher SNR. The AS1000 DQE(f) simulation entailing both optical and radiative transport matched experimental results to within 11%, and required 14.5 min to complete on a single CPU. Conclusions: The authors demonstrate the feasibility of accurately modeling x-ray detector DQE(f) with completion times on the order of several minutes using a single CPU. Convenience of simulation can be achieved using GEANT4 which offers both gamma and optical photon transport capabilities. PMID:24593734
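
    The ensemble NPS estimator discussed above can be written compactly: average |FFT|² of the mean-subtracted flood images and scale by the pixel area over the number of pixels. The sketch below assumes that simple normalization and omits the detrending, ROI selection and radial binning used in practice.

      import numpy as np

      def nps_2d(floods, pixel_pitch):
          # floods: (n_images, ny, nx); returns the 2D NPS estimate.
          floods = np.asarray(floods, dtype=float)
          n, ny, nx = floods.shape
          fluct = floods - floods.mean(axis=0)  # remove the ensemble mean
          ps = np.abs(np.fft.fft2(fluct, axes=(1, 2))) ** 2
          return ps.mean(axis=0) * pixel_pitch**2 / (nx * ny)

      # White-noise 'floods' give a flat NPS of variance * pitch^2.
      rng = np.random.default_rng(2)
      nps = nps_2d(rng.normal(100.0, 3.0, (500, 64, 64)), pixel_pitch=0.039)
      print(nps.mean())  # ~ 9 * 0.039**2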

  7. Rapid Monte Carlo simulation of detector DQE(f)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Star-Lack, Josh, E-mail: josh.starlack@varian.com; Sun, Mingshan; Abel, Eric

    2014-03-15

    Purpose: Performance optimization of indirect x-ray detectors requires proper characterization of both ionizing (gamma) and optical photon transport in a heterogeneous medium. As the tool of choice for modeling detector physics, Monte Carlo methods have failed to gain traction as a design utility, due mostly to excessive simulation times and a lack of convenient simulation packages. The most important figure-of-merit in assessing detector performance is the detective quantum efficiency (DQE), for which most of the computational burden has traditionally been associated with the determination of the noise power spectrum (NPS) from an ensemble of flood images, each conventionally having 10^7 − 10^9 detected gamma photons. In this work, the authors show that the idealized conditions inherent in a numerical simulation allow for a dramatic reduction in the number of gamma and optical photons required to accurately predict the NPS. Methods: The authors derived an expression for the mean squared error (MSE) of a simulated NPS when computed using the International Electrotechnical Commission-recommended technique based on taking the 2D Fourier transform of flood images. It is shown that the MSE is inversely proportional to the number of flood images, and is independent of the input fluence provided that the input fluence is above a minimal value that avoids biasing the estimate. The authors then propose to further lower the input fluence so that each event creates a point-spread function rather than a flood field. The authors use this finding as the foundation for a novel algorithm in which the characteristic MTF(f), NPS(f), and DQE(f) curves are simultaneously generated from the results of a single run. The authors also investigate lowering the number of optical photons used in a scintillator simulation to further increase efficiency. Simulation results are compared with measurements performed on a Varian AS1000 portal imager, and with a previously published simulation performed using clinical fluence levels. Results: On the order of only 10–100 gamma photons per flood image were required to be detected to avoid biasing the NPS estimate. This allowed for a factor of 10^7 reduction in fluence compared to clinical levels with no loss of accuracy. An optimal signal-to-noise ratio (SNR) was achieved by increasing the number of flood images from a typical value of 100 up to 500, thereby illustrating the importance of flood image quantity over the number of gammas per flood. For the point-spread ensemble technique, an additional 2× reduction in the number of incident gammas was realized. As a result, when modeling gamma transport in a thick pixelated array, the simulation time was reduced from 2.5 × 10^6 CPU min if using clinical fluence levels to 3.1 CPU min if using optimized fluence levels while also producing a higher SNR. The AS1000 DQE(f) simulation entailing both optical and radiative transport matched experimental results to within 11%, and required 14.5 min to complete on a single CPU. Conclusions: The authors demonstrate the feasibility of accurately modeling x-ray detector DQE(f) with completion times on the order of several minutes using a single CPU. Convenience of simulation can be achieved using GEANT4 which offers both gamma and optical photon transport capabilities.

  8. Numerical simulation of the modulation transfer function (MTF) in infrared focal plane arrays: simulation methodology and MTF optimization

    NASA Astrophysics Data System (ADS)

    Schuster, J.

    2018-02-01

    Military requirements demand both single and dual-color infrared (IR) imaging systems with both high resolution and sharp contrast. To quantify the performance of these imaging systems, a key measure of performance, the modulation transfer function (MTF), describes how well an optical system reproduces an object's contrast in the image plane at different spatial frequencies. At the center of an IR imaging system is the focal plane array (FPA). IR FPAs are hybrid structures consisting of a semiconductor detector pixel array, typically fabricated from HgCdTe, InGaAs or III-V superlattice materials, hybridized with heat/pressure to a silicon read-out integrated circuit (ROIC), with indium bumps on each pixel providing the mechanical and electrical connection. Due to the growing complexity of the pixel arrays in these FPAs, sophisticated modeling techniques are required to predict, understand, and benchmark the pixel array MTF that contributes to the total imaging system MTF. To model the pixel array MTF, computationally exhaustive 2D and 3D numerical simulation approaches are required to correctly account for complex architectures and effects such as lateral diffusion from the pixel corners. It is paramount to accurately model the lateral diffusion (pixel crosstalk), as it can become the dominant mechanism limiting the detector MTF if not properly mitigated. Once the detector MTF has been simulated, it is directly decomposed into its constituent contributions to reveal exactly what is limiting the total detector MTF, providing a path for optimization. An overview of the MTF will be given and the simulation approach will be discussed in detail, along with how different simulation parameters affect the MTF calculation. Finally, MTF optimization strategies (crosstalk mitigation) will be discussed.
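
    For reference, the MTF is the normalized magnitude of the Fourier transform of the detector's spread function. The one-dimensional Python sketch below computes it from an assumed Gaussian line spread function; the 2D/3D device-level simulations described in the paper are far beyond this, but the definition is the same.

      import numpy as np

      def mtf_from_lsf(lsf, pitch):
          # MTF(f) = |FT{LSF}|, normalized to unity at zero frequency.
          mtf = np.abs(np.fft.rfft(np.asarray(lsf, dtype=float)))
          freqs = np.fft.rfftfreq(len(lsf), d=pitch)  # cycles per mm
          return freqs, mtf / mtf[0]

      # Assumed Gaussian LSF, sigma = 1 pixel on a 10 um (0.010 mm) pitch.
      x = np.arange(-32, 32)
      f, mtf = mtf_from_lsf(np.exp(-0.5 * x**2), pitch=0.010)
      print(mtf[:4])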

  9. Modeling variably saturated subsurface solute transport with MODFLOW-UZF and MT3DMS

    USGS Publications Warehouse

    Morway, Eric D.; Niswonger, Richard G.; Langevin, Christian D.; Bailey, Ryan T.; Healy, Richard W.

    2013-01-01

    The MT3DMS groundwater solute transport model was modified to simulate solute transport in the unsaturated zone by incorporating the unsaturated-zone flow (UZF1) package developed for MODFLOW. The modified MT3DMS code uses a volume-averaged approach in which Lagrangian-based UZF1 fluid fluxes and storage changes are mapped onto a fixed grid. Referred to as UZF-MT3DMS, the linked model was tested against published benchmarks solved analytically as well as against other published codes, most frequently the U.S. Geological Survey's Variably-Saturated Two-Dimensional Flow and Transport Model. Results from a suite of test cases demonstrate that the modified code accurately simulates solute advection, dispersion, and reaction in the unsaturated zone. Two- and three-dimensional simulations also were investigated to ensure unsaturated-saturated zone interaction was simulated correctly. Because the UZF1 solution is analytical, large-scale flow and transport investigations can be performed free from the computational and data burdens required by numerical solutions to Richards' equation. Results demonstrate that significant simulation runtime savings can be achieved with UZF-MT3DMS, an important development when hundreds or thousands of model runs are required during parameter estimation and uncertainty analysis. Three-dimensional variably saturated flow and transport simulations revealed UZF-MT3DMS to have runtimes that are less than one tenth of the time required by models that rely on Richards' equation. Given its accuracy and efficiency, and the wide-spread use of both MODFLOW and MT3DMS, the added capability of unsaturated-zone transport in this familiar modeling framework stands to benefit a broad user-ship.

  10. Modeling variably saturated subsurface solute transport with MODFLOW-UZF and MT3DMS.

    PubMed

    Morway, Eric D; Niswonger, Richard G; Langevin, Christian D; Bailey, Ryan T; Healy, Richard W

    2013-03-01

    The MT3DMS groundwater solute transport model was modified to simulate solute transport in the unsaturated zone by incorporating the unsaturated-zone flow (UZF1) package developed for MODFLOW. The modified MT3DMS code uses a volume-averaged approach in which Lagrangian-based UZF1 fluid fluxes and storage changes are mapped onto a fixed grid. Referred to as UZF-MT3DMS, the linked model was tested against published benchmarks solved analytically as well as against other published codes, most frequently the U.S. Geological Survey's Variably-Saturated Two-Dimensional Flow and Transport Model. Results from a suite of test cases demonstrate that the modified code accurately simulates solute advection, dispersion, and reaction in the unsaturated zone. Two- and three-dimensional simulations also were investigated to ensure unsaturated-saturated zone interaction was simulated correctly. Because the UZF1 solution is analytical, large-scale flow and transport investigations can be performed free from the computational and data burdens required by numerical solutions to Richards' equation. Results demonstrate that significant simulation runtime savings can be achieved with UZF-MT3DMS, an important development when hundreds or thousands of model runs are required during parameter estimation and uncertainty analysis. Three-dimensional variably saturated flow and transport simulations revealed UZF-MT3DMS to have runtimes that are less than one tenth of the time required by models that rely on Richards' equation. Given its accuracy and efficiency, and the wide-spread use of both MODFLOW and MT3DMS, the added capability of unsaturated-zone transport in this familiar modeling framework stands to benefit a broad user-ship.

  11. Goal-directed transthoracic echocardiography during advanced cardiac life support: A pilot study using simulation to assess ability

    PubMed Central

    Greenstein, Yonatan Y.; Martin, Thomas J.; Rolnitzky, Linda; Felner, Kevin; Kaufman, Brian

    2015-01-01

    Introduction Goal-directed echocardiography (GDE) is used to answer specific clinical questions which provide invaluable information to physicians managing a hemodynamically unstable patient. We studied perception and ability of housestaff previously trained in GDE to accurately diagnose common causes of cardiac arrest during simulated advanced cardiac life support (ACLS); we compared their results to those of expert echocardiographers. Methods Eleven pulmonary and critical care medicine fellows, seven emergency medicine residents, and five cardiologists board-certified in echocardiography were enrolled. Baseline ability to acquire four transthoracic echocardiography views was assessed and participants were exposed to six simulated cardiac arrests and were asked to perform a GDE during ACLS. Housestaff performance was compared to the performance of five expert echocardiographers. Results Average baseline and scenario views by housestaff were of good or excellent quality 89% and 83% of the time, respectively. Expert average baseline and scenario views were always of good or excellent quality. Housestaff and experts made the correct diagnosis in 68% and 77% of cases, respectively. On average, participants required 1.5 pulse checks to make the correct diagnosis. 94% of housestaff perceived this study as an accurate assessment of ability. Conclusions In an ACLS compliant manner, housestaff are capable of diagnosing management altering pathologies the majority of the time and they reach similar diagnostic conclusions in the same amount of time as expert echocardiographers in a simulated cardiac arrest scenario. PMID:25932707

  12. Goal-Directed Transthoracic Echocardiography During Advanced Cardiac Life Support: A Pilot Study Using Simulation to Assess Ability.

    PubMed

    Greenstein, Yonatan Y; Martin, Thomas J; Rolnitzky, Linda; Felner, Kevin; Kaufman, Brian

    2015-08-01

    Goal-directed echocardiography (GDE) is used to answer specific clinical questions that provide invaluable information to physicians managing a hemodynamically unstable patient. We studied perception and ability of house staff previously trained in GDE to accurately diagnose common causes of cardiac arrest during simulated advanced cardiac life support (ACLS); we compared their results with those of expert echocardiographers. Eleven pulmonary and critical care medicine fellows, 7 emergency medicine residents, and 5 cardiologists board certified in echocardiography were enrolled. Baseline ability to acquire 4 transthoracic echocardiography views was assessed, and participants were exposed to 6 simulated cardiac arrests and were asked to perform a GDE during ACLS. House staff performance was compared with the performance of 5 expert echocardiographers. Average baseline and scenario views by house staff were of good or excellent quality 89% and 83% of the time, respectively. Expert average baseline and scenario views were always of good or excellent quality. House staff and experts made the correct diagnosis in 68% and 77% of cases, respectively. On average, participants required 1.5 pulse checks to make the correct diagnosis. Of house staff, 94% perceived this study as an accurate assessment of ability. In an ACLS-compliant manner, house staff are capable of diagnosing management-altering pathologies the majority of the time, and they reach similar diagnostic conclusions in the same amount of time as expert echocardiographers in a simulated cardiac arrest scenario.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunayama, Tomomi; Padmanabhan, Nikhil; Heitmann, Katrin

    Precision measurements of the large scale structure of the Universe require large numbers of high fidelity mock catalogs to accurately assess, and account for, the presence of systematic effects. We introduce and test a scheme for generating mock catalogs rapidly using suitably derated N-body simulations. Our aim is to reproduce the large scale structure and the gross properties of dark matter halos with high accuracy, while sacrificing the details of the halo's internal structure. By adjusting global and local time-steps in an N-body code, we demonstrate that we recover halo masses to better than 0.5% and the power spectrum to better than 1%, both in real and redshift space, for k = 1 h Mpc^−1, while requiring a factor of 4 less CPU time. We also calibrate the redshift spacing of outputs required to generate simulated light cones. We find that outputs separated by Δz = 0.05 allow us to interpolate particle positions and velocities to reproduce the real and redshift space power spectra to better than 1% (out to k = 1 h Mpc^−1). We apply these ideas to generate a suite of simulations spanning a range of cosmologies, motivated by the Baryon Oscillation Spectroscopic Survey (BOSS) but broadly applicable to future large scale structure surveys including eBOSS and DESI. As an initial demonstration of the utility of such simulations, we calibrate the shift in the baryonic acoustic oscillation peak position as a function of galaxy bias with higher precision than has been possible so far. This paper also serves to document the simulations, which we make publicly available.

  14. Two dimensional model for coherent synchrotron radiation

    NASA Astrophysics Data System (ADS)

    Huang, Chengkun; Kwan, Thomas J. T.; Carlsten, Bruce E.

    2013-01-01

    Understanding coherent synchrotron radiation (CSR) effects in a bunch compressor requires an accurate model accounting for the realistic beam shape and parameters. We extend the well-known 1D CSR analytic model into two dimensions and develop a simple numerical model based on the Liénard-Wiechert formula for the CSR field of a coasting beam. This CSR numerical model includes the 2D spatial dependence of the field in the bending plane and is accurate for arbitrary beam energy. It also removes the singularity in the space charge field calculation present in a 1D model. Good agreement is obtained with 1D CSR analytic result for free electron laser (FEL) related beam parameters but it can also give a more accurate result for low-energy/large spot size beams and off-axis/transient fields. This 2D CSR model can be used for understanding the limitation of various 1D models and for benchmarking fully electromagnetic multidimensional particle-in-cell simulations for self-consistent CSR modeling.
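
    For reference, the model's starting point is the Liénard-Wiechert field of a point charge q evaluated at the retarded time (standard notation: R is the distance from the retarded source position, n the unit vector toward the observer, β = v/c, γ the Lorentz factor):

      \mathbf{E}(\mathbf{x},t) = \frac{q}{4\pi\varepsilon_0}\left[
        \frac{\mathbf{n}-\boldsymbol{\beta}}{\gamma^{2}(1-\mathbf{n}\cdot\boldsymbol{\beta})^{3}R^{2}}
        + \frac{\mathbf{n}\times\big((\mathbf{n}-\boldsymbol{\beta})\times\dot{\boldsymbol{\beta}}\big)}
               {c\,(1-\mathbf{n}\cdot\boldsymbol{\beta})^{3}R}
      \right]_{\mathrm{ret}}

    The first (velocity) term carries the space-charge field and the second (acceleration) term the radiation field; a CSR model of this kind integrates these contributions over the bunch.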

  15. PyVCI: A flexible open-source code for calculating accurate molecular infrared spectra

    NASA Astrophysics Data System (ADS)

    Sibaev, Marat; Crittenden, Deborah L.

    2016-06-01

    The PyVCI program package is a general purpose open-source code for simulating accurate molecular spectra, based upon force field expansions of the potential energy surface in normal mode coordinates. It includes harmonic normal coordinate analysis and vibrational configuration interaction (VCI) algorithms, implemented primarily in Python for accessibility but with time-consuming routines written in C. Coriolis coupling terms may be optionally included in the vibrational Hamiltonian. Non-negligible VCI matrix elements are stored in sparse matrix format to alleviate the diagonalization problem. CPU and memory requirements may be further controlled by algorithmic choices and/or numerical screening procedures, and recommended values are established by benchmarking using a test set of 44 molecules for which accurate analytical potential energy surfaces are available. Force fields in normal mode coordinates are obtained from the PyPES library of high quality analytical potential energy surfaces (to 6th order) or by numerical differentiation of analytic second derivatives generated using the GAMESS quantum chemical program package (to 4th order).

  16. Nesting large-eddy simulations within mesoscale simulations for wind energy applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundquist, J K; Mirocha, J D; Chow, F K

    2008-09-08

    With increasing demand for more accurate atmospheric simulations for wind turbine micrositing, for operational wind power forecasting, and for more reliable turbine design, simulations of atmospheric flow with resolution of tens of meters or higher are required. These time-dependent large-eddy simulations (LES), which resolve individual atmospheric eddies on length scales smaller than turbine blades and account for complex terrain, are possible with a range of commercial and open-source software, including the Weather Research and Forecasting (WRF) model. In addition to 'local' sources of turbulence within an LES domain, changing weather conditions outside the domain can also affect the flow, suggesting that a mesoscale model should provide boundary conditions to the large-eddy simulations. Nesting a large-eddy simulation within a mesoscale model requires nuanced representations of turbulence. Our group has improved the WRF model's LES capability by implementing the Nonlinear Backscatter and Anisotropy (NBA) subfilter stress model following Kosovic (1997) and an explicit filtering and reconstruction technique to compute the Resolvable Subfilter-Scale (RSFS) stresses (following Chow et al., 2005). We have also implemented an immersed boundary method (IBM) in WRF to accommodate complex terrain. These new models improve WRF's LES capabilities over complex terrain and in stable atmospheric conditions. We demonstrate approaches to nesting LES within a mesoscale simulation for farms of wind turbines in hilly regions. Results are sensitive to the nesting method, indicating that care must be taken to provide appropriate boundary conditions, and to allow adequate spin-up of turbulence in the LES domain.

  17. Resist Parameter Extraction from Line-and-Space Patterns of Chemically Amplified Resist for Extreme Ultraviolet Lithography

    NASA Astrophysics Data System (ADS)

    Kozawa, Takahiro; Oizumi, Hiroaki; Itani, Toshiro; Tagawa, Seiichi

    2010-11-01

    The development of extreme ultraviolet (EUV) lithography has progressed owing to worldwide effort. As the development status of EUV lithography approaches the requirements for the high-volume production of semiconductor devices with a minimum line width of 22 nm, the extraction of resist parameters becomes increasingly important, both for the accurate evaluation of resist materials in resist screening and for accurate process simulation in process and mask design. In this study, we demonstrated that resist parameters (namely, quencher concentration, acid diffusion constant, proportionality constant of line edge roughness, and dissolution point) can be extracted from scanning electron microscopy (SEM) images of patterned resists, without knowledge of the details of the resist contents, using two of the latest EUV resists.

  18. Mobile robot trajectory tracking using noisy RSS measurements: an RFID approach.

    PubMed

    Miah, M Suruz; Gueaieb, Wail

    2014-03-01

    Most RF beacon-based mobile robot navigation techniques rely on approximating line-of-sight (LOS) distances between the beacons and the robot. This is mostly performed using the robot's received signal strength (RSS) measurements from the beacons. However, an accurate mapping between the RSS measurements and the LOS distance is almost impossible to achieve in reverberant environments. This paper presents a partially-observed feedback controller for a wheeled mobile robot where the feedback signal is in the form of noisy RSS measurements emitted from radio frequency identification (RFID) tags. The proposed controller requires neither an accurate mapping between the LOS distance and the RSS measurements, nor the linearization of the robot model. The controller performance is demonstrated through numerical simulations and real-time experiments.
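
    The conventional RSS-to-distance mapping that reverberation defeats is typically a log-distance path-loss model,

      \mathrm{RSS}(d) = \mathrm{RSS}(d_0) - 10\,n\,\log_{10}\!\left(\frac{d}{d_0}\right) + X_\sigma,
      \qquad X_\sigma \sim \mathcal{N}(0,\sigma^{2}),

    with reference distance d0, path-loss exponent n and shadowing term X_σ. In reverberant environments the effective n and σ vary so strongly with position that inverting this model for the LOS distance d is unreliable, which motivates a controller that bypasses the mapping altogether.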

  19. Toward high-speed 3D nonlinear soft tissue deformation simulations using Abaqus software.

    PubMed

    Idkaidek, Ashraf; Jasiuk, Iwona

    2015-12-01

    We aim to achieve a fast and accurate three-dimensional (3D) simulation of a porcine liver deformation under a surgical tool pressure using the commercial finite element software Abaqus. The liver geometry is obtained using magnetic resonance imaging, and a nonlinear constitutive law is employed to capture large deformations of the tissue. Effects of implicit versus explicit analysis schemes, element type, and mesh density on computation time are studied. We find that Abaqus explicit and implicit solvers are capable of simulating nonlinear soft tissue deformations accurately using first-order tetrahedral elements in a relatively short time by optimizing the element size. This study provides new insights and guidance on accurate and relatively fast nonlinear soft tissue simulations. Such simulations can provide force feedback during robotic surgery and allow visualization of tissue deformations for surgery planning and training of surgical residents.

  20. Modeling Piezoelectric Stack Actuators for Control of Micromanipulation

    NASA Technical Reports Server (NTRS)

    Goldfarb, Michael; Celanovic, Nikola

    1997-01-01

    A nonlinear lumped-parameter model of a piezoelectric stack actuator has been developed to describe actuator behavior for purposes of control system analysis and design, and, in particular, for microrobotic applications requiring accurate position and/or force control. In formulating this model, the authors propose a generalized Maxwell resistive capacitor as a lumped-parameter causal representation of rate-independent hysteresis. Model formulation is validated by comparing results of numerical simulations to experimental data. Validation is followed by a discussion of model implications for purposes of actuator control.
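
    The rate-independent hysteresis represented by a generalized Maxwell element can be pictured mechanically as a bank of springs, each in series with a Coulomb slider that slips at a breakaway level. The Python sketch below is a generic Maxwell-slip illustration with assumed parameters; the paper's Maxwell resistive capacitor is the electrical (charge/voltage) analogue, and its actual parameterization is not reproduced.

      import numpy as np

      def maxwell_slip(x_path, k, f_max):
          # Springs k[i] in series with sliders saturating at f_max[i];
          # deflections z are clipped whenever a slider slips.
          z = np.zeros(len(k))
          out, x_prev = [], x_path[0]
          for x in x_path:
              z = np.clip(z + (x - x_prev), -f_max / k, f_max / k)
              out.append(np.dot(k, z))  # total restoring force
              x_prev = x
          return np.array(out)

      # A triangle-wave input traces a rate-independent hysteresis loop.
      t = np.linspace(0.0, 4.0 * np.pi, 400)
      x = np.abs(((t / np.pi) % 2.0) - 1.0)
      f = maxwell_slip(x, np.array([1.0, 2.0, 4.0]), np.array([0.1, 0.3, 0.5]))
      print(f.min(), f.max())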

  1. Fracturing And Liquid CONvection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-02-29

    FALCON has been developed to enable simulation of the tightly coupled fluid-rock behavior in hydrothermal and engineered geothermal system (EGS) reservoirs, targeting the dynamics of fracture stimulation, fluid flow, rock deformation, and heat transport in a single integrated code, with the ultimate goal of providing a tool that can be used to test the viability of EGS in the United States and worldwide. Reliable reservoir performance predictions for EGS require accurate and robust modeling of the coupled thermal-hydrological-mechanical processes.

  2. Applicability of APT aided-inertial system to crustal movement monitoring

    NASA Technical Reports Server (NTRS)

    Soltz, J. A.

    1978-01-01

    The APT system, its stage of development, hardware, and operations are described. The algorithms required to perform the real-time functions of navigation and profiling are presented. The results of computer simulations demonstrate the feasibility of APT for its primary mission: topographic mapping with an accuracy of 15 cm in the vertical. Also discussed is the suitability of modifying APT for the purpose of making vertical crustal movement measurements accurate to 2 cm in the vertical, and at least marginal feasibility is indicated.

  3. Translation position determination in ptychographic coherent diffraction imaging.

    PubMed

    Zhang, Fucai; Peterson, Isaac; Vila-Comamala, Joan; Diaz, Ana; Berenguer, Felisa; Bean, Richard; Chen, Bo; Menzel, Andreas; Robinson, Ian K; Rodenburg, John M

    2013-06-03

    Accurate knowledge of translation positions is essential in ptychography to achieve a good image quality and the diffraction limited resolution. We propose a method to retrieve and correct position errors during the image reconstruction iterations. Sub-pixel position accuracy after refinement is shown to be achievable within several tens of iterations. Simulation and experimental results for both optical and X-ray wavelengths are given. The method improves both the quality of the retrieved object image and relaxes the position accuracy requirement while acquiring the diffraction patterns.

  4. High-resolution, detailed simulations of low foot and high foot implosion experiments on the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Clark, Daniel

    2015-11-01

    In order to achieve the several hundred Gbar stagnation pressures necessary for inertial confinement fusion ignition, implosion experiments on the National Ignition Facility (NIF) require the compression of deuterium-tritium fuel layers by a convergence ratio as high as forty. Such high convergence implosions are subject to degradation by a range of perturbations, including the growth of small-scale defects due to hydrodynamic instabilities, as well as longer scale modulations due to radiation flux asymmetries in the enclosing hohlraum. Due to the broad range of scales involved, and also the genuinely three-dimensional (3-D) character of the flow, accurately modeling NIF implosions remains at the edge of current radiation hydrodynamics simulation capabilities. This talk describes the current state of progress of 3-D, high-resolution, capsule-only simulations of NIF implosions aimed at accurately describing the performance of specific NIF experiments. Current simulations include the effects of hohlraum radiation asymmetries, capsule surface defects, the capsule support tent and fill tube, and use a grid resolution shown to be converged in companion two-dimensional simulations. The results of detailed simulations of low foot implosions from the National Ignition Campaign are contrasted against results for more recent high foot implosions. While the simulations suggest that low foot performance was dominated by ablation front instability growth, especially the defect seeded by the capsule support tent, high foot implosions appear to be dominated by hohlraum flux asymmetries, although the support tent still plays a significant role. Most importantly, it is found that a single, standard simulation methodology appears adequate to model both implosion types and gives confidence that such a model can be used to guide future implosion designs toward ignition. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  5. Virtual reality neurosurgery: a simulator blueprint.

    PubMed

    Spicer, Mark A; van Velsen, Martin; Caffrey, John P; Apuzzo, Michael L J

    2004-04-01

    This article details preliminary studies undertaken to integrate the most relevant advancements across multiple disciplines in an effort to construct a highly realistic neurosurgical simulator based on a distributed computer architecture. Techniques based on modified computational modeling paradigms incorporating finite element analysis are presented, as are current and projected efforts directed toward the implementation of a novel bidirectional haptic device. Patient-specific data derived from noninvasive magnetic resonance imaging sequences are used to construct a computational model of the surgical region of interest. Magnetic resonance images of the brain may be coregistered with those obtained from magnetic resonance angiography, magnetic resonance venography, and diffusion tensor imaging to formulate models of varying anatomic complexity. The majority of the computational burden is encountered in the presimulation reduction of the computational model and allows realization of the required threshold rates for the accurate and realistic representation of real-time visual animations. Intracranial neurosurgical procedures offer an ideal testing site for the development of a totally immersive virtual reality surgical simulator when compared with the simulations required in other surgical subspecialties. The material properties of the brain as well as the typically small volumes of tissue exposed in the surgical field, coupled with techniques and strategies to minimize computational demands, provide unique opportunities for the development of such a simulator. Incorporation of real-time haptic and visual feedback is approached here and likely will be accomplished soon.

  6. Accelerating Sequential Gaussian Simulation with a constant path

    NASA Astrophysics Data System (ADS)

    Nussbaumer, Raphaël; Mariethoz, Grégoire; Gravey, Mathieu; Gloaguen, Erwan; Holliger, Klaus

    2018-03-01

    Sequential Gaussian Simulation (SGS) is a stochastic simulation technique commonly employed for generating realizations of Gaussian random fields. Arguably, the main limitation of this technique is the high computational cost associated with determining the kriging weights. This problem is compounded by the fact that often many realizations are required to allow for an adequate uncertainty assessment. A seemingly simple way to address this problem is to keep the same simulation path for all realizations. This results in identical neighbourhood configurations and hence the kriging weights only need to be determined once and can then be re-used in all subsequent realizations. This approach is generally not recommended because it is expected to result in correlation between the realizations. Here, we challenge this common preconception and make the case for the use of a constant path approach in SGS by systematically evaluating the associated benefits and limitations. We present a detailed implementation, particularly regarding parallelization and memory requirements. Extensive numerical tests demonstrate that using a constant path allows for substantial computational gains with very limited loss of simulation accuracy. This is especially the case for a constant multi-grid path. The computational savings can be used to increase the neighbourhood size, thus allowing for a better reproduction of the spatial statistics. The outcome of this study is a recommendation for an optimal implementation of SGS that maximizes accurate reproduction of the covariance structure as well as computational efficiency.
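
    The key point, that a fixed visiting order makes the kriging weights independent of the realization, can be made explicit in a few lines. The dense-covariance Python sketch below assumes simple kriging with zero mean and no neighbourhood restriction; production SGS codes use local neighbourhoods, which is exactly where reusing the weights saves the work.

      import numpy as np

      def sgs_constant_path(cov, n_real, rng):
          # One random path; kriging weights computed once per node and
          # applied to every realization simultaneously.
          n = cov.shape[0]
          path = rng.permutation(n)
          reals = np.zeros((n_real, n))
          for step, idx in enumerate(path):
              known = path[:step]
              if step == 0:
                  w, var = np.zeros(0), cov[idx, idx]
              else:
                  w = np.linalg.solve(cov[np.ix_(known, known)], cov[known, idx])
                  var = cov[idx, idx] - w @ cov[known, idx]
              mean = reals[:, known] @ w  # same weights for all realizations
              reals[:, idx] = mean + np.sqrt(max(var, 0.0)) * rng.standard_normal(n_real)
          return reals

      # 1D exponential covariance on 30 nodes, 100 realizations.
      d = np.abs(np.subtract.outer(np.arange(30.0), np.arange(30.0)))
      r = sgs_constant_path(np.exp(-d / 10.0), 100, np.random.default_rng(3))
      print(r.shape, round(float(r.std()), 2))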

  7. Fast and accurate mock catalogue generation for low-mass galaxies

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Blake, Chris; Beutler, Florian; Kazin, Eyal; Marin, Felipe

    2016-06-01

    We present an accurate and fast framework for generating mock catalogues including low-mass haloes, based on an implementation of the COmoving Lagrangian Acceleration (COLA) technique. Multiple realisations of mock catalogues are crucial for analyses of large-scale structure, but conventional N-body simulations are too computationally expensive for the production of thousands of realizations. We show that COLA simulations can produce accurate mock catalogues with a moderate computation resource for low- to intermediate-mass galaxies in 1012 M⊙ haloes, both in real and redshift space. COLA simulations have accurate peculiar velocities, without systematic errors in the velocity power spectra for k ≤ 0.15 h Mpc-1, and with only 3-per cent error for k ≤ 0.2 h Mpc-1. We use COLA with 10 time steps and a Halo Occupation Distribution to produce 600 mock galaxy catalogues of the WiggleZ Dark Energy Survey. Our parallelized code for efficient generation of accurate halo catalogues is publicly available at github.com/junkoda/cola_halo.

  8. Low-dimensional modelling of a transient cylinder wake using double proper orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Siegel, Stefan G.; Seidel, Jürgen; Fagley, Casey; Luchtenburg, D. M.; Cohen, Kelly; McLaughlin, Thomas

    For the systematic development of feedback flow controllers, a numerical model that captures the dynamic behaviour of the flow field to be controlled is required. This poses a particular challenge for flow fields where the dynamic behaviour is nonlinear, and the governing equations cannot easily be solved in closed form. This has led to many versions of low-dimensional modelling techniques, which we extend in this work to represent better the impact of actuation on the flow. For the benchmark problem of a circular cylinder wake in the laminar regime, we introduce a novel extension to the proper orthogonal decomposition (POD) procedure that facilitates mode construction from transient data sets. We demonstrate the performance of this new decomposition by applying it to a data set from the development of the limit cycle oscillation of a circular cylinder wake simulation as well as an ensemble of transient forced simulation results. The modes obtained from this decomposition, which we refer to as the double POD (DPOD) method, correctly track the changes of the spatial modes both during the evolution of the limit cycle and when forcing is applied by transverse translation of the cylinder. The mode amplitudes, which are obtained by projecting the original data sets onto the truncated DPOD modes, can be used to construct a dynamic mathematical model of the wake that accurately predicts the wake flow dynamics within the lock-in region at low forcing amplitudes. This low-dimensional model, derived using nonlinear artificial neural network based system identification methods, is robust and accurate and can be used to simulate the dynamic behaviour of the wake flow. We demonstrate this ability not just for unforced and open-loop forced data, but also for a feedback-controlled simulation that leads to a 90% reduction in lift fluctuations. This indicates the possibility of constructing accurate dynamic low-dimensional models for feedback control by using unforced and transient forced data only.
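
    The building block beneath the DPOD construction is ordinary snapshot POD, one SVD of the mean-subtracted snapshot matrix. The sketch below shows just that step on synthetic data; the DPOD of the paper additionally bins a transient data set and extracts modes per bin, which is not reproduced here.

      import numpy as np

      def pod_modes(snapshots, n_modes):
          # Columns of 'snapshots' are flow snapshots; returns the mean,
          # the leading spatial modes and their temporal amplitudes.
          mean = snapshots.mean(axis=1, keepdims=True)
          u, s, vt = np.linalg.svd(snapshots - mean, full_matrices=False)
          return mean, u[:, :n_modes], s[:n_modes, None] * vt[:n_modes]

      # A travelling wave yields the familiar sine/cosine mode pair.
      x = np.linspace(0.0, 2.0 * np.pi, 128)[:, None]
      t = np.linspace(0.0, 10.0, 200)[None, :]
      mean, modes, amps = pod_modes(np.sin(x - 2.0 * t), 2)
      print(modes.shape, amps.shape)  # (128, 2) (2, 200)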

  9. The Importance of Detailed Component Simulations in the Feedsystem Development for a Two-Stage-to Orbit Reusable Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Mazurkivich, Pete; Chandler, Frank; Grayson, Gary

    2005-01-01

    To meet the requirements for the 2nd Generation Reusable Launch Vehicle (RLV), a unique propulsion feed system concept was identified using crossfeed between the booster and orbiter stages that could reduce the Two-Stage-to-Orbit (TSTO) vehicle weight and development cost by approximately 25%. A Main Propulsion System (MPS) crossfeed water demonstration test program was configured to address all the activities required to reduce the risks for the MPS crossfeed system. A transient, one-dimensional system simulation was developed for the subscale crossfeed water flow tests. To ensure accurate representation of the crossfeed valve's dynamics in the system model, a high-fidelity, three-dimensional, computational fluid-dynamics (CFD) model was employed. The results from the CFD model were used to specify the valve's flow characteristics in the system simulation. This yielded a crossfeed system model that was anchored to the specific valve hardware and achieved good agreement with the measured test data. These results allowed the transient models to be correlated and validated and used for full scale mission predictions. The full scale model simulations indicate crossfeed is viable, with the system pressure disturbances at the crossfeed transition being less than those experienced by the propulsion system during engine start and shutdown transients.

  10. Evolutionary online behaviour learning and adaptation in real robots.

    PubMed

    Silva, Fernando; Correia, Luís; Christensen, Anders Lyhne

    2017-07-01

    Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component on the performance of the underlying evolutionary algorithm.

  11. Molecular dynamics studies of transport properties and equation of state of supercritical fluids

    NASA Astrophysics Data System (ADS)

    Nwobi, Obika C.

    Many chemical propulsion systems operate with one or more of the reactants above the critical point in order to enhance their performance. Most of the computational fluid dynamics (CFD) methods used to predict these flows require accurate information on the transport properties and equation of state at these supercritical conditions. This work involves the determination of transport coefficients and equation of state of supercritical fluids by equilibrium molecular dynamics (MD) simulations on parallel computers using the Green-Kubo formulae and the virial equation of state, respectively. MD involves the solution of equations of motion of a system of molecules that interact with each other through an intermolecular potential. Provided that an accurate potential can be found for the system of interest, MD can be used regardless of the phase and thermodynamic conditions of the substances involved. The MD program uses the effective Lennard-Jones potential, with system sizes of 1000-1200 molecules, and simulations of 2,000,000 time-steps for computing transport coefficients and 200,000 time-steps for pressures. The computer code also uses linked cell lists for efficient sorting of molecules, periodic boundary conditions, and a modified velocity Verlet algorithm for particle displacement. Particle decomposition is used for distributing the molecules to different processors of a parallel computer. Simulations have been carried out on pure argon, nitrogen, oxygen and ethylene at various supercritical conditions, with self-diffusion coefficients, shear viscosity coefficients, thermal conductivity coefficients and pressures computed for most of the conditions. Results compare well with experimental values and National Institute of Standards and Technology (NIST) data. The results show that the number of molecules and the potential cut-off radius have no significant effect on the computed coefficients, while long-time integration is necessary for accurate determination of the coefficients.
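
    The Green-Kubo relation used here for self-diffusion integrates the velocity autocorrelation function, D = (1/3) ∫ ⟨v(0)·v(t)⟩ dt. The Python sketch below is a minimal post-processing illustration on assumed velocity data; the MD propagation itself (Lennard-Jones forces, velocity Verlet, linked cell lists) is not shown.

      import numpy as np

      def green_kubo_diffusion(vel, dt):
          # vel: (n_steps, n_particles, 3). Average the VACF over time
          # origins and particles, then integrate by the trapezoid rule.
          n = vel.shape[0]
          vacf = np.array([
              np.mean(np.sum(vel[:n - lag] * vel[lag:], axis=-1))
              for lag in range(n // 2)
          ])
          integral = (0.5 * (vacf[0] + vacf[-1]) + vacf[1:-1].sum()) * dt
          return integral / 3.0

      # Smoke test on uncorrelated 'velocities' (not real MD data).
      rng = np.random.default_rng(4)
      print(green_kubo_diffusion(rng.standard_normal((1000, 20, 3)), dt=1e-3))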

  12. A high-order strong stability preserving Runge-Kutta method for three-dimensional full waveform modeling and inversion of anelastic models

    NASA Astrophysics Data System (ADS)

    Wang, N.; Shen, Y.; Yang, D.; Bao, X.; Li, J.; Zhang, W.

    2017-12-01

    Accurate and efficient forward modeling methods are important for high resolution full waveform inversion. Compared with the elastic case, solving the anelastic wave equation requires more computational time because of the need to compute additional material-independent anelastic functions. A numerical scheme with a large Courant-Friedrichs-Lewy (CFL) condition number enables us to use a large time step to simulate wave propagation, which improves computational efficiency. In this work, we apply the fourth-order strong stability preserving Runge-Kutta method with an optimal CFL coefficient to solve the anelastic wave equation. We use a fourth-order DRP/opt MacCormack scheme for the spatial discretization, and we approximate the rheological behavior of the Earth using the generalized Maxwell body model. With a larger CFL condition number, we find that the computational efficiency is significantly improved compared with the traditional fourth-order Runge-Kutta method. Then, we apply the scattering-integral method to calculate travel time and amplitude sensitivity kernels with respect to velocity and attenuation structures. For each source, we carry out one forward simulation and save the time-dependent strain tensor. For each station, we carry out three 'backward' simulations for the three components and save the corresponding strain tensors. The sensitivity kernels at each point in the medium are the convolution of the two sets of strain tensors. Finally, we show several synthetic tests to verify the effectiveness of the strong stability preserving Runge-Kutta method in generating accurate synthetics in full waveform modeling, and in generating accurate strain tensors for calculating sensitivity kernels at regional and global scales.
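
    The convex-combination structure that makes such Runge-Kutta schemes strong stability preserving can be seen in the classic three-stage, third-order Shu-Osher method, sketched below for illustration; the work above uses a fourth-order SSP variant with an optimized CFL coefficient, and L(u) here stands in for a hypothetical spatial operator such as a MacCormack-type discretization.

        # Illustrative SSP-RK3 (Shu-Osher) step: each stage is a convex
        # combination of forward-Euler updates, which preserves strong
        # stability. L(u) is a hypothetical spatial operator.
        def ssp_rk3_step(u, L, dt):
            u1 = u + dt * L(u)
            u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
            return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

        # usage: u = ssp_rk3_step(u, spatial_operator, dt)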

  13. Applying Required Navigation Performance Concept for Traffic Management of Small Unmanned Aircraft Systems

    NASA Technical Reports Server (NTRS)

    Jung, Jaewoo; D'Souza, Sarah N.; Johnson, Marcus A.; Ishihara, Abraham K.; Modi, Hemil C.; Nikaido, Ben; Hasseeb, Hashmatullah

    2016-01-01

    In anticipation of a rapid increase in the number of civil Unmanned Aircraft System (UAS) operations, NASA is researching prototype technologies for a UAS Traffic Management (UTM) system that will investigate airspace integration requirements for enabling safe, efficient low-altitude operations. One aspect a UTM system must consider is the correlation between UAS operations (such as vehicles, operation areas and durations), UAS performance requirements, and the risk to people and property in the operational area. This paper investigates the potential application of the International Civil Aviation Organization's (ICAO) Required Navigation Performance (RNP) concept to relate operational risk to trajectory conformance requirements. The approach is to first define a method to quantify operational risk and then define the RNP level requirement as a function of that risk. Greater operational risk corresponds to a more accurate RNP level, i.e., a smaller tolerable Total System Error (TSE). Data from 19 small UAS flights are used to develop and validate a formula that defines this relationship. An approach to assessing UAS-RNP conformance capability using vehicle modeling and wind field simulation is developed to investigate how this formula may be applied in a future UTM system. The results indicate the modeled vehicle's flight path is robust to the simulated wind variation and can meet the RNP level requirements calculated by the formula. The results also indicate how vehicle-modeling fidelity may be improved to adequately verify the assessed RNP level.
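
    Under the ICAO definition, flying at RNP level x requires the total system error to remain within x nautical miles of the intended path for at least 95% of the flight time. A minimal conformance check along those lines might look as follows; the cross-track error series and the reduction of TSE to cross-track error are illustrative assumptions, not the paper's formula.

        # Sketch of an RNP containment check: RNP level x demands TSE
        # within x NM at least 95% of the time. 'cross_track_nm' is a
        # hypothetical series of lateral path errors in nautical miles.
        import numpy as np

        def meets_rnp(cross_track_nm, rnp_level_nm):
            tse = np.abs(np.asarray(cross_track_nm))
            containment = np.mean(tse <= rnp_level_nm)  # fraction inside
            return containment >= 0.95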

  14. Nuclear Thermal Rocket Element Environmental Simulator (NTREES)

    NASA Astrophysics Data System (ADS)

    Emrich, William J.

    2008-01-01

    To support a potential future development of a nuclear thermal rocket engine, a state-of-the-art non-nuclear experimental test setup has been constructed to evaluate the performance characteristics of candidate fuel element materials and geometries in representative environments. The test device simulates the environmental conditions (minus the radiation) to which nuclear rocket fuel components could be subjected during reactor operation. Test articles mounted in the simulator are inductively heated in such a manner as to accurately reproduce the temperatures and heat fluxes normally expected to occur as a result of nuclear fission while at the same time being exposed to flowing hydrogen. This project is referred to as the Nuclear Thermal Rocket Element Environment Simulator or NTREES. The NTREES device is located at the Marshall Space Flight Center in a laboratory which has been modified to accommodate the high powers required to heat the test articles to the required temperatures and to handle the gaseous hydrogen flow required for the tests. Other modifications to the laboratory include the installation of a nitrogen gas supply system and a cooling water supply system. During the design and construction of the facility, every effort was made to comply with all pertinent regulations to provide assurance that the facility could be operated in a safe and efficient manner. The NTREES system can currently supply up to 50 kW of inductive heating to the fuel test articles, although the facility has been sized to eventually allow test article heating levels of up to several megawatts.

  15. Nuclear Thermal Rocket Element Environmental Simulator (NTREES)

    NASA Technical Reports Server (NTRS)

    Emrich, William J., Jr.

    2008-01-01

    To support the eventual development of a nuclear thermal rocket engine, a state-of-the-art experimental test setup has been constructed to evaluate the performance characteristics of candidate fuel element materials and geometries in representative environments. The test device simulates the environmental conditions (minus the radiation) to which nuclear rocket fuel components will be subjected during reactor operation. Test articles mounted in the simulator are inductively heated in such a manner as to accurately reproduce the temperatures and heat fluxes normally expected to occur as a result of nuclear fission while at the same time being exposed to flowing hydrogen. This project is referred to as the Nuclear Thermal Rocket Element Environment Simulator or NTREES. The NTREES device is located at the Marshall Space Flight Center in a laboratory which has been modified to accommodate the high powers required to heat the test articles to the required temperatures and to handle the gaseous hydrogen flow required for the tests. Other modifications to the laboratory include the installation of a nitrogen gas supply system and a cooling water supply system. During the design and construction of the facility, every effort was made to comply with all pertinent regulations to provide assurance that the facility could be operated in a safe and efficient manner. The NTREES system can currently supply up to 50 kW of inductive heating to the fuel test articles, although the facility has been sized to eventually allow test article heating levels of up to several megawatts.

  16. Direct Numerical Simulation of Complex Turbulence

    NASA Astrophysics Data System (ADS)

    Hsieh, Alan

    Direct numerical simulations (DNS) of spanwise-rotating turbulent channel flow were conducted. The database obtained from these DNS was used to investigate the turbulence generation cycle for simple and complex turbulence. For turbulent channel flow, three theoretical models concerning the formation and evolution of sublayer streaks, three-dimensional hairpin vortices and propagating plane waves were validated using visualizations from the present DNS data. The proper orthogonal decomposition (POD) method was used to verify the existence of the propagating plane waves; a new extension of the POD method was derived to demonstrate these plane waves in a spatial channel model. The analysis of coherent structures was extended to complex turbulence and used to determine the proper computational box size for a minimal flow unit (MFU) at Ro_b < 0.5. Proper realization of Taylor-Görtler vortices in the highly turbulent pressure region was demonstrated to be necessary for acceptably accurate MFU turbulence statistics, which required a minimum spanwise domain length Lz = π. A dependence of MFU accuracy on Reynolds number was also discovered, and MFU models required a larger domain to accurately approximate higher-Reynolds-number flows. In addition, the results obtained from the DNS were utilized to evaluate several turbulence closure models for momentum and thermal transport in rotating turbulent channel flow. Four nonlinear eddy viscosity turbulence models were tested and, among these, Explicit Algebraic Reynolds Stress Models (EARSM) obtained the Reynolds stress distributions in best agreement with DNS data for rotational flows. The modeled pressure-strain functions of EARSM were shown to have a strong influence on the Reynolds stress distributions near the wall. Turbulent heat-flux distributions obtained from two explicit algebraic heat flux models consistently displayed increasing disagreement with DNS data with increasing rotation rate. Results were also obtained regarding flow control of fully-developed, spatially-evolving turbulent channel flow using phononic subsurface structures. Fluid-structure interaction (FSI) simulations were conducted by attaching phononic structures to the bottom wall of a turbulent channel flow field, and reduction of turbulent kinetic energy was observed for different phononic designs.
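
    Snapshot POD of the kind used above to identify propagating plane waves can be computed from a thin SVD of mean-subtracted flow snapshots. The sketch below assumes a hypothetical snapshot matrix and shows only the standard method, not the study's extended spatial-channel formulation.

        # Minimal snapshot-POD sketch; 'snapshots' is a hypothetical
        # (n_points, n_snapshots) matrix of instantaneous velocity fields.
        import numpy as np

        def pod_modes(snapshots, n_modes):
            fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
            # Thin SVD: columns of U are spatial POD modes and s**2 is
            # proportional to the kinetic energy captured by each mode.
            U, s, _ = np.linalg.svd(fluct, full_matrices=False)
            return U[:, :n_modes], s[:n_modes] ** 2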

  17. Choice of no-slip curved boundary condition for lattice Boltzmann simulations of high-Reynolds-number flows.

    PubMed

    Sanjeevi, Sathish K P; Zarghami, Ahad; Padding, Johan T

    2018-04-01

    Various curved no-slip boundary conditions available in the literature improve the accuracy of lattice Boltzmann simulations compared to the traditional staircase approximation of curved geometries. Usually, the required unknown distribution functions emerging from the solid nodes are computed from the known distribution functions using interpolation or extrapolation schemes. When such curved boundary schemes are used, mass is lost or gained at each time step, an effect known as mass leakage that is especially apparent at high Reynolds numbers. This issue becomes severe in periodic flows, where the accumulation of mass leakage affects the computed flow fields over time. In this paper, we examine the mass leakage of the most well-known curved boundary treatments for high-Reynolds-number flows. Apart from the existing schemes, we also test different forced mass conservation schemes and a constant density scheme. The capability of each scheme is investigated and, finally, recommendations for choosing a proper boundary condition scheme are given for stable and accurate simulations.
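
    Because the total mass in a lattice Boltzmann simulation is simply the sum of all distribution functions, leakage from a curved-boundary scheme can be monitored by tracking the drift of that sum over time. A minimal diagnostic under that assumption, with a hypothetical sequence of distribution-function snapshots, might be:

        # Mass-leakage diagnostic: relative drift of total mass across
        # snapshots of the distribution functions 'f' (e.g. arrays of
        # shape (n_directions, nx, ny)); a hypothetical monitoring hook.
        import numpy as np

        def mass_drift(f_history):
            mass = np.array([f.sum() for f in f_history])
            return (mass - mass[0]) / mass[0]   # zero if mass is conserved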

  18. Looking at the Disordered Proteins through the Computational Microscope.

    PubMed

    Das, Payel; Matysiak, Silvina; Mittal, Jeetain

    2018-05-23

    Intrinsically disordered proteins (IDPs) have attracted wide interest over the past decade due to their surprising prevalence in the proteome and versatile roles in cell physiology and pathology. A large selection of IDPs has been identified as potential targets for therapeutic intervention. Characterizing the structure-function relationship of disordered proteins is therefore an essential but daunting task, as these proteins can adopt transient structure, necessitating a new paradigm for connecting structural disorder to function. Molecular simulation has emerged as a natural complement to experiments for atomic-level characterizations and mechanistic investigations of this intriguing class of proteins. The diverse range of length and time scales involved in IDP function requires performing simulations at multiple levels of resolution. In this Outlook, we focus on summarizing available simulation methods, along with a few interesting example applications. We also provide an outlook on how these simulation methods can be further improved in order to provide a more accurate description of IDP structure, binding, and assembly.

  19. Modeling the Effects of Turbulence in Rotating Detonation Engines

    NASA Astrophysics Data System (ADS)

    Towery, Colin; Smith, Katherine; Hamlington, Peter; van Schoor, Marthinus; TESLa Team; Midé Team

    2014-03-01

    Propulsion systems based on detonation waves, such as rotating and pulsed detonation engines, have the potential to substantially improve the efficiency and power density of gas turbine engines. Numerous technical challenges remain to be solved in such systems, however, including obtaining more efficient injection and mixing of air and fuels, more reliable detonation initiation, and better understanding of the flow in the ejection nozzle. These challenges can be addressed using numerical simulations. Such simulations are enormously challenging, however, since accurate descriptions of highly unsteady turbulent flow fields are required in the presence of combustion, shock waves, fluid-structure interactions, and other complex physical processes. In this study, we performed high-fidelity three-dimensional simulations of a rotating detonation engine and examined turbulent flow effects on the operation, performance, and efficiency of the engine. Along with experimental data, these simulations were used to test the accuracy of commonly-used Reynolds averaged and subgrid-scale turbulence models when applied to detonation engines. The authors gratefully acknowledge the support of the Defense Advanced Research Projects Agency (DARPA).

  20. Direct simulation for the instability and breakup of laminar liquid jets

    NASA Technical Reports Server (NTRS)

    Chuech, S. G.; Przekwas, A. J.; Yang, H. Q.; Gross, K. W.

    1990-01-01

    A direct numerical simulation method is described for predicting the deformation of laminar liquid jets. In the present nonlinear direct simulation, the convective term, which was discarded in past linear analyses by Rayleigh and others, is included in the hydrodynamic equations. It is shown that accurate drop formation can be predicted only by retaining the full nonlinear surface tension term. The continuity and momentum equations in transient form are integrated on an adaptive grid conforming to the jet and surface-wave shape. The equations, which are parabolic in time and elliptic in space, are solved by a TVD scheme with characteristic flux splitting. The results of the present work are discussed and compared with available measurements and other analyses. The comparison shows that, among the predictions, the current 1-D direct simulation results agree best with the experimental data. Furthermore, the computer time requirements are roughly an order of magnitude smaller than those of previously reported multidimensional analyses.
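
    As a toy analogue of the flux-limited TVD update described above, the sketch below advances 1-D linear advection with a minmod-limited upwind scheme. It illustrates the limiter mechanics only; the grid, limiter choice, and scalar equation are assumptions, not the authors' characteristic-split solver for the jet equations.

        # Minmod-limited TVD step for u_t + a u_x = 0 with a > 0 and
        # CFL number c = a*dt/dx <= 1. Boundary cells are held fixed.
        import numpy as np

        def minmod(p, q):
            agree = p * q > 0.0
            return np.where(agree, np.sign(p) * np.minimum(np.abs(p), np.abs(q)), 0.0)

        def tvd_advect(u, a, dt, dx):
            c = a * dt / dx
            slope = minmod(np.diff(u, prepend=u[0]), np.diff(u, append=u[-1]))
            face = u + 0.5 * (1.0 - c) * slope  # value at each cell's right face
            flux = a * face
            u_new = u.copy()
            u_new[1:] -= (dt / dx) * (flux[1:] - flux[:-1])
            return u_new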
