Sample records for conditional simulation method

  1. Method and system for fault accommodation of machines

    NASA Technical Reports Server (NTRS)

    Goebel, Kai Frank (Inventor); Subbu, Rajesh Venkat (Inventor); Rausch, Randal Thomas (Inventor); Frederick, Dean Kimball (Inventor)

    2011-01-01

    A method for multi-objective fault accommodation using predictive modeling is disclosed. The method includes using a simulated machine that simulates a faulted actual machine, and using a simulated controller that simulates an actual controller. A multi-objective optimization process is performed, based on specified control settings for the simulated controller and specified operational scenarios for the simulated machine controlled by the simulated controller, to generate a Pareto frontier-based solution space relating performance of the simulated machine to settings of the simulated controller, including adjustment to the operational scenarios to represent a fault condition of the simulated machine. Control settings of the actual controller are adjusted, represented by the simulated controller, for controlling the actual machine, represented by the simulated machine, in response to a fault condition of the actual machine, based on the Pareto frontier-based solution space, to maximize desirable operational conditions and minimize undesirable operational conditions while operating the actual machine in a region of the solution space defined by the Pareto frontier.

  2. Comments on "Use of conditional simulation in nuclear waste site performance assessment" by Carol Gotway

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Downing, D.J.

    1993-10-01

    This paper discusses Carol Gotway's paper, "The Use of Conditional Simulation in Nuclear Waste Site Performance Assessment." The paper centers on the use of conditional simulation and the use of geostatistical methods to simulate an entire field of values for subsequent use in a complex computer model. The issues of sampling designs for geostatistics, semivariogram estimation and anisotropy, the turning bands method for random field generation, and estimation of the cumulative distribution function are brought out.
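
    The conditioning step discussed here is commonly implemented as an unconditional simulation corrected by kriging of the data residuals. Below is a minimal 1-D sketch of that idea, assuming a zero-mean Gaussian field with an exponential covariance; the data locations, values, and covariance parameters are illustrative and are not taken from Gotway's paper.

    ```python
    import numpy as np

    def exp_cov(h, sill=1.0, corr_len=10.0):
        """Exponential covariance model (assumed here for illustration)."""
        return sill * np.exp(-np.abs(h) / corr_len)

    def simple_krige(x_obs, y_obs, x_grid):
        """Simple kriging estimate on a grid, assuming a known zero mean."""
        K = exp_cov(x_obs[:, None] - x_obs[None, :])   # data-to-data covariances
        k = exp_cov(x_grid[:, None] - x_obs[None, :])  # grid-to-data covariances
        w = np.linalg.solve(K, k.T)                    # kriging weights, one column per grid node
        return w.T @ y_obs

    def conditional_simulation(x_obs, y_obs, x_grid, n_real=5, seed=0):
        """Condition unconditional Gaussian realizations on data by kriging the residuals."""
        rs = np.random.default_rng(seed)
        x_all = np.concatenate([x_grid, x_obs])        # simulate grid and data locations jointly
        C = exp_cov(x_all[:, None] - x_all[None, :]) + 1e-10 * np.eye(x_all.size)
        L = np.linalg.cholesky(C)
        reals = []
        for _ in range(n_real):
            z = L @ rs.standard_normal(x_all.size)     # unconditional realization
            z_grid, z_obs = z[:x_grid.size], z[x_grid.size:]
            # add the kriged difference between the real data and the simulated data
            reals.append(z_grid + simple_krige(x_obs, y_obs - z_obs, x_grid))
        return np.array(reals)

    x_obs = np.array([2.3, 15.2, 33.7])
    y_obs = np.array([0.4, -1.1, 0.7])
    grid = np.linspace(0.0, 40.0, 81)
    sims = conditional_simulation(x_obs, y_obs, grid)  # conditional realizations on the grid
    ```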

  3. Airframe Icing Research Gaps: NASA Perspective

    NASA Technical Reports Server (NTRS)

    Potapczuk, Mark

    2009-01-01

    Current Airframe Icing Technology Gaps: Development of a full 3D ice accretion simulation model. Development of an improved simulation model for SLD conditions. CFD modeling of stall behavior for ice-contaminated wings/tails. Computational methods for simulation of stability and control parameters. Analysis of thermal ice protection system performance. Quantification of 3D ice shape geometric characteristics. Development of accurate ground-based simulation of SLD conditions. Development of scaling methods for SLD conditions. Development of advanced diagnostic techniques for assessment of tunnel cloud conditions. Identification of critical ice shapes for aerodynamic performance degradation. Aerodynamic scaling issues associated with testing scale model ice shape geometries. Development of altitude scaling methods for thermal ice protection systems. Development of accurate parameter identification methods. Measurement of stability and control parameters for an ice-contaminated swept wing aircraft. Creation of control law modifications to prevent loss of control during icing encounters. 3D ice shape geometries. Collection efficiency data for ice shape geometries. SLD ice shape data, in-flight and ground-based, for simulation verification. Aerodynamic performance data for 3D geometries and various icing conditions. Stability and control parameter data for iced aircraft configurations. Thermal ice protection system data for simulation validation.

  4. An Examination of Parametric and Nonparametric Dimensionality Assessment Methods with Exploratory and Confirmatory Mode

    ERIC Educational Resources Information Center

    Kogar, Hakan

    2018-01-01

    The aim of the present research study was to compare the findings from the nonparametric MSA, DIMTEST and DETECT and the parametric dimensionality determining methods in various simulation conditions by utilizing exploratory and confirmatory methods. For this purpose, various simulation conditions were established based on number of dimensions,…

  5. Multiple point statistical simulation using uncertain (soft) conditional data

    NASA Astrophysics Data System (ADS)

    Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou

    2018-05-01

    Geostatistical simulation methods have been used to quantify spatial variability of reservoir models since the 80s. In the last two decades, state-of-the-art simulation methods have changed from being based on covariance-based 2-point statistics to multiple-point statistics (MPS), which allow simulation of more realistic Earth structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This poses the problem of integrating these different sources of information, such that decisions related to reservoir models can be taken on as informed a basis as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential-simulation-based MPS methods conditional to uncertain (soft) data as a computationally efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not account properly for uncertain conditional information, due to a combination of using only co-located information and a random simulation path. Then, we suggest two approaches that better account for the available uncertain information. The first makes use of a preferential simulation path, where more informed model parameters are visited before less informed ones. The second approach involves using non-co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly to uncertain (soft) data, and hence provide a computationally attractive approach for integrating information about a reservoir model.
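
    A minimal sketch of the preferential-path idea described above: grid nodes carrying the most informative soft data (lowest entropy of the soft probability) are visited first. This is an illustration only, assuming a binary facies indicator and a flattened grid; it is not the SNESIM/ENESIM/Direct Sampling implementation discussed by the authors.

    ```python
    import numpy as np

    def preferential_path(soft_prob, seed=0):
        """
        Order grid nodes so that the most informed ones (lowest entropy of the
        soft probability) are visited first; ties are broken randomly.
        soft_prob: 1-D array of P(facies == 1) per node, NaN where no soft data exist.
        """
        rng = np.random.default_rng(seed)
        p = np.nan_to_num(soft_prob, nan=0.5)              # no information -> p = 0.5
        p = np.clip(p, 1e-12, 1.0 - 1e-12)
        entropy = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
        jitter = rng.uniform(0.0, 1e-6, size=p.size)       # random tie-breaking
        return np.argsort(entropy + jitter)                # most informed nodes first

    soft = np.array([np.nan, 0.95, 0.5, 0.2, np.nan, 0.65])
    print(preferential_path(soft))                         # nodes 1 and 3 come first
    ```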

  6. The Application of Simulation Method in Isothermal Elastic Natural Gas Pipeline

    NASA Astrophysics Data System (ADS)

    Xing, Chunlei; Guan, Shiming; Zhao, Yue; Cao, Jinggang; Chu, Yanji

    2018-02-01

    The elastic pipeline mathematical model is of crucial importance in natural gas pipeline simulation because of its compliance with practical industrial cases. The numerical model of an elastic pipeline introduces non-linear complexity into the discretized equations, so the Newton-Raphson method cannot achieve fast convergence for this kind of problem. Therefore, a new Newton-based method with the Powell-Wolfe condition is presented to simulate isothermal elastic pipeline flow. The results obtained by the new method are given based on the defined boundary conditions. It is shown that the method converges in all cases and significantly reduces computational cost.
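
    The abstract does not give solver details, so the following only sketches the general idea of a Newton iteration safeguarded by a Wolfe-type line search on the residual norm, applied to a hypothetical 2x2 nonlinear system; the function, Jacobian, constants, and tolerances are illustrative, not the authors' pipeline model.

    ```python
    import numpy as np

    def newton_wolfe(F, J, x0, c1=1e-4, c2=0.9, tol=1e-10, max_iter=50):
        """Newton iteration on F(x) = 0 with a Wolfe line search on f(x) = 0.5*||F(x)||^2."""
        x = np.asarray(x0, dtype=float)
        f = lambda y: 0.5 * np.dot(F(y), F(y))
        grad = lambda y: J(y).T @ F(y)                     # gradient of the merit function f
        for _ in range(max_iter):
            Fx = F(x)
            if np.linalg.norm(Fx) < tol:
                break
            d = np.linalg.solve(J(x), -Fx)                 # Newton direction
            f0, g0 = f(x), np.dot(grad(x), d)
            alpha = 1.0
            while alpha > 1e-8:                            # backtrack until both Wolfe conditions hold
                x_new = x + alpha * d
                armijo = f(x_new) <= f0 + c1 * alpha * g0
                curvature = np.dot(grad(x_new), d) >= c2 * g0
                if armijo and curvature:
                    break
                alpha *= 0.5
            x = x + alpha * d
        return x

    # toy 2x2 nonlinear system standing in for the discretized pipeline equations
    F = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0])
    J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
    print(newton_wolfe(F, J, np.array([2.0, 0.5])))        # converges to approximately [1, 1]
    ```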

  7. A data driven method for estimation of B(avail) and appK(D) using a single injection protocol with [¹¹C]raclopride in the mouse.

    PubMed

    Wimberley, Catriona J; Fischer, Kristina; Reilhac, Anthonin; Pichler, Bernd J; Gregoire, Marie Claude

    2014-10-01

    The partial saturation approach (PSA) is a simple, single injection experimental protocol that will estimate both B(avail) and appK(D) without the use of blood sampling. This makes it ideal for use in longitudinal studies of neurodegenerative diseases in the rodent. The aim of this study was to increase the range and applicability of the PSA by developing a data driven strategy for determining reliable regional estimates of receptor density (B(avail)) and in vivo affinity (1/appK(D)), and validate the strategy using a simulation model. The data driven method uses a time window guided by the dynamic equilibrium state of the system as opposed to using a static time window. To test the method, simulations of partial saturation experiments were generated and validated against experimental data. The experimental conditions simulated included a range of receptor occupancy levels and three different B(avail) and appK(D) values to mimic disease states. Also, the effect of using a reference region and typical PET noise on the stability and accuracy of the estimates was investigated. The investigations showed that the parameter estimates in a simulated healthy mouse, using the data driven method, were within 10-30% of the simulated input for the range of occupancy levels simulated. Throughout all experimental conditions simulated, the accuracy and robustness of the estimates using the data driven method were much improved upon the typical method of using a static time window, especially at low receptor occupancy levels. Introducing a reference region caused a bias of approximately 10% over the range of occupancy levels. Based on extensive simulated experimental conditions, it was shown that the data driven method provides accurate and precise estimates of B(avail) and appK(D) for a broader range of conditions compared to the original method. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Analysis of three-phase equilibrium conditions for methane hydrate by isometric-isothermal molecular dynamics simulations.

    PubMed

    Yuhara, Daisuke; Brumby, Paul E; Wu, David T; Sum, Amadeu K; Yasuoka, Kenji

    2018-05-14

    To develop prediction methods of three-phase equilibrium (coexistence) conditions of methane hydrate by molecular simulations, we examined the use of NVT (isometric-isothermal) molecular dynamics (MD) simulations. NVT MD simulations of coexisting solid hydrate, liquid water, and vapor methane phases were performed at four different temperatures, namely, 285, 290, 295, and 300 K. NVT simulations do not require complex pressure control schemes in multi-phase systems, and the growth or dissociation of the hydrate phase can lead to significant pressure changes in the approach toward equilibrium conditions. We found that the calculated equilibrium pressures tended to be higher than those reported by previous NPT (isobaric-isothermal) simulation studies using the same water model. The deviations of equilibrium conditions from previous simulation studies are mainly attributable to the employed calculation methods of pressure and Lennard-Jones interactions. We monitored the pressure in the methane phase, far from the interfaces with other phases, and confirmed that it was higher than the total pressure of the system calculated by previous studies. This fact clearly highlights the difficulties associated with the pressure calculation and control for multi-phase systems. The treatment of Lennard-Jones interactions without tail corrections in MD simulations also contributes to the overestimation of equilibrium pressure. Although improvements are still required to obtain accurate equilibrium conditions, NVT MD simulations exhibit potential for the prediction of equilibrium conditions of multi-phase systems.

  9. Analysis of three-phase equilibrium conditions for methane hydrate by isometric-isothermal molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Yuhara, Daisuke; Brumby, Paul E.; Wu, David T.; Sum, Amadeu K.; Yasuoka, Kenji

    2018-05-01

    To develop prediction methods of three-phase equilibrium (coexistence) conditions of methane hydrate by molecular simulations, we examined the use of NVT (isometric-isothermal) molecular dynamics (MD) simulations. NVT MD simulations of coexisting solid hydrate, liquid water, and vapor methane phases were performed at four different temperatures, namely, 285, 290, 295, and 300 K. NVT simulations do not require complex pressure control schemes in multi-phase systems, and the growth or dissociation of the hydrate phase can lead to significant pressure changes in the approach toward equilibrium conditions. We found that the calculated equilibrium pressures tended to be higher than those reported by previous NPT (isobaric-isothermal) simulation studies using the same water model. The deviations of equilibrium conditions from previous simulation studies are mainly attributable to the employed calculation methods of pressure and Lennard-Jones interactions. We monitored the pressure in the methane phase, far from the interfaces with other phases, and confirmed that it was higher than the total pressure of the system calculated by previous studies. This fact clearly highlights the difficulties associated with the pressure calculation and control for multi-phase systems. The treatment of Lennard-Jones interactions without tail corrections in MD simulations also contributes to the overestimation of equilibrium pressure. Although improvements are still required to obtain accurate equilibrium conditions, NVT MD simulations exhibit potential for the prediction of equilibrium conditions of multi-phase systems.
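
    For reference, the standard long-range (tail) corrections for a truncated, unshifted Lennard-Jones potential in a homogeneous fluid are shown below (textbook expressions, not the specific correction scheme of the studies above); neglecting the pressure term is the effect the authors identify as one contribution to the overestimated equilibrium pressure. Here \rho is the number density and r_c the cut-off radius.

    ```latex
    U_{\mathrm{tail}} = \frac{8\pi N \rho \varepsilon \sigma^{3}}{3}
      \left[ \frac{1}{3}\left(\frac{\sigma}{r_c}\right)^{9} - \left(\frac{\sigma}{r_c}\right)^{3} \right],
    \qquad
    P_{\mathrm{tail}} = \frac{16\pi \rho^{2} \varepsilon \sigma^{3}}{3}
      \left[ \frac{2}{3}\left(\frac{\sigma}{r_c}\right)^{9} - \left(\frac{\sigma}{r_c}\right)^{3} \right].
    ```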

  10. Simulated tempering based on global balance or detailed balance conditions: Suwa-Todo, heat bath, and Metropolis algorithms.

    PubMed

    Mori, Yoshiharu; Okumura, Hisashi

    2015-12-05

    Simulated tempering (ST) is a useful method to enhance sampling in molecular simulations. When ST is used, the Metropolis algorithm, which satisfies the detailed balance condition, is usually applied to calculate the transition probability. Recently, an alternative method that satisfies the global balance condition instead of the detailed balance condition has been proposed by Suwa and Todo. In this study, an ST method with the Suwa-Todo algorithm is proposed. Molecular dynamics simulations with ST are performed with three algorithms (the Metropolis, heat bath, and Suwa-Todo algorithms) to calculate the transition probability. Among the three algorithms, the Suwa-Todo algorithm yields the highest acceptance ratio and the shortest autocorrelation time. These results suggest that sampling by an ST simulation with the Suwa-Todo algorithm is the most efficient. In addition, because the acceptance ratio of the Suwa-Todo algorithm is higher than that of the Metropolis algorithm, the number of temperature states can be reduced by 25% for the Suwa-Todo algorithm when compared with the Metropolis algorithm. © 2015 Wiley Periodicals, Inc.
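
    A minimal sketch of the temperature-update step in simulated tempering using the Metropolis criterion that the Suwa-Todo update is compared against; the temperature ladder, weights, and energies below are placeholders, and the Suwa-Todo global-balance update itself is not reproduced here.

    ```python
    import numpy as np

    def st_temperature_update(E, m, betas, weights, rng):
        """
        One simulated-tempering temperature move with the Metropolis criterion.
        E: current potential energy; m: index of the current inverse temperature;
        betas: inverse temperatures; weights: dimensionless weights f_m, normally
        pre-computed so that all temperatures are visited roughly equally.
        """
        n = rng.integers(0, len(betas))                    # propose a new temperature index
        if n == m:
            return m
        # stationary weight of state (x, m) is proportional to exp(-beta_m * E + f_m)
        log_ratio = -(betas[n] - betas[m]) * E + (weights[n] - weights[m])
        if np.log(rng.uniform()) < min(0.0, log_ratio):
            return n
        return m

    rng = np.random.default_rng(1)
    betas = 1.0 / np.array([300.0, 330.0, 363.0, 400.0])   # illustrative ladder (k_B = 1 units)
    weights = np.zeros_like(betas)                         # placeholder weights
    m = 0
    for _ in range(10):
        E = rng.normal(-100.0, 5.0)                        # stand-in for the instantaneous energy
        m = st_temperature_update(E, m, betas, weights, rng)
    print("final temperature index:", m)
    ```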

  11. Mathematical simulation of power conditioning systems. Volume 1: Simulation of elementary units. Report on simulation methodology

    NASA Technical Reports Server (NTRS)

    Prajous, R.; Mazankine, J.; Ippolito, J. C.

    1978-01-01

    Methods and algorithms used for the simulation of elementary power conditioning units buck, boost, and buck-boost, as well as shunt PWM are described. Definitions are given of similar converters and reduced parameters. The various parts of the simulation to be carried out are dealt with; local stability, corrective network, measurements of input-output impedance and global stability. A simulation example is given.

  12. Boundary conditions for simulating large SAW devices using ANSYS.

    PubMed

    Peng, Dasong; Yu, Fengqi; Hu, Jian; Li, Peng

    2010-08-01

    In this report, we propose improved substrate left and right boundary conditions for simulating SAW devices using ANSYS. Compared with previous methods, the proposed method can greatly reduce computation time. Furthermore, the longer the distance from the first reflector to the last one, the greater the reduction in computation time. To verify the proposed method, a design example is presented with a device center frequency of 971.14 MHz.

  13. Broadband impedance boundary conditions for the simulation of sound propagation in the time domain.

    PubMed

    Bin, Jonghoon; Yousuff Hussaini, M; Lee, Soogab

    2009-02-01

    An accurate and practical surface impedance boundary condition in the time domain has been developed for application to broadband-frequency simulation in aeroacoustic problems. To show the capability of this method, two kinds of numerical simulations are performed and compared with the analytical/experimental results: one is acoustic wave reflection by a monopole source over an impedance surface and the other is acoustic wave propagation in a duct with a finite impedance wall. Both single-frequency and broadband-frequency simulations are performed within the framework of linearized Euler equations. A high-order dispersion-relation-preserving finite-difference method and a low-dissipation, low-dispersion Runge-Kutta method are used for spatial discretization and time integration, respectively. The results show excellent agreement with the analytical/experimental results at various frequencies. The method accurately predicts both the amplitude and the phase of acoustic pressure and ensures the well-posedness of the broadband time-domain impedance boundary condition.

  14. Preliminary Computational Fluid Dynamics (CFD) Simulation of EIIB Push Barge in Shallow Water

    NASA Astrophysics Data System (ADS)

    Beneš, Petr; Kollárik, Róbert

    2011-12-01

    This study presents a preliminary CFD simulation of an EIIb push barge in inland conditions using the CFD software Ansys Fluent. RANSE (Reynolds-Averaged Navier-Stokes Equation) methods are used for the viscous solution of the turbulent flow around the ship hull. Different RANSE methods are compared in ship resistance calculations in order to select the appropriate methods and discard the inappropriate ones. This study further describes the creation of a geometrical model that considers the exact water-depth-to-vessel-draft ratio in shallow water conditions, grid generation, the setup of the mathematical model in Fluent, and the evaluation of the simulation results.

  15. Filter method without boundary-value condition for simultaneous calculation of eigenfunction and eigenvalue of a stationary Schrödinger equation on a grid.

    PubMed

    Nurhuda, M; Rouf, A

    2017-09-01

    The paper presents a method for simultaneous computation of eigenfunction and eigenvalue of the stationary Schrödinger equation on a grid, without imposing boundary-value condition. The method is based on the filter operator, which selects the eigenfunction from wave packet at the rate comparable to δ function. The efficacy and reliability of the method are demonstrated by comparing the simulation results with analytical or numerical solutions obtained by using other methods for various boundary-value conditions. It is found that the method is robust, accurate, and reliable. Further prospect of filter method for simulation of the Schrödinger equation in higher-dimensional space will also be highlighted.

  16. Wall interference and boundary simulation in a transonic wind tunnel with a discretely slotted test section

    NASA Technical Reports Server (NTRS)

    Al-Saadi, Jassim A.

    1993-01-01

    A computational simulation of a transonic wind tunnel test section with longitudinally slotted walls is developed and described herein. The nonlinear slot model includes dynamic pressure effects and a plenum pressure constraint, and each slot is treated individually. The solution is performed using a finite-difference method that solves an extended transonic small disturbance equation. The walls serve as the outer boundary conditions in the relaxation technique, and an interaction procedure is used at the slotted walls. Measured boundary pressures are not required to establish the wall conditions but are currently used to assess the accuracy of the simulation. This method can also calculate a free-air solution as well as solutions that employ the classical homogeneous wall conditions. The simulation is used to examine two commercial transport aircraft models at a supercritical Mach number for zero-lift and cruise conditions. Good agreement between measured and calculated wall pressures is obtained for the model geometries and flow conditions examined herein. Some localized disagreement is noted, which is attributed to improper simulation of viscous effects in the slots.

  17. A training image evaluation and selection method based on minimum data event distance for multiple-point geostatistics

    NASA Astrophysics Data System (ADS)

    Feng, Wenjie; Wu, Shenghe; Yin, Yanshu; Zhang, Jiajia; Zhang, Ke

    2017-07-01

    A training image (TI) can be regarded as a database of spatial structures and their low to higher order statistics used in multiple-point geostatistics (MPS) simulation. Presently, there are a number of methods to construct a series of candidate TIs (CTIs) for MPS simulation based on a modeler's subjective criteria. The spatial structures of TIs often differ, meaning that the compatibilities of different CTIs with the conditioning data are also different. Therefore, evaluation and optimal selection of CTIs before MPS simulation is essential. This paper proposes a CTI evaluation and optimal selection method based on minimum data event distance (MDevD). In the proposed method, a set of MDevD properties are established through calculation of the MDevD of conditioning data events in each CTI. Then, CTIs are evaluated and ranked according to the mean value and variance of the MDevD properties. The smaller the mean value and variance of an MDevD property are, the more compatible the corresponding CTI is with the conditioning data. In addition, data events with low compatibility in the conditioning data grid can be located to help modelers select a set of complementary CTIs for MPS simulation. The MDevD property can also help to narrow the range of the distance threshold for MPS simulation. The proposed method was evaluated using three examples: a 2D categorical example, a 2D continuous example, and an actual 3D oil reservoir case study. To illustrate the method, a C++ implementation is attached to the paper.
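
    A minimal sketch of the MDevD calculation described above, assuming 2-D categorical grids and a fixed template of offsets; the distance used here is simply the mismatch fraction over the informed template nodes, and the helper names are illustrative rather than the authors' implementation.

    ```python
    import numpy as np

    def min_data_event_distance(ti, data_event, offsets):
        """
        Smallest mismatch fraction between one conditioning data event and all
        data events of the same geometry extracted from a categorical TI.
        data_event: values at the template nodes (NaN = node not informed).
        offsets: (k, 2) array of template offsets relative to the event centre.
        """
        informed = ~np.isnan(data_event)
        rows, cols = ti.shape
        rmax, cmax = np.abs(offsets).max(axis=0)
        best = 1.0
        for r in range(rmax, rows - rmax):
            for c in range(cmax, cols - cmax):
                vals = ti[r + offsets[:, 0], c + offsets[:, 1]]
                best = min(best, np.mean(vals[informed] != data_event[informed]))
                if best == 0.0:
                    return 0.0                             # perfect replicate found
        return best

    def rank_candidate_tis(ti_list, data_events, offsets):
        """Mean and variance of the MDevD property for each candidate TI (smaller is better)."""
        scores = []
        for ti in ti_list:
            d = [min_data_event_distance(ti, ev, offsets) for ev in data_events]
            scores.append((np.mean(d), np.var(d)))
        return scores

    offsets = np.array([(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)])   # 3x3 template
    rng = np.random.default_rng(0)
    tis = [rng.integers(0, 2, (60, 60)) for _ in range(3)]                     # stand-in candidate TIs
    event = np.array([1, np.nan, 0, 1, np.nan, 0, np.nan, 1, 0], dtype=float)
    print(rank_candidate_tis(tis, [event], offsets))
    ```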

  18. Backward-stochastic-differential-equation approach to modeling of gene expression

    NASA Astrophysics Data System (ADS)

    Shamarova, Evelina; Chertovskih, Roman; Ramos, Alexandre F.; Aguiar, Paulo

    2017-03-01

    In this article, we introduce a backward method to model stochastic gene expression and protein-level dynamics. The protein amount is regarded as a diffusion process and is described by a backward stochastic differential equation (BSDE). Unlike many other SDE techniques proposed in the literature, the BSDE method is backward in time; that is, instead of initial conditions it requires the specification of end-point ("final") conditions, in addition to the model parametrization. To validate our approach we employ Gillespie's stochastic simulation algorithm (SSA) to generate (forward) benchmark data, according to predefined gene network models. Numerical simulations show that the BSDE method is able to correctly infer the protein-level distributions that preceded a known final condition, obtained originally from the forward SSA. This makes the BSDE method a powerful systems biology tool for time-reversed simulations, allowing, for example, the assessment of the biological conditions (e.g., protein concentrations) that preceded an experimentally measured event of interest (e.g., mitosis, apoptosis, etc.).

  19. Backward-stochastic-differential-equation approach to modeling of gene expression.

    PubMed

    Shamarova, Evelina; Chertovskih, Roman; Ramos, Alexandre F; Aguiar, Paulo

    2017-03-01

    In this article, we introduce a backward method to model stochastic gene expression and protein-level dynamics. The protein amount is regarded as a diffusion process and is described by a backward stochastic differential equation (BSDE). Unlike many other SDE techniques proposed in the literature, the BSDE method is backward in time; that is, instead of initial conditions it requires the specification of end-point ("final") conditions, in addition to the model parametrization. To validate our approach we employ Gillespie's stochastic simulation algorithm (SSA) to generate (forward) benchmark data, according to predefined gene network models. Numerical simulations show that the BSDE method is able to correctly infer the protein-level distributions that preceded a known final condition, obtained originally from the forward SSA. This makes the BSDE method a powerful systems biology tool for time-reversed simulations, allowing, for example, the assessment of the biological conditions (e.g., protein concentrations) that preceded an experimentally measured event of interest (e.g., mitosis, apoptosis, etc.).
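
    A minimal Gillespie SSA sketch of the kind used above to generate the forward benchmark trajectories, here for a one-gene birth-death model (protein production and degradation only); the rate constants are illustrative and the BSDE machinery itself is not shown.

    ```python
    import numpy as np

    def gillespie_birth_death(k_prod=10.0, k_deg=0.1, x0=0, t_end=100.0, seed=0):
        """Stochastic simulation algorithm for production (0 -> X) and degradation (X -> 0)."""
        rng = np.random.default_rng(seed)
        t, x = 0.0, x0
        times, counts = [t], [x]
        while t < t_end:
            a = np.array([k_prod, k_deg * x])              # reaction propensities
            a0 = a.sum()
            if a0 == 0.0:
                break
            t += rng.exponential(1.0 / a0)                 # waiting time to the next reaction
            x += 1 if rng.uniform() < a[0] / a0 else -1    # which reaction fired
            times.append(t)
            counts.append(x)
        return np.array(times), np.array(counts)

    times, counts = gillespie_birth_death()
    print("final protein copy number:", counts[-1])        # fluctuates around k_prod / k_deg = 100
    ```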

  20. Data-driven train set crash dynamics simulation

    NASA Astrophysics Data System (ADS)

    Tang, Zhao; Zhu, Yunrui; Nie, Yinyu; Guo, Shihui; Liu, Fengjia; Chang, Jian; Zhang, Jianjun

    2017-02-01

    Traditional finite element (FE) methods are computationally expensive for simulating train crashes. Their high computational cost limits their direct application in investigating the dynamic behaviour of an entire train set for crashworthiness design and structural optimisation. On the contrary, multi-body modelling is widely used because of its low computational cost, with a trade-off in accuracy. In this study, a data-driven train crash modelling method is proposed to improve the performance of a multi-body dynamics simulation of a train set crash without increasing the computational burden. This is achieved by the parallel random forest algorithm, a machine learning approach that extracts useful patterns from force-displacement curves and predicts a force-displacement relation in a given collision condition from a collection of offline FE simulation data on various collision conditions, namely different crash velocities in our analysis. Using the FE simulation results as a benchmark, we compared our method with traditional multi-body modelling methods; the results show that our data-driven method improves the accuracy over traditional multi-body models in train crash simulation and runs at the same level of efficiency.
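
    A minimal sketch of the surrogate idea, assuming scikit-learn is available: a random forest is trained offline on force samples indexed by crash velocity and crush displacement, then queried inside the multi-body time loop in place of the FE contact force. The training data below are synthetic placeholders, not the authors' FE results.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # synthetic stand-in for offline FE results: force as a function of
    # (crash velocity, crush displacement); real data would come from FE runs
    v = rng.uniform(5.0, 25.0, 2000)                        # impact velocity, m/s
    d = rng.uniform(0.0, 0.5, 2000)                         # crush displacement, m
    force = 1.0e6 * d * (1.0 + 0.05 * v) + rng.normal(0.0, 1.0e4, 2000)   # contact force, N

    model = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)
    model.fit(np.column_stack([v, d]), force)               # forest fitted in parallel

    def contact_force(velocity, displacement):
        """Surrogate force lookup used by the multi-body solver at each time step."""
        return float(model.predict([[velocity, displacement]])[0])

    print(contact_force(15.0, 0.2))                         # roughly 3.5e5 N for this toy data
    ```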

  1. Estimation of stereovision in conditions of blurring simulation

    NASA Astrophysics Data System (ADS)

    Krumina, Gunta; Ozolinsh, Maris; Lacis, Ivazs; Lyakhovetskii, Vsevolod

    2005-08-01

    The aim of this study was to evaluate the simulation of eye pathologies, such as amblyopia and cataracts, to estimate stereovision in artificial conditions, and to compare the stereothreshold results obtained in artificial and real pathologic conditions. Characteristic of the above-mentioned real-life forms of reduced vision is a blurred image in one of the eyes. The blurring was simulated by (i) defocusing, (ii) blurred stimuli on the screen, and (iii) occluding of an eye with PLZT or PDLC plates. When comparing the methods, two parameters were used: the subject's visual acuity and the modulation depth of the image. The eye occluder method appeared to systematically provide higher stereothreshold values than the rest of the methods. The PLZT and PDLC plates scattered more in the blue and decreased the contrast of the stimuli when the blurring degree was increased. In the eye occluder method, the stereothreshold increased faster than in the defocusing and monitor stimuli methods when the visual acuity difference was higher than 0.4. It has been shown that the PLZT and PDLC plates are good optical phantoms for the simulation of a cataract, while the defocusing and monitor stimuli methods are more suitable for amblyopia.

  2. Multi-resolution MPS method

    NASA Astrophysics Data System (ADS)

    Tanaka, Masayuki; Cardoso, Rui; Bahai, Hamid

    2018-04-01

    In this work, the Moving Particle Semi-implicit (MPS) method is enhanced for multi-resolution problems with different resolutions in different parts of the domain, utilising a particle splitting algorithm for the finer resolution and a particle merging algorithm for the coarser resolution. The Least Square MPS (LSMPS) method is used for higher stability and accuracy. Novel boundary conditions are developed for the treatment of wall and pressure boundaries for the Multi-Resolution LSMPS method. A wall is represented by polygons for effective simulation of fluid flows with complex wall geometries, and the pressure boundary condition allows arbitrary inflow and outflow, making the method easier to use in simulations of channel flows. By conducting simulations of channel flows and free surface flows, the accuracy of the proposed method was verified.

  3. Effect of two sweating simulation methods on clothing evaporative resistance in a so-called isothermal condition.

    PubMed

    Lu, Yehu; Wang, Faming; Peng, Hui

    2016-07-01

    The effect of sweating simulation methods on clothing evaporative resistance was investigated in a so-called isothermal condition (T manikin  = T a  = T r ). Two sweating simulation methods, namely, the pre-wetted fabric "skin" (PW) and the water supplied sweating (WS), were applied to determine clothing evaporative resistance on a "Newton" thermal manikin. Results indicated that the clothing evaporative resistance determined by the WS method was significantly lower than that measured by the PW method. In addition, the evaporative resistances measured by the two methods were correlated and exhibited a linear relationship. Validation experiments demonstrated that the empirical regression equation showed highly acceptable estimations. The study contributes to improving the accuracy of measurements of clothing evaporative resistance by means of a sweating manikin.

  4. Analysis of mixed model in gear transmission based on ADAMS

    NASA Astrophysics Data System (ADS)

    Li, Xiufeng; Wang, Yabin

    2012-09-01

    Traditional methods of mechanical gear drive simulation include the gear-pair method and the solid-to-solid contact method. The former has higher solving efficiency but lower accuracy; the latter usually obtains higher-precision results, but the calculation process is complex and does not converge easily. Currently, most research focuses on the description of geometric models and the definition of boundary conditions; however, none of it solves the problems fundamentally. To improve the simulation efficiency while ensuring high accuracy of the results, a mixed model method that uses gear tooth profiles in place of the solid gear to simulate gear movement is presented under these circumstances. In the modeling process, the solid models of the mechanism are first built in SolidWorks; then the point coordinates of the gear outline curves are collected using the SolidWorks API and fitted curves are created in Adams based on these point coordinates; next, the position of the fitted curves is adjusted according to the position of the contact area; finally, the loading conditions, boundary conditions and simulation parameters are defined. The method provides gear shape information through the tooth profile curves, simulates the meshing process through tooth-profile curve-to-curve contact, and supplies mass and inertia data via the solid gear models. This simulation process combines the two models to complete the gear driving analysis. In order to verify the validity of the presented method, both theoretical derivation and numerical simulation on a runaway escapement are conducted. The results show that the computational efficiency of the mixed model method is 1.4 times that of the traditional method with solid-to-solid contact, while the simulation results are closer to theoretical calculations. Consequently, the mixed model method has high application value for the study of the dynamics of gear mechanisms.

  5. A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses

    PubMed Central

    Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria

    2013-01-01

    Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is therefore an excellent tool for multi-scale simulations. PMID:23894367

  6. Simulating the component counts of combinatorial structures.

    PubMed

    Arratia, Richard; Barbour, A D; Ewens, W J; Tavaré, Simon

    2018-02-09

    This article describes and compares methods for simulating the component counts of random logarithmic combinatorial structures such as permutations and mappings. We exploit the Feller coupling for simulating permutations to provide a very fast method for simulating logarithmic assemblies more generally. For logarithmic multisets and selections, this approach is replaced by an acceptance/rejection method based on a particular conditioning relationship that represents the distribution of the combinatorial structure as that of independent random variables conditioned on a weighted sum. We show how to improve its acceptance rate. We illustrate the method by estimating the probability that a random mapping has no repeated component sizes, and establish the asymptotic distribution of the difference between the number of components and the number of distinct component sizes for a very general class of logarithmic structures. Copyright © 2018. Published by Elsevier Inc.
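
    The Feller coupling mentioned above admits a very short implementation for permutations: cycle lengths are the spacings between successes of independent Bernoulli trials whose success probability is one over the number of elements not yet placed. A hedged sketch of that standard construction:

    ```python
    import numpy as np
    from collections import Counter

    def permutation_cycle_counts(n, rng):
        """
        Feller coupling for a uniform random permutation of [n]: at each step the
        current cycle closes with probability 1 / (number of elements remaining),
        and the spacings between closures are the cycle lengths.
        """
        counts = Counter()
        remaining, length = n, 0
        while remaining > 0:
            length += 1
            if rng.uniform() < 1.0 / remaining:            # current cycle closes here
                counts[length] += 1
                length = 0
            remaining -= 1
        return counts                                      # counts[j] = number of j-cycles

    rng = np.random.default_rng(0)
    print(sorted(permutation_cycle_counts(100, rng).items()))
    ```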

  7. Simulations of string vibrations with boundary conditions of third kind using the functional transformation method

    NASA Astrophysics Data System (ADS)

    Trautmann, L.; Petrausch, S.; Bauer, M.

    2005-09-01

    The functional transformation method (FTM) is an established mathematical method for accurate simulation of multidimensional physical systems from various fields of science, including optics, heat and mass transfer, electrical engineering, and acoustics. It is a frequency-domain method based on the decomposition into eigenvectors and eigenfrequencies of the underlying physical problem. In this article, the FTM is applied to real-time simulations of vibrating strings which are ideally fixed at one end while the fixing at the other end is modeled by a frequency-dependent input impedance. Thus, boundary conditions of third kind are applied to the model at the end fixed with the input impedance. It is shown that accurate and stable simulations are achieved with nearly the same computational cost as with strings ideally fixed at both ends.

  8. Development of Simulation Methods in the Gibbs Ensemble to Predict Polymer-Solvent Phase Equilibria

    NASA Astrophysics Data System (ADS)

    Gartner, Thomas; Epps, Thomas; Jayaraman, Arthi

    Solvent vapor annealing (SVA) of polymer thin films is a promising method for post-deposition polymer film morphology control. The large number of important parameters relevant to SVA (polymer, solvent, and substrate chemistries, incoming film condition, annealing and solvent evaporation conditions) makes systematic experimental study of SVA a time-consuming endeavor, motivating the application of simulation and theory to the SVA system to provide both mechanistic insight and scans of this wide parameter space. However, to rigorously treat the phase equilibrium between polymer film and solvent vapor while still probing the dynamics of SVA, new simulation methods must be developed. In this presentation, we compare two methods to study polymer-solvent phase equilibrium: Gibbs Ensemble Molecular Dynamics (GEMD) and Hybrid Monte Carlo/Molecular Dynamics (Hybrid MC/MD). Liquid-vapor equilibrium results are presented for the Lennard-Jones fluid and for coarse-grained polymer-solvent systems relevant to SVA. We found that the Hybrid MC/MD method is more stable and consistent than GEMD, but GEMD has significant advantages in computational efficiency. We propose that Hybrid MC/MD simulations be used for unfamiliar systems in certain choice conditions, followed by much faster GEMD simulations to map out the remainder of the phase window.

  9. Simulating Ordinal Data

    ERIC Educational Resources Information Center

    Ferrari, Pier Alda; Barbiero, Alessandro

    2012-01-01

    The increasing use of ordinal variables in different fields has led to the introduction of new statistical methods for their analysis. The performance of these methods needs to be investigated under a number of experimental conditions. Procedures to simulate from ordinal variables are then required. In this article, we deal with simulation from…

  10. Estimating rare events in biochemical systems using conditional sampling.

    PubMed

    Sundar, V S

    2017-01-28

    The paper focuses on the development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining this probability using brute force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most of the problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
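
    A compact sketch of subset simulation for a generic rare event in a standard-normal input space, using the component-wise (modified) Metropolis step described above; in the biochemical setting the response function would wrap the mapping from standard normals to the SSA output, which is not shown here. The sample sizes, level probability, and toy limit state are illustrative.

    ```python
    import numpy as np

    def subset_simulation(g, dim, g_target, n=1000, p0=0.1, seed=0):
        """Estimate P(g(U) >= g_target), U ~ N(0, I), as a product of conditional probabilities."""
        rng = np.random.default_rng(seed)
        U = rng.standard_normal((n, dim))
        G = np.apply_along_axis(g, 1, U)
        p_f, n_keep = 1.0, int(p0 * n)
        for _ in range(20):                                 # cap on the number of levels
            idx = np.argsort(G)[::-1][:n_keep]              # seeds: the n_keep largest responses
            level = G[idx[-1]]
            if level >= g_target:                           # final level reached
                return p_f * np.mean(G >= g_target)
            p_f *= p0
            seeds_U, seeds_G = U[idx], G[idx]
            U, G = [], []                                   # regenerate n samples in {g >= level}
            for u, gu in zip(seeds_U, seeds_G):
                for _ in range(int(1 / p0)):
                    cand = u.copy()
                    for k in range(dim):                    # component-wise Metropolis proposal
                        c = u[k] + rng.normal()
                        if rng.uniform() < min(1.0, np.exp(-0.5 * (c**2 - u[k]**2))):
                            cand[k] = c
                    gc = g(cand)
                    if gc >= level:                         # accept only if still inside the subset
                        u, gu = cand, gc
                    U.append(u.copy())
                    G.append(gu)
            U, G = np.array(U), np.array(G)
        return p_f * np.mean(G >= g_target)

    # toy rare event: P(sum of 10 standard normals >= 9) is about 2e-3
    print(subset_simulation(lambda u: float(np.sum(u)), dim=10, g_target=9.0))
    ```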

  11. Image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo

    NASA Astrophysics Data System (ADS)

    Kohigashi, Satoru; Nakamae, Koji; Fujioka, Hiromu

    2005-04-01

    We develop an image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo (BPPV) that consists of the balance control system simulator, the 3D eye movement simulator, and a method for extracting the nystagmus response directly from an eye movement image sequence. In the system, the causes and conditions of BPPV are estimated by searching the database for a record matching the nystagmus response for the observed eye image sequence of the patient with BPPV. The database includes the nystagmus responses for simulated eye movement sequences. The eye movement velocity is obtained by using the balance control system simulator, which allows us to simulate BPPV under various conditions such as canalithiasis, cupulolithiasis, number of otoconia, otoconium size, and so on. The eye movement image sequence is then displayed on the CRT by the 3D eye movement simulator. The nystagmus responses are extracted from the image sequence by the proposed method and are stored in the database. In order to enhance the diagnosis accuracy, the nystagmus response for a newly simulated sequence is matched with that for the observed sequence. From the matched simulation conditions, the causes and conditions of BPPV are estimated. We apply our image-based computer-assisted diagnosis system to two real eye movement image sequences for patients with BPPV to show its validity.

  12. Dynamics Modeling and Simulation of Large Transport Airplanes in Upset Conditions

    NASA Technical Reports Server (NTRS)

    Foster, John V.; Cunningham, Kevin; Fremaux, Charles M.; Shah, Gautam H.; Stewart, Eric C.; Rivers, Robert A.; Wilborn, James E.; Gato, William

    2005-01-01

    As part of NASA's Aviation Safety and Security Program, research has been in progress to develop aerodynamic modeling methods for simulations that accurately predict the flight dynamics characteristics of large transport airplanes in upset conditions. The motivation for this research stems from the recognition that simulation is a vital tool for addressing loss-of-control accidents, including applications to pilot training, accident reconstruction, and advanced control system analysis. The ultimate goal of this effort is to contribute to the reduction of the fatal accident rate due to loss-of-control. Research activities have involved accident analyses, wind tunnel testing, and piloted simulation. Results have shown that significant improvements in simulation fidelity for upset conditions, compared to current training simulations, can be achieved using state-of-the-art wind tunnel testing and aerodynamic modeling methods. This paper provides a summary of research completed to date and includes discussion on key technical results, lessons learned, and future research needs.

  13. Numerical simulation of groundwater flow in strongly anisotropic aquifers using multiple-point flux approximation method

    NASA Astrophysics Data System (ADS)

    Lin, S. T.; Liou, T. S.

    2017-12-01

    Numerical simulation of groundwater flow in anisotropic aquifers usually suffers from a lack of accuracy in calculating groundwater flux across grid blocks. Conventional two-point flux approximation (TPFA) can only obtain the flux normal to the grid interface and completely neglects the one parallel to it. Furthermore, the hydraulic gradient in a grid block estimated from TPFA can only poorly represent the hydraulic condition near the intersection of grid blocks. These disadvantages are further exacerbated when the principal axes of hydraulic conductivity, the global coordinate system, and the grid boundary are not parallel to one another. In order to refine the estimation of the in-grid hydraulic gradient, several multiple-point flux approximation (MPFA) methods have been developed for two-dimensional groundwater flow simulations. For example, the MPFA-O method uses the hydraulic head at the junction node as an auxiliary variable which is then eliminated using the head and flux continuity conditions. In this study, a three-dimensional MPFA method will be developed for numerical simulation of groundwater flow in three-dimensional and strongly anisotropic aquifers. This new MPFA method first discretizes the simulation domain into hexahedrons. Each hexahedron is further decomposed into a certain number of tetrahedrons. The 2D MPFA-O method is then extended to these tetrahedrons, using the unknown head at the intersection of hexahedrons as an auxiliary variable along with the head and flux continuity conditions to solve for the head at the center of each hexahedron. Numerical simulations using this new MPFA method have been successfully compared with those obtained from a modified version of TOUGH2.

  14. Numerical Simulation of Two Dimensional Flows in Yazidang Reservoir

    NASA Astrophysics Data System (ADS)

    Huang, Lingxiao; Liu, Libo; Sun, Xuehong; Zheng, Lanxiang; Jing, Hefang; Zhang, Xuande; Li, Chunguang

    2018-01-01

    This paper studied the problem of water flow in the Yazidang reservoir. A 2-D RNG turbulence model was built, the boundary conditions were set, the finite volume method was used to discretize the equations, and the grid was generated by the advancing-front method. Two reservoir flow-field conditions were simulated, and the average vertical velocities of the simulated and measured values near the water inlet and the water intake were compared. The results showed that the mathematical model could be applied to similar industrial water reservoirs.

  15. Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels

    PubMed Central

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions. PMID:25874262

  16. Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.

    PubMed

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
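
    A minimal sketch of the kind of simulation this methodology enables, assuming an idealized single-diode panel model (no series or shunt resistance) built from datasheet-style quantities and driven by a basic perturb-and-observe MPPT loop; the panel numbers and step sizes are invented for illustration and are not taken from the paper.

    ```python
    import numpy as np

    # idealized single-diode panel model; I_SC, V_OC and cell count are datasheet-style inputs
    I_SC, V_OC, N_S = 8.2, 37.0, 60            # short-circuit current (A), open-circuit voltage (V), cells
    V_T = 0.0257 * 1.3 * N_S                   # thermal voltage x ideality factor x cells (about 25 degC)

    def panel_current(v):
        """Panel current from the ideal single-diode equation, clipped at zero."""
        i0 = I_SC / (np.exp(V_OC / V_T) - 1.0)             # saturation current chosen to match V_OC
        return np.maximum(I_SC - i0 * (np.exp(v / V_T) - 1.0), 0.0)

    def perturb_and_observe(v0=20.0, dv=0.2, steps=100):
        """Basic P&O MPPT: keep stepping the voltage in the direction that increases power."""
        v, p_prev, direction = v0, 0.0, 1.0
        for _ in range(steps):
            p = v * panel_current(v)
            if p < p_prev:
                direction = -direction                     # power dropped, so reverse the perturbation
            p_prev = p
            v += direction * dv
        return v, p_prev

    v_mpp, p_mpp = perturb_and_observe()
    print(f"MPP estimate: {v_mpp:.1f} V, {p_mpp:.1f} W")   # near the knee of the I-V curve
    ```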

  17. Numerical simulation of pressure fluctuation in 1000MW Francis turbine under small opening condition

    NASA Astrophysics Data System (ADS)

    Gong, R. Z.; Wang, H. G.; Yao, Y.; Shu, L. F.; Huang, Y. J.

    2012-11-01

    In order to study the cause of abnormal vibration in a large Francis turbine under the small opening condition, a CFD method was adopted to analyze the flow field and pressure fluctuation. Numerical simulation was performed with the commercial CFD code Ansys FLUENT 12, using the DES method. After an effective validation of the computation result, the behaviour of the internal flow field under the small opening condition is analyzed. Pressure fluctuations in different working modes are obtained by unsteady CFD simulation, and the results are compared to study their changes. The radial force fluctuation is also analyzed. The results show that the unstable flow under the small opening condition leads to increased turbine instability in reverse pump mode and is one possible reason for the abnormal oscillation.

  18. Pilot points method for conditioning multiple-point statistical facies simulation on flow data

    NASA Astrophysics Data System (ADS)

    Ma, Wei; Jafarpour, Behnam

    2018-05-01

    We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.

  19. Galilean-invariant algorithm coupling immersed moving boundary conditions and Lees-Edwards boundary conditions

    NASA Astrophysics Data System (ADS)

    Zhou, Guofeng; Wang, Limin; Wang, Xiaowei; Ge, Wei

    2011-12-01

    Many investigators have coupled the Lees-Edwards boundary conditions (LEBCs) and suspension methods in the framework of the lattice Boltzmann method to study the pure bulk properties of particle-fluid suspensions. However, these suspension methods are all link-based and are more or less exposed to the disadvantages of violating Galilean invariance. In this paper, we have coupled LEBCs with a node-based suspension method, which is demonstrated to be Galilean invariant in benchmark simulations. We use the coupled algorithm to predict the viscosity of a particle-fluid suspension at very low Reynolds number, and the simulation results are in good agreement with the semiempirical Krieger-Dougherty formula.

  20. Simulation of Earth textures by conditional image quilting

    NASA Astrophysics Data System (ADS)

    Mahmud, K.; Mariethoz, G.; Caers, J.; Tahmasebi, P.; Baker, A.

    2014-04-01

    Training image-based approaches for stochastic simulations have recently gained attention in surface and subsurface hydrology. This family of methods allows the creation of multiple realizations of a study domain, with a spatial continuity based on a training image (TI) that contains the variability, connectivity, and structural properties deemed realistic. A major drawback of these methods is their computational and/or memory cost, making certain applications challenging. Similar methods, also based on training images or exemplars, have been proposed in computer graphics. One such method, image quilting (IQ), is introduced in this paper and adapted for hydrogeological applications. The main difficulty is that image quilting was originally not designed to produce conditional simulations and was restricted to 2-D images. In this paper, the original method developed in computer graphics has been modified to accommodate conditioning data and 3-D problems. This new conditional image quilting (CIQ) method is patch based, does not require constructing a pattern database, and can be used with both categorical and continuous training images. The main concept is to optimally cut the patches such that they overlap with minimum discontinuity. The optimal cut is determined using a dynamic programming algorithm. Conditioning is accomplished by prior selection of patches that are compatible with the conditioning data. The performance of CIQ is tested for a variety of hydrogeological test cases. The results, when compared with previous multiple-point statistics (MPS) methods, indicate an improvement in CPU time by a factor of at least 50.
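
    The patch-stitching core of image quilting, the minimum-error boundary cut, can be sketched with a short dynamic program over the overlap error surface; the sketch below handles a vertical overlap only and illustrates the principle rather than the CIQ implementation itself.

    ```python
    import numpy as np

    def min_error_boundary_cut(overlap_left, overlap_right):
        """
        Dynamic-programming seam through the squared-error surface of a vertical
        overlap region, as used in image quilting to stitch two patches together.
        Returns, for each row, the column index where the cut passes.
        """
        err = (overlap_left - overlap_right) ** 2          # (rows, overlap_width)
        rows, cols = err.shape
        cost = err.copy()
        for r in range(1, rows):
            for c in range(cols):
                lo, hi = max(c - 1, 0), min(c + 2, cols)
                cost[r, c] += cost[r - 1, lo:hi].min()     # best predecessor in the row above
        seam = np.empty(rows, dtype=int)                   # backtrack the minimal seam
        seam[-1] = int(np.argmin(cost[-1]))
        for r in range(rows - 2, -1, -1):
            c = seam[r + 1]
            lo, hi = max(c - 1, 0), min(c + 2, cols)
            seam[r] = lo + int(np.argmin(cost[r, lo:hi]))
        return seam

    rng = np.random.default_rng(0)
    a, b = rng.random((32, 6)), rng.random((32, 6))        # two overlapping patch strips
    print(min_error_boundary_cut(a, b))
    ```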

  1. First-principles simulations of heat transport

    NASA Astrophysics Data System (ADS)

    Puligheddu, Marcello; Gygi, Francois; Galli, Giulia

    2017-11-01

    Advances in understanding heat transport in solids were recently reported by both experiment and theory. However an efficient and predictive quantum simulation framework to investigate thermal properties of solids, with the same complexity as classical simulations, has not yet been developed. Here we present a method to compute the thermal conductivity of solids by performing ab initio molecular dynamics at close to equilibrium conditions, which only requires calculations of first-principles trajectories and atomic forces, thus avoiding direct computation of heat currents and energy densities. In addition the method requires much shorter sequential simulation times than ordinary molecular dynamics techniques, making it applicable within density functional theory. We discuss results for a representative oxide, MgO, at different temperatures and for ordered and nanostructured morphologies, showing the performance of the method in different conditions.

  2. specsim: A Fortran-77 program for conditional spectral simulation in 3D

    NASA Astrophysics Data System (ADS)

    Yao, Tingting

    1998-12-01

    A Fortran 77 program, specsim, is presented for conditional spectral simulation in 3D domains. The traditional Fourier integral method allows the generation of random fields with a given covariance spectrum. Conditioning to local data is achieved by an iterative identification of the conditional phase information. A flowchart is given to illustrate the implementation procedure of the program. A 3D case study is presented to demonstrate application of the program. A comparison with the traditional sequential Gaussian simulation algorithm emphasizes the advantages and drawbacks of the proposed algorithm.
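
    A minimal sketch of the unconditional part of the Fourier integral method in 1-D: white noise is filtered by the square root of the discretized spectral density of the target covariance. The iterative phase-identification step that specsim uses for conditioning is only indicated in the comments; the grid size and covariance parameters are illustrative.

    ```python
    import numpy as np

    def spectral_simulation_1d(n=256, dx=1.0, corr_len=10.0, seed=0):
        """
        Unconditional Gaussian field with an exponential covariance, generated by
        filtering white noise with the square root of the discretized spectrum.
        Conditioning, as in specsim, would follow by iteratively adjusting the
        phases until the field honours the local data.
        """
        rng = np.random.default_rng(seed)
        lags = dx * np.minimum(np.arange(n), n - np.arange(n))   # circulant lags
        cov = np.exp(-lags / corr_len)                           # exponential covariance model
        spec = np.abs(np.fft.fft(cov))                           # non-negative spectral density
        noise = np.fft.fft(rng.standard_normal(n))
        return np.fft.ifft(np.sqrt(spec) * noise).real

    z = spectral_simulation_1d()
    print(z.mean(), z.std())                                     # roughly 0 and 1 for corr_len << n*dx
    ```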

  3. Assessing the applicability of WRF optimal parameters under the different precipitation simulations in the Greater Beijing Area

    NASA Astrophysics Data System (ADS)

    Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei

    2018-03-01

    Forecasting skills of complex weather and climate models have been improved by tuning, with effective optimization methods, the sensitive parameters that exert the greatest impact on simulated results. However, whether the optimal parameter values still work when the model simulation conditions vary is a scientific problem deserving study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from 6 years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three sets of boundary data and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters in WRF simulations with different boundary conditions and spatial resolutions. Physical interpretations of the optimal parameters, indicating how to improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations predicting summer precipitation in the Greater Beijing Area because the optimal parameters are not constrained by specific precipitation events, boundary conditions, or spatial resolutions. The optimal values of the nine parameters were determined from 127 parameter samples using the ASMO method, which shows that ASMO is highly efficient for optimizing WRF model parameters.

  4. A simplified counter diffusion method combined with a 1D simulation program for optimizing crystallization conditions.

    PubMed

    Tanaka, Hiroaki; Inaka, Koji; Sugiyama, Shigeru; Takahashi, Sachiko; Sano, Satoshi; Sato, Masaru; Yoshitomi, Susumu

    2004-01-01

    A new protein crystallization method has been developed using a simplified counter-diffusion technique for optimizing crystallization conditions. It is composed of only a single capillary, gel in a silicone tube, and a screw-top test tube, all of which are readily available in the laboratory. One capillary can continuously scan a wide range of crystallization conditions (combinations of precipitant and protein concentrations) unless crystallization occurs, which means that it corresponds to many drops in the vapor-diffusion method. The amounts of precipitant and protein solution can be much smaller than in conventional methods. In this study, lysozyme and alpha-amylase were used as model proteins to demonstrate the efficiency of the method. In addition, one-dimensional (1-D) simulations of the crystal growth were performed based on a 1-D diffusion model. The optimized conditions can be applied as initial crystallization conditions both for other counter-diffusion methods with the Granada Crystallization Box (GCB) and for the vapor-diffusion method after some modification.
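
    As a minimal stand-in for the 1-D diffusion model mentioned above, the following Python sketch integrates precipitant diffusion along a capillary with an explicit finite-difference scheme; the geometry, diffusivity, and boundary conditions are illustrative assumptions, not the parameters used in the study.

        import numpy as np

        def capillary_diffusion_1d(length=0.05, nx=200, D=1.0e-9, t_end=86400.0):
            # Explicit 1-D diffusion of precipitant along a capillary: one end held at
            # a reservoir concentration of 1, the far end closed (zero flux).
            dx = length / (nx - 1)
            dt = 0.4 * dx * dx / D                # below the explicit stability limit dx^2/(2D)
            c = np.zeros(nx)
            c[0] = 1.0                            # gel/precipitant reservoir boundary
            for _ in range(int(t_end / dt)):
                c[1:-1] += D * dt / dx ** 2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
                c[-1] = c[-2]                     # zero-flux condition at the closed end
            return c

        profile = capillary_diffusion_1d()        # concentration profile after one day
        print(profile[::50].round(3))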

  5. Intercomparison of methods of coupling between convection and large-scale circulation. 1. Comparison over uniform surface conditions

    DOE PAGES

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...

    2015-10-24

    Here, as part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak-temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed systematic comparison of the behavior of different models under a consistent implementation of the WTG method and the DGW method and systematic comparison of the WTG and DGW methods in models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both WTG and DGW methods. Some of the models reproduce the reference state while others sustain a large-scale circulation which results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce different signed circulation. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivities to the initial moisture conditions occur for multiple stable equilibria in some WTG simulations, corresponding to either a dry equilibrium state when initialized as dry or a precipitating equilibrium state when initialized as moist. Multiple equilibria are seen in more WTG simulations for higher SST. In some models, the existence of multiple equilibria is sensitive to some parameters in the WTG calculations.

  6. Some Dimensions of Simulation.

    ERIC Educational Resources Information Center

    Beck, Isabel; Monroe, Bruce

    Beginning with definitions of "simulation" (a methodology for testing alternative decisions under hypothetical conditions), this paper focuses on the use of simulation as an instructional method, pointing out the relationships and differences between role playing, games, and simulation. The term "simulation games" is explored with an analysis of…

  7. The study of combining Latin Hypercube Sampling method and LU decomposition method (LULHS method) for constructing spatial random field

    NASA Astrophysics Data System (ADS)

    WANG, P. T.

    2015-12-01

    Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. The hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure from LHS with simulation using LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared to three other simulation methods. The results show that for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort: fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
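
    A minimal sketch of the unconditional LU/LHS combination described above, assuming an exponential covariance on a 1-D grid: LHS-stratified uniforms are converted to standard-normal scores and multiplied by the Cholesky (LU) factor of the covariance matrix. The grid, covariance model, and sample sizes are illustrative.

        import numpy as np
        from scipy.stats import norm

        def lhs_uniform(n_samples, n_vars, rng):
            # Latin Hypercube Sampling: one stratified draw per interval for every variable.
            u = (rng.random((n_samples, n_vars)) + np.arange(n_samples)[:, None]) / n_samples
            for j in range(n_vars):
                u[:, j] = rng.permutation(u[:, j])
            return u

        def lulhs_realizations(coords, corr_len, n_real, seed=0):
            # Unconditional Gaussian realizations: the Cholesky (LU) factor of the covariance
            # matrix is applied to LHS-stratified standard-normal scores.
            rng = np.random.default_rng(seed)
            d = np.abs(coords[:, None] - coords[None, :])        # pairwise distances, 1-D grid
            cov = np.exp(-d / corr_len)                          # exponential covariance model
            L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(coords)))
            z = norm.ppf(lhs_uniform(n_real, len(coords), rng))  # normal scores, (n_real, n_nodes)
            return z @ L.T                                       # each row is one realization

        coords = np.linspace(0.0, 100.0, 50)
        fields = lulhs_realizations(coords, corr_len=20.0, n_real=10)
        print(fields.shape)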

  8. Folding free energy surfaces of three small proteins under crowding: validation of the postprocessing method by direct simulation

    NASA Astrophysics Data System (ADS)

    Qin, Sanbo; Mittal, Jeetain; Zhou, Huan-Xiang

    2013-08-01

    We have developed a ‘postprocessing’ method for modeling biochemical processes such as protein folding under crowded conditions (Qin and Zhou 2009 Biophys. J. 97 12-19). In contrast to the direct simulation approach, in which the protein undergoing folding is simulated along with crowders, the postprocessing method requires only the folding simulation without crowders. The influence of the crowders is then obtained by taking conformations from the crowder-free simulation and calculating the free energies of transferring them into the crowders. This postprocessing yields the folding free energy surface of the protein under crowding. Here the postprocessing results for the folding of three small proteins under ‘repulsive’ crowding are validated against those obtained previously by the direct simulation approach (Mittal and Best 2010 Biophys. J. 98 315-20). This validation confirms the accuracy of the postprocessing approach and highlights its distinct advantages in modeling biochemical processes under cell-like crowded conditions, such as enabling an atomistic representation of the test proteins.

  9. Methods for Stem Cell Production and Therapy

    NASA Technical Reports Server (NTRS)

    Valluri, Jagan V. (Inventor); Claudio, Pier Paolo (Inventor)

    2015-01-01

    The present invention relates to methods for rapidly expanding a stem cell population with or without culture supplements in simulated microgravity conditions. The present invention relates to methods for rapidly increasing the life span of stem cell populations without culture supplements in simulated microgravity conditions. The present invention also relates to methods for increasing the sensitivity of cancer stem cells to chemotherapeutic agents by culturing the cancer stem cells under microgravity conditions and in the presence of omega-3 fatty acids. The methods of the present invention can also be used to proliferate cancer cells by culturing them in the presence of omega-3 fatty acids. The present invention also relates to methods for testing the sensitivity of cancer cells and cancer stem cells to chemotherapeutic agents by culturing the cancer cells and cancer stem cells under microgravity conditions. The methods of the present invention can also be used to produce tissue for use in transplantation by culturing stem cells or cancer stem cells under microgravity conditions. The methods of the present invention can also be used to produce cellular factors and growth factors by culturing stem cells or cancer stem cells under microgravity conditions. The methods of the present invention can also be used to produce cellular factors and growth factors to promote differentiation of cancer stem cells under microgravity conditions.

  10. Does mental illness stigma contribute to adolescent standardized patients' discomfort with simulations of mental illness and adverse psychosocial experiences?

    PubMed

    Hanson, Mark D; Johnson, Samantha; Niec, Anne; Pietrantonio, Anna Marie; High, Bradley; MacMillan, Harriet; Eva, Kevin W

    2008-01-01

    Adolescent mental illness stigma-related factors may contribute to adolescent standardized patients' (ASP) discomfort with simulations of psychiatric conditions/adverse psychosocial experiences. Paradoxically, however, ASP involvement may provide a stigma-reduction strategy. This article reports an investigation of this hypothetical association between simulation discomfort and mental illness stigma. ASPs were randomly assigned to one of two simulation conditions: one was associated with mental illness stigma and one was not. ASP training methods included carefully written case simulations, educational materials, and active teaching methods. After training, ASPs completed the adapted Project Role Questionnaire to rate anticipated role discomfort with hypothetical adolescent psychiatric conditions/adverse psychosocial experiences and to respond to open-ended questions regarding this discomfort. A mixed design ANOVA was used to compare comfort levels across simulation conditions. Narrative responses to an open-ended question were reviewed for relevant themes. Twenty-four ASPs participated. A significant effect of simulation was observed, indicating that ASPs participating in the simulation associated with mental illness stigma anticipated greater comfort with portraying subsequent stigma-associated roles than did ASPs in the simulation not associated with stigma. ASPs' narrative responses regarding their reasons for anticipating discomfort focused upon the role of knowledge-related factors. ASPs' work with a psychiatric case simulation was associated with greater anticipated comfort with hypothetical simulations of psychiatric/adverse psychosocial conditions in comparison to ASPs lacking a similar work experience. The ASPs provided explanations for this anticipated discomfort that were suggestive of stigma-related knowledge factors. This preliminary research suggests an association between ASP anticipated role discomfort and mental illness stigma, and that ASP work may contribute to stigma reduction.

  11. Numeric simulation model for long-term orthodontic tooth movement with contact boundary conditions using the finite element method.

    PubMed

    Hamanaka, Ryo; Yamaoka, Satoshi; Anh, Tuan Nguyen; Tominaga, Jun-Ya; Koga, Yoshiyuki; Yoshida, Noriaki

    2017-11-01

    Although many attempts have been made to simulate orthodontic tooth movement using the finite element method, most were limited to analyses of the initial displacement in the periodontal ligament and were insufficient to evaluate the effect of orthodontic appliances on long-term tooth movement. Numeric simulation of long-term tooth movement was performed in some studies; however, neither the play between the brackets and archwire nor the interproximal contact forces were considered. The objectives of this study were to simulate long-term orthodontic tooth movement with the edgewise appliance by incorporating those contact conditions into the finite element model and to determine the force system when the space is closed with sliding mechanics. We constructed a 3-dimensional model of maxillary dentition with 0.022-in brackets and 0.019 × 0.025-in archwire. Forces of 100 cN simulating sliding mechanics were applied. The simulation was accomplished on the assumption that bone remodeling correlates with the initial tooth displacement. This method successfully represented the changes in the moment-to-force ratio and the resulting tooth movement pattern during space closure. We developed a novel method that could simulate the long-term orthodontic tooth movement and accurately determine the force system in the course of time by incorporating contact boundary conditions into finite element analysis. It was also suggested that friction is progressively increased during space closure in sliding mechanics. Copyright © 2017. Published by Elsevier Inc.

  12. Design and landing dynamic analysis of reusable landing leg for a near-space manned capsule

    NASA Astrophysics Data System (ADS)

    Yue, Shuai; Nie, Hong; Zhang, Ming; Wei, Xiaohui; Gan, Shengyong

    2018-06-01

    To improve the landing performance of a near-space manned capsule under various landing conditions, a novel landing system is designed that employs double chamber and single chamber dampers in the primary and auxiliary struts, respectively. A dynamic model of the landing system is established, and the damper parameters are determined by employing the design method. A single-leg drop test with different initial pitch angles is then conducted to compare and validate the simulation model. Based on the validated simulation model, seven critical landing conditions regarding nine crucial landing responses are found by combining the radial basis function (RBF) surrogate model and adaptive simulated annealing (ASA) optimization method. Subsequently, the adaptability of the landing system under critical landing conditions is analyzed. The results show that the simulation results effectively match the test results, which validates the accuracy of the dynamic model. In addition, all of the crucial responses under their corresponding critical landing conditions satisfy the design specifications, demonstrating the feasibility of the landing system.

  13. Comparison of Two Global Sensitivity Analysis Methods for Hydrologic Modeling over the Columbia River Basin

    NASA Astrophysics Data System (ADS)

    Hameed, M.; Demirel, M. C.; Moradkhani, H.

    2015-12-01

    The Global Sensitivity Analysis (GSA) approach helps identify the influence of model parameters or inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one year, four years, and seven years. Four factors are considered and evaluated using the two sensitivity analysis methods: the simulation length, the parameter range, the model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential and 2) how the methods cohere in ranking these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that the FAST method is sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
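
    For readers unfamiliar with the Sobol' side of such comparisons, the sketch below implements a first-order Sobol' index estimator in plain NumPy using the pick-and-freeze construction; the toy model, sample sizes, and uniform input ranges are assumptions for illustration and are unrelated to SAC-SMA.

        import numpy as np

        def sobol_first_order(model, n_params, n_samples=8192, seed=0):
            # First-order Sobol' indices via the pick-and-freeze estimator (Saltelli et al., 2010).
            rng = np.random.default_rng(seed)
            A = rng.random((n_samples, n_params))
            B = rng.random((n_samples, n_params))
            yA, yB = model(A), model(B)
            var_y = np.var(np.concatenate([yA, yB]))
            s1 = np.empty(n_params)
            for i in range(n_params):
                AB_i = A.copy()
                AB_i[:, i] = B[:, i]              # freeze factor i at the B values
                s1[i] = np.mean(yB * (model(AB_i) - yA)) / var_y
            return s1

        # Toy model y = x1 + 2*x2 on U(0,1)^2: analytic first-order indices are 0.2 and 0.8.
        print(sobol_first_order(lambda x: x[:, 0] + 2.0 * x[:, 1], n_params=2).round(2))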

  14. New method of processing heat treatment experiments with numerical simulation support

    NASA Astrophysics Data System (ADS)

    Kik, T.; Moravec, J.; Novakova, I.

    2017-08-01

    In this work, the benefits of combining modern software for numerical simulation of welding processes with laboratory research are described. A newly proposed method of processing heat treatment experiments, which yields relevant input data for numerical simulations of the heat treatment of large parts, is presented. It is now possible, using experiments on small test samples, to simulate cooling conditions comparable with the cooling of bigger parts. Results from this method of testing make the boundary conditions used for the real cooling process more accurate, and can also be used to improve software databases and to optimize computational models. The aim is to make the computation of temperature fields for large hardened parts more precise, based on a new method for determining the temperature dependence of the heat transfer coefficient into the hardening medium for the particular material, a defined maximal thickness of the processed part, and the cooling conditions. In the paper we also present a comparison of standard and modified (according to the newly suggested methodology) heat transfer coefficient data and their influence on the simulation results. It shows how even small changes mainly influence the distributions of temperature, metallurgical phases, hardness, and stresses. With this experiment it is also possible to obtain not only input data and data enabling optimization of the computational model, but at the same time verification data. The greatest advantage of the described method is its independence of the type of cooling medium used.

  15. Simulation of car movement along circular path

    NASA Astrophysics Data System (ADS)

    Fedotov, A. I.; Tikhov-Tinnikov, D. A.; Ovchinnikova, N. I.; Lysenko, A. V.

    2017-10-01

    Under operating conditions, suspension system performance changes, which negatively affects vehicle stability and handling. The paper aims to simulate the impact of changes in suspension system performance on vehicle stability and handling. Methods: the paper describes monitoring of suspension system performance and testing of vehicle stability and handling, and analyzes methods of monitoring suspension system performance under operating conditions. A mathematical model of car movement along a circular path was developed, together with mathematical tools describing the circular movement of a vehicle along a horizontal road. Turning movements of the car were simulated, and calculation and experiment results were compared. The simulation confirms the applicability of the mathematical model for assessing the impact of suspension system performance on vehicle stability and handling.

  16. Construction and simulation of a novel continuous traffic flow model

    NASA Astrophysics Data System (ADS)

    Hwang, Yao-Hsin; Yu, Jui-Ling

    2017-12-01

    In this paper, we propose a novel mathematical model for traffic flow and apply a newly developed characteristic particle method to solve the associated governing equations. Compared with existing non-equilibrium higher-order traffic flow models, the present one is formulated to satisfy three conditions: it preserves the equilibrium state in smooth regions, it yields anisotropic propagation of traffic flow information, and it is expressed in conservation-law form for the traffic momentum. These conditions ensure a more physically realistic simulation of traffic flow: the current traffic state is not influenced by conditions behind it, and the model yields an unambiguous condition across a traffic shock. Through analyses of the characteristics, the stability condition, and the steady-state solution of the equation system, it is shown that the proposed model indeed conforms to these conditions. Furthermore, the model can be cast into its characteristic form which, incorporated with the Rankine-Hugoniot relation, is suitable for simulation by the characteristic particle method to obtain accurate computational results.

  17. NMR diffusion simulation based on conditional random walk.

    PubMed

    Gudbjartsson, H; Patz, S

    1995-01-01

    The authors introduce a new, very fast simulation method for free diffusion in a linear magnetic field gradient, which is an extension of the conventional Monte Carlo (MC) method or the convolution method described by Wong et al. (in 12th SMRM, New York, 1993, p.10). In earlier NMR-diffusion simulation methods, such as the finite difference (FD) method, the Monte Carlo method, and the deterministic convolution method, the outcome of the calculations depends on the simulation time step. In the authors' method, however, the results are independent of the time step, although in the convolution method the step size has to be large enough for spins to diffuse to adjacent grid points. By always selecting the largest possible time step, the computation time can therefore be reduced. Finally, the authors point out that in simple geometric configurations their simulation algorithm can be used to reduce computation time in the simulation of restricted diffusion.
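
    For context, the sketch below shows the conventional fixed-time-step Monte Carlo approach that the abstract improves upon: spins random-walk freely in a constant gradient, accumulate phase, and the resulting signal attenuation is compared with the analytic result exp(-gamma^2 G^2 D T^3 / 3). Gradient strength, diffusivity, and spin counts are illustrative values.

        import numpy as np

        def mc_free_diffusion_signal(D=2.0e-9, G=0.02, T=0.02, n_steps=2000, n_spins=20000, seed=0):
            # Conventional fixed-step Monte Carlo: spins random-walk freely while a constant
            # gradient G (T/m) is on for a time T (s); each spin accumulates phase
            # d(phi) = gamma * G * x(t) * dt.
            gamma = 2.675e8                       # proton gyromagnetic ratio, rad/(s*T)
            dt = T / n_steps
            rng = np.random.default_rng(seed)
            x = np.zeros(n_spins)
            phase = np.zeros(n_spins)
            for _ in range(n_steps):
                x += rng.normal(scale=np.sqrt(2.0 * D * dt), size=n_spins)
                phase += gamma * G * x * dt
            signal = np.abs(np.mean(np.exp(1j * phase)))
            analytic = np.exp(-(gamma * G) ** 2 * D * T ** 3 / 3.0)
            return signal, analytic

        print(mc_free_diffusion_signal())         # the two values should agree closely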

  18. A path-level exact parallelization strategy for sequential simulation

    NASA Astrophysics Data System (ADS)

    Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.

    2018-01-01

    Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is to generate identical realizations as with the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedup results in the best scenarios using 16 threads of execution in a single machine.
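
    The sketch below is a deliberately small, serial sequential Gaussian simulation on a 1-D grid (simple kriging, exponential covariance, all informed nodes used as neighbours); it is meant only to make the visiting-path structure that the parallelization strategy exploits concrete, and none of the parameters correspond to the GSLIB codes discussed above.

        import numpy as np

        def sgs_1d(x, data_x, data_val, corr_len=10.0, seed=0):
            # Minimal sequential Gaussian simulation on a 1-D grid: simple kriging with a
            # zero mean, unit sill, exponential covariance, using every informed node as a
            # neighbour (so only practical for small grids).
            rng = np.random.default_rng(seed)
            cov = lambda h: np.exp(-np.abs(h) / corr_len)
            known_x, known_v = list(data_x), list(data_val)
            sim = np.empty(len(x))
            for idx in rng.permutation(len(x)):               # random simulation path
                if known_x:
                    kx, kv = np.array(known_x), np.array(known_v)
                    C = cov(kx[:, None] - kx[None, :])
                    c0 = cov(kx - x[idx])
                    w = np.linalg.solve(C + 1e-10 * np.eye(len(kx)), c0)
                    mean, var = w @ kv, max(1.0 - w @ c0, 0.0)
                else:
                    mean, var = 0.0, 1.0
                sim[idx] = rng.normal(mean, np.sqrt(var))     # draw and treat as hard data
                known_x.append(x[idx])
                known_v.append(sim[idx])
            return sim

        grid = np.linspace(0.0, 100.0, 60)
        realization = sgs_1d(grid, data_x=[20.0, 70.0], data_val=[1.2, -0.8])
        print(realization[:5].round(2))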

  19. Discontinuous Galerkin Methods for Turbulence Simulation

    NASA Technical Reports Server (NTRS)

    Collis, S. Scott

    2002-01-01

    A discontinuous Galerkin (DG) method is formulated, implemented, and tested for simulation of compressible turbulent flows. The method is applied to turbulent channel flow at low Reynolds number, where it is found to successfully predict low-order statistics with fewer degrees of freedom than traditional numerical methods. This reduction is achieved by utilizing local hp-refinement such that the computational grid is refined simultaneously in all three spatial coordinates with decreasing distance from the wall. Another advantage of DG is that Dirichlet boundary conditions can be enforced weakly through integrals of the numerical fluxes. Both for a model advection-diffusion problem and for turbulent channel flow, weak enforcement of wall boundaries is found to improve results at low resolution. Such weak boundary conditions may play a pivotal role in wall modeling for large-eddy simulation.

  20. Modeling of shock wave propagation in large amplitude ultrasound.

    PubMed

    Pinton, Gianmarco F; Trahey, Gregg E

    2008-01-01

    The Rankine-Hugoniot relation for shock wave propagation describes the shock speed of a nonlinear wave. This paper investigates time-domain numerical methods that solve the nonlinear parabolic wave equation, or the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, and the conditions they require to satisfy the Rankine-Hugoniot relation. Two numerical methods commonly used in hyperbolic conservation laws are adapted to solve the KZK equation: Godunov's method and the monotonic upwind scheme for conservation laws (MUSCL). It is shown that they satisfy the Rankine-Hugoniot relation regardless of attenuation. These two methods are compared with the current implicit solution based method. When the attenuation is small, such as in water, the current method requires a degree of grid refinement that is computationally impractical. All three numerical methods are compared in simulations for lithotripters and high intensity focused ultrasound (HIFU) where the attenuation is small compared to the nonlinearity because much of the propagation occurs in water. The simulations are performed on grid sizes that are consistent with present-day computational resources but are not sufficiently refined for the current method to satisfy the Rankine-Hugoniot condition. It is shown that satisfying the Rankine-Hugoniot conditions has a significant impact on metrics relevant to lithotripsy (such as peak pressures) and HIFU (intensity). Because the Godunov and MUSCL schemes satisfy the Rankine-Hugoniot conditions on coarse grids, they are particularly advantageous for three-dimensional simulations.
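
    As a compact illustration of why a Godunov-type flux honors the Rankine-Hugoniot relation, the sketch below advects a single shock with a first-order Godunov scheme for the inviscid Burgers equation (a stand-in for the nonlinear term only, not a KZK solver); with uL = 2 and uR = 0 the shock should travel at unit speed, so after t = 3 it sits near x = 5. Grid size and CFL number are illustrative.

        import numpy as np

        def godunov_burgers_shock(nx=400, length=10.0, t_end=3.0, cfl=0.9):
            # First-order Godunov finite-volume scheme for the inviscid Burgers equation
            # u_t + (u^2/2)_x = 0, tracking a single shock with uL = 2 and uR = 0.
            dx = length / nx
            x = (np.arange(nx) + 0.5) * dx
            u = np.where(x < 2.0, 2.0, 0.0)
            t = 0.0
            while t < t_end - 1e-12:
                dt = min(cfl * dx / max(np.max(np.abs(u)), 1e-12), t_end - t)
                ul, ur = u[:-1], u[1:]
                # Exact Godunov flux for the convex flux f(u) = u^2/2.
                flux = 0.5 * np.maximum(np.maximum(ul, 0.0) ** 2, np.minimum(ur, 0.0) ** 2)
                u[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
                t += dt
            # Rankine-Hugoniot speed (f(uL) - f(uR)) / (uL - uR) = 1, so the shock that
            # starts at x = 2 should sit near x = 2 + t_end = 5.
            return x[np.argmin(np.abs(u - 1.0))]

        print(godunov_burgers_shock())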

  1. Role of Boundary Conditions in Monte Carlo Simulation of MEMS Devices

    NASA Technical Reports Server (NTRS)

    Nance, Robert P.; Hash, David B.; Hassan, H. A.

    1997-01-01

    A study is made of the issues surrounding prediction of microchannel flows using the direct simulation Monte Carlo method. This investigation includes the introduction and use of new inflow and outflow boundary conditions suitable for subsonic flows. A series of test simulations for a moderate-size microchannel indicates that a high degree of grid under-resolution in the streamwise direction may be tolerated without loss of accuracy. In addition, the results demonstrate the importance of physically correct boundary conditions, as well as possibilities for reducing the time associated with the transient phase of a simulation. These results imply that simulations of longer ducts may be more feasible than previously envisioned.

  2. Relativistic initial conditions for N-body simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fidler, Christian; Tram, Thomas; Crittenden, Robert

    2017-06-01

    Initial conditions for (Newtonian) cosmological N-body simulations are usually set by re-scaling the present-day power spectrum obtained from linear (relativistic) Boltzmann codes to the desired initial redshift of the simulation. This back-scaling method can account for the effect of inhomogeneous residual thermal radiation at early times, which is absent in the Newtonian simulations. We analyse this procedure from a fully relativistic perspective, employing the recently-proposed Newtonian motion gauge framework. We find that N-body simulations for ΛCDM cosmology starting from back-scaled initial conditions can be self-consistently embedded in a relativistic space-time with first-order metric potentials calculated using a linear Boltzmann code. This space-time coincides with a simple 'N-body gauge' for z < 50 for all observable modes. Care must be taken, however, when simulating non-standard cosmologies. As an example, we analyse the back-scaling method in a cosmology with decaying dark matter, and show that metric perturbations become large at early times in the back-scaling approach, indicating a breakdown of the perturbative description. We suggest a suitable 'forwards approach' for such cases.
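
    The conventional scale-independent back-scaling that the abstract analyses amounts to multiplying the z = 0 power spectrum by the squared ratio of linear growth factors. The sketch below computes that ratio for flat ΛCDM with the standard growth integral; the cosmological parameters and starting redshift are illustrative, and the relativistic corrections discussed in the paper are not included.

        import numpy as np
        from scipy.integrate import quad

        def growth_factor(z, om=0.31, ol=0.69):
            # Unnormalized linear growth factor D(z) for flat LCDM:
            # D(a) ~ E(a) * integral_0^a da' / (a' * E(a'))^3, with E(a) = H(a)/H0.
            E = lambda a: np.sqrt(om / a ** 3 + ol)
            a = 1.0 / (1.0 + z)
            integral, _ = quad(lambda ap: 1.0 / (ap * E(ap)) ** 3, 1e-8, a)
            return E(a) * integral

        def back_scale_power(pk_z0, z_ini, om=0.31, ol=0.69):
            # Conventional scale-independent back-scaling: P(k, z_ini) = P(k, 0) * [D(z_ini)/D(0)]^2.
            ratio = growth_factor(z_ini, om, ol) / growth_factor(0.0, om, ol)
            return pk_z0 * ratio ** 2

        print(back_scale_power(1.0, z_ini=49.0))  # squared growth suppression at z = 49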

  3. Helical gears with circular arc teeth: Generation, geometry, precision and adjustment to errors, computer aided simulation of conditions of meshing and bearing contact

    NASA Technical Reports Server (NTRS)

    Litvin, Faydor L.; Tsay, Chung-Biau

    1987-01-01

    The authors have proposed a method for the generation of circular arc helical gears which is based on the application of standard equipment, worked out all aspects of the geometry of the gears, proposed methods for the computer aided simulation of conditions of meshing and bearing contact, investigated the influence of manufacturing and assembly errors, and proposed methods for the adjustment of gears to these errors. The results of computer aided solutions are illustrated with computer graphics.

  4. An Implementation of Hydrostatic Boundary Conditions for Variable Density Lattice Boltzmann Methods

    NASA Astrophysics Data System (ADS)

    Bardsley, K. J.; Thorne, D. T.; Lee, J. S.; Sukop, M. C.

    2006-12-01

    Lattice Boltzmann Methods (LBMs) have been under development for the last two decades and have become another capable numerical method for simulating fluid flow. Recent advances in lattice Boltzmann applications involve simulation of density-dependent fluid flow in closed (Dixit and Babu, 2006; D'Orazio et al., 2004) or periodic (Guo and Zhao, 2005) domains. However, standard pressure boundary conditions (BCs) are incompatible with concentration-dependent density flow simulations that use a body force for gravity. An implementation of hydrostatic BCs for use under these conditions is proposed here. The basis of this new implementation is an additional term in the pressure BC. It is derived to account for the incorporation of gravity as a body force and the effect of varying concentration in the fluid. The hydrostatic BC expands the potential of density-dependent LBM to simulate domains with boundaries other than the closed or periodic boundaries that have appeared in previous literature on LBM simulations. With this new implementation, LBM will be able to simulate complex concentration-dependent density flows, such as salt water intrusion in the classic Henry and Henry-Hilleke problems. This is demonstrated using various examples, beginning with a closed box system, and ending with a system containing two solid walls, one velocity boundary and one pressure boundary, as in the Henry problem. References Dixit, H. N., V. Babu, (2006), Simulation of high Rayleigh number natural convection in a square cavity using the lattice Boltzmann method, Int. J. Heat Mass Transfer, 49, 727-739. D'Orazio, A., M. Corcione, G.P. Celata, (2004), Application to natural convection enclosed flows of a lattice Boltzmann BGK model coupled with a general purpose thermal boundary condition, Int. J. Thermal Sci., 43, 575-586. Guo, Z., T.S. Zhao, (2005), Lattice Boltzmann simulation of natural convection with temperature-dependent viscosity in a porous cavity, Numerical Heat Transfer, Part B, 47, 157-177.

  5. Stochastic simulation by image quilting of process-based geological models

    NASA Astrophysics Data System (ADS)

    Hoffimann, Júlio; Scheidt, Céline; Barfod, Adrian; Caers, Jef

    2017-09-01

    Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time condition them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad-hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse, a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.

  6. Using sequential self-calibration method to identify conductivity distribution: Conditioning on tracer test data

    USGS Publications Warehouse

    Hu, B.X.; He, C.

    2008-01-01

    An iterative inverse method, the sequential self-calibration method, is developed for mapping spatial distribution of a hydraulic conductivity field by conditioning on nonreactive tracer breakthrough curves. A streamline-based, semi-analytical simulator is adopted to simulate solute transport in a heterogeneous aquifer. The simulation is used as the forward modeling step. In this study, the hydraulic conductivity is assumed to be a deterministic or random variable. Within the framework of the streamline-based simulator, the efficient semi-analytical method is used to calculate sensitivity coefficients of the solute concentration with respect to the hydraulic conductivity variation. The calculated sensitivities account for spatial correlations between the solute concentration and parameters. The performance of the inverse method is assessed by two synthetic tracer tests conducted in an aquifer with a distinct spatial pattern of heterogeneity. The study results indicate that the developed iterative inverse method is able to identify and reproduce the large-scale heterogeneity pattern of the aquifer given appropriate observation wells in these synthetic cases. ?? International Association for Mathematical Geology 2008.

  7. Digital simulation of scalar optical diffraction: revisiting chirp function sampling criteria and consequences.

    PubMed

    Voelz, David G; Roggemann, Michael C

    2009-11-10

    Accurate simulation of scalar optical diffraction requires consideration of the sampling requirement for the phase chirp function that appears in the Fresnel diffraction expression. We describe three sampling regimes for FFT-based propagation approaches: ideally sampled, oversampled, and undersampled. Ideal sampling, where the chirp and its FFT both have values that match analytic chirp expressions, usually provides the most accurate results but can be difficult to realize in practical simulations. Under- or oversampling leads to a reduction in the available source plane support size, the available source bandwidth, or the available observation support size, depending on the approach and simulation scenario. We discuss three Fresnel propagation approaches: the impulse response/transfer function (angular spectrum) method, the single FFT (direct) method, and the two-step method. With illustrations and simulation examples we show the form of the sampled chirp functions and their discrete transforms, common relationships between the three methods under ideal sampling conditions, and define conditions and consequences to be considered when using nonideal sampling. The analysis is extended to describe the sampling limitations for the more exact Rayleigh-Sommerfeld diffraction solution.
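
    The sketch below is a bare-bones transfer-function (angular spectrum) Fresnel propagator of the kind discussed above, written with NumPy FFTs; the aperture size, wavelength, grid, and propagation distance are illustrative, chosen so the chirp is close to the ideally sampled regime (z ~ N dx^2 / lambda).

        import numpy as np

        def fresnel_tf_propagate(u0, wavelength, dx, z):
            # Transfer-function (angular spectrum) Fresnel propagation of a sampled 2-D field.
            n = u0.shape[0]
            fx = np.fft.fftfreq(n, d=dx)
            FX, FY = np.meshgrid(fx, fx)
            # Fresnel transfer function H = exp(i k z) * exp(-i pi lambda z (fx^2 + fy^2)).
            k = 2.0 * np.pi / wavelength
            H = np.exp(1j * k * z) * np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
            return np.fft.ifft2(np.fft.fft2(u0) * H)

        # Square aperture in a unit plane wave; with N = 512 and dx = 10 um the chirp is
        # ideally sampled near z = N dx^2 / lambda ~ 0.1 m for lambda = 0.5 um.
        n, dx, wavelength = 512, 10e-6, 0.5e-6
        x = (np.arange(n) - n // 2) * dx
        X, Y = np.meshgrid(x, x)
        u0 = ((np.abs(X) < 0.5e-3) & (np.abs(Y) < 0.5e-3)).astype(complex)
        u1 = fresnel_tf_propagate(u0, wavelength, dx, z=0.1)
        print(np.abs(u1).max().round(3))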

  8. Compactified cosmological simulations of the infinite universe

    NASA Astrophysics Data System (ADS)

    Rácz, Gábor; Szapudi, István; Csabai, István; Dobos, László

    2018-06-01

    We present a novel N-body simulation method that compactifies the infinite spatial extent of the Universe into a finite sphere with isotropic boundary conditions to follow the evolution of the large-scale structure. Our approach eliminates the need for periodic boundary conditions, a mere numerical convenience which is not supported by observation and which modifies the law of force on large scales in an unrealistic fashion. We demonstrate that our method outclasses standard simulations executed on workstation-scale hardware in dynamic range; it is balanced in following a comparable number of high and low k modes, and its fundamental geometry and topology match observations. Our approach is also capable of simulating an expanding, infinite universe in static coordinates with Newtonian dynamics. The price of these achievements is that most of the simulated volume has smoothly varying mass and spatial resolution, an approximation that carries different systematics than periodic simulations. Our initial implementation of the method is called StePS, which stands for Stereographically projected cosmological simulations. It uses stereographic projection for space compactification and naive O(N^2) force calculation, which nevertheless arrives at a correlation function of the same quality faster than any standard (tree or P3M) algorithm with similar spatial and mass resolution. The O(N^2) force calculation is easy to adapt to modern graphics cards, hence our code can function as a high-speed prediction tool for modern large-scale surveys. To learn about the limits of the respective methods, we compare StePS with GADGET-2 running matching initial conditions.
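
    To make the O(N^2) force evaluation mentioned above concrete, here is a direct-summation sketch of softened gravitational accelerations in NumPy; the particle count, masses, and softening length are arbitrary illustrative choices, and the stereographic compactification itself is not implemented.

        import numpy as np

        def direct_nbody_accel(pos, mass, softening=0.05, G=1.0):
            # Naive O(N^2) pairwise gravitational acceleration with Plummer softening.
            diff = pos[None, :, :] - pos[:, None, :]          # diff[i, j] = pos[j] - pos[i]
            r2 = np.sum(diff ** 2, axis=-1) + softening ** 2
            inv_r3 = r2 ** -1.5
            np.fill_diagonal(inv_r3, 0.0)                     # remove self-interaction
            return G * np.einsum('ij,ijk,j->ik', inv_r3, diff, mass)

        rng = np.random.default_rng(1)
        positions = rng.standard_normal((256, 3))
        masses = np.full(256, 1.0 / 256)
        print(direct_nbody_accel(positions, masses).shape)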

  9. Detecting Unsteady Blade Row Interaction in a Francis Turbine using a Phase-Lag Boundary Condition

    NASA Astrophysics Data System (ADS)

    Wouden, Alex; Cimbala, John; Lewis, Bryan

    2013-11-01

    For CFD simulations in turbomachinery, methods are typically used to reduce the computational cost. For example, the standard periodic assumption reduces the underlying mesh to a single blade passage in axisymmetric applications. If the simulation includes only a single array of blades with a uniform inlet condition, this assumption is adequate. However, to compute the interaction between successive blade rows of differing periodicity in an unsteady simulation, the periodic assumption breaks down and may produce inaccurate results. As a viable alternative, the phase-lag boundary condition assumes that the periodicity includes a temporal component which, if considered, allows for a single passage to be modeled per blade row irrespective of differing periodicity. Prominently used in compressible CFD codes for the analysis of gas turbines/compressors, the phase-lag boundary condition is adapted to analyze the interaction between the guide vanes and rotor blades in an incompressible simulation of the 1989 GAMM Workshop Francis turbine using OpenFOAM. The implementation is based on the ``direct-storage'' method proposed in 1977 by Erdos and Alzner. The phase-lag simulation is compared with available data from the GAMM workshop as well as a full-wheel simulation. Funding provided by DOE Award number: DE-EE0002667.

  10. Lattice Boltzmann method for simulating the viscous flow in large distensible blood vessels

    NASA Astrophysics Data System (ADS)

    Fang, Haiping; Wang, Zuowei; Lin, Zhifang; Liu, Muren

    2002-05-01

    A lattice Boltzmann method for simulating the viscous flow in large distensible blood vessels is presented by introducing a boundary condition for elastic and moving boundaries. The mass conservation for the boundary condition is tested in detail. The viscous flow in elastic vessels is simulated with a pressure-radius relationship similar to that of the pulmonary blood vessels. The numerical results for steady flow agree with the analytical prediction to very high accuracy, and the simulation results for pulsatile flow are comparable with those of the aortic flows observed experimentally. The model is expected to find many applications for studying blood flows in large distensible arteries, especially in those suffering from atherosclerosis, stenosis, aneurysm, etc.

  11. Simulation Research on Vehicle Active Suspension Controller Based on G1 Method

    NASA Astrophysics Data System (ADS)

    Li, Gen; Li, Hang; Zhang, Shuaiyang; Luo, Qiuhui

    2017-09-01

    Based on the order relation analysis method (G1 method), an optimal linear controller for a vehicle active suspension is designed. The active and passive suspension system of a single-wheel vehicle model is modeled and the system input signal model is determined. Secondly, the motion state-space equation of the system is established from the dynamics, and the optimal linear controller design is completed using optimal control theory. The weighting coefficients of the performance indices of the suspension are determined by the order relation analysis method. Finally, the model is simulated in Simulink. The simulation results show that the optimal weights determined by the order relation analysis method under the given road conditions optimize vehicle body acceleration, suspension stroke, and tire displacement, improving the comprehensive performance of the vehicle while keeping the active control within the requirements.
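
    The following sketch shows, under stated assumptions, how G1 (order relation analysis) weights can be computed from pairwise importance ratios and then used as state penalties in a standard LQR design via the continuous algebraic Riccati equation; the two-state plant, the importance ratio, and the control penalty are hypothetical and are not the paper's suspension model.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        def g1_weights(ratios):
            # G1 (order relation analysis) weights from importance ratios r_k = w_{k-1}/w_k,
            # with criteria already ranked from most to least important.
            ratios = np.asarray(ratios, dtype=float)                 # r_2 ... r_m
            m = len(ratios) + 1
            tail_products = [np.prod(ratios[k - 2:]) for k in range(2, m + 1)]
            w = [1.0 / (1.0 + sum(tail_products))]                   # w_m
            for r in ratios[::-1]:
                w.insert(0, r * w[0])                                # w_{k-1} = r_k * w_k
            return np.array(w)

        def lqr_gain(A, B, Q, R):
            # Continuous-time LQR gain K = R^{-1} B^T P from the algebraic Riccati equation.
            P = solve_continuous_are(A, B, Q, R)
            return np.linalg.solve(R, B.T @ P)

        # Hypothetical two-state plant (not the paper's model): x = [deflection, velocity].
        A = np.array([[0.0, 1.0], [-10.0, -2.0]])
        B = np.array([[0.0], [1.0]])
        Q = np.diag(g1_weights([1.4]))     # two criteria, the first judged 1.4x as important
        R = np.array([[0.01]])
        print(g1_weights([1.4]).round(3), lqr_gain(A, B, Q, R).round(3))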

  12. Mathematic simulation of soil-vegetation condition and land use structure applying basin approach

    NASA Astrophysics Data System (ADS)

    Mishchenko, Natalia; Shirkin, Leonid; Krasnoshchekov, Alexey

    2016-04-01

    Anthropogenic transformation of ecosystems is primarily connected with changes in land use structure and human impact on soil fertility. The research objective is to simulate the stationary state of river basin ecosystems. Materials and Methods. A basin approach has been applied in the research. Basins of small rivers of the Klyazma river, situated in the central part of the Russian Plain, have been chosen as the research objects. The analysis is carried out using integrated characteristics of ecosystem functioning and mathematical simulation methods. To design the mathematical simulator, functional simulation methods and principles based on regression, correlation, and factor analysis have been applied. Results. The mathematical simulation defined possible stationary conditions of the "phytocenosis-soil" system in coordinates of phytomass, phytoproductivity, and humus percentage in soil. Ecosystem productivity is determined not only by vegetation photosynthesis activity but also by the area ratio of forest and meadow phytocenoses. Local maxima attached to certain ranges of phytomass and humus content in soil have been identified on the basin phytoproductivity distribution diagram. We explain the local maxima by a synergetic effect that appears at a particular ratio of forest and meadow phytocenoses: in this case, the maximum phytomass for the whole area is higher than the sum of the maximum phytomass values of the forest and meadow phytocenoses taken separately. An efficient ratio of natural forest and meadow phytocenoses has been defined for the Klyazma river. Conclusion. Mathematical simulation methods assist in forecasting ecosystem conditions under various changes of land use structure. Overgrowing of abandoned agricultural lands is currently a highly relevant issue for the Russian Federation. The simulation results demonstrate that the natural ratio of forest and meadow phytocenoses for the area will be restored during agricultural land overgrowing.

  13. Conditioning geostatistical simulations of a heterogeneous paleo-fluvial bedrock aquifer using lithologs and pumping tests

    NASA Astrophysics Data System (ADS)

    Niazi, A.; Bentley, L. R.; Hayashi, M.

    2016-12-01

    Geostatistical simulations are used to construct heterogeneous aquifer models. Optimally, such simulations should be conditioned with both lithologic and hydraulic data. We introduce an approach to condition lithologic geostatistical simulations of a paleo-fluvial bedrock aquifer consisting of relatively high permeable sandstone channels embedded in relatively low permeable mudstone using hydraulic data. The hydraulic data consist of two-hour single well pumping tests extracted from the public water well database for a 250-km2 watershed in Alberta, Canada. First, lithologic models of the entire watershed are simulated and conditioned with hard lithological data using transition probability - Markov chain geostatistics (TPROGS). Then, a segment of the simulation around a pumping well is used to populate a flow model (FEFLOW) with either sand or mudstone. The values of the hydraulic conductivity and specific storage of sand and mudstone are then adjusted to minimize the difference between simulated and actual pumping test data using the parameter estimation program PEST. If the simulated pumping test data do not adequately match the measured data, the lithologic model is updated by locally deforming the lithology distribution using the probability perturbation method and the model parameters are again updated with PEST. This procedure is repeated until the simulated and measured data agree within a pre-determined tolerance. The procedure is repeated for each well that has pumping test data. The method creates a local groundwater model that honors both the lithologic model and pumping test data and provides estimates of hydraulic conductivity and specific storage. Eventually, the simulations will be integrated into a watershed-scale groundwater model.

  14. Composite load spectra for select space propulsion structural components

    NASA Technical Reports Server (NTRS)

    Newell, J. F.; Kurth, R. E.; Ho, H.

    1991-01-01

    The objective of this program is to develop generic load models with multiple levels of progressive sophistication to simulate the composite (combined) load spectra that are induced in space propulsion system components, representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades, and liquid oxygen posts and system ducting. The first approach will consist of using state of the art probabilistic methods to describe the individual loading conditions and combinations of these loading conditions to synthesize the composite load spectra simulation. The second approach will consist of developing coupled models for composite load spectra simulation which combine the deterministic models for composite load dynamic, acoustic, high pressure, and high rotational speed, etc., load simulation using statistically varying coefficients. These coefficients will then be determined using advanced probabilistic simulation methods with and without strategically selected experimental data.

  15. A Method for Large Eddy Simulation of Acoustic Combustion Instabilities

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Pierce, Charles; Moin, Parviz

    2002-11-01

    A method for performing Large Eddy Simulation of acoustic combustion instabilities is presented. By extending the low Mach number pressure correction method to the case of compressible flow, a numerical method is developed in which the Poisson equation for pressure is replaced by a Helmholtz equation. The method avoids the acoustic CFL condition by using implicit time advancement, leading to large efficiency gains at low Mach number. The method also avoids artificial damping of acoustic waves. The numerical method is attractive for the simulation of acoustic combustion instabilities, since these flows are typically at low Mach number, and the acoustic frequencies of interest are usually low. Both of these characteristics suggest the use of larger time steps than those allowed by an acoustic CFL condition. The turbulent combustion model used is the Combined Conserved Scalar/Level Set Flamelet model of Duchamp de Lageneste and Pitsch for partially premixed combustion. Comparison of LES results to the experiments of Besson et al. will be presented.

  16. Improving the result of forecasting using reservoir and surface network simulation

    NASA Astrophysics Data System (ADS)

    Hendri, R. S.; Winarta, J.

    2018-01-01

    This study aimed to obtain more representative production forecasting results using integrated simulation of the pipeline gathering system of the X field. There are five main scenarios, consisting of production forecasts for the existing condition, workover, and infill drilling; the best development scenario is then determined. The method of this study couples a reservoir simulator with a pipeline simulator, the so-called Integrated Reservoir and Surface Network Simulation. Well data from the reservoir simulator were integrated with the pipeline network simulator to construct a new schedule, which served as input for the whole simulation procedure. The well design was produced with a well modeling simulator and then exported into the pipeline simulator. The stand-alone reservoir prediction depends on a minimum value of Tubing Head Pressure (THP) for each well, so the pressure drop in the gathering network is not actually calculated. The same scenarios were also run as single-reservoir simulations. The integrated simulation produces results closer to the actual condition of the reservoir, as confirmed by the THP profiles, which differ between the two methods. The difference between the integrated simulation and the single-model simulation is 6-9%. The aim of solving the back-pressure problem in the pipeline gathering system of the X field is thereby achieved.

  17. Collaborative simulation method with spatiotemporal synchronization process control

    NASA Astrophysics Data System (ADS)

    Zou, Yisheng; Ding, Guofu; Zhang, Weihua; Zhang, Jian; Qin, Shengfeng; Tan, John Kian

    2016-10-01

    When designing a complex mechatronic system, such as a high speed train, it is relatively difficult to simulate the entire system's dynamic behavior effectively because it involves multi-disciplinary subsystems. Currently, the most practical approach to multi-disciplinary simulation is the interface-based coupling simulation method, but it faces a twofold challenge: spatial and temporal desynchronization among multi-directional coupling simulations of subsystems. A new collaborative simulation method with spatiotemporal synchronization process control is proposed for coupled simulation of a given complex mechatronic system across multiple subsystems on different platforms. The method consists of 1) a coupler-based coupling mechanism to define the interfacing and interaction among subsystems, and 2) a simulation process control algorithm to realize the coupling simulation in a spatiotemporally synchronized manner. The test results from a case study show that the proposed method 1) can be used to simulate subsystem interactions under different simulation conditions in an engineering system, and 2) effectively supports multi-directional coupling simulation among multi-disciplinary subsystems. This method has been successfully applied in the design and development of Chinese high speed trains, demonstrating that it can be applied to a wide range of engineering systems design and simulation with improved efficiency and effectiveness.

  18. Combining Heterogeneous Correlation Matrices: Simulation Analysis of Fixed-Effects Methods

    ERIC Educational Resources Information Center

    Hafdahl, Adam R.

    2008-01-01

    Monte Carlo studies of several fixed-effects methods for combining and comparing correlation matrices have shown that two refinements improve estimation and inference substantially. With rare exception, however, these simulations have involved homogeneous data analyzed using conditional meta-analytic procedures. The present study builds on…

  19. A fluid-solid coupling simulation method for convection heat transfer coefficient considering the under-vehicle condition

    NASA Astrophysics Data System (ADS)

    Tian, C.; Weng, J.; Liu, Y.

    2017-11-01

    The convection heat transfer coefficient is one of the evaluation indices of brake disc performance. The method used in this paper to calculate the convection heat transfer coefficient is a fluid-solid coupling simulation, because results calculated with empirical formulas differ widely. A model including a brake disc, a car body, a bogie and the flow field was built, meshed and simulated in the software FLUENT. The calculation models were the standard k-epsilon turbulence model and the energy model. The under-vehicle working condition of the brake disc was considered, and the coefficients of the various parts can be obtained with the method in this paper. The simulation results show that, at a speed of 160 km/h, the radiating ribs have the maximum convection heat transfer coefficient, with a value of 129.6 W/(m2·K); the average coefficient of the whole disc is 100.4 W/(m2·K); the windward side of the ribs is a positive-pressure area and the leeward side is a negative-pressure area; and the maximum pressure is 2663.53 Pa.

  20. Hybrid statistics-simulations based method for atom-counting from ADF STEM images.

    PubMed

    De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra

    2017-06-01

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Helicopter time-domain electromagnetic numerical simulation based on Leapfrog ADI-FDTD

    NASA Astrophysics Data System (ADS)

    Guan, S.; Ji, Y.; Li, D.; Wu, Y.; Wang, A.

    2017-12-01

    We present a three-dimensional (3D) Alternating Direction Implicit Finite-Difference Time-Domain (leapfrog ADI-FDTD) method for the simulation of helicopter time-domain electromagnetic (HTEM) detection. This method differs from the traditional explicit FDTD and from conventional ADI-FDTD. Compared with explicit FDTD, the leapfrog ADI-FDTD algorithm is no longer limited by the Courant-Friedrichs-Lewy (CFL) condition, so the time step can be longer. Compared with ADI-FDTD, we reduce the number of update equations from 12 to 6, and the leapfrog ADI-FDTD method is easier to apply in general simulations. First, we determine initial conditions, which are adopted from the existing method presented by Wang and Tripp (1993). Second, we derive the Maxwell equations in a new finite-difference form using the leapfrog ADI-FDTD method; the purpose is to eliminate the sub-time step while retaining unconditional stability. Third, we add the convolutional perfectly matched layer (CPML) absorbing boundary condition to the leapfrog ADI-FDTD simulation and study the absorbing effect of different parameters; different absorbing parameters affect the absorbing ability, and suitable parameters were found after many numerical experiments. Fourth, we compare the response with a 1-D numerical result for a homogeneous half-space to verify the correctness of our algorithm. For a model containing 107*107*53 grid points with a conductivity of 0.05 S/m, the results show that leapfrog ADI-FDTD needs less simulation time and computer storage space than ADI-FDTD: the calculation time decreases by nearly a factor of four and memory occupation decreases by about 32.53%. Thus, this algorithm is more efficient than the conventional ADI-FDTD method for HTEM detection, and is more precise than explicit FDTD at late times.
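
    For contrast with the unconditionally stable scheme described above, the sketch below is a conventional explicit 1-D FDTD update whose time step is bounded by the CFL condition c*dt <= dz, i.e. the restriction that leapfrog ADI-FDTD removes; it is not an implementation of the ADI scheme itself, and the grid, source, and step counts are arbitrary.

        import numpy as np

        def fdtd_1d_explicit(nz=400, n_steps=600, courant=0.99):
            # Conventional explicit 1-D FDTD (Yee) update in free space, with a hard Gaussian
            # source; its time step must satisfy the CFL condition c * dt <= dz.
            c, eps0, mu0 = 3.0e8, 8.854e-12, 4.0e-7 * np.pi
            dz = 1.0
            dt = courant * dz / c
            ex = np.zeros(nz)
            hy = np.zeros(nz - 1)
            for n in range(n_steps):
                hy += dt / (mu0 * dz) * (ex[1:] - ex[:-1])
                ex[1:-1] += dt / (eps0 * dz) * (hy[1:] - hy[:-1])
                ex[nz // 4] += np.exp(-((n - 60.0) / 20.0) ** 2)   # hard Gaussian source
            return ex

        print(np.max(np.abs(fdtd_1d_explicit())).round(3))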

  2. Investigation of Asymmetric Thrust Detection with Demonstration in a Real-Time Simulation Testbed

    NASA Technical Reports Server (NTRS)

    Chicatelli, Amy; Rinehart, Aidan W.; Sowers, T. Shane; Simon, Donald L.

    2015-01-01

    The purpose of this effort is to develop, demonstrate, and evaluate three asymmetric thrust detection approaches to aid in the reduction of asymmetric thrust-induced aviation accidents. This paper presents the results from that effort and their evaluation in simulation studies, including those from a real-time flight simulation testbed. Asymmetric thrust is recognized as a contributing factor in several Propulsion System Malfunction plus Inappropriate Crew Response (PSM+ICR) aviation accidents. As an improvement over the state-of-the-art, providing annunciation of asymmetric thrust to alert the crew may hold safety benefits. For this, the reliable detection and confirmation of asymmetric thrust conditions is required. For this work, three asymmetric thrust detection methods are presented along with their results obtained through simulation studies. Representative asymmetric thrust conditions are modeled in simulation based on failure scenarios similar to those reported in aviation incident and accident descriptions. These simulated asymmetric thrust scenarios, combined with actual aircraft operational flight data, are then used to conduct a sensitivity study regarding the detection capabilities of the three methods. Additional evaluation results are presented based on pilot-in-the-loop simulation studies conducted in the NASA Glenn Research Center (GRC) flight simulation testbed. Data obtained from this flight simulation facility are used to further evaluate the effectiveness and accuracy of the asymmetric thrust detection approaches. Generally, the asymmetric thrust conditions are correctly detected and confirmed.

  3. Investigation of Asymmetric Thrust Detection with Demonstration in a Real-Time Simulation Testbed

    NASA Technical Reports Server (NTRS)

    Chicatelli, Amy K.; Rinehart, Aidan W.; Sowers, T. Shane; Simon, Donald L.

    2016-01-01

    The purpose of this effort is to develop, demonstrate, and evaluate three asymmetric thrust detection approaches to aid in the reduction of asymmetric thrust-induced aviation accidents. This paper presents the results from that effort and their evaluation in simulation studies, including those from a real-time flight simulation testbed. Asymmetric thrust is recognized as a contributing factor in several Propulsion System Malfunction plus Inappropriate Crew Response (PSM+ICR) aviation accidents. As an improvement over the state-of-the-art, providing annunciation of asymmetric thrust to alert the crew may hold safety benefits. For this, the reliable detection and confirmation of asymmetric thrust conditions is required. For this work, three asymmetric thrust detection methods are presented along with their results obtained through simulation studies. Representative asymmetric thrust conditions are modeled in simulation based on failure scenarios similar to those reported in aviation incident and accident descriptions. These simulated asymmetric thrust scenarios, combined with actual aircraft operational flight data, are then used to conduct a sensitivity study regarding the detection capabilities of the three methods. Additional evaluation results are presented based on pilot-in-the-loop simulation studies conducted in the NASA Glenn Research Center (GRC) flight simulation testbed. Data obtained from this flight simulation facility are used to further evaluate the effectiveness and accuracy of the asymmetric thrust detection approaches. Generally, the asymmetric thrust conditions are correctly detected and confirmed.

  4. Simulation and experimental design of a new advanced variable step size Incremental Conductance MPPT algorithm for PV systems.

    PubMed

    Loukriz, Abdelhamid; Haddadi, Mourad; Messalti, Sabir

    2016-05-01

    Improving the efficiency of photovoltaic systems through new maximum power point tracking (MPPT) algorithms is one of the most promising solutions because of its low cost and easy implementation without equipment upgrades. Many MPPT methods with a fixed step size have been developed; however, when atmospheric conditions change rapidly, the performance of conventional algorithms is reduced. In this paper, a new variable step size Incremental Conductance (IC) MPPT algorithm is proposed. Modeling and simulation of the conventional IC and proposed methods under different operating conditions are presented. The proposed method was developed and tested successfully on a photovoltaic system based on a flyback converter with a control circuit using a dsPIC30F4011. Both the simulation and the experimental design are described in several aspects, and a comparative study between the proposed variable step size and fixed step size IC MPPT methods under similar operating conditions is presented. The results demonstrate the efficiency of the proposed MPPT algorithm in terms of MPP tracking speed and accuracy. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
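    As a rough illustration of the incremental conductance principle described above (not the authors' tuned controller), the sketch below adjusts a converter duty cycle with a step that scales with the distance from the maximum power point; the gain, duty-cycle limits, and sign convention are hypothetical choices.

```python
def ic_mppt_step(v, i, v_prev, i_prev, duty, k=0.02, d_min=0.1, d_max=0.9):
    """One iteration of a variable-step incremental conductance MPPT update.
    At the maximum power point, dP/dV = 0, i.e. dI/dV = -I/V."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        error = di                # voltage unchanged: fall back to the current change
    else:
        error = di / dv + i / v   # dI/dV + I/V, which is zero at the MPP
    # Variable step: the correction scales with the distance from the MPP.
    # Sign convention assumes a converter where raising the duty cycle lowers the
    # PV voltage (boost-like); a flyback design may require the opposite sign.
    duty -= k * error
    return min(max(duty, d_min), d_max)

# Example call with arbitrary sampled values (volts, amps, previous duty cycle).
new_duty = ic_mppt_step(v=31.8, i=7.9, v_prev=32.0, i_prev=7.8, duty=0.55)
print(new_duty)
```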

  5. Icing simulation: A survey of computer models and experimental facilities

    NASA Technical Reports Server (NTRS)

    Potapczuk, M. G.; Reinmann, J. J.

    1991-01-01

    A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include a computer code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focused on upgrading simulation fidelity of both experimental and computational methods is discussed. The need for increased understanding of the physical processes governing ice accretion, ice shedding, and iced airfoil aerodynamics is examined.

  6. Icing simulation: A survey of computer models and experimental facilities

    NASA Technical Reports Server (NTRS)

    Potapczuk, M. G.; Reinmann, J. J.

    1991-01-01

    A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include a computer code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focused on upgrading simulation fidelity of both experimental and computational methods is discussed. The need for the increased understanding of the physical processes governing ice accretion, ice shedding, and iced aerodynamics is examined.

  7. A Thermal Performance Analysis and Comparison of Fiber Coils with the D-CYL Winding and QAD Winding Methods.

    PubMed

    Li, Xuyou; Ling, Weiwei; He, Kunpeng; Xu, Zhenlong; Du, Shitong

    2016-06-16

    The thermal performance under variable temperature conditions of fiber coils with double-cylinder (D-CYL) and quadrupolar (QAD) winding methods is comparatively analyzed. Simulation by the finite element method (FEM) is used to calculate the temperature distribution and the thermally induced phase shift errors in the fiber coils. The simulation results reveal that the D-CYL fiber coil itself performs poorly when it experiences an axially asymmetrical temperature gradient. However, this axial fragility can be improved when the D-CYL coil meshes with a heat-off spool. Through further simulations we find that once the D-CYL coil is provided with an axially symmetrical temperature environment, the thermal performance of fiber coils with the D-CYL winding method is better than that with the QAD winding method under the same variable temperature conditions. This finding is verified by two experiments. The D-CYL winding method is thus promising for overcoming the temperature fragility of interferometric fiber optic gyroscopes (IFOGs).

  8. A Thermal Performance Analysis and Comparison of Fiber Coils with the D-CYL Winding and QAD Winding Methods

    PubMed Central

    Li, Xuyou; Ling, Weiwei; He, Kunpeng; Xu, Zhenlong; Du, Shitong

    2016-01-01

    The thermal performance under variable temperature conditions of fiber coils with double-cylinder (D-CYL) and quadrupolar (QAD) winding methods is comparatively analyzed. Simulation by the finite element method (FEM) is used to calculate the temperature distribution and the thermally induced phase shift errors in the fiber coils. The simulation results reveal that the D-CYL fiber coil itself performs poorly when it experiences an axially asymmetrical temperature gradient. However, this axial fragility can be improved when the D-CYL coil meshes with a heat-off spool. Through further simulations we find that once the D-CYL coil is provided with an axially symmetrical temperature environment, the thermal performance of fiber coils with the D-CYL winding method is better than that with the QAD winding method under the same variable temperature conditions. This finding is verified by two experiments. The D-CYL winding method is thus promising for overcoming the temperature fragility of interferometric fiber optic gyroscopes (IFOGs). PMID:27322271

  9. The effect of simulated microgravity on bacteria from the mir space station

    NASA Astrophysics Data System (ADS)

    Baker, Paul W.; Leff, Laura

    2004-03-01

    The effects of simulated microgravity on two bacterial isolates, Sphingobacterium thalpophilium and Ralstonia pickettii (formerly Burkholderia pickettii), originally recovered from water systems aboard the Mir space station were examined. These bacteria were inoculated into water and into high and low concentrations of nutrient broth, and subjected to simulated microgravity conditions. S. thalpophilium (which was motile and had flagella) showed no significant differences between simulated microgravity and the normal gravity control regardless of the method of enumeration and medium. In contrast, for R. pickettii (which was non-motile and lacked flagella), there were significantly higher numbers in high nutrient broth under simulated microgravity compared to normal gravity. Conversely, when R. pickettii was inoculated into water (i.e., starvation conditions) significantly lower numbers were found under simulated microgravity compared to normal gravity. Responses to microgravity depended on the strain used (e.g., the motile strain exhibited no response to microgravity, while the non-motile strain did), the method of enumeration, and the nutrient concentration of the medium. Under oligotrophic conditions, non-motile cells may remain in geostationary orbit and deplete nutrients in their vicinity, while in high nutrient medium, resources surrounding the cell may be sufficient so that high growth is observed until nutrients become limiting.

  10. Towards Large Eddy Simulation of gas turbine compressors

    NASA Astrophysics Data System (ADS)

    McMullan, W. A.; Page, G. J.

    2012-07-01

    With increasing computing power, Large Eddy Simulation could be a useful simulation tool for gas turbine axial compressor design. This paper outlines a series of simulations performed on compressor geometries, ranging from a Controlled Diffusion Cascade stator blade to the periodic sector of a stage in a 3.5 stage axial compressor. The simulation results show that LES may offer advantages over traditional RANS methods when off-design conditions are considered - flow regimes where RANS models often fail to converge. The time-dependent nature of LES permits the resolution of transient flow structures, and can elucidate new mechanisms of vorticity generation on blade surfaces. It is shown that accurate LES is heavily reliant on both the near-wall mesh fidelity and the ability of the imposed inflow condition to recreate the conditions found in the reference experiment. For components embedded in a compressor this requires the generation of turbulence fluctuations at the inlet plane. A recycling method is developed that improves the quality of the flow in a single stage calculation of an axial compressor, and indicates that future developments in both the recycling technique and computing power will bring simulations of axial compressors within reach of industry in the coming years.

  11. The effect of simulated microgravity on bacteria from the Mir space station.

    PubMed

    Baker, Paul W; Leff, Laura

    2004-01-01

    The effects of simulated microgravity on two bacterial isolates, Sphingobacterium thalpophilium and Ralstonia pickettii (formerly Burkholderia pickettii), originally recovered from water systems aboard the Mir space station were examined. These bacteria were inoculated into water and into high and low concentrations of nutrient broth, and subjected to simulated microgravity conditions. S. thalpophilium (which was motile and had flagella) showed no significant differences between simulated microgravity and the normal gravity control regardless of the method of enumeration and medium. In contrast, for R. pickettii (which was non-motile and lacked flagella), there were significantly higher numbers in high nutrient broth under simulated microgravity compared to normal gravity. Conversely, when R. pickettii was inoculated into water (i.e., starvation conditions) significantly lower numbers were found under simulated microgravity compared to normal gravity. Responses to microgravity depended on the strain used (e.g., the motile strain exhibited no response to microgravity, while the non-motile strain did), the method of enumeration, and the nutrient concentration of the medium. Under oligotrophic conditions, non-motile cells may remain in geostationary orbit and deplete nutrients in their vicinity, while in high nutrient medium, resources surrounding the cell may be sufficient so that high growth is observed until nutrients become limiting.

  12. The effect of simulated microgravity on bacteria from the Mir space station

    NASA Technical Reports Server (NTRS)

    Baker, Paul W.; Leff, Laura

    2004-01-01

    The effects of simulated microgravity on two bacterial isolates, Sphingobacterium thalpophilium and Ralstonia pickettii (formerly Burkholderia pickettii), originally recovered from water systems aboard the Mir space station were examined. These bacteria were inoculated into water and into high and low concentrations of nutrient broth, and subjected to simulated microgravity conditions. S. thalpophilium (which was motile and had flagella) showed no significant differences between simulated microgravity and the normal gravity control regardless of the method of enumeration and medium. In contrast, for R. pickettii (which was non-motile and lacked flagella), there were significantly higher numbers in high nutrient broth under simulated microgravity compared to normal gravity. Conversely, when R. pickettii was inoculated into water (i.e., starvation conditions) significantly lower numbers were found under simulated microgravity compared to normal gravity. Responses to microgravity depended on the strain used (e.g., the motile strain exhibited no response to microgravity, while the non-motile strain did), the method of enumeration, and the nutrient concentration of the medium. Under oligotrophic conditions, non-motile cells may remain in geostationary orbit and deplete nutrients in their vicinity, while in high nutrient medium, resources surrounding the cell may be sufficient so that high growth is observed until nutrients become limiting.

  13. A Comparison of Normal and Elliptical Estimation Methods in Structural Equation Models.

    ERIC Educational Resources Information Center

    Schumacker, Randall E.; Cheevatanarak, Suchittra

    Monte Carlo simulation compared chi-square statistics, parameter estimates, and root mean square error of approximation values using normal and elliptical estimation methods. Three research conditions were imposed on the simulated data: sample size, population contamination percent, and kurtosis. A Bentler-Weeks structural model established the…

  14. Automated Boundary Conditions for Wind Tunnel Simulations

    NASA Technical Reports Server (NTRS)

    Carlson, Jan-Renee

    2018-01-01

    Computational fluid dynamic (CFD) simulations of models tested in wind tunnels require a high level of fidelity and accuracy, particularly for the purposes of CFD validation efforts. Considerable effort is required to ensure the proper characterization of both the physical geometry of the wind tunnel and the recreation of the correct flow conditions inside the wind tunnel. The typical trial-and-error effort used to determine the boundary condition values for a particular tunnel configuration is time and computer resource intensive. This paper describes a method for calculating and updating the back pressure boundary condition in wind tunnel simulations by using a proportional-integral-derivative (PID) controller. The controller methodology and equations are discussed, and simulations using the controller to set a tunnel Mach number in the NASA Langley 14- by 22-Foot Subsonic Tunnel are demonstrated.
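    As a generic illustration of the idea (not the paper's actual controller gains or update equations), the sketch below shows a discrete PID loop that nudges a back-pressure boundary value until a simulated test-section Mach number reaches its target; the `measure_mach` callback and all gains are assumptions.

```python
def pid_back_pressure(measure_mach, mach_target, p0, kp=0.5, ki=0.05, kd=0.1,
                      n_iter=50, dt=1.0):
    """Drive a wind-tunnel back-pressure boundary value toward a target Mach number.
    measure_mach(p_back) is assumed to advance the CFD solution and return the
    current test-section Mach number for that back pressure."""
    p_back, integral, prev_err = p0, 0.0, 0.0
    for _ in range(n_iter):
        err = mach_target - measure_mach(p_back)
        integral += err * dt
        deriv = (err - prev_err) / dt
        # Higher back pressure lowers the Mach number, hence the negative sign.
        p_back -= kp * err + ki * integral + kd * deriv
        prev_err = err
    return p_back

# Usage (hypothetical): p_final = pid_back_pressure(run_cfd_and_probe, 0.20, p0=101000.0)
```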

  15. DSMC simulations of Mach 20 nitrogen flows about a 70 degree blunted cone and its wake

    NASA Technical Reports Server (NTRS)

    Moss, James N.; Dogra, Virendra K.; Wilmoth, Richard G.

    1993-01-01

    Numerical results obtained with the direct simulation Monte Carlo (DSMC) method are presented for Mach 20 nitrogen flow about a 70-deg blunted cone. The flow conditions simulated are those that can be obtained in existing low-density hypersonic wind tunnels. Three sets of flow conditions are simulated with freestream Knudsen numbers ranging from 0.03 to 0.001. The focus is to characterize the wake flow under rarefied conditions. This is accomplished by calculating the influence of rarefaction on wake structure along with the impact that an afterbody has on flow features. This data report presents extensive information concerning flowfield features and surface quantities.

  16. Joint modelling compared with two stage methods for analysing longitudinal data and prospective outcomes: A simulation study of childhood growth and BP.

    PubMed

    Sayers, A; Heron, J; Smith, Adac; Macdonald-Wallis, C; Gilthorpe, M S; Steele, F; Tilling, K

    2017-02-01

    There is a growing debate with regard to the appropriate methods of analysis of growth trajectories and their association with prospective dependent outcomes. Using the example of childhood growth and adult BP, we conducted an extensive simulation study to explore four two-stage and two joint modelling methods, and compared their bias and coverage in estimation of the (unconditional) association between birth length and later BP, and the association between growth rate and later BP (conditional on birth length). We show that the two-stage method of using multilevel models to estimate growth parameters and relating these to outcome gives unbiased estimates of the conditional associations between growth and outcome. Using simulations, we demonstrate that the simple methods resulted in bias in the presence of measurement error, as did the two-stage multilevel method when looking at the total (unconditional) association of birth length with outcome. The two joint modelling methods gave unbiased results, but using the re-inflated residuals led to undercoverage of the confidence intervals. We conclude that either joint modelling or the simpler two-stage multilevel approach can be used to estimate conditional associations between growth and later outcomes, but that only joint modelling is unbiased with nominal coverage for unconditional associations.
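    A minimal sketch of the two-stage idea discussed above, under simplified assumptions: synthetic data and a simple per-child ordinary least squares fit (one of the "simple" two-stage variants, not the multilevel model used in the paper). Growth parameters are estimated first, then related to the later outcome.

```python
import numpy as np

rng = np.random.default_rng(0)
n_children, n_visits = 500, 6
ages = np.linspace(0, 2, n_visits)                    # years

# Stage 0: synthetic "truth" -- birth length, growth rate, and later BP (all made up).
birth_len = rng.normal(50, 2, n_children)             # cm
growth = rng.normal(12, 2, n_children)                # cm/year
bp = 100 + 0.3 * birth_len + 0.8 * growth + rng.normal(0, 5, n_children)

# Observed lengths with measurement error.
lengths = birth_len[:, None] + growth[:, None] * ages + rng.normal(0, 1.0, (n_children, n_visits))

# Stage 1: estimate each child's intercept (birth length) and slope (growth rate).
X = np.column_stack([np.ones(n_visits), ages])
coefs = np.linalg.lstsq(X, lengths.T, rcond=None)[0]  # shape (2, n_children)
est_birth, est_growth = coefs[0], coefs[1]

# Stage 2: regress later BP on the estimated growth parameters.
Z = np.column_stack([np.ones(n_children), est_birth, est_growth])
beta = np.linalg.lstsq(Z, bp, rcond=None)[0]
print("estimated association of BP with birth length and growth rate:", beta[1:])
```

    With measurement error in the lengths, these naive per-child fits tend to attenuate the estimated associations, which is the bias the multilevel and joint modelling approaches discussed above are designed to address.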

  17. Fuzzy simulation in concurrent engineering

    NASA Technical Reports Server (NTRS)

    Kraslawski, A.; Nystrom, L.

    1992-01-01

    Concurrent engineering is becoming a very important practice in manufacturing. A problem in concurrent engineering is the uncertainty associated with the values of the input variables and operating conditions. The problem discussed in this paper concerns the simulation of processes where the raw materials and the operational parameters possess fuzzy characteristics. The processing of fuzzy input information is performed by the vertex method and the commercial simulation packages POLYMATH and GEMS. The examples are presented to illustrate the usefulness of the method in the simulation of chemical engineering processes.
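    The vertex method mentioned above propagates interval (alpha-cut) uncertainty by evaluating the model at every combination of interval endpoints, which is valid when the response has no extremum inside the input box. The sketch below is a generic version with a made-up two-parameter process model as the example; it is not one of the POLYMATH or GEMS cases from the paper.

```python
import math
from itertools import product

def vertex_method(model, intervals):
    """Propagate interval (alpha-cut) uncertainty through `model` by evaluating it
    at all 2^n combinations of interval endpoints and returning [min, max]."""
    outputs = [model(*corner) for corner in product(*intervals)]
    return min(outputs), max(outputs)

# Hypothetical process response: conversion as a function of temperature and feed ratio.
def conversion(temp_K, feed_ratio):
    return 0.9 * (1.0 - math.exp(-0.002 * (temp_K - 300.0))) * feed_ratio / (1.0 + feed_ratio)

# Fuzzy inputs at one alpha-cut, expressed as intervals (illustrative numbers).
lo, hi = vertex_method(conversion, [(350.0, 380.0), (1.0, 1.5)])
print(f"conversion interval at this alpha-cut: [{lo:.3f}, {hi:.3f}]")
```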

  18. Advanced Hybrid Modeling of Hall Thruster Plumes

    DTIC Science & Technology

    2010-06-16

    Hall thruster operated in the Large Vacuum Test Facility at the University of Michigan. The approach utilizes the direct simulation Monte Carlo method and the Particle-in-Cell method to simulate the collision and plasma dynamics of xenon neutrals and ions. The electrons are modeled as a fluid using conservation equations. A second code is employed to model discharge chamber behavior to provide improved input conditions at the thruster exit for the plume simulation. Simulation accuracy is assessed using experimental data previously

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.

    Here, as part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak-temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed systematic comparison of the behavior of different models under a consistent implementation of the WTG method and the DGW method and systematic comparison of the WTG and DGW methods in models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both WTG and DGW methods. Some of the models reproduce the reference state while others sustain a large-scale circulation which results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce different signed circulation. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivities to the initial moisture conditions occur for multiple stable equilibria in some WTG simulations, corresponding to either a dry equilibrium state when initialized as dry or a precipitating equilibrium state when initialized as moist. Multiple equilibria are seen in more WTG simulations for higher SST. In some models, the existence of multiple equilibria is sensitive to some parameters in the WTG calculations.

  20. ReaxFF based molecular dynamics simulations of ignition front propagation in hydrocarbon/oxygen mixtures under high temperature and pressure conditions.

    PubMed

    Ashraf, Chowdhury; Jain, Abhishek; Xuan, Yuan; van Duin, Adri C T

    2017-02-15

    In this paper, we present the first atomistic-scale method for calculating ignition front propagation speed and hypothesize that this quantity is related to laminar flame speed. This method is based on atomistic-level molecular dynamics (MD) simulations with the ReaxFF reactive force field. Results reported in this study are for supercritical (P = 55 MPa and Tu = 1800 K) combustion of hydrocarbons, since elevated pressure and temperature are required to accelerate the dynamics for reactive MD simulations. These simulations are performed for different types of hydrocarbons, including alkyne, alkane, and aromatic, and are able to successfully reproduce the experimental trend of reactivity of these hydrocarbons. Moreover, our results indicate that the ignition front propagation speed under supercritical conditions has a strong dependence on equivalence ratio, similar to experimentally measured flame speeds at lower temperatures and pressures, which supports our hypothesis that ignition front speed is related to laminar flame speed. In addition, comparisons between results obtained from ReaxFF simulation and continuum simulations performed under similar conditions show good qualitative and reasonable quantitative agreement. This demonstrates that ReaxFF-based MD simulations are a promising tool to study flame speed/ignition front speed in supercritical hydrocarbon combustion.

  1. Rupture Dynamics Simulation for Non-Planar fault by a Curved Grid Finite Difference Method

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Zhu, G.; Chen, X.

    2011-12-01

    We implement a non-staggered finite-difference method with split nodes to solve the dynamic rupture problem on non-planar faults. The split-node approach has been widely used in dynamic rupture simulation because it represents the fault plane more precisely than alternatives such as the thick-fault or stress-glut methods. The finite-difference method is also a popular numerical method for kinematic and dynamic problems in seismology; however, previous work has focused mostly on the staggered-grid method because of its simplicity and computational efficiency. The staggered-grid method has disadvantages compared with non-staggered finite differences in some respects, for example in describing boundary conditions, especially irregular boundaries or non-planar faults. Zhang and Chen (2006) proposed a high-order MacCormack non-staggered finite-difference method based on curved grids to solve irregular boundary problems precisely. Building on this non-staggered grid method, we successfully simulate the spontaneous rupture problem. The fault plane is a type of boundary condition and may, of course, be irregular, so the approach should allow the rupture process to be simulated for arbitrarily bent fault planes. We first verify that the method is valid in Cartesian coordinates; for bending faults, curvilinear grids are used.

  2. Human swallowing simulation based on videofluorography images using Hamiltonian MPS method

    NASA Astrophysics Data System (ADS)

    Kikuchi, Takahiro; Michiwaki, Yukihiro; Kamiya, Tetsu; Toyama, Yoshio; Tamai, Tasuku; Koshizuka, Seiichi

    2015-09-01

    In developed nations, swallowing disorders and aspiration pneumonia have become serious problems. We developed a method to simulate the behavior of the organs involved in swallowing to clarify the mechanisms of swallowing and aspiration. The shape model is based on anatomically realistic geometry, and the motion model utilizes forced displacements based on realistic dynamic images to reflect the mechanisms of human swallowing. The soft tissue organs are modeled as nonlinear elastic material using the Hamiltonian MPS method. This method allows for stable simulation of the complex swallowing movement. A penalty method using metaballs is employed to simulate contact between organ walls and smooth sliding along the walls. We performed four numerical simulations under different analysis conditions to represent four cases of swallowing, including a healthy volunteer and a patient with a swallowing disorder. The simulation results were compared to examine the epiglottic downfolding mechanism, which strongly influences the risk of aspiration.

  3. An Application of a Stochastic Semi-Continuous Simulation Method for Flood Frequency Analysis: A Case Study in Slovakia

    NASA Astrophysics Data System (ADS)

    Valent, Peter; Paquet, Emmanuel

    2017-09-01

    A reliable estimate of extreme flood characteristics has always been an active topic in hydrological research. Over the decades a large number of approaches and their modifications have been proposed and used, with various methods utilizing continuous simulation of catchment runoff being the subject of the most intensive research in the last decade. In this paper a new and promising stochastic semi-continuous method is used to estimate extreme discharges in two mountainous Slovak catchments of the rivers Váh and Hron, in which snow-melt processes need to be taken into account. The SCHADEX method used here couples a probabilistic precipitation model with a rainfall-runoff model that both continuously simulates catchment hydrological conditions and transforms generated synthetic rainfall events into corresponding discharges. The stochastic nature of the method means that a wide range of synthetic rainfall events were simulated on various historical catchment conditions, taking into account not only the saturation of the soil but also the amount of snow accumulated in the catchment. The results showed that the SCHADEX extreme discharge estimates with return periods of up to 100 years were comparable to those estimated by statistical approaches. In addition, two reconstructed historical floods with corresponding return periods of 100 and 1000 years were compared to the SCHADEX estimates. The results confirmed the usability of the method for estimating design discharges with a recurrence interval of more than 100 years and its applicability in Slovak conditions.

  4. Nesting large-eddy simulations within mesoscale simulations for wind energy applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundquist, J K; Mirocha, J D; Chow, F K

    2008-09-08

    With increasing demand for more accurate atmospheric simulations for wind turbine micrositing, for operational wind power forecasting, and for more reliable turbine design, simulations of atmospheric flow with resolution of tens of meters or higher are required. These time-dependent large-eddy simulations (LES), which resolve individual atmospheric eddies on length scales smaller than turbine blades and account for complex terrain, are possible with a range of commercial and open-source software, including the Weather Research and Forecasting (WRF) model. In addition to 'local' sources of turbulence within an LES domain, changing weather conditions outside the domain can also affect flow, suggesting that a mesoscale model provide boundary conditions to the large-eddy simulations. Nesting a large-eddy simulation within a mesoscale model requires nuanced representations of turbulence. Our group has improved the Weather Research and Forecasting (WRF) model's LES capability by implementing the Nonlinear Backscatter and Anisotropy (NBA) subfilter stress model following Kosovic (1997) and an explicit filtering and reconstruction technique to compute the Resolvable Subfilter-Scale (RSFS) stresses (following Chow et al, 2005). We have also implemented an immersed boundary method (IBM) in WRF to accommodate complex terrain. These new models improve WRF's LES capabilities over complex terrain and in stable atmospheric conditions. We demonstrate approaches to nesting LES within a mesoscale simulation for farms of wind turbines in hilly regions. Results are sensitive to the nesting method, indicating that care must be taken to provide appropriate boundary conditions, and to allow adequate spin-up of turbulence in the LES domain.

  5. Computer simulation of liquid metals

    NASA Astrophysics Data System (ADS)

    Belashchenko, D. K.

    2013-12-01

    Methods for and the results of the computer simulation of liquid metals are reviewed. Two basic methods, classical molecular dynamics with known interparticle potentials and the ab initio method, are considered. Most attention is given to the simulation results obtained using the embedded atom model (EAM). The thermodynamic, structural, and diffusion properties of liquid metal models under normal and extreme (shock) pressure conditions are considered. Simulation results for liquid metals of Groups I-IV, a number of transition metals, and some binary systems (Fe-C, Fe-S) are examined. Possibilities for the simulation to account for the thermal contribution of delocalized electrons to energy and pressure are considered. Solidification features of supercooled metals are also discussed.
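    For reference, the embedded atom model mentioned above writes the total potential energy in the standard form below (a generic statement of the EAM, not a specific parameterization from the review):

```latex
E_{\mathrm{tot}} = \sum_i F_i\left(\rho_i\right) + \frac{1}{2}\sum_{i \neq j}\varphi_{ij}\left(r_{ij}\right),
\qquad
\rho_i = \sum_{j \neq i} \rho_j^{\mathrm{a}}\left(r_{ij}\right),
```

    where F_i is the embedding energy of atom i in the host electron density rho_i, phi_ij is a pairwise interaction, and rho_j^a is the electron-density contribution of neighbor j at separation r_ij.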

  6. Molecular Simulation of the Phase Diagram of Methane Hydrate: Free Energy Calculations, Direct Coexistence Method, and Hyperparallel Tempering.

    PubMed

    Jin, Dongliang; Coasne, Benoit

    2017-10-24

    Different molecular simulation strategies are used to assess the stability of methane hydrate under various temperature and pressure conditions. First, using two water molecular models, free energy calculations consisting of the Einstein molecule approach in combination with semigrand Monte Carlo simulations are used to determine the pressure-temperature phase diagram of methane hydrate. With these calculations, we also estimate the chemical potentials of water and methane and methane occupancy at coexistence. Second, we also consider two other advanced molecular simulation techniques that allow probing the phase diagram of methane hydrate: the direct coexistence method in the Grand Canonical ensemble and the hyperparallel tempering Monte Carlo method. These two direct techniques are found to provide stability conditions that are consistent with the pressure-temperature phase diagram obtained using rigorous free energy calculations. The phase diagram obtained in this work, which is found to be consistent with previous simulation studies, is close to its experimental counterpart provided the TIP4P/Ice model is used to describe the water molecule.

  7. Computational system identification of continuous-time nonlinear systems using approximate Bayesian computation

    NASA Astrophysics Data System (ADS)

    Krishnanathan, Kirubhakaran; Anderson, Sean R.; Billings, Stephen A.; Kadirkamanathan, Visakan

    2016-11-01

    In this paper, we derive a system identification framework for continuous-time nonlinear systems, for the first time using a simulation-focused computational Bayesian approach. Simulation approaches to nonlinear system identification have been shown to outperform regression methods under certain conditions, such as non-persistently exciting inputs and fast-sampling. We use the approximate Bayesian computation (ABC) algorithm to perform simulation-based inference of model parameters. The framework has the following main advantages: (1) parameter distributions are intrinsically generated, giving the user a clear description of uncertainty, (2) the simulation approach avoids the difficult problem of estimating signal derivatives as is common with other continuous-time methods, and (3) as noted above, the simulation approach improves identification under conditions of non-persistently exciting inputs and fast-sampling. Term selection is performed by judging parameter significance using parameter distributions that are intrinsically generated as part of the ABC procedure. The results from a numerical example demonstrate that the method performs well in noisy scenarios, especially in comparison to competing techniques that rely on signal derivative estimation.
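    The core of an ABC procedure like the one referred to above can be sketched as a simple rejection sampler: draw parameters from a prior, simulate the system, and keep draws whose simulated output lies close to the measured data. The toy first-order model, prior ranges, and (deliberately loose) tolerance below are illustrative assumptions, not the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, u, dt=0.01):
    """Toy continuous-time model x' = -a*x + b*u, integrated with forward Euler."""
    a, b = theta
    x, out = 0.0, []
    for uk in u:
        x += dt * (-a * x + b * uk)
        out.append(x)
    return np.array(out)

# Synthetic "measured" data from true parameters (a, b) = (2.0, 1.0) plus noise.
t = np.arange(0.0, 3.0, 0.01)
u = np.sin(t)
y_obs = simulate((2.0, 1.0), u) + rng.normal(0, 0.01, t.size)

# ABC rejection: uniform priors, accept if the output discrepancy is below a tolerance.
accepted = []
for _ in range(2000):
    theta = rng.uniform([0.1, 0.1], [5.0, 5.0])
    dist = np.sqrt(np.mean((simulate(theta, u) - y_obs) ** 2))
    if dist < 0.1:
        accepted.append(theta)

accepted = np.array(accepted)
print("posterior mean estimate:", accepted.mean(axis=0) if len(accepted) else "none accepted")
```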

  8. Design and implementation of an air-conditioning system with storage tank for load shifting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, Y.Y.; Wu, C.J.; Liou, K.L.

    1987-11-01

    The experience with the design, simulation and implementation of an air-conditioning system with chilled water storage tank is presented in this paper. The system is used to shift air-conditioning load of residential and commercial buildings from on-peak to off-peak period. Demand-side load management can thus be achieved if many buildings are equipped with such storage devices. In the design of this system, a lumped-parameter circuit model is first employed to simulate the heat transfer within the air-conditioned building such that the required capacity of the storage tank can be figured out. Then, a set of desirable parameters for the temperature controller of the system are determined using the parameter plane method and the root locus method. The validity of the proposed mathematical model and design approach is verified by comparing the results obtained from field tests with those from the computer simulations. Cost-benefit analysis of the system is also discussed.

  9. An Absorbing Boundary Condition for the Lattice Boltzmann Method Based on the Perfectly Matched Layer

    PubMed Central

    Najafi-Yazdi, A.; Mongeau, L.

    2012-01-01

    The Lattice Boltzmann Method (LBM) is a well established computational tool for fluid flow simulations. This method has been recently utilized for low Mach number computational aeroacoustics. Robust and nonreflective boundary conditions, similar to those used in Navier-Stokes solvers, are needed for LBM-based aeroacoustics simulations. The goal of the present study was to develop an absorbing boundary condition based on the perfectly matched layer (PML) concept for LBM. The derivation of formulations for both two and three dimensional problems are presented. The macroscopic behavior of the new formulation is discussed. The new formulation was tested using benchmark acoustic problems. The perfectly matched layer concept appears to be very well suited for LBM, and yielded very low acoustic reflection factor. PMID:23526050

  10. Affected States Soft Independent Modeling by Class Analogy from the Relation Between Independent Variables, Number of Independent Variables and Sample Size

    PubMed Central

    Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, İrem Ersöz

    2013-01-01

    Objective: The aim of this study is to introduce the method of Soft Independent Modeling of Class Analogy (SIMCA) and to examine whether the method is affected by the number of independent variables, the relationship between variables, and the sample size. Study Design: Simulation study. Material and Methods: The SIMCA model is performed in two stages. Simulations were run to determine whether the method is influenced by the number of independent variables, the relationship between variables, and the sample size. The conditions considered were equal sample sizes in both groups of 30, 100, and 1000; 2, 3, 5, 10, 50, and 100 variables; and quite high, medium, and quite low relationships between variables. Results: The average classification accuracy over 1000 simulation runs for each condition of the trial plan is given in tables. Conclusion: Diagnostic accuracy increases as the number of independent variables increases. SIMCA is suited to cases where the relationship between variables is quite high, the number of independent variables is large, and the data contain outlier values. PMID:25207065

  11. Scalar conservation and boundedness in simulations of compressible flow

    NASA Astrophysics Data System (ADS)

    Subbareddy, Pramod K.; Kartha, Anand; Candler, Graham V.

    2017-11-01

    With the proper combination of high-order, low-dissipation numerical methods, physics-based subgrid-scale models, and boundary conditions it is becoming possible to simulate many combustion flows at relevant conditions. However, non-premixed flows are a particular challenge because the thickness of the fuel/oxidizer interface scales inversely with Reynolds number. Sharp interfaces can also be present in the initial or boundary conditions. When higher-order numerical methods are used, there are often aphysical undershoots and overshoots in the scalar variables (e.g. passive scalars, species mass fractions or progress variable). These numerical issues are especially prominent when low-dissipation methods are used, since sharp jumps in flow variables are not always coincident with regions of strong variation in the scalar fields: consequently, special detection mechanisms and dissipative fluxes are needed. Most numerical methods diffuse the interface, resulting in artificial mixing and spurious reactions. In this paper, we propose a numerical method that mitigates this issue. We present methods for passive and active scalars, and demonstrate their effectiveness with several examples.

  12. Properties of Syntactic Foam for Simulation of Mechanical Insults.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hubbard, Neal Benson; Haulenbeek, Kimberly K.; Spletzer, Matthew A.

    Syntactic foam encapsulation protects sensitive components. The energy mitigated by the foam is calculated with numerical simulations. The properties of a syntactic foam consisting of a mixture of an epoxy-rubber adduct and glass microballoons are obtained from published literature and test results. The conditions and outcomes of the tests are discussed. The method for converting published properties and test results to input for finite element models is described. Simulations of the test conditions are performed to validate the inputs.

  13. A Method for Large Eddy Simulation of Acoustic Combustion Instabilities

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Moin, Parviz

    2003-11-01

    A method for performing Large Eddy Simulation of acoustic combustion instabilities is presented. By extending the low Mach number pressure correction method to the case of compressible flow, a numerical method is developed in which the Poisson equation for pressure is replaced by a Helmholtz equation. The method avoids the acoustic CFL condition by using implicit time advancement, leading to large efficiency gains at low Mach number. The method also avoids artificial damping of acoustic waves. The numerical method is attractive for the simulation of acoustic combustion instabilities, since these flows are typically at low Mach number, and the acoustic frequencies of interest are usually low. Additionally, new boundary conditions based on the work of Poinsot and Lele have been developed to model the acoustic effect of a long channel upstream of the computational inlet, thus avoiding the need to include such a channel in the computational domain. The turbulent combustion model used is the Level Set model of Duchamp de Lageneste and Pitsch for premixed combustion. Comparison of LES results to the reacting experiments of Besson et al. will be presented.

  14. Immersed boundary methods for simulating fluid-structure interaction

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, Fotis; Yang, Xiaolei

    2014-02-01

    Fluid-structure interaction (FSI) problems commonly encountered in engineering and biological applications involve geometrically complex flexible or rigid bodies undergoing large deformations. Immersed boundary (IB) methods have emerged as a powerful simulation tool for tackling such flows due to their inherent ability to handle arbitrarily complex bodies without the need for expensive and cumbersome dynamic re-meshing strategies. Depending on the approach such methods adopt to satisfy boundary conditions on solid surfaces they can be broadly classified as diffused and sharp interface methods. In this review, we present an overview of the fundamentals of both classes of methods with emphasis on solution algorithms for simulating FSI problems. We summarize and juxtapose different IB approaches for imposing boundary conditions, efficient iterative algorithms for solving the incompressible Navier-Stokes equations in the presence of dynamic immersed boundaries, and strong and loose coupling FSI strategies. We also present recent results from the application of such methods to study a wide range of problems, including vortex-induced vibrations, aquatic swimming, insect flying, human walking and renewable energy. Limitations of such methods and the need for future research to mitigate them are also discussed.

  15. Ability of College Students to Simulate ADHD on Objective Measures of Attention

    ERIC Educational Resources Information Center

    Booksh, Randee Lee; Pella, Russell D.; Singh, Ashvind N.; Gouvier, William Drew

    2010-01-01

    Objective: The authors examined the ability of college students to simulate ADHD symptoms on objective and self-report measures and the relationship between knowledge of ADHD and ability to simulate ADHD. Method: Undergraduate students were assigned to a control or a simulated ADHD malingering condition and compared with a clinical AD/HD group.…

  16. Fast Simulation of the Impact Parameter Calculation of Electrons through Pair Production

    NASA Astrophysics Data System (ADS)

    Bang, Hyesun; Kweon, MinJung; Huh, Kyoung Bum; Pachmayer, Yvonne

    2018-05-01

    A fast simulation method is introduced that greatly reduces the time required for the impact parameter calculation, a key observable in physics analyses of high-energy physics experiments and in detector optimisation studies. The impact parameter of electrons produced through pair production was calculated considering the key related processes using the Bethe-Heitler formula, the Tsai formula, and a simple geometric model. The calculations were performed under various conditions and the results were compared with those from full GEANT4 simulations. The computation time using this fast simulation method is about 10^4 times shorter than that of the full GEANT4 simulation.

  17. Star tracking method based on multiexposure imaging for intensified star trackers.

    PubMed

    Yu, Wenbo; Jiang, Jie; Zhang, Guangjun

    2017-07-20

    The requirements for the dynamic performance of star trackers are rapidly increasing with the development of space exploration technologies. However, insufficient knowledge of the angular acceleration has largely decreased the performance of the existing star tracking methods, and star trackers may even fail to track under highly dynamic conditions. This study proposes a star tracking method based on multiexposure imaging for intensified star trackers. The accurate estimation model of the complete motion parameters, including the angular velocity and angular acceleration, is established according to the working characteristic of multiexposure imaging. The estimation of the complete motion parameters is utilized to generate the predictive star image accurately. Therefore, the correct matching and tracking between stars in the real and predictive star images can be reliably accomplished under highly dynamic conditions. Simulations with specific dynamic conditions are conducted to verify the feasibility and effectiveness of the proposed method. Experiments with real starry night sky observation are also conducted for further verification. Simulations and experiments demonstrate that the proposed method is effective and shows excellent performance under highly dynamic conditions.

  18. An efficient 3-D eddy-current solver using an independent impedance method for transcranial magnetic stimulation.

    PubMed

    De Geeter, Nele; Crevecoeur, Guillaume; Dupre, Luc

    2011-02-01

    In many important bioelectromagnetic problem settings, eddy-current simulations are required. Examples are the reduction of eddy-current artifacts in magnetic resonance imaging, and techniques whereby the eddy currents interact with the biological system, such as the alteration of neurophysiology due to transcranial magnetic stimulation (TMS). TMS has become an important tool for the diagnosis and treatment of neurological diseases and psychiatric disorders. A widely applied method for simulating the eddy currents is the impedance method (IM). However, this method has to contend with an ill-conditioned problem and consequently a long convergence time. When dealing with optimal design problems and sensitivity control, the convergence rate becomes even more crucial, since the eddy-current solver needs to be evaluated in an iterative loop. Therefore, we introduce an independent IM (IIM), which improves the conditioning and speeds up the numerical convergence. This paper shows how IIM is based on IM and what its advantages are. Moreover, the method is applied to the efficient simulation of TMS. The proposed IIM achieves superior convergence properties with high time efficiency compared to the traditional IM and is therefore a useful tool for accurate and fast TMS simulations.

  19. Spatio-Temporal Process Simulation of Dam-Break Flood Based on SPH

    NASA Astrophysics Data System (ADS)

    Wang, H.; Ye, F.; Ouyang, S.; Li, Z.

    2018-04-01

    After introducing the SPH (Smoothed Particle Hydrodynamics) simulation method, this paper addresses the key research problems of the spatial and temporal scales suited to GIS (Geographical Information System) applications, the boundary condition equations combined with the underlying surface, and the kernel function and parameters applicable to dam-break flood simulation. On this basis, a calculation method for spatio-temporal process emulation of dam-break floods with refined particles is proposed, and the spatio-temporal process is dynamically simulated using GIS modelling and visualization. The results show that the method yields more information, greater objectivity, and a more realistic representation of the flood process.
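    To make the role of the kernel function concrete, the sketch below evaluates an SPH summation density with the standard cubic spline kernel; the kernel choice, smoothing length, and particle data are generic illustrations, not the parameters selected in the paper.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 3D cubic spline SPH kernel W(r, h) in the Monaghan form."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)          # 3D normalization constant
    w = np.zeros_like(q)
    m1 = q < 1.0
    m2 = (q >= 1.0) & (q < 2.0)
    w[m1] = sigma * (1.0 - 1.5 * q[m1]**2 + 0.75 * q[m1]**3)
    w[m2] = sigma * 0.25 * (2.0 - q[m2])**3
    return w

def sph_density(positions, masses, h):
    """Summation density rho_i = sum_j m_j * W(|r_i - r_j|, h) over all particles."""
    diffs = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diffs, axis=-1)
    return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)

# Tiny illustrative particle set (units arbitrary).
pos = np.random.default_rng(0).uniform(0.0, 0.1, size=(50, 3))
rho = sph_density(pos, masses=np.full(50, 0.001), h=0.03)
print(rho[:5])
```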

  20. A finite-difference time-domain electromagnetic solver in a generalized coordinate system

    NASA Astrophysics Data System (ADS)

    Hochberg, Timothy Allen

    A new finite-difference time-domain method for the simulation of full-wave electromagnetic wave propagation in complex structures is developed. This method is simple and flexible; it allows for the simulation of transient wave propagation in a large class of practical structures. Boundary conditions are implemented for perfect and imperfect electrically conducting boundaries, perfect magnetically conducting boundaries, and absorbing boundaries. The method is validated with the aid of several different types of test cases. Two types of coaxial cables with helical breaks are simulated and the results are discussed.

  1. Optical simulation of flying targets using physically based renderer

    NASA Astrophysics Data System (ADS)

    Cheng, Ye; Zheng, Quan; Peng, Junkai; Lv, Pin; Zheng, Changwen

    2018-02-01

    The simulation of aerial flying targets is widely needed in many fields. This paper proposes a physically based method for optical simulation of flying targets. In the first step, three-dimensional target models are built and the motion speed and direction are defined. Next, the material of the outward appearance of a target is also simulated. Then the illumination conditions are defined. After all definitions are given, all settings are encoded in a description file. Finally, simulated results are generated by Monte Carlo ray tracing in a physically based renderer. Experiments show that this method is able to simulate materials, lighting and motion blur for flying targets, and it can generate convincing and high-quality simulation results.

  2. Identification of hydraulic conductivity structure in sand and gravel aquifers: Cape Cod data set

    USGS Publications Warehouse

    Eggleston, J.R.; Rojstaczer, S.A.; Peirce, J.J.

    1996-01-01

    This study evaluates commonly used geostatistical methods to assess reproduction of hydraulic conductivity (K) structure and sensitivity under limiting amounts of data. Extensive conductivity measurements from the Cape Cod sand and gravel aquifer are used to evaluate two geostatistical estimation methods, conditional mean as an estimate and ordinary kriging, and two stochastic simulation methods, simulated annealing and sequential Gaussian simulation. Our results indicate that for relatively homogeneous sand and gravel aquifers such as the Cape Cod aquifer, neither estimation methods nor stochastic simulation methods give highly accurate point predictions of hydraulic conductivity despite the high density of collected data. Although the stochastic simulation methods yielded higher errors than the estimation methods, the stochastic simulation methods yielded better reproduction of the measured ln(K) distribution and better reproduction of local contrasts in ln(K). The inability of kriging to reproduce high ln(K) values, as reaffirmed by this study, provides strong motivation for choosing stochastic simulation methods to generate conductivity fields when performing fine-scale contaminant transport modeling. Results also indicate that estimation error is relatively insensitive to the number of hydraulic conductivity measurements so long as more than a threshold number of data are used to condition the realizations. This threshold occurs for the Cape Cod site when there are approximately three conductivity measurements per integral volume. The lack of improvement with additional data suggests that although fine-scale hydraulic conductivity structure is evident in the variogram, it is not accurately reproduced by geostatistical estimation methods. If the Cape Cod aquifer spatial conductivity characteristics are indicative of other sand and gravel deposits, then the results on predictive error versus data collection obtained here have significant practical consequences for site characterization. Heavily sampled sand and gravel aquifers, such as Cape Cod and Borden, may have large amounts of redundant data, while in more common real world settings, our results suggest that denser data collection will likely improve understanding of permeability structure.
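    As a pointer to what the ordinary kriging estimates discussed above involve, the sketch below solves the ordinary kriging system for one prediction location using an exponential covariance model; the covariance parameters and sample data are invented for illustration, not taken from the Cape Cod data set.

```python
import numpy as np

def exp_cov(d, sill=1.0, corr_range=10.0):
    """Exponential covariance model C(d) = sill * exp(-d / range)."""
    return sill * np.exp(-d / corr_range)

def ordinary_kriging(xy, z, x0, sill=1.0, corr_range=10.0):
    """Ordinary kriging prediction of ln(K) at location x0 from samples (xy, z)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # Kriging system: [[C, 1], [1^T, 0]] [w; mu] = [c0; 1]
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = exp_cov(d, sill, corr_range)
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.append(exp_cov(np.linalg.norm(xy - x0, axis=1), sill, corr_range), 1.0)
    w = np.linalg.solve(A, b)[:n]
    return w @ z

# Five hypothetical ln(K) measurements (coordinates in m) and one prediction point.
xy = np.array([[0.0, 0.0], [5.0, 1.0], [2.0, 7.0], [8.0, 8.0], [1.0, 4.0]])
lnK = np.array([-9.2, -8.7, -9.5, -8.9, -9.1])
print("kriged ln(K):", ordinary_kriging(xy, lnK, np.array([4.0, 4.0])))
```

    Stochastic simulation methods such as sequential Gaussian simulation instead draw many realizations that honor both the data and the variogram, which is why they better reproduce the local contrasts in ln(K) noted above.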

  3. Analyses of Fatigue Crack Growth and Closure Near Threshold Conditions for Large-Crack Behavior

    NASA Technical Reports Server (NTRS)

    Newman, J. C., Jr.

    1999-01-01

    A plasticity-induced crack-closure model was used to study fatigue crack growth and closure in thin 2024-T3 aluminum alloy under constant-R and constant-K(sub max) threshold testing procedures. Two methods of calculating crack-opening stresses were compared: one based on contact-K analyses and the other on crack-opening-displacement (COD) analyses. These methods gave nearly identical results under constant-amplitude loading, but under threshold simulations the contact-K analyses gave lower opening stresses than the contact COD method. Crack-growth predictions tend to support the use of contact-K analyses. Crack-growth simulations showed that remote closure can cause a rapid rise in opening stresses in the near-threshold regime for low constraint and high applied stress levels. Under low applied stress levels and high constraint, a rise in opening stresses was not observed near threshold conditions, but crack-tip-opening displacements (CTOD) were of the order of measured oxide thicknesses in the 2024 alloy under constant-R simulations. In contrast, under constant-K(sub max) testing the CTOD near threshold conditions were an order of magnitude larger than measured oxide thicknesses. Residual plastic deformations under both constant-R and constant-K(sub max) threshold simulations were several times larger than the expected oxide thicknesses. Thus, residual plastic deformations, in addition to oxide and roughness, play an integral part in threshold development.

  4. Some recent developments of the immersed interface method for flow simulation

    NASA Astrophysics Data System (ADS)

    Xu, Sheng

    2017-11-01

    The immersed interface method is a general methodology for solving PDEs subject to interfaces. In this talk, I will give an overview of some recent developments of the method toward the enhancement of its robustness for flow simulation. In particular, I will present with numerical results how to capture boundary conditions on immersed rigid objects, how to adopt interface triangulation in the method, and how to parallelize the method for flow with moving objects. With these developments, the immersed interface method can achieve accurate and efficient simulation of a flow involving multiple moving complex objects. Thanks to NSF for the support of this work under Grant NSF DMS 1320317.

  5. Implementation of Slater Boundary Condition into OVERFLOW

    NASA Astrophysics Data System (ADS)

    Duncan, Sean

    Bleed is one of the primary methods of controlling the flow within a mixed compression inlet. In this work the Slater boundary condition, first applied in WindUS, is implemented in OVERFLOW. Further, a simulation using discrete holes is run in order to show the differences between use of the boundary condition and use of the bleed hole geometry. Recent tests at Wright Patterson Air Force Base seek to provide a baseline for study of mixed compression inlets. The inlet used by the Air Force Research Laboratory is simulated in the modified OVERFLOW. The results from the experiment are compared to the CFD to qualitatively assess the accuracy of the simulations. The boundary condition is shown to be robust and viable in studying bleed.

  6. Boundary point corrections for variable radius plots - simulation results

    Treesearch

    Margaret Penner; Sam Otukol

    2000-01-01

    The boundary plot problem is encountered when a forest inventory plot includes two or more forest conditions. Depending on the correction method used, the resulting estimates can be biased. The various correction alternatives are reviewed. No correction, area correction, half sweep, and toss-back methods are evaluated using simulation on an actual data set. Based on...

  7. MAESTRO: Methods and Advanced Equipment for Simulation and Treatment in Radio-Oncology

    NASA Astrophysics Data System (ADS)

    Barthe, Jean; Hugon, Régis; Nicolai, Jean Philippe

    2007-12-01

    The integrated project MAESTRO (Methods and Advanced Equipment for Simulation and Treatment in Radio-Oncology), under contract with the European Commission in life sciences FP6 (LSHC-CT-2004-503564), concerns innovative research to develop and validate, under clinical conditions, advanced methods and equipment needed in cancer treatment for new modalities in high-conformal external radiotherapy using high-energy electron, photon, and proton beams.

  8. Realistic soft tissue deformation strategies for real time surgery simulation.

    PubMed

    Shen, Yunhe; Zhou, Xiangmin; Zhang, Nan; Tamma, Kumar; Sweet, Robert

    2008-01-01

    A volume-preserving deformation method (VPDM) is developed to complement the mass-spring method (MSM) and improve its deformation quality for modeling soft tissue in surgical simulation. This method can also be implemented as a stand-alone model. The proposed VPDM satisfies Newton's laws of motion by obtaining the resultant vectors from an equilibrium condition. The proposed method has been tested in virtual surgery systems with haptic rendering demands.

  9. Full-Envelope Launch Abort System Performance Analysis Methodology

    NASA Technical Reports Server (NTRS)

    Aubuchon, Vanessa V.

    2014-01-01

    The implementation of a new dispersion methodology is described, which disperses abort initiation altitude or time along with all other Launch Abort System (LAS) parameters during Monte Carlo simulations. In contrast, the standard methodology assumes that an abort initiation condition is held constant (e.g., aborts initiated at altitude for Mach 1, altitude for maximum dynamic pressure, etc.) while dispersing other LAS parameters. The standard method results in large gaps in performance information due to the discrete nature of initiation conditions, while the full-envelope dispersion method provides a significantly more comprehensive assessment of LAS abort performance for the full launch vehicle ascent flight envelope and identifies performance "pinch-points" that may occur at flight conditions outside of those contained in the discrete set. The new method has significantly increased the fidelity of LAS abort simulations and confidence in the results.

  10. Affected States soft independent modeling by class analogy from the relation between independent variables, number of independent variables and sample size.

    PubMed

    Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, Irem Ersöz

    2013-03-01

    The aim of this study is to introduce the Soft Independent Modeling of Class Analogy (SIMCA) method and to determine whether it is affected by the number of independent variables, the relationship between variables, and the sample size. Simulation study. The SIMCA model is performed in two stages. To determine whether the method is influenced by the number of independent variables, the relationship between variables, and the sample size, simulations were run for conditions in which the sample sizes of the two groups are equal at 30, 100, or 1000; the number of variables is 2, 3, 5, 10, 50, or 100; and the relationship between variables is quite high, medium, or quite low. The average classification accuracy of the simulations, which were repeated 1000 times for each condition of the trial plan, is given in tables. Diagnostic accuracy increases as the number of independent variables increases. The SIMCA method is suitable for conditions in which the relationship between variables is quite high, the number of independent variables is large, and the data contain outlier values.

  11. Resolved-particle simulation by the Physalis method: Enhancements and new capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierakowski, Adam J., E-mail: sierakowski@jhu.edu; Prosperetti, Andrea; Faculty of Science and Technology and J.M. Burgers Centre for Fluid Dynamics, University of Twente, P.O. Box 217, 7500 AE Enschede

    2016-03-15

    We present enhancements and new capabilities of the Physalis method for simulating disperse multiphase flows using particle-resolved simulation. The current work enhances the previous method by incorporating a new type of pressure-Poisson solver that couples with a new Physalis particle pressure boundary condition scheme and a new particle interior treatment to significantly improve overall numerical efficiency. Further, we implement a more efficient method of calculating the Physalis scalar products and incorporate short-range particle interaction models. We provide validation and benchmarking for the Physalis method against experiments of a sedimenting particle and of normal wall collisions. We conclude with an illustrative simulation of 2048 particles sedimenting in a duct. In the appendix, we present a complete and self-consistent description of the analytical development and numerical methods.

  12. The resistance of the lichen Circinaria gyrosa (nom. provis.) towards simulated Mars conditions—a model test for the survival capacity of an eukaryotic extremophile

    NASA Astrophysics Data System (ADS)

    Sánchez, F. J.; Mateo-Martí, E.; Raggio, J.; Meeßen, J.; Martínez-Frías, J.; Sancho, L. G.; Ott, S.; de la Torre, R.

    2012-11-01

    The "Planetary Atmospheres and Surfaces Chamber" (PASC, at Centro de Astrobiología, INTA, Madrid) is able to simulate the atmosphere and surface temperature of most of the solar system planets. PASC is especially appropriate to study irradiation-induced changes of geological, chemical, and biological samples under a wide range of controlled atmospheric and temperature conditions. Therefore, PASC is a valid method to test the resistance potential of extremophile organisms under diverse harsh conditions and thus assess the habitability of extraterrestrial environments. In the present study, we have investigated the resistance of a symbiotic organism under simulated Mars conditions, exemplified with the lichen Circinaria gyrosa - an extremophilic eukaryote. After 120 hours of exposure to simulated but representative Mars atmosphere, temperature, pressure and UV conditions, the unaltered photosynthetic performance demonstrated the high resistance of the lichen photobiont.

  13. A review of hybrid implicit explicit finite difference time domain method

    NASA Astrophysics Data System (ADS)

    Chen, Juan

    2018-06-01

    The finite-difference time-domain (FDTD) method has been extensively used to simulate varieties of electromagnetic interaction problems. However, because of its Courant-Friedrichs-Lewy (CFL) condition, the maximum time step size of this method is limited by the minimum size of cell used in the computational domain. The FDTD method is therefore inefficient for simulating electromagnetic problems that contain very fine structures. To deal with this problem, the Hybrid Implicit Explicit (HIE)-FDTD method is developed. The HIE-FDTD method uses the hybrid implicit explicit difference in the direction with fine structures to avoid the confinement of the fine spatial mesh on the time step size. This method therefore has much higher computational efficiency than the FDTD method and is extremely useful for problems that have fine structures in one direction. In this paper, the basic formulations, time stability condition and dispersion error of the HIE-FDTD method are presented. The implementations of several boundary conditions, including the connect boundary, absorbing boundary and periodic boundary, are described; then some applications and important developments of this method are provided. The goal of this paper is to provide a historical overview and future prospects of the HIE-FDTD method.
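    The review above turns on the CFL constraint tying the explicit FDTD time step to the smallest cell. The short sketch below is not taken from the paper; the grid sizes and the hybrid stability form (fine direction treated implicitly and dropped from the bound) are illustrative assumptions used to contrast the two limits.

    ```python
    import math

    C0 = 299_792_458.0  # speed of light in vacuum, m/s

    def fdtd_dt_limit(dx, dy, dz):
        """Standard 3D FDTD CFL limit: dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2))."""
        return 1.0 / (C0 * math.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

    def hie_fdtd_dt_limit(dx, dy):
        """Assumed HIE-FDTD limit when the z direction is handled implicitly:
        the fine cell size dz drops out of the stability bound."""
        return 1.0 / (C0 * math.sqrt(1.0 / dx**2 + 1.0 / dy**2))

    # A grid with a fine structure resolved only along z (illustrative sizes)
    dx = dy = 1e-3      # 1 mm cells in the coarse directions
    dz = 1e-6           # 1 micron cells across a thin layer

    print(f"explicit FDTD dt limit : {fdtd_dt_limit(dx, dy, dz):.3e} s")
    print(f"HIE-FDTD dt limit      : {hie_fdtd_dt_limit(dx, dy):.3e} s")
    # The explicit limit collapses to roughly dz/c, while the hybrid limit stays near dx/c.
    ```

    With a 1 micron cell in only one direction, the explicit step shrinks by about three orders of magnitude while the hybrid limit does not, which is the efficiency argument made in the abstract.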

  14. Numerical simulation of tonal fan noise of computers and air conditioning systems

    NASA Astrophysics Data System (ADS)

    Aksenov, A. A.; Gavrilyuk, V. N.; Timushev, S. F.

    2016-07-01

    Current approaches to fan noise simulation are mainly based on the Lighthill equation and the so-called aeroacoustic analogy, which are also based on the transformed Lighthill equation, such as the well-known FW-H equation or the Kirchhoff theorem. A disadvantage of such methods leading to significant modeling errors is associated with incorrect solution of the decomposition problem, i.e., separation of acoustic and vortex (pseudosound) modes in the area of the oscillation source. In this paper, we propose a method for tonal noise simulation based on the mesh solution of the Helmholtz equation for the Fourier transform of pressure perturbation with boundary conditions in the form of the complex impedance. A noise source is placed on the surface surrounding each fan rotor. The acoustic fan power is determined by the acoustic-vortex method, which ensures more accurate decomposition and determination of the pressure pulsation amplitudes in the near field of the fan.

  15. Modelling Geomechanical Heterogeneity of Rock Masses Using Direct and Indirect Geostatistical Conditional Simulation Methods

    NASA Astrophysics Data System (ADS)

    Eivazy, Hesameddin; Esmaieli, Kamran; Jean, Raynald

    2017-12-01

    An accurate characterization and modelling of rock mass geomechanical heterogeneity can lead to more efficient mine planning and design. Using deterministic approaches and random field methods for modelling rock mass heterogeneity is known to be limited in simulating the spatial variation and spatial pattern of the geomechanical properties. Although the applications of geostatistical techniques have demonstrated improvements in modelling the heterogeneity of geomechanical properties, geostatistical estimation methods such as Kriging result in estimates of geomechanical variables that are not fully representative of field observations. This paper reports on the development of 3D models for spatial variability of rock mass geomechanical properties using geostatistical conditional simulation method based on sequential Gaussian simulation. A methodology to simulate the heterogeneity of rock mass quality based on the rock mass rating is proposed and applied to a large open-pit mine in Canada. Using geomechanical core logging data collected from the mine site, a direct and an indirect approach were used to model the spatial variability of rock mass quality. The results of the two modelling approaches were validated against collected field data. The study aims to quantify the risks of pit slope failure and provides a measure of uncertainties in spatial variability of rock mass properties in different areas of the pit.
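    As a rough illustration of the conditional simulation idea used in the paper, the following is a minimal sequential Gaussian simulation sketch on a 1D transect: simple kriging with an assumed exponential covariance drives conditional draws along a random path, so every realization honors the (normal-score) borehole data. The covariance model, its parameters, and the data values are illustrative assumptions, not the mine-site workflow described in the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def expo_cov(h, sill=1.0, range_a=50.0):
        """Exponential covariance model (sill and range are assumed, not from the paper)."""
        return sill * np.exp(-np.abs(h) / range_a)

    def sgs_1d(x_grid, x_data, z_data, sill=1.0, a=50.0):
        """Sequential Gaussian simulation of a standardized variable on a 1D grid,
        conditioned on point data via simple kriging (zero mean)."""
        known_x = list(x_data)
        known_z = list(z_data)
        z_sim = np.empty_like(x_grid)
        for i in rng.permutation(len(x_grid)):          # random simulation path
            xk = np.array(known_x)
            zk = np.array(known_z)
            C = expo_cov(xk[:, None] - xk[None, :], sill, a)
            c0 = expo_cov(xk - x_grid[i], sill, a)
            w = np.linalg.solve(C + 1e-10 * np.eye(len(xk)), c0)  # simple-kriging weights
            mean = w @ zk
            var = max(sill - w @ c0, 0.0)
            z_sim[i] = rng.normal(mean, np.sqrt(var))   # draw from the conditional Gaussian
            known_x.append(x_grid[i])                   # condition later nodes on this draw
            known_z.append(z_sim[i])
        return z_sim

    # Standardized (normal-score) rock-quality values at three boreholes (illustrative)
    x_data = np.array([10.0, 55.0, 120.0])
    z_data = np.array([0.8, -0.3, 1.2])
    realization = sgs_1d(np.linspace(0.0, 150.0, 151), x_data, z_data)
    print(realization[:5])
    ```

    Repeating the loop with different random paths and seeds yields an ensemble of equally probable realizations, which is what allows the uncertainty quantification discussed in the abstract.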

  16. Numerical simulation of pseudoelastic shape memory alloys using the large time increment method

    NASA Astrophysics Data System (ADS)

    Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad

    2017-04-01

    The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must satisfy the conditions of static and kinematic admissibility and consistency simultaneously after several iterations. A 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation using the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted with those obtained using step-by-step incremental integration.

  17. Detached Eddy Simulation Results for a Space Launch System Configuration at Liftoff Conditions and Comparison with Experiment

    NASA Technical Reports Server (NTRS)

    Krist, Steven E.; Ghaffari, Farhad

    2015-01-01

    Computational simulations for a Space Launch System configuration at liftoff conditions for incidence angles from 0 to 90 degrees were conducted in order to generate integrated force and moment data and longitudinal lineloads. While the integrated force and moment coefficients can be obtained from wind tunnel testing, computational analyses are indispensable in obtaining the extensive amount of surface information required to generate proper lineloads. However, beyond an incidence angle of about 15 degrees, the effects of massive flow separation on the leeward pressure field are not well captured with state-of-the-art Reynolds-Averaged Navier-Stokes methods, necessitating the employment of a Detached Eddy Simulation method. Results from these simulations are compared to the liftoff force and moment database and surface pressure data derived from a test in the NASA Langley 14- by 22-Foot Subsonic Wind Tunnel.

  18. Thermal Simulations, Open Boundary Conditions and Switches

    NASA Astrophysics Data System (ADS)

    Burnier, Yannis; Florio, Adrien; Kaczmarek, Olaf; Mazur, Lukas

    2018-03-01

    SU(N) gauge theories on compact spaces have a non-trivial vacuum structure characterized by a countable set of topological sectors and their topological charge. In lattice simulations, every topological sector needs to be explored a number of times which reflects its weight in the path integral. Current lattice simulations are impeded by the so-called freezing of the topological charge problem. As the continuum is approached, energy barriers between topological sectors become well defined and the simulations get trapped in a given sector. A possible way out was introduced by Lüscher and Schaefer using open boundary conditions in the time extent. However, this solution cannot be used for thermal simulations, where the time direction is required to be periodic. In these proceedings, we present results obtained using open boundary conditions in space, at non-zero temperature. With these conditions, the topological charge is not quantized and the topological barriers are lifted. A downside of this method is the strong finite-size effects introduced by the boundary conditions. We also present some exploratory results which show how these conditions could be used on an algorithmic level to reshuffle the system and generate periodic configurations with non-zero topological charge.

  19. Rapid Optimal SPH Particle Distributions in Spherical Geometries For Creating Astrophysical Initial Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raskin, Cody; Owen, J. Michael

    Creating spherical initial conditions in smoothed particle hydrodynamics simulations that are spherically conformal is a difficult task. Here, we describe two algorithmic methods for evenly distributing points on surfaces that when paired can be used to build three-dimensional spherical objects with optimal equipartition of volume between particles, commensurate with an arbitrary radial density function. We demonstrate the efficacy of our method against stretched lattice arrangements on the metrics of hydrodynamic stability, spherical conformity, and the harmonic power distribution of gravitational settling oscillations. We further demonstrate how our method is highly optimized for simulating multi-material spheres, such as planets with core–mantle boundaries.

  20. Rapid Optimal SPH Particle Distributions in Spherical Geometries For Creating Astrophysical Initial Conditions

    DOE PAGES

    Raskin, Cody; Owen, J. Michael

    2016-03-24

    Creating spherical initial conditions in smoothed particle hydrodynamics simulations that are spherically conformal is a difficult task. Here, we describe two algorithmic methods for evenly distributing points on surfaces that when paired can be used to build three-dimensional spherical objects with optimal equipartition of volume between particles, commensurate with an arbitrary radial density function. We demonstrate the efficacy of our method against stretched lattice arrangements on the metrics of hydrodynamic stability, spherical conformity, and the harmonic power distribution of gravitational settling oscillations. We further demonstrate how our method is highly optimized for simulating multi-material spheres, such as planets with core–mantle boundaries.

  1. RAPID OPTIMAL SPH PARTICLE DISTRIBUTIONS IN SPHERICAL GEOMETRIES FOR CREATING ASTROPHYSICAL INITIAL CONDITIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raskin, Cody; Owen, J. Michael

    2016-04-01

    Creating spherical initial conditions in smoothed particle hydrodynamics simulations that are spherically conformal is a difficult task. Here, we describe two algorithmic methods for evenly distributing points on surfaces that when paired can be used to build three-dimensional spherical objects with optimal equipartition of volume between particles, commensurate with an arbitrary radial density function. We demonstrate the efficacy of our method against stretched lattice arrangements on the metrics of hydrodynamic stability, spherical conformity, and the harmonic power distribution of gravitational settling oscillations. We further demonstrate how our method is highly optimized for simulating multi-material spheres, such as planets with core–mantle boundaries.

  2. Assimilating Flow Data into Complex Multiple-Point Statistical Facies Models Using Pilot Points Method

    NASA Astrophysics Data System (ADS)

    Ma, W.; Jafarpour, B.

    2017-12-01

    We develop a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, their calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) and its multiple data assimilation variant (ES-MDA) are adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at select locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
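    The score map described above combines three information sources. A minimal sketch of that combination step follows; the equal weighting, the contents of the maps, and the top-k selection rule are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def normalize(a):
        """Rescale a map to [0, 1] so the three information sources are comparable."""
        return (a - a.min()) / (a.max() - a.min() + 1e-12)

    # Illustrative 50 x 50 maps standing in for the three sources named in the abstract
    # (real maps would come from the facies ensemble and the flow simulator).
    facies_uncertainty = rng.random((50, 50))      # e.g. ensemble facies variance
    response_sensitivity = rng.random((50, 50))    # e.g. sensitivity of flow response
    data_mismatch = rng.random((50, 50))           # e.g. mapped production-data misfit

    score = (normalize(facies_uncertainty)
             + normalize(response_sensitivity)
             + normalize(data_mismatch))           # equal weights: an assumption

    # Pick the n_pilot highest-scoring cells as candidate pilot-point locations.
    n_pilot = 10
    flat_idx = np.argsort(score, axis=None)[-n_pilot:]
    pilot_points = np.column_stack(np.unravel_index(flat_idx, score.shape))
    print(pilot_points)
    ```

    In the paper's workflow, facies values at the selected locations would then be inferred from production data (e.g. via ES-MDA) before re-running the MPS simulation.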

  3. Experiments and simulations of Richtmyer-Meshkov Instability with measured, volumetric initial conditions

    NASA Astrophysics Data System (ADS)

    Sewell, Everest; Ferguson, Kevin; Jacobs, Jeffrey; Greenough, Jeff; Krivets, Vitaliy

    2016-11-01

    We describe experiments of single-shock Richtmyer-Meshkov Instability (RMI) performed on the shock tube apparatus at the University of Arizona in which the initial conditions are volumetrically imaged prior to shock wave arrival. Initial perturbations play a major role in the evolution of RMI, and previous experimental efforts only capture a single plane of the initial condition. The method presented uses a rastered laser sheet to capture additional images throughout the depth of the initial condition immediately before the shock arrival time. These images are then used to reconstruct a volumetric approximation of the experimental perturbation. Analysis of the initial perturbations is performed, and then used as initial conditions in simulations using the hydrodynamics code ARES, developed at Lawrence Livermore National Laboratory (LLNL). Experiments are presented and comparisons are made with simulation results.

  4. Detection of air-gap eccentricity and broken-rotor bar conditions in a squirrel-cage induction motor using the radial flux sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, Don-Ha; Woo, Byung-Chul; Sun, Jong-Ho

    2008-04-01

    A new method for detecting eccentricity and broken rotor bar conditions in a squirrel-cage induction motor is proposed. Air-gap flux variation analysis is done using search coils, which are inserted at stator slots. Using this method, the leakage flux in the radial direction can be directly detected. Using the finite element method, the air-gap flux variation is accurately modeled and analyzed. From the results of the simulation, a motor under normal condition shows a maximum magnetic flux density of 1.3 T. On the other hand, the eccentric air-gap condition displays about 1.1 T at 60 deg. and 1.6 T at 240 deg. A flux density difference of 0.5 T is found in the abnormal condition, whereas no difference is detected in the normal motor. In the broken rotor bar conditions, the flux densities at 65 deg. and 155 deg. are about 0.4 T and 0.8 T, respectively. These simulation results coincide with those of the experiment. Consequently, the measurement of the magnetic flux at the air gap is an effective way to discriminate the fault conditions of eccentricity and broken rotor bars.

  5. Prospective Educational Applications of Mental Simulation: A Meta-Review

    ERIC Educational Resources Information Center

    van Meer, Josephine P.; Theunissen, Nicolet C. M.

    2009-01-01

    This paper focuses on the potential of mental simulation (mentally rehearsing an action to enhance performance) as a useful contemporary educational method. By means of a meta-review, it is examined which conditions impede or facilitate the effectiveness of mental simulation (MS). A computer search was conducted using Ovid PsycINFO. Reviews,…

  6. Simulating Pressure Profiles for the Free-Electron Laser Photoemission Gun Using Molflow+

    NASA Astrophysics Data System (ADS)

    Song, Diego; Hernandez-Garcia, Carlos

    2012-10-01

    The Jefferson Lab Free Electron Laser (FEL) generates tunable laser light by passing a relativistic electron beam generated in a high-voltage DC electron gun with a semiconducting photocathode through a magnetic undulator. The electron gun is kept under stringent vacuum conditions in order to guarantee photocathode longevity. Considering an upgrade of the electron gun, this project consists of simulating pressure profiles to determine if the novel design meets the electron gun vacuum requirements. The method of simulation employs the software Molflow+, developed by R. Kersevan at the Organisation Européenne pour la Recherche Nucléaire (CERN), which uses the test-particle Monte Carlo method to simulate molecular flows in 3D structures. Pressure is obtained along specified chamber axes. Results are then compared to measured pressure values from the existing gun for validation. Outgassing rates, surface area, and pressure were found to be proportionally related. The simulations indicate that the upgrade gun vacuum chamber requires more pumping compared to its predecessor, while it holds similar vacuum conditions. The ability to simulate pressure profiles through tools like Molflow+ allows researchers to optimize vacuum systems during the engineering process.

  7. Simulating reservoir lithologies by an actively conditioned Markov chain model

    NASA Astrophysics Data System (ADS)

    Feng, Runhai; Luthi, Stefan M.; Gisolf, Dries

    2018-06-01

    The coupled Markov chain model can be used to simulate reservoir lithologies between wells, by conditioning them on the observed data in the cored wells. However, with this method, only the state at the same depth as the current cell is going to be used for conditioning, which may be a problem if the geological layers are dipping. This will cause the simulated lithological layers to be broken or to become discontinuous across the reservoir. In order to address this problem, an actively conditioned process is proposed here, in which a tolerance angle is predefined. The states contained in the region constrained by the tolerance angle will be employed for conditioning in the horizontal chain first, after which a coupling concept with the vertical chain is implemented. In order to use the same horizontal transition matrix for different future states, the tolerance angle has to be small. This allows the method to work in reservoirs without complex structures caused by depositional processes or tectonic deformations. Directional artefacts in the modeling process are avoided through a careful choice of the simulation path. The tolerance angle and dipping direction of the strata can be obtained from a correlation between wells, or from seismic data, which are available in most hydrocarbon reservoirs, either by interpretation or by inversion that can also assist the construction of a horizontal probability matrix.

  8. Effects of Missing Data Methods in SEM under Conditions of Incomplete and Nonnormal Data

    ERIC Educational Resources Information Center

    Li, Jian; Lomax, Richard G.

    2017-01-01

    Using Monte Carlo simulations, this research examined the performance of four missing data methods in SEM under different multivariate distributional conditions. The effects of four independent variables (sample size, missing proportion, distribution shape, and factor loading magnitude) were investigated on six outcome variables: convergence rate,…

  9. Scaling Methods for Simulating Aircraft In-Flight Icing Encounters

    NASA Technical Reports Server (NTRS)

    Anderson, David N.; Ruff, Gary A.

    1997-01-01

    This paper discusses scaling methods which permit the use of subscale models in icing wind tunnels to simulate natural flight in icing. Natural icing conditions exist when air temperatures are below freezing but cloud water droplets are super-cooled liquid. Aircraft flying through such clouds are susceptible to the accretion of ice on the leading edges of unprotected components such as wings, tailplane and engine inlets. To establish the aerodynamic penalties of such ice accretion and to determine what parts need to be protected from ice accretion (by heating, for example), extensive flight and wind-tunnel testing is necessary for new aircraft and components. Testing in icing tunnels is less expensive than flight testing, is safer, and permits better control of the test conditions. However, because of limitations on both model size and operating conditions in wind tunnels, it is often necessary to perform tests with either size or test conditions scaled. This paper describes the theoretical background to the development of icing scaling methods, discusses four methods, and presents results of tests to validate them.

  10. Noise sensitivity of portfolio selection in constant conditional correlation GARCH models

    NASA Astrophysics Data System (ADS)

    Varga-Haszonits, I.; Kondor, I.

    2007-11-01

    This paper investigates the efficiency of minimum variance portfolio optimization for stock price movements following the Constant Conditional Correlation GARCH process proposed by Bollerslev. Simulations show that the quality of portfolio selection can be improved substantially by computing optimal portfolio weights from conditional covariances instead of unconditional ones. Measurement noise can be further reduced by applying some filtering method on the conditional correlation matrix (such as Random Matrix Theory based filtering). As an empirical support for the simulation results, the analysis is also carried out for a time series of S&P500 stock prices.
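    A minimal sketch of the comparison underlying the abstract: simulate a CCC-GARCH(1,1) process, then compute global minimum-variance weights from the conditional covariance (current GARCH variances combined with the constant correlation matrix) and from the unconditional sample covariance. The parameters and dimensions are illustrative assumptions, and no Random Matrix Theory filtering step is included.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_ccc_garch(T, R, omega, alpha, beta):
        """Simulate returns from a constant-conditional-correlation GARCH(1,1) process.
        (All parameters are illustrative, not estimated from data.)"""
        n = len(omega)
        L = np.linalg.cholesky(R)
        h = omega / (1.0 - alpha - beta)            # start at the unconditional variances
        rets, h_hist = [], []
        for _ in range(T):
            z = L @ rng.standard_normal(n)          # correlated standard innovations
            r = np.sqrt(h) * z
            rets.append(r)
            h_hist.append(h.copy())
            h = omega + alpha * r**2 + beta * h     # GARCH(1,1) variance recursion
        return np.array(rets), np.array(h_hist)

    def min_variance_weights(cov):
        """Global minimum-variance portfolio: w = inv(Sigma) 1 / (1' inv(Sigma) 1)."""
        ones = np.ones(cov.shape[0])
        w = np.linalg.solve(cov, ones)
        return w / w.sum()

    R = np.array([[1.0, 0.3, 0.2],
                  [0.3, 1.0, 0.4],
                  [0.2, 0.4, 1.0]])
    omega = np.array([0.02, 0.03, 0.025])
    alpha = np.array([0.08, 0.10, 0.09])
    beta = np.array([0.90, 0.87, 0.88])

    rets, h_hist = simulate_ccc_garch(2000, R, omega, alpha, beta)

    d = np.diag(np.sqrt(h_hist[-1]))                # today's conditional volatilities
    cond_cov = d @ R @ d                            # conditional covariance Sigma_t
    uncond_cov = np.cov(rets.T)                     # unconditional sample covariance

    print("conditional   weights:", min_variance_weights(cond_cov).round(3))
    print("unconditional weights:", min_variance_weights(uncond_cov).round(3))
    ```

    Comparing the realized variance of the two portfolios over a hold-out window is one way to reproduce the kind of efficiency comparison the abstract reports.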

  11. Investigation of the Multiple Method Adaptive Control (MMAC) method for flight control systems

    NASA Technical Reports Server (NTRS)

    Athans, M.; Baram, Y.; Castanon, D.; Dunn, K. P.; Green, C. S.; Lee, W. H.; Sandell, N. R., Jr.; Willsky, A. S.

    1979-01-01

    The stochastic adaptive control of the NASA F-8C digital-fly-by-wire aircraft using the multiple model adaptive control (MMAC) method is presented. The selection of the performance criteria for the lateral and the longitudinal dynamics, the design of the Kalman filters for different operating conditions, the identification algorithm associated with the MMAC method, the control system design, and simulation results obtained using the real time simulator of the F-8 aircraft at the NASA Langley Research Center are discussed.

  12. Performance issues for iterative solvers in device simulation

    NASA Technical Reports Server (NTRS)

    Fan, Qing; Forsyth, P. A.; Mcmacken, J. R. F.; Tang, Wei-Pai

    1994-01-01

    Due to memory limitations, iterative methods have become the method of choice for large scale semiconductor device simulation. However, it is well known that these methods still suffer from reliability problems. The linear systems which appear in numerical simulation of semiconductor devices are notoriously ill-conditioned. In order to produce robust algorithms for practical problems, careful attention must be given to many implementation issues. This paper concentrates on strategies for developing robust preconditioners. In addition, effective data structures and convergence check issues are also discussed. These algorithms are compared with a standard direct sparse matrix solver on a variety of problems.

  13. A Spiking Neural Simulator Integrating Event-Driven and Time-Driven Computation Schemes Using Parallel CPU-GPU Co-Processing: A Case Study.

    PubMed

    Naveros, Francisco; Luque, Niceto R; Garrido, Jesús A; Carrillo, Richard R; Anguita, Mancia; Ros, Eduardo

    2015-07-01

    Time-driven simulation methods in traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods in CPUs and time-driven simulation methods in graphic processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event-and-time-driven spiking neural network simulator suitable for a hybrid CPU-GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously in both small layers and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated in CPU using event-driven methods while the high-activity subsystems can be simulated in either CPU (a few neurons) or GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration) similar to many other biologically inspired and also artificial neural networks.
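    A toy illustration of the event-driven versus time-driven contrast discussed above, for a single leaky integrate-and-fire neuron with delta-current synapses (all parameters are illustrative; this is not the simulator described in the paper). Because the membrane potential only decays between input spikes, updating the state at input events is exact, whereas the fixed-step loop must visit every time step regardless of activity.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Leaky integrate-and-fire neuron with delta-current synapses (illustrative values).
    TAU_M, V_TH, V_RESET, WEIGHT = 20.0, 1.0, 0.0, 0.12   # ms, normalized potentials

    spike_times = np.sort(rng.uniform(0.0, 1000.0, size=400))   # input spike train, ms

    def run_event_driven(inputs):
        """Between input spikes the potential only decays, so it suffices to update the
        state at input events: exact exponential decay, then a jump by the weight."""
        v, t_last, out = 0.0, 0.0, []
        for t in inputs:
            v *= np.exp(-(t - t_last) / TAU_M)   # exact decay over the silent interval
            v += WEIGHT
            if v >= V_TH:
                out.append(t)
                v = V_RESET
            t_last = t
        return out

    def run_time_driven(inputs, dt=0.1):
        """Fixed-step (forward Euler) integration of the same neuron."""
        steps = int(1000.0 / dt)
        v, out = 0.0, []
        spikes = np.histogram(inputs, bins=steps, range=(0.0, 1000.0))[0]
        for k in range(steps):
            v += dt * (-v / TAU_M) + WEIGHT * spikes[k]
            if v >= V_TH:
                out.append(k * dt)
                v = V_RESET
        return out

    print("event-driven output spikes:", len(run_event_driven(spike_times)))
    print("time-driven  output spikes:", len(run_time_driven(spike_times)))
    ```

    The event-driven loop does work proportional to the number of input spikes, while the time-driven loop does work proportional to the simulated duration divided by dt, which is the trade-off the paper exploits when distributing layers between CPU and GPU back ends.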

  14. Scalar conservation and boundedness in simulations of compressible flow

    DOE PAGES

    Subbareddy, Pramod K.; Kartha, Anand; Candler, Graham V.

    2017-08-07

    With the proper combination of high-order, low-dissipation numerical methods, physics-based subgrid-scale models, and boundary conditions it is becoming possible to simulate many combustion flows at relevant conditions. However, non-premixed flows are a particular challenge because the thickness of the fuel/oxidizer interface scales inversely with Reynolds number. Sharp interfaces can also be present in the initial or boundary conditions. When higher-order numerical methods are used, there are often aphysical undershoots and overshoots in the scalar variables (e.g. passive scalars, species mass fractions or progress variable). These numerical issues are especially prominent when low-dissipation methods are used, since sharp jumps in flow variables are not always coincident with regions of strong variation in the scalar fields: consequently, special detection mechanisms and dissipative fluxes are needed. Most numerical methods diffuse the interface, resulting in artificial mixing and spurious reactions. In this paper, we propose a numerical method that mitigates this issue. We present methods for passive and active scalars, and demonstrate their effectiveness with several examples.

  15. Proteus: a direct forcing method in the simulations of particulate flows

    NASA Astrophysics Data System (ADS)

    Feng, Zhi-Gang; Michaelides, Efstathios E.

    2005-01-01

    A new and efficient direct numerical method for the simulation of particulate flows is introduced. The method combines desired elements of the immersed boundary method, the direct forcing method and the lattice Boltzmann method. Adding a forcing term in the momentum equation enforces the no-slip condition on the boundary of a moving particle. By applying the direct forcing scheme, Proteus eliminates the need for the determination of free parameters, such as the stiffness coefficient in the penalty scheme or the two relaxation parameters in the adaptive-forcing scheme. The method presents a significant improvement over the previously introduced immersed-boundary-lattice-Boltzmann method (IB-LBM) where the forcing term was computed using a penalty method and a user-defined parameter. The method allows the enforcement of the rigid body motion of a particle in a more efficient way. Compared to the "bounce-back" scheme used in the conventional LBM, the direct-forcing method provides a smoother computational boundary for particles and is capable of achieving results at higher Reynolds number flows. By using a set of Lagrangian points to track the boundary of a particle, Proteus eliminates any need for the determination of the boundary nodes that are prescribed by the "bounce-back" scheme at every time step. It also makes computations for particles of irregular shapes simpler and more efficient. Proteus has been developed in two as well as three dimensions. This new method has been validated by comparing its results with those from experimental measurements for a single sphere settling in an enclosure under gravity. As a demonstration of the efficiency and capabilities of the present method, the settling of a large number (1232) of spherical particles is simulated in a narrow box under two different boundary conditions. It is found that when the no-slip boundary condition is imposed at the front and rear sides of the box, the particles' motion is significantly hindered. Under the periodic boundary conditions, the particles move faster. The simulations show that the sedimentation characteristics in a box with periodic boundary conditions at the two sides are very close to those found in the sedimentation of two-dimensional circular particles. In Greek mythology, Proteus is a hero, the son of Poseidon. In addition to his ability to change shapes and take different forms at will, Zeus granted him the power to make correct predictions for the future. One cannot expect better attributes from a numerical code.

  16. Dimensional Metrology of Non-rigid Parts Without Specialized Inspection Fixtures

    NASA Astrophysics Data System (ADS)

    Sabri, Vahid

    Quality control is an important factor for manufacturing companies looking to prosper in an era of globalization, market pressures and technological advances. Functionality and product quality cannot be guaranteed without this important aspect. Manufactured parts have deviations from their nominal (CAD) shape caused by the manufacturing process. Thus, geometric inspection is a very important element in the quality control of mechanical parts. We will focus here on the geometric inspection of non-rigid (flexible) parts, which are widely used in the aeronautic and automotive industries. Non-rigid parts can have different forms in a free-state condition compared with their nominal models due to residual stress and gravity loads. To solve this problem, dedicated inspection fixtures are generally used in industry to compensate for the displacement of such parts, simulating the use state in order to perform geometric inspections. These fixtures and the installation and inspection processes are expensive and time-consuming. Our aim in this thesis is therefore to develop an inspection method which eliminates the need for specialized fixtures. This is done by acquiring a point cloud from the part in a free-state condition using a contactless measuring device such as optical scanning and comparing it with the CAD model for deviation identification. Using a non-rigid registration method and finite element analysis, we numerically inspect the profile of a non-rigid part. To do so, a simulated displacement is performed using an improved definition of displacement boundary conditions for simulating unfixed parts. In addition, we propose a numerical method for dimensional metrology of non-rigid parts in a free-state condition based on arc length measurement, calculating the geodesic distance using the Fast Marching Method (FMM). In this thesis, we apply the developed methods to industrial non-rigid parts with free-form surfaces, simulated with different types of displacement, defect, and measurement noise, in order to evaluate their metrological performance.

  17. Optimizing Prednisolone Loading into Distiller's Dried Grain Kafirin Microparticles, and In vitro Release for Oral Delivery.

    PubMed

    Lau, Esther T L; Johnson, Stuart K; Williams, Barbara A; Mikkelsen, Deirdre; McCourt, Elizabeth; Stanley, Roger A; Mereddy, Ram; Halley, Peter J; Steadman, Kathryn J

    2017-05-19

    Kafirin microparticles have potential as colon-targeted delivery systems because of their ability to protect encapsulated material from digestive processes of the upper gastrointestinal tract (GIT). The aim was to optimize prednisolone loading into kafirin microparticles, and investigate their potential as an oral delivery system. Response surface methodology (RSM) was used to predict the optimal formulation of prednisolone loaded microparticles. Prednisolone release from the microparticles was measured in simulated conditions of the GIT. The RSM models were inadequate for predicting the relationship between starting quantities of kafirin and prednisolone, and prednisolone loading into microparticles. Compared to prednisolone released in the simulated gastric and small intestinal conditions, no additional drug release was observed in simulated colonic conditions. Hence, more insight into factors affecting drug loading into kafirin microparticles is required to improve the robustness of the RSM model. This present method of formulating prednisolone-loaded kafirin microparticles is unlikely to offer clinical benefits over commercially available dosage forms. Nevertheless, the overall amount of prednisolone released from the kafirin microparticles in conditions simulating the human GIT demonstrates their ability to prevent the release of entrapped core material. Further work developing the formulation methods may result in a delivery system that targets the lower GIT.

  18. Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Coroneos, Rula M.

    2007-01-01

    Simulation of divot weight in the insulating foam, associated with the external tank of the U.S. space shuttle, has been evaluated using least squares and neural network concepts. The simulation required models based on fundamental considerations that can be used to predict under what conditions voids form, the size of the voids, and subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both the linear least squares method and the nonlinear neural network predicted identical results.

  19. Development of Human Posture Simulation Method for Assessing Posture Angles and Spinal Loads

    PubMed Central

    Lu, Ming-Lun; Waters, Thomas; Werren, Dwight

    2015-01-01

    Video-based posture analysis employing a biomechanical model is gaining a growing popularity for ergonomic assessments. A human posture simulation method of estimating multiple body postural angles and spinal loads from a video record was developed to expedite ergonomic assessments. The method was evaluated by a repeated measures study design with three trunk flexion levels, two lift asymmetry levels, three viewing angles and three trial repetitions as experimental factors. The study comprised two phases evaluating the accuracy of simulating self and other people’s lifting posture via a proxy of a computer-generated humanoid. The mean values of the accuracy of simulating self and humanoid postures were 12° and 15°, respectively. The repeatability of the method for the same lifting condition was excellent (~2°). The least simulation error was associated with side viewing angle. The estimated back compressive force and moment, calculated by a three dimensional biomechanical model, exhibited a range of 5% underestimation. The posture simulation method enables researchers to simultaneously quantify body posture angles and spinal loading variables with accuracy and precision comparable to on-screen posture matching methods. PMID:26361435

  20. Enhanced sampling simulations to construct free-energy landscape of protein-partner substrate interaction.

    PubMed

    Ikebe, Jinzen; Umezawa, Koji; Higo, Junichi

    2016-03-01

    Molecular dynamics (MD) simulations using all-atom and explicit solvent models provide valuable information on the detailed behavior of protein-partner substrate binding at the atomic level. As the power of computational resources increase, MD simulations are being used more widely and easily. However, it is still difficult to investigate the thermodynamic properties of protein-partner substrate binding and protein folding with conventional MD simulations. Enhanced sampling methods have been developed to sample conformations that reflect equilibrium conditions in a more efficient manner than conventional MD simulations, thereby allowing the construction of accurate free-energy landscapes. In this review, we discuss these enhanced sampling methods using a series of case-by-case examples. In particular, we review enhanced sampling methods conforming to trivial trajectory parallelization, virtual-system coupled multicanonical MD, and adaptive lambda square dynamics. These methods have been recently developed based on the existing method of multicanonical MD simulation. Their applications are reviewed with an emphasis on describing their practical implementation. In our concluding remarks we explore extensions of the enhanced sampling methods that may allow for even more efficient sampling.

  1. Integral methods of solving boundary-value problems of nonstationary heat conduction and their comparative analysis

    NASA Astrophysics Data System (ADS)

    Kot, V. A.

    2017-11-01

    The modern state of approximate integral methods used in applications, where the processes of heat conduction and heat and mass transfer are of first importance, is considered. Integral methods have found a wide utility in different fields of knowledge: problems of heat conduction with different heat-exchange conditions, simulation of thermal protection, Stefan-type problems, microwave heating of a substance, problems on a boundary layer, simulation of a fluid flow in a channel, thermal explosion, laser and plasma treatment of materials, simulation of the formation and melting of ice, inverse heat problems, temperature and thermal definition of nanoparticles and nanoliquids, and others. Moreover, polynomial solutions are of interest because the determination of a temperature (concentration) field is an intermediate stage in the mathematical description of any other process. The following main methods were investigated on the basis of the error norms: the Tsoi and Postol'nik methods, the method of integral relations, the Goodman integral method of heat balance, the improved Volkov integral method, the matched integral method, the modified Hristov method, the Mayer integral method, the Kudinov method of additional boundary conditions, the Fedorov boundary method, the method of weighted temperature function, and the integral method of boundary characteristics. It was established that the last two methods are characterized by high convergence and frequently give solutions whose accuracy is not worse than the accuracy of numerical solutions.

  2. Markov chains of infinite order and asymptotic satisfaction of balance: application to the adaptive integration method.

    PubMed

    Earl, David J; Deem, Michael W

    2005-04-14

    Adaptive Monte Carlo methods can be viewed as implementations of Markov chains with infinite memory. We derive a general condition for the convergence of a Monte Carlo method whose history dependence is contained within the simulated density distribution. In convergent cases, our result implies that the balance condition need only be satisfied asymptotically. As an example, we show that the adaptive integration method converges.
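    As a toy illustration of history-dependent (adaptive) Monte Carlo, the sketch below re-fits a Gaussian importance-sampling proposal to the entire weighted sample history at each iteration, so every step depends on all previous ones. It is not the adaptive integration method analyzed in the paper; the integrand and the proposal family are assumptions made only to show the idea of adaptation from accumulated history.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def f(x):
        """Integrand concentrated away from the initial proposal (illustrative)."""
        return np.exp(-0.5 * (x - 2.0) ** 2)

    def gauss_pdf(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

    # History-dependent adaptation: the proposal is re-fit to the weighted sample
    # history, so iteration k depends on everything drawn before it.
    mu, sigma = 0.0, 3.0
    xs, ws, estimates = [], [], []
    for _ in range(20):
        x = rng.normal(mu, sigma, size=2000)
        w = f(x) / gauss_pdf(x, mu, sigma)          # importance weights
        xs.append(x)
        ws.append(w)
        estimates.append(w.mean())                  # estimate of the integral of f
        all_x, all_w = np.concatenate(xs), np.concatenate(ws)
        mu = np.average(all_x, weights=all_w)       # adapt from the full history
        sigma = np.sqrt(np.average((all_x - mu) ** 2, weights=all_w)) + 1e-6

    print("last estimates:", np.round(estimates[-3:], 4))
    print("exact integral:", round(np.sqrt(2.0 * np.pi), 4))
    ```

    The convergence result summarized in the abstract is what justifies this kind of scheme: the balance condition need only be satisfied asymptotically as the adapted density settles down.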

  3. Detection of stator winding faults in induction motors using three-phase current monitoring.

    PubMed

    Sharifi, Rasool; Ebrahimi, Mohammad

    2011-01-01

    The objective of this paper is to propose a new method for the detection of inter-turn short circuits in the stator windings of induction motors. In the previously reported methods, the supply voltage unbalance was the major difficulty, and this was solved mostly based on the sequence component impedance or current, which are difficult to implement. Some other methods are essentially offline methods. The proposed method is based on motor current signature analysis and utilizes the three phase current spectra to overcome the mentioned problem. Simulation results indicate that under healthy conditions, the rotor slot harmonics have the same magnitude in the three phase currents, while under even a one-turn (0.3%) short-circuit condition they differ from each other. Although the magnitude of these harmonics depends on the level of unbalanced voltage, they have the same magnitude in the three phases under these conditions. Experiments performed under various load, fault, and supply voltage conditions validate the simulation results and demonstrate the effectiveness of the proposed technique. It is shown that the detection of resistive slight short circuits, without sensitivity to supply voltage unbalance, is possible. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
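    A schematic sketch of the detection logic described above: synthesize three phase currents containing a rotor-slot harmonic, take their spectra, and compare the harmonic magnitudes across phases. The frequencies, amplitudes, and fault asymmetry below are made-up illustrative values, not the paper's model or data.

    ```python
    import numpy as np

    fs, f0 = 10_000.0, 50.0                 # sampling rate and supply frequency (illustrative)
    t = np.arange(0, 1.0, 1 / fs)
    f_slot = 950.0                          # assumed rotor-slot-harmonic frequency

    def phase_current(phase_shift, slot_amp):
        """Fundamental plus a rotor-slot harmonic whose amplitude differs under a fault."""
        return (10.0 * np.sin(2 * np.pi * f0 * t + phase_shift)
                + slot_amp * np.sin(2 * np.pi * f_slot * t + phase_shift))

    def slot_harmonic_magnitude(i):
        """Magnitude of the current spectrum at the slot-harmonic frequency."""
        spec = np.abs(np.fft.rfft(i)) / len(i)
        freqs = np.fft.rfftfreq(len(i), 1 / fs)
        return spec[np.argmin(np.abs(freqs - f_slot))]

    # Healthy machine: identical slot-harmonic amplitude in the three phases.
    healthy = [phase_current(s, 0.20) for s in (0.0, -2 * np.pi / 3, 2 * np.pi / 3)]
    # Inter-turn short circuit in one phase: the amplitudes differ (values are made up).
    faulted = [phase_current(0.0, 0.20),
               phase_current(-2 * np.pi / 3, 0.27),
               phase_current(2 * np.pi / 3, 0.16)]

    for label, currents in (("healthy", healthy), ("faulted", faulted)):
        mags = np.array([slot_harmonic_magnitude(i) for i in currents])
        print(f"{label}: slot-harmonic magnitudes = {mags.round(3)},"
              f" spread = {mags.max() - mags.min():.3f}")
    ```

    A per-phase spread near zero flags a healthy machine, while a nonzero spread flags asymmetry; the paper's contribution is showing that this asymmetry indicator is insensitive to supply voltage unbalance.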

  4. A post-processing method to simulate the generalized RF sheath boundary condition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myra, James R.; Kohno, Haruhiko

    For applications of ICRF power in fusion devices, control of RF sheath interactions is of great importance. A sheath boundary condition (SBC) was previously developed to provide an effective surface impedance for the interaction of the RF sheath with the waves. The SBC enables the surface power flux and rectified potential energy available for sputtering to be calculated. For legacy codes which cannot easily implement the SBC, or to speed convergence in codes which do implement it, we consider here an approximate method to simulate SBCs by post-processing results obtained using other, e.g. conducting wall, boundary conditions. The basic approximation is that the modifications resulting from the generalized SBC are driven by a fixed incoming wave which could be either a fast wave or a slow wave. Finally, the method is illustrated in slab geometry and compared with exact numerical solutions; it is shown to work very well.

  5. A post-processing method to simulate the generalized RF sheath boundary condition

    DOE PAGES

    Myra, James R.; Kohno, Haruhiko

    2017-10-23

    For applications of ICRF power in fusion devices, control of RF sheath interactions is of great importance. A sheath boundary condition (SBC) was previously developed to provide an effective surface impedance for the interaction of the RF sheath with the waves. The SBC enables the surface power flux and rectified potential energy available for sputtering to be calculated. For legacy codes which cannot easily implement the SBC, or to speed convergence in codes which do implement it, we consider here an approximate method to simulate SBCs by post-processing results obtained using other, e.g. conducting wall, boundary conditions. The basic approximation is that the modifications resulting from the generalized SBC are driven by a fixed incoming wave which could be either a fast wave or a slow wave. Finally, the method is illustrated in slab geometry and compared with exact numerical solutions; it is shown to work very well.

  6. On simulation of no-slip condition in the method of discrete vortices

    NASA Astrophysics Data System (ADS)

    Shmagunov, O. A.

    2017-10-01

    When modeling flows of an incompressible fluid, it is sometimes convenient to use the method of discrete vortices (MDV), where the continuous vorticity field is approximated by a set of discrete vortex elements moving in the velocity field. The vortex elements have a clear physical interpretation, they do not require the construction of grids and are automatically adaptive, since they concentrate in the regions of greatest interest and successfully describe the flows of a non-viscous fluid. The possibility of using MDV for simulating flows of a viscous fluid was considered in previous papers using examples of flows past bodies with sharp edges, with the no-penetration condition imposed at solid boundaries. However, the appearance of vorticity on smooth boundaries requires the no-slip condition to be met when MDV is realized, which substantially complicates the initially simple method. In this connection, an approach is considered that allows solving the problem by simple means.

  7. A non-hydrostatic flat-bottom ocean model entirely based on Fourier expansion

    NASA Astrophysics Data System (ADS)

    Wirth, A.

    2005-01-01

    We show how to implement free-slip and no-slip boundary conditions in a three dimensional Boussinesq flat-bottom ocean model based on Fourier expansion. Our method is inspired by the immersed or virtual boundary technique in which the effect of boundaries on the flow field is modeled by a virtual force field. Our method, however, explicitly depletes the velocity on the boundary induced by the pressure, while at the same time respecting the incompressibility of the flow field. Spurious spatial oscillations remain at a negligible level in the simulated flow field when using our technique and no filtering of the flow field is necessary. We furthermore show that by using the method presented here the residual velocities at the boundaries are easily reduced to a negligible value. This stands in contradistinction to previous calculations using the immersed or virtual boundary technique. The efficiency is demonstrated by simulating a Rayleigh impulsive flow, for which the time evolution of the simulated flow is compared to an analytic solution, and a three dimensional Boussinesq simulation of ocean convection. The second instance is taken from a well-studied oceanographic context: a free-slip boundary condition is applied on the upper surface, the modeled sea surface, and a no-slip boundary condition to the lower boundary, the modeled ocean floor. Convergence properties of the method are investigated by solving a two dimensional stationary problem at different spatial resolutions. The work presented here is restricted to a flat ocean floor. Extensions of our method to ocean models with a realistic topography are discussed.

  8. Light-transmittance predictions under multiple-light-scattering conditions. I. Direct problem: hybrid-method approximation.

    PubMed

    Czerwiński, M; Mroczka, J; Girasole, T; Gouesbet, G; Gréhan, G

    2001-03-20

    Our aim is to present a method of predicting light transmittances through dense three-dimensional layered media. A hybrid method is introduced as a combination of the four-flux method with coefficients predicted from a Monte Carlo statistical model to take into account the actual three-dimensional geometry of the problem under study. We present the principles of the hybrid method, some exemplifying results of numerical simulations, and their comparison with results obtained from the Bouguer-Lambert-Beer law and from Monte Carlo simulations.

  9. On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method

    PubMed Central

    Roux, Benoît; Weare, Jonathan

    2013-01-01

    An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method. PMID:23464140

  10. Spotting the difference in molecular dynamics simulations of biomolecules

    NASA Astrophysics Data System (ADS)

    Sakuraba, Shun; Kono, Hidetoshi

    2016-08-01

    Comparing two trajectories from molecular simulations conducted under different conditions is not a trivial task. In this study, we apply a method called Linear Discriminant Analysis with ITERative procedure (LDA-ITER) to compare two molecular simulation results by finding the appropriate projection vectors. Because LDA-ITER attempts to determine a projection such that the projections of the two trajectories do not overlap, the comparison does not suffer from a strong anisotropy, which is an issue in protein dynamics. LDA-ITER is applied to two test cases: the T4 lysozyme protein simulation with or without a point mutation and the allosteric protein PDZ2 domain of hPTP1E with or without a ligand. The projection determined by the method agrees with the experimental data and previous simulations. The proposed procedure, which complements existing methods, is a versatile analytical method that is specialized to find the "difference" between two trajectories.

  11. Moving charged particles in lattice Boltzmann-based electrokinetics

    NASA Astrophysics Data System (ADS)

    Kuron, Michael; Rempfer, Georg; Schornbaum, Florian; Bauer, Martin; Godenschwager, Christian; Holm, Christian; de Graaf, Joost

    2016-12-01

    The motion of ionic solutes and charged particles under the influence of an electric field and the ensuing hydrodynamic flow of the underlying solvent is ubiquitous in aqueous colloidal suspensions. The physics of such systems is described by a coupled set of differential equations, along with boundary conditions, collectively referred to as the electrokinetic equations. Capuani et al. [J. Chem. Phys. 121, 973 (2004)] introduced a lattice-based method for solving this system of equations, which builds upon the lattice Boltzmann algorithm for the simulation of hydrodynamic flow and exploits computational locality. However, thus far, a description of how to incorporate moving boundary conditions into the Capuani scheme has been lacking. Moving boundary conditions are needed to simulate multiple arbitrarily moving colloids. In this paper, we detail how to introduce such a particle coupling scheme, based on an analogue to the moving boundary method for the pure lattice Boltzmann solver. The key ingredients in our method are mass and charge conservation for the solute species and a partial-volume smoothing of the solute fluxes to minimize discretization artifacts. We demonstrate our algorithm's effectiveness by simulating the electrophoresis of charged spheres in an external field; for a single sphere we compare to the equivalent electro-osmotic (co-moving) problem. Our method's efficiency and ease of implementation should prove beneficial to future simulations of the dynamics in a wide range of complex nanoscopic and colloidal systems that were previously inaccessible to lattice-based continuum algorithms.

  12. Temporal acceleration of spatially distributed kinetic Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Abhijit; Vlachos, Dionisios G.

    The computational intensity of kinetic Monte Carlo (KMC) simulation is a major impediment in simulating large length and time scales. In recent work, an approximate method for KMC simulation of spatially uniform systems, termed the binomial τ-leap method, was introduced [A. Chatterjee, D.G. Vlachos, M.A. Katsoulakis, Binomial distribution based τ-leap accelerated stochastic simulation, J. Chem. Phys. 122 (2005) 024112], where molecular bundles instead of individual processes are executed over coarse-grained time increments. This temporal coarse-graining can lead to significant computational savings but its generalization to spatially distributed lattice KMC simulation has not been realized yet. Here we extend the binomial τ-leap method to lattice KMC simulations by combining it with spatially adaptive coarse-graining. Absolute stability and computational speed-up analyses for spatial systems along with simulations provide insights into the conditions where accuracy and substantial acceleration of the new spatio-temporal coarse-graining method are ensured. Model systems demonstrate that the r-time increment criterion of Chatterjee et al. obeys the absolute stability limit for values of r up to near 1.
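    To make the bundling idea concrete, the following is a minimal sketch of binomial τ-leaping for a single well-mixed first-order decay A → B (not the spatially adaptive lattice extension described above); the rate, leap size, and population are hypothetical.

      import numpy as np

      def binomial_tau_leap_decay(n0, rate, tau, t_end, seed=0):
          """Binomial tau-leaping for A -> B with propensity a = rate * n.
          Firings per leap are Binomial(n, p) with p = a*tau/n, so the number
          executed can never exceed the available population."""
          rng = np.random.default_rng(seed)
          t, n = 0.0, n0
          history = [(t, n)]
          while t < t_end and n > 0:
              p = min(1.0, rate * tau)      # a*tau/n reduces to rate*tau here
              n -= rng.binomial(n, p)
              t += tau
              history.append((t, n))
          return history

      # Hypothetical parameters: 10,000 molecules, rate 0.1 /s, leaps of 0.5 s
      traj = binomial_tau_leap_decay(n0=10_000, rate=0.1, tau=0.5, t_end=20.0)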

  13. Using HexSim to simulate complex species, landscape, and stressor interactions

    EPA Science Inventory

    Background / Question / Methods The use of simulation models in conservation biology, landscape ecology, and other disciplines is increasing. Models are essential tools for researchers who, for example, need to forecast future conditions, weigh competing recovery and mitigation...

  14. Operationalising elaboration theory for simulation instruction design: a Delphi study.

    PubMed

    Haji, Faizal A; Khan, Rabia; Regehr, Glenn; Ng, Gary; de Ribaupierre, Sandrine; Dubrowski, Adam

    2015-06-01

    The aim of this study was to assess the feasibility of incorporating the Delphi process within the simplifying conditions method (SCM) described in elaboration theory (ET) to identify conditions impacting the complexity of procedural skills for novice learners. We generated an initial list of conditions impacting the complexity of lumbar puncture (LP) from key informant interviews (n = 5) and a literature review. Eighteen clinician-educators from six different medical specialties were subsequently recruited as expert panellists. Over three Delphi rounds, these panellists rated: (i) their agreement with the inclusion of the simple version of the conditions in a representative ('epitome') training scenario, and (ii) how much the inverse (complex) version increases LP complexity for a novice. Cronbach's α-values were used to assess inter-rater agreement. All panellists completed Rounds 1 and 2 of the survey and 17 completed Round 3. In Round 1, Cronbach's α-values were 0.89 and 0.94 for conditions that simplify and increase LP complexity, respectively; both values increased to 0.98 in Rounds 2 and 3. With the exception of 'high CSF (cerebral spinal fluid) pressure', panellists agreed with the inclusion of all conditions in the simplest (epitome) training scenario. Panellists rated patient movement, spinal anatomy, patient cooperativeness, body habitus, and the presence or absence of an experienced assistant as having the greatest impact on the complexity of LP. This study demonstrated the feasibility of using expert consensus to establish conditions impacting the complexity of procedural skills, and the benefits of incorporating the Delphi method into the SCM. These data can be used to develop and sequence simulation scenarios in a progressively challenging manner. If the theorised learning gains associated with ET are realised, the methods described in this study may be applied to the design of simulation training for other procedural and non-procedural skills, thereby advancing the agenda of theoretically based instruction design in health care simulation. © 2015 John Wiley & Sons Ltd.

  15. FDTD simulation of field performance in reverberation chamber excited by two excitation antennas

    NASA Astrophysics Data System (ADS)

    Wang, Song; Wu, Zhan-cheng; Cui, Yao-zhong

    2013-03-01

    The excitation source is one of the critical items that determine the electromagnetic fields in a reverberation chamber (RC). In order to optimize the electromagnetic field performance, a new method of exciting the RC with two antennas is proposed based on theoretical analysis. The full 3D simulation of the RC is carried out with the finite difference time domain (FDTD) method under two excitation conditions: one antenna and two antennas. The broadband response of the RC is obtained by fast Fourier transformation (FFT) after only one simulation. Numerical data show that the field uniformity in the test space is improved under the two-transmitting-antenna condition, while the normalized electric fields decrease slightly compared to the one-antenna condition. It is straightforward to recognize that two-antenna excitation can reduce the demands on the power amplifier, as the total input power is split between the two antennas; consequently, the cost of electromagnetic compatibility (EMC) testing in a large-scale RC can be reduced.

  16. Wetting boundary condition for the color-gradient lattice Boltzmann method: Validation with analytical and experimental data

    NASA Astrophysics Data System (ADS)

    Akai, Takashi; Bijeljic, Branko; Blunt, Martin J.

    2018-06-01

    In the color gradient lattice Boltzmann model (CG-LBM), a fictitious-density wetting boundary condition has been widely used because of its ease of implementation. However, as we show, this may lead to inaccurate results in some cases. In this paper, a new scheme for the wetting boundary condition is proposed which can handle complicated 3D geometries. The validity of our method for static problems is demonstrated by comparing the simulated results to analytical solutions in 2D and 3D geometries with curved boundaries. Then, capillary rise simulations are performed to study dynamic problems where the three-phase contact line moves. The results are compared to experimental results in the literature (Heshmati and Piri, 2014). If a constant contact angle is assumed, the simulations agree with the analytical solution based on the Lucas-Washburn equation. However, to match the experiments, we need to implement a dynamic contact angle that varies with the flow rate.
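    For reference, the constant-contact-angle benchmark mentioned above is the Lucas-Washburn solution; a minimal sketch with hypothetical fluid and tube parameters is given below.

      import numpy as np

      def lucas_washburn_length(t, sigma, theta_deg, radius, viscosity):
          """Imbibition length l(t) = sqrt(sigma * r * cos(theta) * t / (2 * mu)),
          neglecting gravity and inertia (classic Lucas-Washburn regime)."""
          theta = np.deg2rad(theta_deg)
          return np.sqrt(sigma * radius * np.cos(theta) * t / (2.0 * viscosity))

      # Hypothetical water-like fluid in a 0.2 mm radius capillary
      t = np.linspace(0.0, 1.0, 101)                       # s
      l = lucas_washburn_length(t, sigma=0.072,            # N/m
                                theta_deg=30.0,
                                radius=2.0e-4,             # m
                                viscosity=1.0e-3)          # Pa s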

  17. Patient-specific coronary artery blood flow simulation using myocardial volume partitioning

    NASA Astrophysics Data System (ADS)

    Kim, Kyung Hwan; Kang, Dongwoo; Kang, Nahyup; Kim, Ji-Yeon; Lee, Hyong-Euk; Kim, James D. K.

    2013-03-01

    Using computational simulation, we can analyze cardiovascular disease in a non-invasive and quantitative manner. More specifically, computational modeling and simulation technology has enabled us to analyze functional aspects such as blood flow, as well as anatomical aspects such as stenosis, from medical images without invasive measurements. The simplest way to perform blood flow simulation is to apply patient-specific coronary anatomy with otherwise average-valued properties; such conditions, however, cannot fully reflect the accurate physiological properties of patients. To resolve this limitation, we present a new patient-specific coronary blood flow simulation method based on myocardial volume partitioning that considers the artery/myocardium structural correspondence. We exploit the fact that blood supply is closely related to the mass of the myocardial segment corresponding to each artery. We therefore apply this concept to set up simulation conditions that incorporate as many patient-specific features as possible from the medical image: first, the coronary arteries and myocardium are segmented separately from cardiac CT; then the myocardium is partitioned into multiple regions based on the coronary vasculature. The myocardial mass and the required blood mass for each artery are estimated by converting the myocardial volume fraction. Finally, the required blood mass is used as the boundary condition for each artery outlet, given the average aortic blood flow rate and pressure. To show the effectiveness of the proposed method, the fractional flow reserve (FFR) simulated from CT images was compared with invasive FFR measurements from real patient data, and an accuracy of 77% was obtained.
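    A minimal sketch of the partitioning idea for the outlet boundary conditions (flow allocated in proportion to the myocardial mass supplied by each artery) is shown below; the segment masses and total flow are hypothetical, not patient data.

      def outlet_flow_targets(myocardial_masses, total_coronary_flow):
          """Split the total coronary flow among artery outlets in proportion
          to the myocardial mass (or volume fraction) each outlet supplies."""
          total_mass = sum(myocardial_masses.values())
          return {outlet: total_coronary_flow * mass / total_mass
                  for outlet, mass in myocardial_masses.items()}

      # Hypothetical segment masses (g) and a total coronary flow (mL/min)
      masses = {"LAD": 55.0, "LCx": 40.0, "RCA": 45.0}
      flows = outlet_flow_targets(masses, total_coronary_flow=250.0)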

  18. High Fidelity Simulation of Primary Atomization in Diesel Engine Sprays

    NASA Astrophysics Data System (ADS)

    Ivey, Christopher; Bravo, Luis; Kim, Dokyun

    2014-11-01

    A high-fidelity numerical simulation of jet breakup and spray formation from a complex diesel fuel injector at ambient conditions has been performed. A full understanding of the primary atomization process in diesel fuel injection has not been achieved for several reasons, including the difficulty of accessing the optically dense region. Due to recent advances in numerical methods and computing resources, high-fidelity simulations of atomizing flows are becoming available to provide new insights into the process. In the present study, an unstructured un-split Volume-of-Fluid (VoF) method coupled to a stochastic Lagrangian spray model is employed to simulate the atomization process. A common rail fuel injector is simulated by using a nozzle geometry available through the Engine Combustion Network. The working conditions correspond to a single-orifice (90 μm) JP-8-fueled injector operating at an injection pressure of 90 bar into an ambient at 29 bar and 300 K filled with 100% nitrogen, with Rel = 16,071 and Wel = 75,334, setting the spray in the full atomization mode. The experimental dataset from the Army Research Laboratory is used for validation in terms of global spray parameters and local droplet distributions. The quantitative comparison will be presented and discussed. Supported by Oak Ridge Associated Universities and the Army Research Laboratory.

  19. Reference Computational Meshing Strategy for Computational Fluid Dynamics Simulation of Departure from Nucleate Boiling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pointer, William David

    The objective of this effort is to establish a strategy and process for generation of a suitable computational mesh for computational fluid dynamics simulations of departure from nucleate boiling in a 5 by 5 fuel rod assembly held in place by PWR mixing vane spacer grids. This mesh generation process will support ongoing efforts to develop, demonstrate and validate advanced multi-phase computational fluid dynamics methods that enable more robust identification of dryout conditions and DNB occurrence. Building upon prior efforts and experience, multiple computational meshes were developed using the native mesh generation capabilities of the commercial CFD code STAR-CCM+. These meshes were used to simulate two test cases from the Westinghouse 5 by 5 rod bundle facility. The sensitivity of predicted quantities of interest to the mesh resolution was then established using two evaluation methods, the Grid Convergence Index method and the Least Squares method. This evaluation suggests that the Least Squares method can reliably establish the uncertainty associated with local parameters such as vector velocity components at a point in the domain or surface-averaged quantities such as outlet velocity magnitude. However, neither method is suitable for characterization of uncertainty in global extrema such as peak fuel surface temperature, primarily because such parameters are not necessarily associated with a fixed point in space. This shortcoming is significant because the current generation algorithm for identification of DNB event conditions relies on identification of such global extrema. Ongoing efforts to identify DNB based on local surface conditions will address this challenge.

  20. SU-E-T-656: Quantitative Analysis of Proton Boron Fusion Therapy (PBFT) in Various Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, D; Jung, J; Shin, H

    2015-06-15

    Purpose: Three alpha particles are produced in the proton-boron interaction, which can be used in radiotherapy applications. We performed simulation studies to determine the effectiveness of proton boron fusion therapy (PBFT) under various conditions. Methods: Boron uptake regions (BURs) of various widths and densities were implemented in the Monte Carlo n-particle extended (MCNPX) simulation code. The effect of proton beam energy was considered for different BURs. Four simulation scenarios were designed to verify the effectiveness of the integrated boost that was observed in the proton boron reaction. In these simulations, the effect of proton beam energy was determined for different physical conditions, such as size, location, and boron concentration. Results: Proton dose amplification was confirmed for all proton beam energies considered (< 96.62%). Based on the simulation results for different physical conditions, the threshold for the range in which proton dose amplification occurred was estimated as 0.3 cm. An effective proton boron reaction requires the boron concentration to be equal to or greater than 14.4 mg/g. Conclusion: We established the effects of PBFT under various conditions by using Monte Carlo simulation. The results of our research can be used to provide a PBFT dose database.

  1. Use of an Accurate DNS Particulate Flow Method to Supply and Validate Boundary Conditions for the MFIX Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhi-Gang Feng

    2012-05-31

    The simulation of particulate flows for industrial applications often requires the use of two-fluid models, where the solid particles are considered as a separate continuous phase. One of the underlying uncertainties in the use of the two-fluid models in multiphase computations comes from the boundary condition of the solid phase. Typically, the gas or liquid fluid boundary condition at a solid wall is the so called no-slip condition, which has been widely accepted to be valid for single-phase fluid dynamics provided that the Knudsen number is low. However, the boundary condition for the solid phase is not well understood. The no-slip condition at a solid boundary is not a valid assumption for the solid phase. Instead, several researchers advocate a slip condition as a more appropriate boundary condition. However, the question on the selection of an exact slip length or a slip velocity coefficient is still unanswered. Experimental or numerical simulation data are needed in order to determine the slip boundary condition that is applicable to a two-fluid model. The goal of this project is to improve the performance and accuracy of the boundary conditions used in two-fluid models such as the MFIX code, which is frequently used in multiphase flow simulations. The specific objectives of the project are to use first principles embedded in a validated Direct Numerical Simulation particulate flow numerical program, which uses the Immersed Boundary method (DNS-IB) and the Direct Forcing scheme in order to establish, modify and validate needed energy and momentum boundary conditions for the MFIX code. To achieve these objectives, we have developed a highly efficient DNS code and conducted numerical simulations to investigate the particle-wall and particle-particle interactions in particulate flows. Most of our research findings have been reported in major conferences and archived journals, which are listed in Section 7 of this report. In this report, we will present a brief description of these results.

  2. Computational Fluid Dynamics Simulation Study of Active Power Control in Wind Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fleming, Paul; Aho, Jake; Gebraad, Pieter

    2016-08-01

    This paper presents an analysis performed on a wind plant's ability to provide active power control services using a high-fidelity computational fluid dynamics-based wind plant simulator. This approach allows examination of the impact of wind turbine wake interactions within a wind plant on the performance of the wind plant controller. The paper investigates several control methods for improving performance in waked conditions. One method uses wind plant wake controls, an active field of research in which wind turbine control systems are coordinated to account for their wakes, to improve the overall performance. Results demonstrate the challenge of providing active power control in waked conditions but also the potential methods for improving this performance.

  3. Sublethal effects of catch-and-release fishing: measuring capture stress, fish impairment, and predation risk using a condition index

    USGS Publications Warehouse

    Campbell, Matthew D.; Patino, Reynaldo; Tolan, J.M.; Strauss, R.E.; Diamond, S.

    2009-01-01

    The sublethal effects of simulated capture of red snapper (Lutjanus campechanus) were analysed using physiological responses, condition indexing, and performance variables. Simulated catch-and-release fishing included combinations of depth of capture and thermocline exposure reflective of environmental conditions experienced in the Gulf of Mexico. Frequency of occurrence of barotrauma and lack of reflex response exhibited considerable individual variation. When combined into a single condition or impairment index, individual variation was reduced, and impairment showed significant increases as depth increased and with the addition of thermocline exposure. Performance variables, such as burst swimming speed (BSS) and simulated predator approach distance (AD), were also significantly different by depth. BSSs and predator ADs decreased with increasing depth, were lowest immediately after release, and were affected for up to 15 min, with longer recovery times required as depth increased. The impairment score developed was positively correlated with cortisol concentration and negatively correlated with both BSS and simulated predator AD. The impairment index proved to be an efficient method to estimate the overall impairment of red snapper in the laboratory simulations of capture and shows promise for use in field conditions, to estimate release mortality and vulnerability to predation.

  4. A comparison of numerical methods for the prediction of two-dimensional heat transfer in an electrothermal deicer pad. M.S. Thesis. Final Contractor Report

    NASA Technical Reports Server (NTRS)

    Wright, William B.

    1988-01-01

    Transient, numerical simulations of the deicing of composite aircraft components by electrothermal heating have been performed in a 2-D rectangular geometry. Seven numerical schemes and four solution methods were used to find the most efficient numerical procedure for this problem. The phase change in the ice was simulated using the Enthalpy method along with the Method for Assumed States. Numerical solutions illustrating deicer performance for various conditions are presented. Comparisons are made with previous numerical models and with experimental data. The simulation can also be used to solve a variety of other heat conduction problems involving composite bodies.
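    As a rough, hedged illustration of the enthalpy formulation for the phase change (a 1-D explicit sketch with a single heat capacity, not the report's 2-D composite formulation with the Method for Assumed States; all material values are hypothetical):

      import numpy as np

      def enthalpy_step(H, dx, dt, rho, k, c, L, T_melt=0.0):
          """One explicit 1-D step of the enthalpy method for a melting material.
          Volumetric enthalpy H (J/m^3) is the primary unknown; H = 0 is solid at
          the melting point and H = rho*L is fully melted at the melting point."""
          T = np.where(H < 0.0, T_melt + H / (rho * c),
              np.where(H > rho * L, T_melt + (H - rho * L) / (rho * c), T_melt))
          H_new = H.copy()
          H_new[1:-1] += dt * k * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
          return H_new, T

      # Hypothetical ice-like slab with its left boundary cell held above melting
      nx, dx, dt = 51, 1e-3, 0.01
      rho, k, c, L = 917.0, 2.2, 2100.0, 334e3
      H = np.full(nx, -rho * c * 10.0)          # uniformly 10 K below melting
      H[0] = rho * (c * 5.0 + L)                # heated boundary cell
      for _ in range(1000):
          H, T = enthalpy_step(H, dx, dt, rho, k, c, L)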

  5. Kinetic Monte Carlo Method for Rule-based Modeling of Biochemical Networks

    PubMed Central

    Yang, Jin; Monine, Michael I.; Faeder, James R.; Hlavacek, William S.

    2009-01-01

    We present a kinetic Monte Carlo method for simulating chemical transformations specified by reaction rules, which can be viewed as generators of chemical reactions, or equivalently, definitions of reaction classes. A rule identifies the molecular components involved in a transformation, how these components change, conditions that affect whether a transformation occurs, and a rate law. The computational cost of the method, unlike conventional simulation approaches, is independent of the number of possible reactions, which need not be specified in advance or explicitly generated in a simulation. To demonstrate the method, we apply it to study the kinetics of multivalent ligand-receptor interactions. We expect the method will be useful for studying cellular signaling systems and other physical systems involving aggregation phenomena. PMID:18851068
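    For contrast with the rule-based, network-free approach described above, the sketch below shows the conventional Gillespie direct method, whose cost depends on the explicitly enumerated reaction list; the reversible binding example is hypothetical.

      import numpy as np

      def gillespie_direct(x0, stoich, propensity_fns, t_end, seed=0):
          """Conventional Gillespie SSA: x0 is the initial state vector,
          stoich[j] the state change of reaction j, propensity_fns[j](x) its rate."""
          rng = np.random.default_rng(seed)
          t, x = 0.0, np.array(x0, dtype=float)
          while t < t_end:
              a = np.array([f(x) for f in propensity_fns])
              a0 = a.sum()
              if a0 <= 0.0:
                  break
              t += rng.exponential(1.0 / a0)    # time to next reaction
              j = rng.choice(len(a), p=a / a0)  # which reaction fires
              x += stoich[j]
          return t, x

      # Hypothetical reversible binding L + R <-> LR
      stoich = [np.array([-1, -1, +1]), np.array([+1, +1, -1])]
      props = [lambda x: 1e-3 * x[0] * x[1], lambda x: 1e-2 * x[2]]
      t_final, x_final = gillespie_direct([1000, 300, 0], stoich, props, t_end=10.0)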

  6. A propagation method with adaptive mesh grid based on wave characteristics for wave optics simulation

    NASA Astrophysics Data System (ADS)

    Tang, Qiuyan; Wang, Jing; Lv, Pin; Sun, Quan

    2015-10-01

    The propagation method and the choice of mesh grid are both very important for obtaining correct propagation results in wave optics simulation. A new angular spectrum propagation method with an alterable mesh grid, based on the traditional angular spectrum method and the direct FFT method, is introduced. With this method, the sampling space after propagation is no longer constrained by the propagation method but is freely alterable. However, the choice of mesh grid on the target board directly influences the validity of the simulation results, so an adaptive mesh selection method based on wave characteristics is proposed to accompany the introduced propagation method. Appropriate mesh grids on the target board can then be calculated to obtain satisfactory results, and for complex initial wave fields or propagation through inhomogeneous media the mesh grid can likewise be set rationally according to this method. Comparison with theoretical results shows that simulations using the proposed method coincide with theory, and comparison with the traditional angular spectrum method and the direct FFT method shows that the proposed method adapts to a wider range of Fresnel number conditions. That is, the method can simulate propagation efficiently and correctly for propagation distances from almost zero to infinity, and so can better support wave propagation applications such as atmospheric optics and laser propagation.
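    A minimal sketch of the underlying fixed-grid angular spectrum propagation step (without the alterable mesh grid or the wave-characteristic-based grid selection proposed in the paper) is given below; the beam and sampling parameters are hypothetical.

      import numpy as np

      def angular_spectrum_propagate(u0, wavelength, dx, z):
          """Propagate a sampled complex field u0 (square grid, spacing dx) over a
          distance z with the angular spectrum transfer function; evanescent
          components are dropped."""
          n = u0.shape[0]
          fx = np.fft.fftfreq(n, d=dx)
          fxx, fyy = np.meshgrid(fx, fx, indexing="ij")
          arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
          h = np.zeros((n, n), dtype=complex)
          prop = arg > 0.0
          h[prop] = np.exp(1j * 2.0 * np.pi / wavelength * z * np.sqrt(arg[prop]))
          return np.fft.ifft2(np.fft.fft2(u0) * h)

      # Hypothetical example: Gaussian beam, 633 nm, 10 um sampling, 0.1 m distance
      n, dx = 512, 10e-6
      x = (np.arange(n) - n / 2) * dx
      xx, yy = np.meshgrid(x, x, indexing="ij")
      u0 = np.exp(-(xx ** 2 + yy ** 2) / (0.5e-3) ** 2)
      u1 = angular_spectrum_propagate(u0, wavelength=633e-9, dx=dx, z=0.1)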

  7. The Effect of Simulation on Middle School Students' Perceptions of Classroom Activities and Their Foreign Language Achievement: A Mixed-Methods Approach

    ERIC Educational Resources Information Center

    Sharifi, Akram; Ghanizadeh, Afsaneh; Jahedizadeh, Safoura

    2017-01-01

    The present study delved into a language learning model in the domain of English as a foreign language (EFL), i.e., simulation. The term simulation is used to describe the activity of producing conditions which are similar to real ones. We hypothesized that simulation plays a role in middle school students' perceptions of classroom activities…

  8. A new algorithm for modeling friction in dynamic mechanical systems

    NASA Technical Reports Server (NTRS)

    Hill, R. E.

    1988-01-01

    A method of modeling friction forces that impede the motion of parts of dynamic mechanical systems is described. Conventional methods in which the friction effect is assumed a constant force, or torque, in a direction opposite to the relative motion, are applicable only to those cases where applied forces are large in comparison to the friction, and where there is little interest in system behavior close to the times of transitions through zero velocity. An algorithm is described that provides accurate determination of friction forces over a wide range of applied force and velocity conditions. The method avoids the simulation errors resulting from a finite integration interval used in connection with a conventional friction model, as is the case in many digital computer-based simulations. The algorithm incorporates a predictive calculation based on initial conditions of motion, externally applied forces, inertia, and integration step size. The predictive calculation in connection with an external integration process provides an accurate determination of both static and Coulomb friction forces and resulting motions in dynamic simulations. Accuracy of the results is improved over that obtained with conventional methods and a relatively large integration step size is permitted. A function block for incorporation in a specific simulation program is described. The general form of the algorithm facilitates implementation with various programming languages such as FORTRAN or C, as well as with other simulation programs.
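    The sketch below illustrates, for a single degree of freedom and an explicit step, the kind of predictive static/Coulomb friction logic described (a simplified stand-in, not the article's algorithm or its function block); all parameter values are hypothetical.

      def friction_force(v, f_applied, mass, dt, f_static, f_coulomb):
          """Predict the end-of-step velocity ignoring friction; if the body is at
          (or would cross) zero velocity and the applied force cannot exceed static
          friction, cancel the applied force exactly, otherwise oppose the motion
          with Coulomb friction."""
          v_pred = v + f_applied / mass * dt           # frictionless prediction
          at_or_crossing_zero = (v == 0.0) or (v * v_pred <= 0.0)
          if at_or_crossing_zero and abs(f_applied) <= f_static:
              return -f_applied                        # sticks: net force is zero
          direction = v if v != 0.0 else v_pred
          return -f_coulomb * (1.0 if direction > 0.0 else -1.0)

      # One hypothetical explicit integration step
      m, dt, v, f_ext = 1.0, 1e-3, 0.0, 0.5
      f_fric = friction_force(v, f_ext, m, dt, f_static=2.0, f_coulomb=1.5)
      v += (f_ext + f_fric) / m * dt                   # body stays at rest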

  9. Simulation of weak polyelectrolytes: a comparison between the constant pH and the reaction ensemble method

    NASA Astrophysics Data System (ADS)

    Landsgesell, Jonas; Holm, Christian; Smiatek, Jens

    2017-03-01

    The reaction ensemble and the constant pH method are well-known chemical equilibrium approaches to simulate protonation and deprotonation reactions in classical molecular dynamics and Monte Carlo simulations. In this article, we demonstrate the similarity between both methods under certain conditions. We perform molecular dynamics simulations of a weak polyelectrolyte in order to compare the titration curves obtained by both approaches. Our findings reveal a good agreement between the methods when the reaction ensemble is used to sweep the reaction constant. Pronounced differences between the reaction ensemble and the constant pH method can be observed for stronger acids and bases in terms of adaptive pH values. These deviations are due to the presence of explicit protons in the reaction ensemble method which induce a screening of electrostatic interactions between the charged titrable groups of the polyelectrolyte. The outcomes of our simulation hint at a better applicability of the reaction ensemble method for systems in confined geometries and titrable groups in polyelectrolytes with different pKa values.
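    To illustrate the constant pH ingredient in its simplest limit (ideal, non-interacting acidic sites, which just reproduce Henderson-Hasselbalch; no explicit protons, electrostatics, or reaction ensemble moves), a minimal Monte Carlo sketch with hypothetical values is:

      import numpy as np

      def constant_ph_ideal(n_sites, pka, ph, n_steps=20000, seed=0):
          """Metropolis constant-pH moves for independent acidic sites HA <-> A- + H+.
          With no electrostatic interactions, the deprotonation free energy is
          ln(10) * (pKa - pH) in units of kT (Henderson-Hasselbalch limit)."""
          rng = np.random.default_rng(seed)
          deprotonated = np.zeros(n_sites, dtype=bool)
          beta_dg = np.log(10.0) * (pka - ph)      # cost of deprotonating one site
          for _ in range(n_steps):
              i = rng.integers(n_sites)
              d_energy = -beta_dg if deprotonated[i] else beta_dg
              if rng.random() < np.exp(-d_energy):
                  deprotonated[i] = not deprotonated[i]
          return deprotonated.mean()               # degree of ionization

      # At pH = pKa + 1 the ideal answer is 1/(1 + 10**(pKa - pH)) ~ 0.909
      alpha = constant_ph_ideal(n_sites=200, pka=4.0, ph=5.0)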

  10. Simulation of the turbulent Rayleigh-Benard problem using a spectral/finite difference technique

    NASA Technical Reports Server (NTRS)

    Eidson, T. M.; Hussaini, M. Y.; Zang, T. A.

    1986-01-01

    The three-dimensional, incompressible Navier-Stokes and energy equations with the Boussinesq assumption have been directly simulated at a Rayleigh number of 3.8 × 10^5 and a Prandtl number of 0.76. In the vertical direction, wall boundaries were used and in the horizontal, periodic boundary conditions were used. A spectral/finite difference numerical method was used to simulate the flow. The flow at these conditions is turbulent and a sufficiently fine mesh was used to capture all relevant flow scales. The results of the simulation are compared to experimental data to justify the conclusion that the small scale motion is adequately resolved.

  11. Effects of welding technology on welding stress based on the finite element method

    NASA Astrophysics Data System (ADS)

    Fu, Jianke; Jin, Jun

    2017-01-01

    The finite element method is used to simulate the welding process under four different conditions for welding flat butt joints. Welding seams are simulated with birth and death elements. The size and distribution of the welding residual stress are obtained for the four welding conditions in a Q345 manganese steel plate butt-joint workpiece. The results show that when two-layer welding is used, the longitudinal and transverse residual stresses are reduced; when welding from the middle to both sides, the residual stress distribution changes and the residual stress in the middle of the workpiece is reduced.

  12. Dynamical mechanical characteristic simulation and analysis of the low voltage switch under vibration and shock conditions

    NASA Astrophysics Data System (ADS)

    Miao, Xiaodan; Han, Feng

    2017-04-01

    The low voltage switch is widely applied, especially in hostile environments such as large vibration and shock conditions. In order to ensure the validity of the switch in such environments, it is necessary to predict its mechanical characteristics. In the traditional approach, a complex and expensive testing system is built to verify its validity. This paper presents a method based on finite element analysis to predict the dynamic mechanical characteristics of the switch using ANSYS software. The simulation can provide a basis for the design and optimization of the switch, shortening the design process and improving product efficiency.

  13. Analysis Of Dynamic Interactions Between Solar Array Simulators And Spacecraft Power Conditioning And Distribution Units

    NASA Astrophysics Data System (ADS)

    Valdivia, V.; Barrado, A.; Lazaro, A.; Rueda, P.; Tonicello, F.; Fernandez, A.; Mourra, O.

    2011-10-01

    Solar array simulators (SASs) are hardware devices, commonly applied instead of actual solar arrays (SAs) during the design process of spacecraft power conditioning and distribution units (PCDUs), and during spacecraft assembly, integration and tests. However, the dynamic responses of SASs and actual SAs are usually different. This fact plays an important role, since the dynamic response of the SAS may significantly influence the dynamic behaviour of the PCDU under certain conditions, even leading to instability. This paper deals with the dynamic interactions between SASs and PCDUs. Several methods for dynamic characterization of SASs are discussed, and the response of commercial SASs widely applied in the space industry is compared to that of actual SAs. After that, the interactions are experimentally analyzed by using a boost converter connected to the aforementioned SASs, thus demonstrating their critical importance. The interactions are first tackled analytically by means of small-signal models, and finally a black-box modelling method of SASs is proposed as a useful tool to analyze the interactions by means of simulation. The capabilities of both the analytical method and the black-box model to predict the interactions are demonstrated.

  14. The Investigation of Ghost Fluid Method for Simulating the Compressible Two-Medium Flow

    NASA Astrophysics Data System (ADS)

    Lu, Hai Tian; Zhao, Ning; Wang, Donghong

    2016-06-01

    In this paper, we investigate the conservation error of the two-dimensional compressible two-medium flow simulated by the front tracking method. As the improved versions of the original ghost fluid method, the modified ghost fluid method and the real ghost fluid method are selected to define the interface boundary conditions, respectively, to show different effects on the conservation error. A Riemann problem is constructed along the normal direction of the interface in the front tracking method, with the goal of obtaining an efficient procedure to track the explicit sharp interface precisely. The corresponding Riemann solutions are also used directly in these improved ghost fluid methods. Extensive numerical examples including the Sod shock tube and the shock-bubble interaction are tested to calculate the conservation error. It is found that these two ghost fluid methods have distinctive performances for different initial conditions of the flow field, and the related conclusions are made to suggest the best choice for the combination.

  15. A Novel Method for Modeling Neumann and Robin Boundary Conditions in Smoothed Particle Hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, Emily M.; Tartakovsky, Alexandre M.; Amon, Cristina

    2010-08-26

    In this paper we present an improved method for handling Neumann or Robin boundary conditions in smoothed particle hydrodynamics. The Neumann and Robin boundary conditions are common to many physical problems (such as heat/mass transfer), and can prove challenging to model in volumetric modeling techniques such as smoothed particle hydrodynamics (SPH). A new SPH method for diffusion type equations subject to Neumann or Robin boundary conditions is proposed. The new method is based on the continuum surface force model [1] and allows an efficient implementation of the Neumann and Robin boundary conditions in the SPH method for geometrically complex boundaries. The paper discusses the details of the method and the criteria needed to apply the model. The model is used to simulate diffusion and surface reactions and its accuracy is demonstrated through test cases for boundary conditions describing different surface reactions.

  16. Simulation of High-Beta Plasma Confinement

    NASA Astrophysics Data System (ADS)

    Font, Gabriel; Welch, Dale; Mitchell, Robert; McGuire, Thomas

    2017-10-01

    The Lockheed Martin Compact Fusion Reactor concept utilizes magnetic cusps to confine the plasma. In order to minimize losses through the axial and ring cusps, the plasma is pushed to a high-beta state. Simulations were made of the plasma and magnetic field system in an effort to quantify particle confinement times and plasma behavior characteristics. Computations are carried out with LSP using implicit PIC methods. Simulations of different sub-scale geometries at high-Beta fusion conditions are used to determine particle loss scaling with reactor size, plasma conditions, and gyro radii. ©2017 Lockheed Martin Corporation. All Rights Reserved.

  17. General framework for constraints in molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Kneller, Gerald R.

    2017-06-01

    The article presents a theoretical framework for molecular dynamics simulations of complex systems subject to any combination of holonomic and non-holonomic constraints. Using the concept of constrained inverse matrices both the particle accelerations and the associated constraint forces can be determined from given external forces and kinematical conditions. The formalism enables in particular the construction of explicit kinematical conditions which lead to the well-known Nosé-Hoover type equations of motion for the simulation of non-standard molecular dynamics ensembles. Illustrations are given for a few examples and an outline is presented for a numerical implementation of the method.

  18. Better Than Nothing: A Rational Approach for Minimizing the Impact of Outflow Strategy on Cerebrovascular Simulations.

    PubMed

    Chnafa, C; Brina, O; Pereira, V M; Steinman, D A

    2018-02-01

    Computational fluid dynamics simulations of neurovascular diseases are impacted by various modeling assumptions and uncertainties, including outlet boundary conditions. Many studies of intracranial aneurysms, for example, assume zero pressure at all outlets, often the default ("do-nothing") strategy, with no physiological basis. Others divide outflow according to the outlet diameters cubed, nominally based on the more physiological Murray's law but still susceptible to subjective choices about the segmented model extent. Here we demonstrate the limitations and impact of these outflow strategies against a novel "splitting" method introduced here. With our method, the segmented lumen is split into its constituent bifurcations, where flow divisions are estimated locally using a power law. Together these provide the global outflow rate boundary conditions. The impact of outflow strategy on flow rates was tested for 70 cases of MCA aneurysm with 0D simulations. The impact on hemodynamic indices used for rupture status assessment was tested for 10 cases with 3D simulations. Differences in flow rates among the various strategies were up to 70%, with a non-negligible impact on average and oscillatory wall shear stresses in some cases. The Murray-law and splitting methods gave flow rates closest to physiological values reported in the literature; however, only the splitting method was insensitive to arbitrary truncation of the model extent. Cerebrovascular simulations can depend strongly on the outflow strategy. The default zero-pressure method should be avoided in favor of the Murray-law or splitting methods, the latter being released as an open-source tool to encourage the standardization of outflow strategies. © 2018 by American Journal of Neuroradiology.
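    The local building block of such diameter-power-law flow division at a single bifurcation can be sketched as follows (an exponent of 3 corresponds to Murray's law; the diameters and parent flow are illustrative, and this is not the released tool itself):

      def split_outflow(parent_flow, daughter_diameters, exponent=3.0):
          """Divide the parent flow among daughter branches in proportion to
          diameter**exponent (exponent 3 corresponds to Murray's law)."""
          weights = [d ** exponent for d in daughter_diameters]
          total = sum(weights)
          return [parent_flow * w / total for w in weights]

      # Hypothetical bifurcation: parent flow of 2.5 mL/s, daughters 2.4 and 1.9 mm
      q_daughters = split_outflow(2.5, daughter_diameters=[2.4, 1.9])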

  19. Parameter Estimation for a Pulsating Turbulent Buoyant Jet Using Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Christopher, Jason; Wimer, Nicholas; Lapointe, Caelan; Hayden, Torrey; Grooms, Ian; Rieker, Greg; Hamlington, Peter

    2017-11-01

    Approximate Bayesian Computation (ABC) is a powerful tool that allows sparse experimental or other ``truth'' data to be used for the prediction of unknown parameters, such as flow properties and boundary conditions, in numerical simulations of real-world engineering systems. Here we introduce the ABC approach and then use ABC to predict unknown inflow conditions in simulations of a two-dimensional (2D) turbulent, high-temperature buoyant jet. For this test case, truth data are obtained from a direct numerical simulation (DNS) with known boundary conditions and problem parameters, while the ABC procedure utilizes lower fidelity large eddy simulations. Using spatially-sparse statistics from the 2D buoyant jet DNS, we show that the ABC method provides accurate predictions of true jet inflow parameters. The success of the ABC approach in the present test suggests that ABC is a useful and versatile tool for predicting flow information, such as boundary conditions, that can be difficult to determine experimentally.
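    A minimal rejection-ABC sketch is shown below, with a trivial Gaussian toy simulator standing in for the large eddy simulations; all names, priors, and tolerances are hypothetical.

      import numpy as np

      def rejection_abc(simulate, summarize, observed_summary, prior_sampler,
                        n_draws=1000, tolerance=0.1, seed=0):
          """Accept parameter draws whose simulated summary statistics fall within
          a tolerance of the observed ('truth') summary."""
          rng = np.random.default_rng(seed)
          accepted = []
          for _ in range(n_draws):
              theta = prior_sampler(rng)
              s = summarize(simulate(theta, rng))
              if np.linalg.norm(s - observed_summary) < tolerance:
                  accepted.append(theta)
          return np.array(accepted)

      # Hypothetical toy problem: infer the mean of noisy data from its sample mean
      true_data = np.random.default_rng(1).normal(2.0, 1.0, size=50)
      obs = np.array([true_data.mean()])
      posterior = rejection_abc(
          simulate=lambda th, rng: rng.normal(th, 1.0, size=50),
          summarize=lambda d: np.array([d.mean()]),
          observed_summary=obs,
          prior_sampler=lambda rng: rng.uniform(0.0, 5.0),
          tolerance=0.2)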

  20. A novel variable-gravity simulation method: potential for astronaut training.

    PubMed

    Sussingham, J C; Cocks, F H

    1995-11-01

    Zero gravity conditions for astronaut training have traditionally used neutral buoyancy tanks, and with such tanks hypogravity conditions are produced by the use of supplemental weights. This technique does not allow for the influence of water viscosity on any reduced gravity exercise regime. With a water-foam fluid produced by using a microbubble air flow together with surface active agents to prevent bubble agglomeration, it has been found possible to simulate a range of gravity conditions without the need for supplemental weights and additionally with a substantial reduction in the resulting fluid viscosity. This new technique appears to have application in improving the simulation environment for astronaut training under the reduced gravity conditions to be found on the moon or on Mars, and may have terrestrial applications in patient rehabilitation and exercise as well.

  1. A method for three-dimensional modeling of wind-shear environments for flight simulator applications

    NASA Technical Reports Server (NTRS)

    Bray, R. S.

    1984-01-01

    A computational method for modeling severe wind shears of the type that have been documented during severe convective atmospheric conditions is offered for use in research and training flight simulation. The procedure was developed with the objectives of operational flexibility and minimum computer load. From one to five simple downburst wind models can be configured and located to produce the wind field desired for specific simulated flight scenarios. A definition of related turbulence parameters is offered as an additional product of the computations. The use of the method to model several documented examples of severe wind shear is demonstrated.

  2. Explicitly represented polygon wall boundary model for the explicit MPS method

    NASA Astrophysics Data System (ADS)

    Mitsume, Naoto; Yoshimura, Shinobu; Murotani, Kohei; Yamada, Tomonori

    2015-05-01

    This study presents an accurate and robust boundary model, the explicitly represented polygon (ERP) wall boundary model, to treat arbitrarily shaped wall boundaries in the explicit moving particle simulation (E-MPS) method, which is a mesh-free particle method for strong form partial differential equations. The ERP model expresses wall boundaries as polygons, which are explicitly represented without using the distance function. These are derived so that for viscous fluids, and with less computational cost, they satisfy the Neumann boundary condition for the pressure and the slip/no-slip condition on the wall surface. The proposed model is verified and validated by comparing computed results with the theoretical solution, results obtained by other models, and experimental results. Two simulations with complex boundary movements are conducted to demonstrate the applicability of the E-MPS method to the ERP model.

  3. Towards inverse modeling of turbidity currents: The inverse lock-exchange problem

    NASA Astrophysics Data System (ADS)

    Lesshafft, Lutz; Meiburg, Eckart; Kneller, Ben; Marsden, Alison

    2011-04-01

    A new approach is introduced for turbidite modeling, leveraging the potential of computational fluid dynamics methods to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for the purpose of turbidite modeling so far is hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, as in practice may be obtained from well logs, and aims at convergence towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and local grain size distribution. The optimization strategy is based on a derivative-free, surrogate-based technique. Direct numerical simulations are performed to compute the flow dynamics. A proof of concept is successfully conducted for the simple test case of a two-dimensional lock-exchange turbidity current. The optimization approach is demonstrated to accurately retrieve the initial conditions used in a reference calculation.

  4. Solution-limited time stepping method and numerical simulation of single-element rocket engine combustor

    NASA Astrophysics Data System (ADS)

    Lian, Chenzhou

    The focus of the research is to gain a better understanding of the mixing and combustion of propellants in a confined single element rocket engine combustor. The approach taken is to use the unsteady computational simulations of both liquid and gaseous oxygen reacting with gaseous hydrogen to study the effects of transient processes, recirculation regions and density variations under supercritical conditions. The physics of combustion involve intimate coupling between fluid dynamics, chemical kinetics and intense energy release and take place over an exceptionally wide range of scales. In the face of these monumental challenges, it remains the engineer's task to find an acceptable simulation approach and a reliable CFD algorithm for combustion simulations. To provide the computational robustness to allow detailed analyses of such complex problems, we start by investigating a method for enhancing the reliability of implicit computational algorithms and decreasing their sensitivity to initial conditions without adversely impacting their efficiency. Efficient convergence is maintained by specifying a large global CFL number while reliability is improved by limiting the local CFL number such that the solution change in any cell is less than a specified tolerance. The magnitude of the solution change is estimated from the calculated residual in a manner that requires negligible computational time. The method precludes unphysical excursions in Newton-like iterations in highly non-linear regions where Jacobians are changing rapidly as well as non-physical results during the computation. The method is tested against a series of problems to identify its characteristics and to verify the approach. The results reveal a substantial improvement in convergence reliability of implicit CFD applications that enables computations starting from simple initial conditions. The method is applied in the unsteady combustion simulations and allows the code to run for long times without user intervention. The initial transient leading to stationary conditions in unsteady combustion simulations is investigated by considering flow establishment in model combustors. The duration of the transient is shown to be dependent on the characteristic turn-over time for recirculation zones and the time for the chamber pressure to reach steady conditions. Representative comparisons of the time-averaged, stationary results with experiment are presented to document the computations. The flow dynamics and combustion for two sizes of chamber diameters and two different wall thermal boundary conditions are investigated to assess the role of the recirculation regions on the mixing/combustion process in rocket engine combustors. Results are presented in terms of both instantaneous and time-averaged solutions. As a precursor to liquid oxygen/gaseous hydrogen (LO2/GH2) combustion simulations, the evolution of a liquid nitrogen (LN2) jet initially at a subcritical temperature and injected into a supercritical environment is first investigated and the results are validated against experimental data. Unsteady simulations of non-reacting LO2/GH2 are then performed for a single element shear coaxial injector. These cold flow calculations are then extended to reacting LO2/GH2 flows to demonstrate the capability of the numerical procedure for high-density-gradient supercritical reacting flows.
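    A hedged sketch of the solution-limiting idea (shrink the local time step wherever the estimated update would change a cell by more than a set fraction, otherwise keep the large global step) is given below; the scalar states, residuals, and tolerance are hypothetical.

      import numpy as np

      def limited_local_dt(u, residual, dt_global, rel_tol=0.05, floor=1e-30):
          """Per-cell time step limited so the estimated change dt*|R| stays below
          rel_tol * |u|; the global (large-CFL) step is used wherever that allows."""
          allowed = rel_tol * (np.abs(u) + floor) / (np.abs(residual) + floor)
          return np.minimum(dt_global, allowed)

      # Hypothetical cell states and residuals
      u = np.array([1.0, 0.02, 5.0])
      r = np.array([0.5, 4.0, 0.1])
      dt = limited_local_dt(u, r, dt_global=0.1)   # second cell is sharply limited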

  5. Laboratory simulation of the astrophysical burst processes in non-uniform magnetised media

    NASA Astrophysics Data System (ADS)

    Antonov, V. M.; Zakharov, Yu. P.; Orishich, A. M.; Ponomarenko, A. G.; Posukh, V. G.; Snytnikov, V. N.; Stoyanovsky, V. O.

    1990-08-01

    Under various astrophysical conditions the dynamics of nonstationary burst processes with mass and energy release may be governed by the inhomogeneity of the surrounding medium. In the presence of an external magnetic field such a problem in the general case becomes three dimensional and very complicated from both the observational and theoretical points of view (including computer simulation). The application of laboratory simulation methods to such problems therefore seems rather promising and is demonstrated, mainly using the example of a peculiar supernova.

  6. Computational simulation of laser heat processing of materials

    NASA Astrophysics Data System (ADS)

    Shankar, Vijaya; Gnanamuthu, Daniel

    1987-04-01

    A computational model simulating the laser heat treatment of AISI 4140 steel plates with a CW CO2 laser beam has been developed on the basis of the three-dimensional, time-dependent heat equation (subject to the appropriate boundary conditions). The solution method is based on Newton iteration applied to a triple-approximate factorized form of the equation. The method is implicit and time-accurate; the maintenance of time-accuracy in the numerical formulation is noted to be critical for the simulation of finite length workpieces with a finite laser beam dwell time.

  7. Simulation of anisoplanatic imaging through optical turbulence using numerical wave propagation with new validation analysis

    NASA Astrophysics Data System (ADS)

    Hardie, Russell C.; Power, Jonathan D.; LeMaster, Daniel A.; Droege, Douglas R.; Gladysz, Szymon; Bose-Pillai, Santasri

    2017-07-01

    We present a numerical wave propagation method for simulating imaging of an extended scene under anisoplanatic conditions. While isoplanatic simulation is relatively common, few tools are specifically designed for simulating the imaging of extended scenes under anisoplanatic conditions. We provide a complete description of the proposed simulation tool, including the wave propagation method used. Our approach computes an array of point spread functions (PSFs) for a two-dimensional grid on the object plane. The PSFs are then used in a spatially varying weighted sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. The degradation includes spatially varying warping and blurring. To produce the PSF array, we generate a series of extended phase screens. Simulated point sources are numerically propagated from an array of positions on the object plane, through the phase screens, and ultimately to the focal plane of the simulated camera. Note that the optical path for each PSF will be different, and thus, pass through a different portion of the extended phase screens. These different paths give rise to a spatially varying PSF to produce anisoplanatic effects. We use a method for defining the individual phase screen statistics that we have not seen used in previous anisoplanatic simulations. We also present a validation analysis. In particular, we compare simulated outputs with the theoretical anisoplanatic tilt correlation and a derived differential tilt variance statistic. This is in addition to comparing the long- and short-exposure PSFs and isoplanatic angle. We believe this analysis represents the most thorough validation of an anisoplanatic simulation to date. The current work is also unique in that we simulate and validate both constant and varying Cn2(z) profiles. Furthermore, we simulate sequences with both temporally independent and temporally correlated turbulence effects. Temporal correlation is introduced by generating even larger extended phase screens and translating this block of screens in front of the propagation area. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. Thus, we think this tool can be used effectively to study optical anisoplanatic turbulence and to aid in the development of image restoration methods.
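    A greatly simplified sketch of the spatially varying weighted-sum step is shown below, blending just two PSFs linearly across the image columns; the actual tool uses a full PSF grid with interpolation weights, and the scene and PSF widths here are hypothetical.

      import numpy as np
      from scipy.signal import fftconvolve

      def two_psf_blend(ideal, psf_left, psf_right):
          """Blend two space-invariant blurs with weights that vary linearly across
          the image columns, giving a simple spatially varying degradation."""
          blur_l = fftconvolve(ideal, psf_left, mode="same")
          blur_r = fftconvolve(ideal, psf_right, mode="same")
          w = np.linspace(0.0, 1.0, ideal.shape[1])[None, :]   # column weights
          return (1.0 - w) * blur_l + w * blur_r

      def gaussian_psf(size, sigma):
          ax = np.arange(size) - size // 2
          g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2.0 * sigma ** 2))
          return g / g.sum()

      # Hypothetical 256 x 256 scene blurred more strongly on its right side
      scene = np.random.default_rng(0).random((256, 256))
      degraded = two_psf_blend(scene, gaussian_psf(21, 1.0), gaussian_psf(21, 3.0))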

  8. Conditioning 3D object-based models to dense well data

    NASA Astrophysics Data System (ADS)

    Wang, Yimin C.; Pyrcz, Michael J.; Catuneanu, Octavian; Boisvert, Jeff B.

    2018-06-01

    Object-based stochastic simulation models are used to generate categorical variable models with a realistic representation of complicated reservoir heterogeneity. A limitation of object-based modeling is the difficulty of conditioning to dense data. One method to achieve data conditioning is to apply optimization techniques. Optimization algorithms can utilize an objective function measuring the conditioning level of each object while also considering the geological realism of the object. Here, an objective function is optimized with implicit filtering which considers constraints on object parameters. Thousands of objects conditioned to data are generated and stored in a database. A set of objects are selected with linear integer programming to generate the final realization and honor all well data, proportions and other desirable geological features. Although any parameterizable object can be considered, objects from fluvial reservoirs are used to illustrate the ability to simultaneously condition multiple types of geologic features. Channels, levees, crevasse splays and oxbow lakes are parameterized based on location, path, orientation and profile shapes. Functions mimicking natural river sinuosity are used for the centerline model. Channel stacking pattern constraints are also included to enhance the geological realism of object interactions. Spatial layout correlations between different types of objects are modeled. Three case studies demonstrate the flexibility of the proposed optimization-simulation method. These examples include multiple channels with high sinuosity, as well as fragmented channels affected by limited preservation. In all cases the proposed method reproduces input parameters for the object geometries and matches the dense well constraints. The proposed methodology expands the applicability of object-based simulation to complex and heterogeneous geological environments with dense sampling.

  9. Error estimation for CFD aeroheating prediction under rarefied flow condition

    NASA Astrophysics Data System (ADS)

    Jiang, Yazhong; Gao, Zhenxun; Jiang, Chongwen; Lee, Chunhian

    2014-12-01

    Both direct simulation Monte Carlo (DSMC) and Computational Fluid Dynamics (CFD) methods have become widely used for aerodynamic prediction when reentry vehicles experience different flow regimes during flight. The implementation of slip boundary conditions in the traditional CFD method under the Navier-Stokes-Fourier (NSF) framework can extend the validity of this approach further into the transitional regime, with the benefit that much less computational cost is demanded compared to DSMC simulation. Correspondingly, an increasing error arises in the aeroheating calculation as the flow becomes more rarefied. To estimate the relative error of heat flux when applying this method to a rarefied flow in the transitional regime, a theoretical derivation is conducted and a dimensionless parameter ɛ is proposed by approximately analyzing the ratio of the second-order term to the first-order term in the heat flux expression of the Burnett equation. DSMC simulation of hypersonic flow over a cylinder in the transitional regime is performed to test the performance of the parameter ɛ, compared with two other parameters, Knρ and Ma·Knρ.

  10. Many-body kinetics of dynamic nuclear polarization by the cross effect

    NASA Astrophysics Data System (ADS)

    Karabanov, A.; Wiśniewski, D.; Raimondi, F.; Lesanovsky, I.; Köckenberger, W.

    2018-03-01

    Dynamic nuclear polarization (DNP) is an out-of-equilibrium method for generating nonthermal spin polarization which provides large signal enhancements in modern diagnostic methods based on nuclear magnetic resonance. A particular instance is cross-effect DNP, which involves the interaction of two coupled electrons with the nuclear spin ensemble. Here we develop a theory for this important DNP mechanism and show that the nonequilibrium nuclear polarization buildup is effectively driven by three-body incoherent Markovian dissipative processes involving simultaneous state changes of two electrons and one nucleus. We identify different parameter regimes for effective polarization transfer and discuss under which conditions the polarization dynamics can be simulated by classical kinetic Monte Carlo methods. Our theoretical approach allows simulations of the polarization dynamics on an individual spin level for ensembles consisting of hundreds of nuclear spins. The insight obtained by these simulations can be used to find optimal experimental conditions for cross-effect DNP and to design tailored radical systems that provide optimal DNP efficiency.

  11. High Energy Boundary Conditions for a Cartesian Mesh Euler Solver

    NASA Technical Reports Server (NTRS)

    Pandya, Shishir; Murman, Scott; Aftosmis, Michael

    2003-01-01

    Inlets and exhaust nozzles are commonplace in the world of flight. Yet, many aerodynamic simulation packages do not provide a method of modelling such high energy boundaries in the flow field. For the purposes of aerodynamic simulation, inlets and exhausts are often faired over and it is assumed that the flow differences resulting from this assumption are minimal. While this is an adequate assumption for the prediction of lift, the lack of a plume behind the aircraft creates an evacuated base region thus affecting both drag and pitching moment values. In addition, the flow in the base region is often mis-predicted resulting in incorrect base drag. In order to accurately predict these quantities, a method for specifying inlet and exhaust conditions needs to be available in aerodynamic simulation packages. A method for a first approximation of a plume without accounting for chemical reactions is added to the Cartesian mesh based aerodynamic simulation package CART3D. The method consists of 3 steps. In the first step, a components approach where each triangle is assigned a component number is used. Here, a method for marking the inlet or exhaust plane triangles as separate components is discussed. In step two, the flow solver is modified to accept a reference state for the components marked inlet or exhaust. In the third step, the flow solver uses these separated components and the reference state to compute the correct flow condition at that triangle. The present method is implemented in the CART3D package which consists of a set of tools for generating a Cartesian volume mesh from a set of component triangulations. The Euler equations are solved on the resulting unstructured Cartesian mesh. The present method is implemented in this package and its usefulness is demonstrated with two validation cases. A generic missile body is also presented to show the usefulness of the method on a real world geometry.

  12. Fluid Dynamics of the Generation and Transmission of Heart Sounds: (2): Direct Simulation using a Coupled Hemo-Elastodynamic Method

    NASA Astrophysics Data System (ADS)

    Seo, Jung-Hee; Bakhshaee, Hani; Zhu, Chi; Mittal, Rajat

    2015-11-01

    Patterns of blood flow associated with abnormal heart conditions generate characteristic sounds that can be measured on the chest surface using a stethoscope. This technique of `cardiac auscultation' has been used effectively for over a hundred years to diagnose heart conditions, but the mechanisms that generate heart sounds, as well as the physics of sound transmission through the thorax, are not well understood. Here we present a new computational method for simulating the physics of heart murmur generation and transmission and use it to simulate the murmurs associated with a modeled aortic stenosis. The flow in the model aorta is simulated with the incompressible Navier-Stokes equations, and the three-dimensional elastic wave generation and propagation in the surrounding viscoelastic structure are solved with a high-order finite difference method in the time domain. The simulation results are compared with experimental measurements and show good agreement. The present study confirms that the pressure fluctuations on the vessel wall are the source of these heart murmurs, and that both compression and shear waves likely play an important role in cardiac auscultation. Supported by the NSF Grants IOS-1124804 and IIS-1344772; computational resources by XSEDE NSF grant TG-CTS100002.

  13. A Multi-domain Spectral Method for Supersonic Reactive Flows

    NASA Technical Reports Server (NTRS)

    Don, Wai-Sun; Gottlieb, David; Jung, Jae-Hun; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    This paper has a dual purpose: it presents a multidomain Chebyshev method for the solution of the two-dimensional reactive compressible Navier-Stokes equations, and it reports the results of the application of this code to the numerical simulation of high Mach number reactive flows in a recessed cavity. The computational method utilizes newly derived interface boundary conditions as well as an adaptive filtering technique to stabilize the computations. The results of the simulations are relevant to recessed cavity flameholders.

  14. Interior and exterior ballistics coupled optimization with constraints of attitude control and mechanical-thermal conditions

    NASA Astrophysics Data System (ADS)

    Liang, Xin-xin; Zhang, Nai-min; Zhang, Yan

    2016-07-01

    To improve solid launch vehicle performance, a coupled interior and exterior ballistics optimization method with constraints on attitude control and mechanical-thermal conditions is proposed. Firstly, the interior and exterior ballistic models of the solid launch vehicle are established, together with an attitude control model for the high-wind region and stage separation, a load calculation model for the drag reduction device, and a thermal condition calculation model for flight. Secondly, an optimization model for maximizing range is established, with interior and exterior ballistic design parameters (selected by sensitivity analysis) as variables and attitude control and mechanical-thermal conditions as constraints. Finally, the method is applied to the optimal design of a three-stage solid launch vehicle using a differential evolution algorithm. Simulation results show that range capability is improved by 10.8%, and both attitude control and mechanical-thermal constraints are satisfied.
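
    A much-simplified sketch of this kind of constrained range optimization using SciPy's differential evolution (the range surrogate, constraint functions, bounds, and penalty weights below are placeholders, not the paper's ballistic models):

        import numpy as np
        from scipy.optimize import differential_evolution

        def flight_range(x):
            # Placeholder surrogate: x = (propellant mass fraction, nozzle expansion
            # ratio, launch pitch angle). A real model would integrate the trajectory.
            m_frac, eps, pitch = x
            return 1000.0 * m_frac * np.log1p(eps) * np.sin(np.radians(2 * pitch))

        def attitude_control_margin(x):   # must stay >= 0 (placeholder)
            return 0.3 - 0.2 * x[0]

        def thermal_load(x):              # must stay <= 1.0 (placeholder)
            return 0.5 + 0.4 * x[1] / 10.0

        def objective(x):
            # Penalty formulation: maximize range subject to the two constraints.
            penalty = 1e4 * max(0.0, -attitude_control_margin(x)) ** 2
            penalty += 1e4 * max(0.0, thermal_load(x) - 1.0) ** 2
            return -flight_range(x) + penalty   # minimize the negative range

        bounds = [(0.6, 0.92), (5.0, 12.0), (20.0, 45.0)]
        result = differential_evolution(objective, bounds, seed=1)
        print("best design:", result.x, "range:", flight_range(result.x))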

  15. Applications of Simulator Freeze to Carrier Guideslope Tracking Instruction. Cooperative Study Series. Final Report, May 1, 1980-August 31, 1981.

    ERIC Educational Resources Information Center

    Hughes, R. G.; And Others

    Twenty-five experienced F-4 and F-16 Air Force pilots were instructed in carrier landings in the Visual Technology Research Simulator (VTRS). The training was conducted under three instructional conditions, two of which employed the simulator's "freeze" feature. Additionally, two methods of defining errors for carrier glideslope tracking…

  16. Constitutive Model Calibration via Autonomous Multiaxial Experimentation (Postprint)

    DTIC Science & Technology

    2016-09-17

    test machine. Experimental data is reduced and finite element simulations are conducted in parallel with the test based on experimental strain...data is reduced and finite element simulations are conducted in parallel with the test based on experimental strain conditions. Optimization methods...be used directly in finite element simulations of more complex geometries. Keywords Axial/torsional experimentation • Plasticity • Constitutive model

  17. Design of a dynamic optical tissue phantom to model extravasation pharmacokinetics

    NASA Astrophysics Data System (ADS)

    Zhang, Jane Y.; Ergin, Aysegul; Andken, Kerry Lee; Sheng, Chao; Bigio, Irving J.

    2010-02-01

    We describe an optical tissue phantom that enables the simulation of drug extravasation from microvessels and validates computational compartmental models of drug delivery. The phantom consists of a microdialysis tubing bundle, simulating permeable blood vessels, immersed in either an aqueous suspension of titanium dioxide (TiO2) or a TiO2-mixed agarose scattering medium. Drug administration is represented by a dye circulated through this porous microdialysis tubing bundle. Optical pharmacokinetic (OP) methods are used to measure changes in the absorption coefficient of the scattering medium due to the arrival and diffusion of the dye. We have established particle size-dependent concentration profiles over time for phantom drug delivery by intravenous (IV) and intra-arterial (IA) routes. Additionally, pharmacokinetic compartmental models are implemented in computer simulations for the conditions studied within the phantom. The simulated concentration-time profiles agree well with measurements from the phantom. The results are encouraging for future optical pharmacokinetic method development, both physical and computational, to understand drug extravasation under various physiological conditions.
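
    A minimal sketch of the kind of compartmental model such a phantom is designed to validate, assuming a simple two-compartment (vessel/tissue) exchange with first-order rate constants; all parameter values and the infusion profile are made-up illustrative numbers:

        import numpy as np
        from scipy.integrate import solve_ivp

        k_ve, k_ev, k_cl = 0.08, 0.02, 0.05   # vessel->tissue, tissue->vessel, clearance [1/s]

        def infusion(t):
            return 1.0 if t < 60.0 else 0.0    # constant "IV" infusion for 60 s

        def rhs(t, y):
            c_v, c_t = y                       # vessel and tissue dye concentrations
            dc_v = infusion(t) - k_ve * c_v + k_ev * c_t - k_cl * c_v
            dc_t = k_ve * c_v - k_ev * c_t
            return [dc_v, dc_t]

        sol = solve_ivp(rhs, (0.0, 600.0), [0.0, 0.0], max_step=1.0)
        # sol.y[1] is the simulated tissue concentration-time profile that would be
        # compared against the optically measured absorption changes in the phantom.
        print("peak tissue concentration:", sol.y[1].max())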

  18. A study of hierarchical clustering of galaxies in an expanding universe

    NASA Astrophysics Data System (ADS)

    Porter, D. H.

    The nonlinear hierarchical clustering of galaxies in an Einstein-de Sitter (Omega = 1), initially white-noise mass fluctuation (n = 0) model universe is investigated and shown to be in contradiction with previous results. The model is realized as an 11,000-body numerical simulation. The independent statistics of 0.72 million particles are used to simulate the boundary conditions. A new method for integrating the Newtonian N-body gravity equations, which has controllable accuracy, incorporates a recursive center-of-mass reduction, and regularizes two-body encounters, is used to perform the simulation. The coordinate system used here is well suited for the investigation of galaxy clustering, incorporating the independent positions and velocities of an arbitrary number of particles into a logarithmic hierarchy of center-of-mass nodes. The boundary for the simulation is created by using this hierarchy to map the independent statistics of 0.72 million particles into just 4,000 particles. This method for simulating the boundary conditions also has controllable accuracy.

  19. Numerical Simulation of Evacuation Process in Malaysia By Using Distinct-Element-Method Based Multi-Agent Model

    NASA Astrophysics Data System (ADS)

    Abustan, M. S.; Rahman, N. A.; Gotoh, H.; Harada, E.; Talib, S. H. A.

    2016-07-01

    In Malaysia, few studies on crowd evacuation simulation have been reported. Hence, the development of a numerical crowd evacuation model that takes into account people's behavioral patterns and psychological characteristics is crucial in Malaysia. At the same time, tsunami disasters, which demand a quick evacuation process, began to gain the attention of Malaysian citizens after the 2004 Indian Ocean Tsunami. In relation to the above circumstances, we have conducted simulations of the tsunami evacuation process at the Miami Beach of Penang Island by using a Distinct Element Method (DEM)-based crowd behavior simulator. The main objectives are to investigate and reproduce the current conditions of the evacuation process at this location under different hypothetical scenarios in order to study the efficiency of the evacuation. Sim-1 represents the initial evacuation plan, while sim-2 is an improved plan obtained by adding a new evacuation area. From the simulation results, sim-2 has a shorter evacuation time than sim-1; the evacuation time is reduced by 53 seconds. The effect of the additional evacuation area is confirmed by the decrease in evacuation completion time. The numerical simulation may thus be promoted as an effective tool for studying crowd evacuation processes.

  20. Real-time simulation of contact and cutting of heterogeneous soft-tissues.

    PubMed

    Courtecuisse, Hadrien; Allard, Jérémie; Kerfriden, Pierre; Bordas, Stéphane P A; Cotin, Stéphane; Duriez, Christian

    2014-02-01

    This paper presents a numerical method for interactive (real-time) simulations, which considerably improves the accuracy of the response of heterogeneous soft-tissue models undergoing contact, cutting and other topological changes. We provide an integrated methodology able to deal with the ill-conditioning issues associated with material heterogeneities, with contact boundary conditions, which are one of the main sources of inaccuracies, and with cutting, which is one of the most challenging issues in interactive simulations. Our approach is based on an implicit time integration of a non-linear finite element model. To enable real-time computations, we propose a new preconditioning technique based on an asynchronous update at low frequency. The preconditioner is not only used to improve the computation of the deformation of the tissues, but also to simulate the contact response of homogeneous and heterogeneous bodies with the same accuracy. We also address the problem of cutting the heterogeneous structures and propose a method to update the preconditioner according to the topological modifications. Finally, we apply our approach to three challenging demonstrators: (i) a simulation of cataract surgery, (ii) a simulation of laparoscopic hepatectomy, and (iii) a brain tumor surgery. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Design and numerical simulation on an auto-cumulative flowmeter in horizontal oil-water two-phase flow

    NASA Astrophysics Data System (ADS)

    Xie, Beibei; Kong, Lingfu; Kong, Deming; Kong, Weihang; Li, Lei; Liu, Xingbin; Chen, Jiliang

    2017-11-01

    In order to accurately measure the flow rate under the low yield horizontal well conditions, an auto-cumulative flowmeter (ACF) was proposed. Using the proposed flowmeter, the oil flow rate in horizontal oil-water two-phase segregated flow can be finely extracted. The computational fluid dynamics software Fluent was used to simulate the fluid of the ACF in oil-water two-phase flow. In order to calibrate the simulation measurement of the ACF, a novel oil flow rate measurement method was further proposed. The models of the ACF were simulated to obtain and calibrate the oil flow rate under different total flow rates and oil cuts. Using the finite-element method, the structure of the seven conductance probes in the ACF was simulated. The response values for the probes of the ACF under the conditions of oil-water segregated flow were obtained. The experiments for oil-water segregated flow under different heights of the oil accumulation in horizontal oil-water two-phase flow were carried out to calibrate the ACF. The validity of the oil flow rate measurement in horizontal oil-water two-phase flow was verified by simulation and experimental results.

  2. Design and numerical simulation on an auto-cumulative flowmeter in horizontal oil-water two-phase flow.

    PubMed

    Xie, Beibei; Kong, Lingfu; Kong, Deming; Kong, Weihang; Li, Lei; Liu, Xingbin; Chen, Jiliang

    2017-11-01

    In order to accurately measure the flow rate under the low yield horizontal well conditions, an auto-cumulative flowmeter (ACF) was proposed. Using the proposed flowmeter, the oil flow rate in horizontal oil-water two-phase segregated flow can be finely extracted. The computational fluid dynamics software Fluent was used to simulate the fluid of the ACF in oil-water two-phase flow. In order to calibrate the simulation measurement of the ACF, a novel oil flow rate measurement method was further proposed. The models of the ACF were simulated to obtain and calibrate the oil flow rate under different total flow rates and oil cuts. Using the finite-element method, the structure of the seven conductance probes in the ACF was simulated. The response values for the probes of the ACF under the conditions of oil-water segregated flow were obtained. The experiments for oil-water segregated flow under different heights of the oil accumulation in horizontal oil-water two-phase flow were carried out to calibrate the ACF. The validity of the oil flow rate measurement in horizontal oil-water two-phase flow was verified by simulation and experimental results.

  3. Calculation of steady and unsteady transonic flow using a Cartesian mesh and gridless boundary conditions with application to aeroelasticity

    NASA Astrophysics Data System (ADS)

    Kirshman, David

    A numerical method for the solution of inviscid compressible flow using an array of embedded Cartesian meshes in conjunction with gridless surface boundary conditions is developed. The gridless boundary treatment is implemented by means of a least squares fitting of the conserved flux variables using a cloud of nodes in the vicinity of the surface geometry. The method allows for accurate treatment of the surface boundary conditions using a grid resolution an order of magnitude coarser than required of typical Cartesian approaches. Additionally, the method does not suffer from issues associated with thin body geometry or extremely fine cut cells near the body. Unlike some methods that consider a gridless (or "meshless") treatment throughout the entire domain, multi-grid acceleration can be effectively incorporated and issues associated with global conservation are alleviated. The "gridless" surface boundary condition provides for efficient and simple problem set up since definition of the body geometry is generated independently from the field mesh, and automatically incorporated into the field discretization of the domain. The applicability of the method is first demonstrated for steady flow of single and multi-element airfoil configurations. Using this method, comparisons with traditional body-fitted grid simulations reveal that steady flow solutions can be obtained accurately with minimal effort associated with grid generation. The method is then extended to unsteady flow predictions. In this application, flow field simulations for the prescribed oscillation of an airfoil indicate excellent agreement with experimental data. Furthermore, it is shown that the phase lag associated with shock oscillation is accurately predicted without the need for a deformable mesh. Lastly, the method is applied to the prediction of transonic flutter using a two-dimensional wing model, in which comparisons with moving mesh simulations yield nearly identical results. As a result, applicability of the method to transient and vibrating fluid-structure interaction problems is established in which the requirement for a deformable mesh is eliminated.
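
    A small sketch of the gridless least-squares idea (synthetic data; the actual solver fits all conserved flux variables over the node cloud and enforces the wall-tangency condition):

        # Fit a linear polynomial to a conserved variable over a cloud of nearby
        # Cartesian nodes by least squares, then evaluate it (and its gradient)
        # at a surface point to form the boundary condition.
        import numpy as np

        rng = np.random.default_rng(0)
        cloud = rng.uniform(-0.1, 0.1, size=(12, 2))          # node offsets from the surface point
        values = 1.0 + 2.0 * cloud[:, 0] - 0.5 * cloud[:, 1]  # sampled conserved variable
        values += 0.001 * rng.standard_normal(12)             # slight "numerical" noise

        # Least-squares fit of u(x, y) ~ a0 + a1*x + a2*y over the cloud.
        A = np.column_stack([np.ones(len(cloud)), cloud[:, 0], cloud[:, 1]])
        coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)

        u_surface = coeffs[0]    # reconstructed value at the surface point (x = y = 0)
        grad_u = coeffs[1:]      # reconstructed gradient, usable for flux/BC evaluation
        print(u_surface, grad_u)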

  4. An immersed boundary-lattice Boltzmann model for biofilm growth and its impact on the NAPL dissolution in porous media

    NASA Astrophysics Data System (ADS)

    Benioug, M.; Yang, X.

    2017-12-01

    The evolution of microbial phase within porous medium is a complex process that involves growth, mortality, and detachment of the biofilm or attachment of moving cells. A better understanding of the interactions among biofilm growth, flow and solute transport and a rigorous modeling of such processes are essential for a more accurate prediction of the fate of pollutants (e.g. NAPLs) in soils. However, very few works are focused on the study of such processes in multiphase conditions (oil/water/biofilm systems). Our proposed numerical model takes into account the mechanisms that control bacterial growth and its impact on the dissolution of NAPL. An Immersed Boundary - Lattice Boltzmann Model (IB-LBM) is developed for flow simulations along with non-boundary conforming finite volume methods (volume of fluid and reconstruction methods) used for reactive solute transport. A sophisticated cellular automaton model is also developed to describe the spatial distribution of bacteria. A series of numerical simulations have been performed on complex porous media. A quantitative diagram representing the transitions between the different biofilm growth patterns is proposed. The bioenhanced dissolution of NAPL in the presence of biofilms is simulated at the pore scale. A uniform dissolution approach has been adopted to describe the temporal evolution of trapped blobs. Our simulations focus on the dissolution of NAPL in abiotic and biotic conditions. In abiotic conditions, we analyze the effect of the spatial distribution of NAPL blobs on the dissolution rate under different assumptions (blobs size, Péclet number). In biotic conditions, different conditions are also considered (spatial distribution, reaction kinetics, toxicity) and analyzed. The simulated results are consistent with those obtained from the literature.

  5. Headspace Theater: An Innovative Method for Experiential Learning of Psychiatric Symptomatology Using Modified Role-Playing and Improvisational Theater Techniques

    ERIC Educational Resources Information Center

    Ballon, Bruce C.; Silver, Ivan; Fidler, Donald

    2007-01-01

    Objective: Headspace Theater has been developed to allow small group learning of psychiatric conditions by creating role-play situations in which participants are placed in a scenario that simulates the experience of the condition. Method: The authors conducted a literature review of role-playing techniques, interactive teaching, and experiential…

  6. Computational Analysis of Splash Occurring in the Deposition Process in Annular-Mist Flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Heng; Koshizuka, Seiichi; Oka, Yoshiaki

    2004-07-01

    The deposition process of a single droplet on the film is numerically simulated by the Moving Particle Semi-implicit (MPS) method to analyze the possibility and effect of splash occurring in the deposition process under BWR conditions. The model accounts for inertial, gravitational, viscous and surface tension forces and is validated by comparison with experimental results. A simple one-dimensional mixture model is developed to calculate the necessary parameters for the simulation of deposition under BWR conditions. The deposition process of a single droplet under BWR conditions is simulated, and the effects of the droplet impact angle and the velocity of the liquid film are analyzed. A film buffer model is developed to fit the simulation results for the critical value for splash. A correlation of the critical Weber number for splash under BWR conditions is obtained and used to analyze the effect of splash. It is found that splash plays an important role in the deposition and re-entrainment process under high-quality conditions in BWRs. The mass fraction of re-entrainment caused by splash under different quality conditions is also calculated. (authors)

  7. A New Combined Stepwise-Based High-Order Decoupled Direct and Reduced-Form Method To Improve Uncertainty Analysis in PM2.5 Simulations.

    PubMed

    Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Yuan, Zibing; Russell, Armistead G; Ou, Jiamin; Zhong, Zhuangmin

    2017-04-04

    The traditional reduced-form model (RFM), based on the high-order decoupled direct method (HDDM), is an efficient uncertainty analysis approach for air quality models, but it has large biases in uncertainty propagation due to the limitation of the HDDM in predicting nonlinear responses to large perturbations of model inputs. To overcome this limitation, a new stepwise-based RFM that combines several sets of local sensitivity coefficients under different conditions is proposed. Evaluations reveal that the new RFM improves the prediction of nonlinear responses. The new method is applied to quantify uncertainties in simulated PM2.5 concentrations in the Pearl River Delta (PRD) region of China as a case study. Results show that the average uncertainty range of hourly PM2.5 concentrations is -28% to 57%, which can cover approximately 70% of the observed PM2.5 concentrations, while the traditional RFM underestimates the upper bound of the uncertainty range by 1-6%. Using a variance-based method, the PM2.5 boundary conditions and primary PM2.5 emissions are found to be the two major uncertainty sources in PM2.5 simulations. The new RFM better quantifies the uncertainty range in model simulations and can be applied to improve applications that rely on uncertainty information.
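
    A hedged sketch of the stepwise idea: an HDDM-based RFM predicts the response to an emission perturbation with a second-order expansion in the sensitivity coefficients, and the stepwise variant switches between coefficient sets fitted around different perturbation levels. All coefficients and intervals below are illustrative, not taken from the paper:

        def rfm_response(base_conc, s1, s2, delta):
            """Second-order response to a fractional emission perturbation `delta`."""
            return base_conc + s1 * delta + 0.5 * s2 * delta**2

        # Local first- and second-order sensitivity coefficients (s1, s2), each set
        # fitted around a different perturbation range (made-up values).
        stepwise_coeffs = {
            (-1.0, -0.25): (9.0, -3.5),
            (-0.25, 0.25): (10.0, -2.0),
            (0.25, 1.0): (11.0, -1.0),
        }

        def stepwise_rfm(base_conc, delta):
            for (lo, hi), (s1, s2) in stepwise_coeffs.items():
                if lo <= delta < hi:
                    return rfm_response(base_conc, s1, s2, delta)
            raise ValueError("perturbation outside the fitted range")

        print(stepwise_rfm(35.0, 0.4))   # e.g. PM2.5 response to a +40% emission change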

  8. Simulation Studies as Designed Experiments: The Comparison of Penalized Regression Models in the “Large p, Small n” Setting

    PubMed Central

    Chaibub Neto, Elias; Bare, J. Christopher; Margolin, Adam A.

    2014-01-01

    New algorithms are continuously proposed in computational biology. Performance evaluation of novel methods is important in practice. Nonetheless, the field experiences a lack of rigorous methodology aimed to systematically and objectively evaluate competing approaches. Simulation studies are frequently used to show that a particular method outperforms another. Often times, however, simulation studies are not well designed, and it is hard to characterize the particular conditions under which different methods perform better. In this paper we propose the adoption of well established techniques in the design of computer and physical experiments for developing effective simulation studies. By following best practices in planning of experiments we are better able to understand the strengths and weaknesses of competing algorithms leading to more informed decisions about which method to use for a particular task. We illustrate the application of our proposed simulation framework with a detailed comparison of the ridge-regression, lasso and elastic-net algorithms in a large scale study investigating the effects on predictive performance of sample size, number of features, true model sparsity, signal-to-noise ratio, and feature correlation, in situations where the number of covariates is usually much larger than sample size. Analysis of data sets containing tens of thousands of features but only a few hundred samples is nowadays routine in computational biology, where “omics” features such as gene expression, copy number variation and sequence data are frequently used in the predictive modeling of complex phenotypes such as anticancer drug response. The penalized regression approaches investigated in this study are popular choices in this setting and our simulations corroborate well established results concerning the conditions under which each one of these methods is expected to perform best while providing several novel insights. PMID:25289666
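
    A much-reduced sketch of one simulation run of this kind, using scikit-learn's implementations of the three penalized regressions (a full designed experiment would vary sample size, dimension, sparsity, signal-to-noise ratio, and correlation over a factorial grid; the settings below are illustrative):

        import numpy as np
        from sklearn.linear_model import Ridge, Lasso, ElasticNet
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(42)
        n, p, k = 100, 2000, 10                       # few samples, many features, sparse truth
        X = rng.standard_normal((n, p))
        beta = np.zeros(p)
        beta[:k] = rng.normal(0, 2, k)
        y = X @ beta + rng.standard_normal(n)

        X_train, X_test = X[:70], X[70:]
        y_train, y_test = y[:70], y[70:]

        for name, model in [("ridge", Ridge(alpha=10.0)),
                            ("lasso", Lasso(alpha=0.1, max_iter=10000)),
                            ("elastic-net", ElasticNet(alpha=0.1, l1_ratio=0.5, max_iter=10000))]:
            model.fit(X_train, y_train)
            print(name, round(r2_score(y_test, model.predict(X_test)), 3))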

  9. Prototype of a computer method for designing and analyzing heating, ventilating and air conditioning proportional, electronic control systems

    NASA Astrophysics Data System (ADS)

    Barlow, Steven J.

    1986-09-01

    The Air Force needs a better method of designing new and retrofit heating, ventilating and air conditioning (HVAC) control systems. Air Force engineers currently use manual design/predict/verify procedures taught at the Air Force Institute of Technology, School of Civil Engineering, HVAC Control Systems course. These existing manual procedures are iterative and time-consuming. The objectives of this research were to: (1) Locate and, if necessary, modify an existing computer-based method for designing and analyzing HVAC control systems that is compatible with the HVAC Control Systems manual procedures, or (2) Develop a new computer-based method of designing and analyzing HVAC control systems that is compatible with the existing manual procedures. Five existing computer packages were investigated in accordance with the first objective: MODSIM (for modular simulation), HVACSIM (for HVAC simulation), TRNSYS (for transient system simulation), BLAST (for building load and system thermodynamics) and Elite Building Energy Analysis Program. None were found to be compatible or adaptable to the existing manual procedures, and consequently, a prototype of a new computer method was developed in accordance with the second research objective.

  10. Experimental and numerical investigations on melamine wedges.

    PubMed

    Schneider, S

    2008-09-01

    Melamine wedges are often used as acoustic lining material for anechoic chambers. It was proposed here to study the effects of the mounting conditions on the acoustic properties of the melamine wedges used in the large anechoic chamber at the LMA. The results of the impedance tube measurements carried out show that the mounting conditions must be taken into account when assessing the quality of an acoustic lining. As it can be difficult to simulate these mounting conditions in impedance tube experiments, a numerical method was developed, which can be used to complete the experiments or for parametric studies. By combining the finite and the boundary element method, it is possible to investigate acoustic linings with almost no restrictions as to the geometry, material behavior, or mounting conditions. The numerical method presented here was used to study the acoustic properties of the acoustic lining installed in the anechoic chamber at the LMA. Further experiments showed that the behavior of the melamine foam is anisotropic. Numerical simulations showed that this anisotropy can be used to advantage when designing an acoustic lining.

  11. Bypassing the malfunction junction in warm dense matter simulations

    NASA Astrophysics Data System (ADS)

    Cangi, Attila; Pribram-Jones, Aurora

    2015-03-01

    Simulation of warm dense matter requires computational methods that capture both quantum and classical behavior efficiently under high-temperature and high-density conditions. The state-of-the-art approach to model electrons and ions under those conditions is density functional theory molecular dynamics, but this method's computational cost skyrockets as temperatures and densities increase. We propose finite-temperature potential functional theory as an in-principle-exact alternative that suffers no such drawback. In analogy to the zero-temperature theory developed previously, we derive an orbital-free free energy approximation through a coupling-constant formalism. Our density approximation and its associated free energy approximation demonstrate the method's accuracy and efficiency. A.C. has been partially supported by NSF Grant CHE-1112442. A.P.J. is supported by DOE Grant DE-FG02-97ER25308.

  12. Extended use of two crossed Babinet compensators for wavefront sensing in adaptive optics

    NASA Astrophysics Data System (ADS)

    Paul, Lancelot; Kumar Saxena, Ajay

    2010-12-01

    An extended use of two crossed Babinet compensators as a wavefront sensor for adaptive optics applications is proposed. This method is based on the lateral shearing interferometry technique in two directions. A single record of the fringes in a pupil plane provides the information about the wavefront. The theoretical simulations based on this approach for various atmospheric conditions and other errors of optical surfaces are provided for better understanding of this method. Derivation of the results from a laboratory experiment using simulated atmospheric conditions demonstrates the steps involved in data analysis and wavefront evaluation. It is shown that this method has a higher degree of freedom in terms of subapertures and on the choice of detectors, and can be suitably adopted for real-time wavefront sensing for adaptive optics.

  13. SQUEEZE-E: The Optimal Solution for Molecular Simulations with Periodic Boundary Conditions.

    PubMed

    Wassenaar, Tsjerk A; de Vries, Sjoerd; Bonvin, Alexandre M J J; Bekker, Henk

    2012-10-09

    In molecular simulations of macromolecules, it is desirable to limit the amount of solvent in the system to avoid spending computational resources on uninteresting solvent-solvent interactions. As a consequence, periodic boundary conditions are commonly used, with a simulation box chosen as small as possible, for a given minimal distance between images. Here, we describe how such a simulation cell can be set up for ensembles, taking into account a priori available or estimable information regarding conformational flexibility. Doing so ensures that any conformation present in the input ensemble will satisfy the distance criterion during the simulation. This helps avoid periodicity artifacts due to conformational changes. The method introduces three new approaches in computational geometry: (1) The first is the derivation of an optimal packing of ensembles, for which the mathematical framework is described. (2) A new method for approximating the α-hull and the contact body for single bodies and ensembles is presented, which is orders of magnitude faster than existing routines, allowing the calculation of packings of large ensembles and/or large bodies. (3) A routine is described for searching a combination of three vectors on a discretized contact body forming a reduced base for a lattice with minimal cell volume. The new algorithms reduce the time required to calculate packings of single bodies from minutes or hours to seconds. The use and efficacy of the method is demonstrated for ensembles obtained from NMR, MD simulations, and elastic network modeling. An implementation of the method has been made available online at http://haddock.chem.uu.nl/services/SQUEEZE/ and has been made available as an option for running simulations through the weNMR GRID MD server at http://haddock.science.uu.nl/enmr/services/GROMACS/main.php.

  14. Toxicity of pyrolysis gases from wood

    NASA Technical Reports Server (NTRS)

    Hilado, C. J.; Huttlinger, N. V.; Oneill, B. A.; Kourtides, D. A.; Parker, J. A.

    1977-01-01

    The toxicity of the pyrolysis gases from nine wood samples was investigated. The samples of hardwoods were aspen poplar, beech, yellow birch, and red oak. The samples of softwoods were western red cedar, Douglas fir, western hemlock, eastern white pine, and southern yellow pine. There was no significant difference between the wood samples under rising temperature conditions, which are intended to simulate a developing fire, or under fixed temperature conditions, which are intended to simulate a fully developed fire. This test method is used to determine whether a material is significantly more toxic than wood under the preflashover conditions of a developing fire.

  15. SIERRA - A 3-D device simulator for reliability modeling

    NASA Astrophysics Data System (ADS)

    Chern, Jue-Hsien; Arledge, Lawrence A., Jr.; Yang, Ping; Maeda, John T.

    1989-05-01

    SIERRA is a three-dimensional general-purpose semiconductor-device simulation program which serves as a foundation for investigating integrated-circuit (IC) device and reliability issues. This program solves the Poisson and continuity equations in silicon under dc, transient, and small-signal conditions. Executing on a vector/parallel minisupercomputer, SIERRA utilizes a matrix solver which uses an incomplete LU (ILU) preconditioned conjugate gradient square (CGS, BCG) method. The ILU-CGS method provides a good compromise between memory size and convergence rate. The authors have observed a 5x to 7x speedup over standard direct methods in simulations of transient problems containing highly coupled Poisson and continuity equations such as those found in reliability-oriented simulations. The application of SIERRA to parasitic CMOS latchup and dynamic random-access memory single-event-upset studies is described.
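
    An illustration of the ILU-preconditioned CGS combination using SciPy (this is a stand-in sparse linear system, not SIERRA's coupled Poisson/continuity equations):

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 200
        # Simple sparse test matrix standing in for the linearized device equations.
        A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
        b = np.ones(n)

        ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)    # incomplete LU factorization
        M = spla.LinearOperator((n, n), matvec=ilu.solve)     # preconditioner M ~ A^{-1}

        x, info = spla.cgs(A, b, M=M)                         # preconditioned CGS iteration
        print("converged" if info == 0 else f"info={info}",
              "residual:", np.linalg.norm(b - A @ x))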

  16. Time-domain hybrid method for simulating large amplitude motions of ships advancing in waves

    NASA Astrophysics Data System (ADS)

    Liu, Shukui; Papanikolaou, Apostolos D.

    2011-03-01

    Typical results obtained by a newly developed, nonlinear time domain hybrid method for simulating large amplitude motions of ships advancing with constant forward speed in waves are presented. The method is hybrid in the way of combining a time-domain transient Green function method and a Rankine source method. The present approach employs a simple double integration algorithm with respect to time to simulate the free-surface boundary condition. During the simulation, the diffraction and radiation forces are computed by pressure integration over the mean wetted surface, whereas the incident wave and hydrostatic restoring forces/moments are calculated on the instantaneously wetted surface of the hull. Typical numerical results of application of the method to the seakeeping performance of a standard containership, namely the ITTC S175, are herein presented. Comparisons have been made between the results from the present method, the frequency domain 3D panel method (NEWDRIFT) of NTUA-SDL and available experimental data and good agreement has been observed for all studied cases between the results of the present method and comparable other data.

  17. A fast exact simulation method for a class of Markov jump processes.

    PubMed

    Li, Yao; Hu, Lili

    2015-11-14

    A new method of the stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulations of a class of Markov jump processes, is presented in this paper. The HLM has a conditional constant computational cost per event, which is independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly implement a hash-table-like bucket sort algorithm for all times of occurrence covered by a time step with length τ. This paper serves as an introduction to this new SSA method. We introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show certain advantages of the HLM for large scale problems.
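
    A hedged sketch of the hash-table-like bucketing idea only (not the full exact algorithm, which also updates dependent clocks after each event): tentative firing times falling inside one leap window of length τ are hashed into equal-width buckets so the next event can be found without sorting the whole clock list.

        class BucketQueue:
            def __init__(self, t0, tau, n_buckets):
                self.t0, self.tau = t0, tau
                self.width = tau / n_buckets
                self.buckets = [[] for _ in range(n_buckets)]

            def insert(self, time, clock_id):
                if self.t0 <= time < self.t0 + self.tau:
                    idx = int((time - self.t0) / self.width)
                    self.buckets[idx].append((time, clock_id))
                # times beyond the window are skipped; they belong to a later leap

            def pop_earliest(self):
                for bucket in self.buckets:
                    if bucket:
                        bucket.sort()          # only this (small) bucket is sorted
                        return bucket.pop(0)
                return None                    # no event left in this leap window

        # Hypothetical usage with a few exponential-clock firing times:
        q = BucketQueue(t0=0.0, tau=1.0, n_buckets=8)
        for i, t in enumerate([0.31, 0.07, 1.4, 0.52]):
            q.insert(t, clock_id=i)
        print(q.pop_earliest())   # -> (0.07, 1)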

  18. A program code generator for multiphysics biological simulation using markup languages.

    PubMed

    Amano, Akira; Kawabata, Masanari; Yamashita, Yoshiharu; Rusty Punzalan, Florencio; Shimayoshi, Takao; Kuwabara, Hiroaki; Kunieda, Yoshitoshi

    2012-01-01

    To cope with the complexity of biological function simulation models, model representation with description languages is becoming popular. However, the simulation software itself becomes complex in such environments, and it is therefore difficult to modify the simulation conditions, target computation resources, or calculation methods. Complex biological function simulation software involves 1) model equations, 2) boundary conditions and 3) calculation schemes. Use of a model description file is useful for the first point and partly for the second; however, the third point is difficult to handle for the various calculation schemes required by simulation models constructed from two or more elementary models. We introduce a simulation software generation system which uses a description-language-based specification of the coupling calculation scheme together with a cell model description file. Using this software, we can easily generate biological simulation code with a variety of coupling calculation schemes. To show the efficiency of our system, an example of a coupling calculation scheme with three elementary models is shown.

  19. Low temperature simulation of subliming boundary layer flow in Jupiter atmosphere

    NASA Technical Reports Server (NTRS)

    Chen, C. J.

    1976-01-01

    A low-temperature approximate simulation for the sublimation of a graphite heat shield under Jovian entry conditions is studied. A set of algebraic equations is derived to approximate the governing equation and boundary conditions, based on order-of-magnitude analysis. Characteristic quantities such as the wall temperature and the subliming velocity are predicted. Similarity parameters that are needed to simulate the most dominant phenomena of the Jovian entry flow are also given. An approximate simulation of the sublimation of the graphite heat shield is performed with an air-dry-ice model. The simulation with the air-dry-ice model may be carried out experimentally at a lower temperature of 3000 to 6000 K instead of the entry temperature of 14,000 K. The rate of graphite sublimation predicted by the present algebraic approximation agrees to the order of magnitude with extrapolated data. The limitations of the simulation method and its utility are discussed.

  20. New Automotive Air Conditioning System Simulation Tool Developed in MATLAB/Simulink

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiss, T.; Chaney, L.; Meyer, J.

    Further improvements in vehicle fuel efficiency require accurate evaluation of the vehicle's transient total power requirement. When operated, the air conditioning (A/C) system is the largest auxiliary load on a vehicle; therefore, accurate evaluation of the load it places on the vehicle's engine and/or energy storage system is especially important. Vehicle simulation software, such as 'Autonomie,' has been used by OEMs to evaluate vehicles' energy performance. A transient A/C simulation tool incorporated into vehicle simulation models would also provide a tool for developing more efficient A/C systems through a thorough consideration of the transient A/C system performance. The dynamic system simulation software MATLAB/Simulink was used to develop new and more efficient vehicle energy system controls. The various modeling methods used for the new simulation tool are described in detail. Comparison with measured data is provided to demonstrate the validity of the model.

  1. Experiments and simulations of single shock Richtmyer-Meshkov Instability with measured, volumetric initial conditions

    NASA Astrophysics Data System (ADS)

    Sewell, Everest; Ferguson, Kevin; Greenough, Jeffrey; Jacobs, Jeffrey

    2014-11-01

    We describe new experiments on single-shock Richtmyer-Meshkov instability (RMI) performed on the shock tube apparatus at the University of Arizona in which the initial conditions are volumetrically imaged prior to shock wave arrival. The initial perturbation plays a major role in the evolution of RMI, and previous experimental efforts captured only a narrow slice of the initial condition. The method presented uses a rastered laser sheet to capture additional images through the depth of the initial condition shortly before the experimental start time. These images are then used to reconstruct a volumetric approximation of the experimental perturbation, which is simulated using the hydrodynamics code ARES, developed at Lawrence Livermore National Laboratory (LLNL). Comparisons are made between the time evolution of the interface width and the mixedness ratio measured in the experiments and the predictions of the numerical simulations.

  2. Neural network simulation of the atmospheric point spread function for the adjacency effect research

    NASA Astrophysics Data System (ADS)

    Ma, Xiaoshan; Wang, Haidong; Li, Ligang; Yang, Zhen; Meng, Xin

    2016-10-01

    The adjacency effect can be regarded as the convolution of the atmospheric point spread function (PSF) with the surface-leaving radiance. Monte Carlo simulation is a common method for computing the atmospheric PSF, but it does not yield an analytic expression, and meaningful results can only be obtained by statistical analysis of millions of photon histories. A backward Monte Carlo algorithm was employed to simulate photons emitted and propagating in the atmosphere under different conditions. The PSF was determined by recording the number of photons received in fixed bins at different positions. A multilayer feed-forward neural network with a single hidden layer was designed to learn the relationship between the PSFs and the input condition parameters. The neural network used the back-propagation learning rule for training. Its input parameters included atmospheric conditions, spectral range, and observing geometry. The outputs of the network were the photon counts in the corresponding bins. Because the number of output units was too large for a single network, the large network was divided into a collection of smaller ones. These small networks could be run simultaneously on many workstations and/or PCs to speed up the training. It is important to note that the PSFs simulated by the Monte Carlo technique for non-nadir viewing angles are more complicated than those for nadir conditions, which complicates the design of the neural network. The results obtained show that the neural network approach can be very useful for computing the atmospheric PSF based on the simulated data generated by the Monte Carlo method.
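
    A hedged sketch of the surrogate idea using a single-hidden-layer scikit-learn regressor; the "Monte Carlo" training data below are synthetic stand-ins, not the paper's simulated PSFs:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        n_samples, n_inputs, n_bins = 500, 4, 16   # e.g. optical depth, albedo, wavelength, view angle
        X = rng.uniform(0.0, 1.0, size=(n_samples, n_inputs))

        # Fake "PSF" targets: photon counts per radial bin that decay with distance
        # and depend smoothly on the input conditions.
        r = np.arange(1, n_bins + 1)
        y = np.exp(-np.outer(0.5 + X[:, 0], r) * 0.3) * (1.0 + X[:, [1]])

        # Single hidden layer, one output unit per bin (a larger problem would be
        # split across several small networks, as described above).
        net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
        net.fit(X[:400], y[:400])
        print("held-out R^2:", round(net.score(X[400:], y[400:]), 3))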

  3. Identifiability and identification of trace continuous pollutant source.

    PubMed

    Qu, Hongquan; Liu, Shouwen; Pang, Liping; Hu, Tao

    2014-01-01

    Accidental pollution events often threaten people's health and lives, so identifying the pollutant source is necessary for prompt remedial actions to be taken. In this paper, a trace continuous pollutant source identification method is developed to identify a sudden continuous-emission pollutant source in an enclosed space. The location probability model is set up first, and the identification method is then realized by searching for the global optimum of the location probability objective. In order to discuss the identifiability performance of the presented method, the concept of a synergy degree of velocity fields is introduced to quantitatively analyze the impact of the velocity field on identification performance. Based on this concept, several simulation cases were conducted, and the application conditions of the method are obtained from these simulation studies. In order to verify the presented method, we designed an experiment and identified an unknown source appearing in the experimental space. The results show that the method can identify a sudden trace continuous source when the studied situation satisfies the application conditions.

  4. Light-Cone Effect of Radiation Fields in Cosmological Radiative Transfer Simulations

    NASA Astrophysics Data System (ADS)

    Ahn, Kyungjin

    2015-02-01

    We present a novel method to implement time-delayed propagation of radiation fields in cosmological radiative transfer simulations. Time-delayed propagation of radiation fields requires construction of retarded-time fields by tracking the location and lifetime of radiation sources along the corresponding light-cones. Cosmological radiative transfer simulations have, until now, ignored this "light-cone effect" or implemented ray-tracing methods that are computationally demanding. We show that radiative transfer calculation of the time-delayed fields can be easily achieved in numerical simulations when periodic boundary conditions are used, by calculating the time-discretized retarded-time Green's function using the Fast Fourier Transform (FFT) method and convolving it with the source distribution. We also present a direct application of this method to the long-range radiation field of Lyman-Werner band photons, which is important in high-redshift astrophysics with the first stars.
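
    A minimal sketch of the key step: with periodic boundary conditions, the retarded-time field can be assembled as an FFT convolution of a time-discretized retarded kernel with the source distribution. The simple geometric-dilution kernel restricted to one light-crossing shell below is a placeholder, not the actual Lyman-Werner Green's function:

        import numpy as np

        N, box = 64, 10.0                      # grid cells per side, box size (arbitrary units)
        dx = box / N
        x = np.arange(N) * dx
        x = np.minimum(x, box - x)             # periodic (minimum-image) distance per axis
        r = np.sqrt(x[:, None, None]**2 + x[None, :, None]**2 + x[None, None, :]**2)

        def retarded_kernel(r, t_lo, t_hi, c=1.0):
            """Geometric dilution restricted to the shell reached between t_lo and t_hi."""
            shell = (r >= c * t_lo) & (r < c * t_hi)
            return np.where(shell, 1.0 / (4.0 * np.pi * np.maximum(r, dx) ** 2), 0.0)

        sources = np.zeros((N, N, N))
        sources[10, 20, 30] = 1.0              # a source switched on at t = 0
        sources[40, 40, 5] = 2.0

        # Field contributed by emission between t = 2 and t = 3 light-crossing units,
        # computed as a periodic convolution via the FFT:
        G = retarded_kernel(r, 2.0, 3.0)
        field = np.fft.ifftn(np.fft.fftn(sources) * np.fft.fftn(G)).real
        print("summed field in this time slice:", field.sum())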

  5. Simulation of pipeline in the area of the underwater crossing

    NASA Astrophysics Data System (ADS)

    Burkov, P.; Chernyavskiy, D.; Burkova, S.; Konan, E. C.

    2014-08-01

    The article studies the stress-strain behavior of the Alexandrovskoye-Anzhero-Sudzhensk main oil pipeline section using the ANSYS software system. This method of examining and assessing the technical condition of pipeline transport facilities studies the objects and the processes that affect their technical condition, including research based on computer simulation. Such an approach allows the development of theory, calculation methods, and designs for pipeline transport facilities and machine units and parts, regardless of their industry and purpose, with a view to improving existing constructions and creating new structures and machines of high performance, durability, reliability, maintainability, and low material consumption and cost, which are competitive on the world market.

  6. A survey of modelling methods for high-fidelity wind farm simulations using large eddy simulation.

    PubMed

    Breton, S-P; Sumner, J; Sørensen, J N; Hansen, K S; Sarmast, S; Ivanell, S

    2017-04-13

    Large eddy simulations (LES) of wind farms have the capability to provide valuable and detailed information about the dynamics of wind turbine wakes. For this reason, their use within the wind energy research community is on the rise, spurring the development of new models and methods. This review surveys the most common schemes available to model the rotor, atmospheric conditions and terrain effects within current state-of-the-art LES codes, of which an overview is provided. A summary of the experimental research data available for validation of LES codes within the context of single and multiple wake situations is also supplied. Some typical results for wind turbine and wind farm flows are presented to illustrate best practices for carrying out high-fidelity LES of wind farms under various atmospheric and terrain conditions. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).

  7. A survey of modelling methods for high-fidelity wind farm simulations using large eddy simulation

    PubMed Central

    Sumner, J.; Sørensen, J. N.; Hansen, K. S.; Sarmast, S.; Ivanell, S.

    2017-01-01

    Large eddy simulations (LES) of wind farms have the capability to provide valuable and detailed information about the dynamics of wind turbine wakes. For this reason, their use within the wind energy research community is on the rise, spurring the development of new models and methods. This review surveys the most common schemes available to model the rotor, atmospheric conditions and terrain effects within current state-of-the-art LES codes, of which an overview is provided. A summary of the experimental research data available for validation of LES codes within the context of single and multiple wake situations is also supplied. Some typical results for wind turbine and wind farm flows are presented to illustrate best practices for carrying out high-fidelity LES of wind farms under various atmospheric and terrain conditions. This article is part of the themed issue ‘Wind energy in complex terrains’. PMID:28265021

  8. Simulation of gaseous diffusion in partially saturated porous media under variable gravity with lattice Boltzmann methods

    NASA Technical Reports Server (NTRS)

    Chau, Jessica Furrer; Or, Dani; Sukop, Michael C.; Steinberg, S. L. (Principal Investigator)

    2005-01-01

    Liquid distributions in unsaturated porous media under different gravitational accelerations and corresponding macroscopic gaseous diffusion coefficients were investigated to enhance understanding of plant growth conditions in microgravity. We used a single-component, multiphase lattice Boltzmann code to simulate liquid configurations in two-dimensional porous media at varying water contents for different gravity conditions and measured gas diffusion through the media using a multicomponent lattice Boltzmann code. The relative diffusion coefficients (D rel) for simulations with and without gravity as functions of air-filled porosity were in good agreement with measured data and established models. We found significant differences in liquid configuration in porous media, leading to reductions in D rel of up to 25% under zero gravity. The study highlights potential applications of the lattice Boltzmann method for rapid and cost-effective evaluation of alternative plant growth media designs under variable gravity.

  9. The HCUP SID Imputation Project: Improving Statistical Inferences for Health Disparities Research by Imputing Missing Race Data.

    PubMed

    Ma, Yan; Zhang, Wei; Lyman, Stephen; Huang, Yihe

    2018-06-01

    To identify the most appropriate imputation method for missing data in the HCUP State Inpatient Databases (SID) and assess the impact of different missing data methods on racial disparities research. HCUP SID. A novel simulation study compared four imputation methods (random draw, hot deck, joint multiple imputation [MI], conditional MI) for missing values for multiple variables, including race, gender, admission source, median household income, and total charges. The simulation was built on real data from the SID to retain their hierarchical data structures and missing data patterns. Additional predictive information from the U.S. Census and American Hospital Association (AHA) database was incorporated into the imputation. Conditional MI prediction was equivalent or superior to the best performing alternatives for all missing data structures and substantially outperformed each of the alternatives in various scenarios. Conditional MI substantially improved statistical inferences for racial health disparities research with the SID. © Health Research and Educational Trust.

  10. Study on energy saving of subway station based on orthogonal experimental method

    NASA Astrophysics Data System (ADS)

    Guo, Lei

    2017-05-01

    With its speed, efficiency, and large transport capacity, the subway has become an important means of relieving urban traffic congestion. As the subway environment changes with external factors such as temperature and passenger load, a three-dimensional numerical simulation study of the air distribution on a subway platform is conducted using CFD software. The influence of different loads (air-conditioning supply air temperature and velocity, personnel load, and wall heat flux) on the subway platform flow field is also analysed. The orthogonal experiment method is applied to the numerical simulation analysis of human comfort under different parameters. Based on these results, the functional relationship between human comfort and the platform boundary conditions is produced by multiple linear regression fitting, and the major boundary conditions are ranked by their influence on human comfort. The above study provides a theoretical basis for the final energy-saving strategies.

  11. Corrected confidence bands for functional data using principal components.

    PubMed

    Goldsmith, J; Greven, S; Crainiceanu, C

    2013-03-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. Copyright © 2013, The International Biometric Society.
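
    The iterated expectation and variance formulas referred to here are the standard laws of total expectation and variance, conditioning on the FPC decomposition D; in schematic form (notation ours, not necessarily the paper's):

        % Curve estimate and its variance, combining conditional (given the
        % decomposition D) quantities across the distribution of decompositions,
        % approximated in practice by the bootstrap over D:
        \[
          \widehat{Y}(t) \;=\; \mathbb{E}_{D}\!\bigl[\,\mathbb{E}\{\widehat{Y}(t)\mid D\}\bigr],
          \qquad
          \operatorname{Var}\{\widehat{Y}(t)\}
            \;=\; \mathbb{E}_{D}\!\bigl[\operatorname{Var}\{\widehat{Y}(t)\mid D\}\bigr]
            \;+\; \operatorname{Var}_{D}\!\bigl[\mathbb{E}\{\widehat{Y}(t)\mid D\}\bigr].
        \]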

  12. Corrected Confidence Bands for Functional Data Using Principal Components

    PubMed Central

    Goldsmith, J.; Greven, S.; Crainiceanu, C.

    2014-01-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. PMID:23003003

  13. Comparing methods for estimation of heterogeneous treatment effects using observational data from health care databases.

    PubMed

    Wendling, T; Jung, K; Callahan, A; Schuler, A; Shah, N H; Gallego, B

    2018-06-03

    There is growing interest in using routinely collected data from health care databases to study the safety and effectiveness of therapies in "real-world" conditions, as it can provide complementary evidence to that of randomized controlled trials. Causal inference from health care databases is challenging because the data are typically noisy, high dimensional, and most importantly, observational. It requires methods that can estimate heterogeneous treatment effects while controlling for confounding in high dimensions. Bayesian additive regression trees, causal forests, causal boosting, and causal multivariate adaptive regression splines are off-the-shelf methods that have shown good performance for estimation of heterogeneous treatment effects in observational studies of continuous outcomes. However, it is not clear how these methods would perform in health care database studies where outcomes are often binary and rare and data structures are complex. In this study, we evaluate these methods in simulation studies that recapitulate key characteristics of comparative effectiveness studies. We focus on the conditional average effect of a binary treatment on a binary outcome using the conditional risk difference as an estimand. To emulate health care database studies, we propose a simulation design where real covariate and treatment assignment data are used and only outcomes are simulated based on nonparametric models of the real outcomes. We apply this design to 4 published observational studies that used records from 2 major health care databases in the United States. Our results suggest that Bayesian additive regression trees and causal boosting consistently provide low bias in conditional risk difference estimates in the context of health care database studies. Copyright © 2018 John Wiley & Sons, Ltd.
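
    A hedged sketch of such a simulation design: real covariates and treatment assignments are retained, nonparametric outcome models are fit to the real data, and binary outcomes are then simulated from those models so the true conditional risk difference is known. Gradient boosting and the synthetic data below stand in for the paper's outcome models and health care database records:

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier

        rng = np.random.default_rng(7)
        n, p = 5000, 10
        X = rng.standard_normal((n, p))                       # stand-in for "real" covariates
        T = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))       # stand-in for "real" treatment assignment
        y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * X[:, 1] - 1.0 * T * (X[:, 2] > 0) - 2.0))))

        # Fit outcome models separately under treatment and control.
        m1 = GradientBoostingClassifier().fit(X[T == 1], y[T == 1])
        m0 = GradientBoostingClassifier().fit(X[T == 0], y[T == 0])

        p1 = m1.predict_proba(X)[:, 1]
        p0 = m0.predict_proba(X)[:, 1]
        true_crd = p1 - p0                                    # known "true" conditional risk difference

        # Simulate new outcomes consistent with the observed treatment assignment ...
        y_sim = rng.binomial(1, np.where(T == 1, p1, p0))
        # ... and hand (X, T, y_sim) to any candidate CATE estimator, scoring it
        # against true_crd (e.g. by root mean squared error).
        print("mean simulated risk difference:", true_crd.mean())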

  14. Radiation damage buildup by athermal defect reactions in nickel and concentrated nickel alloys

    DOE PAGES

    Zhang, S.; Nordlund, K.; Djurabekova, F.; ...

    2017-04-12

    We develop a new method using the binary collision approximation to simulate Rutherford backscattering spectrometry in channeling conditions (RBS/C) from molecular dynamics atom coordinates of irradiated cells. The approach allows comparison of experimental and simulated RBS/C signals as a function of depth without fitting parameters. The simulated RBS/C spectra of irradiated Ni and concentrated solid solution alloys (CSAs, NiFe and NiCoCr) show good agreement with the experimental results. This good agreement indicates that the damage evolution under damage overlap conditions in Ni and CSAs at room temperature is dominated by defect recombination and migration induced by irradiation rather than by thermal activation.

  15. Effects of Error Experience When Learning to Simulate Hypernasality

    ERIC Educational Resources Information Center

    Wong, Andus W.-K.; Tse, Andy C.-Y.; Ma, Estella P.-M.; Whitehill, Tara L.; Masters, Rich S. W.

    2013-01-01

    Purpose: The purpose of this study was to evaluate the effects of error experience on the acquisition of hypernasal speech. Method: Twenty-eight healthy participants were asked to simulate hypernasality in either an "errorless learning" condition (in which the possibility for errors was limited) or an "errorful learning"…

  16. Validation of a Monte Carlo model used for simulating tube current modulation in computed tomography over a wide range of phantom conditions/challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Cagnon, Chris H.

    2014-11-01

    Purpose: Monte Carlo (MC) simulation methods have been widely used in patient dosimetry in computed tomography (CT), including estimating patient organ doses. However, most simulation methods have undergone a limited set of validations, often using homogeneous phantoms with simple geometries. As clinical scanning has become more complex and the use of tube current modulation (TCM) has become pervasive in the clinic, MC simulations should include these techniques in their methodologies and therefore should also be validated using a variety of phantoms with different shapes and material compositions to result in a variety of differently modulated tube current profiles. The purpose of this work is to perform the measurements and simulations to validate a Monte Carlo model under a variety of test conditions where fixed tube current (FTC) and TCM were used. Methods: A previously developed MC model for estimating dose from CT scans that models TCM, built using the platform of MCNPX, was used for CT dose quantification. In order to validate the suitability of this model to accurately simulate patient dose from FTC and TCM CT scans, measurements and simulations were compared over a wide range of conditions. Phantoms used for testing range from simple geometries with homogeneous composition (16 and 32 cm computed tomography dose index phantoms) to more complex phantoms including a rectangular homogeneous water equivalent phantom, an elliptical shaped phantom with three sections (where each section was a homogeneous, but different material), and a heterogeneous, complex geometry anthropomorphic phantom. Each phantom requires varying levels of x-, y- and z-modulation. Each phantom was scanned on a multidetector row CT (Sensation 64) scanner under the conditions of both FTC and TCM. Dose measurements were made at various surface and depth positions within each phantom. Simulations using each phantom were performed for FTC, detailed x–y–z TCM, and z-axis-only TCM to obtain dose estimates. This allowed direct comparisons between measured and simulated dose values under each condition of phantom, location, and scan to be made. Results: For FTC scans, the percent root mean square (RMS) difference between measurements and simulations was within 5% across all phantoms. For TCM scans, the percent RMS of the difference between measured and simulated values when using detailed TCM and z-axis-only TCM simulations was 4.5% and 13.2%, respectively. For the anthropomorphic phantom, the difference between TCM measurements and detailed TCM and z-axis-only TCM simulations was 1.2% and 8.9%, respectively. For FTC measurements and simulations, the percent RMS of the difference was 5.0%. Conclusions: This work demonstrated that the Monte Carlo model developed provided good agreement between measured and simulated values under both simple and complex geometries including an anthropomorphic phantom. This work also showed the increased dose differences for z-axis-only TCM simulations, where considerable modulation in the x–y plane was present due to the shape of the rectangular water phantom. Results from this investigation highlight details that need to be included in Monte Carlo simulations of TCM CT scans in order to yield accurate, clinically viable assessments of patient dosimetry.
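
    The agreement metric quoted above is a percent root mean square (RMS) difference between measured and simulated dose values. A small helper of the kind sketched below could compute it; the exact normalization convention used in the paper is an assumption here, and the dose values are made up for illustration.

```python
import numpy as np

def percent_rms_difference(measured, simulated):
    """Percent RMS of the relative difference between measurement and simulation.

    Assumes the convention RMS((sim - meas) / meas) * 100; the paper's exact
    normalization may differ.
    """
    measured = np.asarray(measured, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    rel = (simulated - measured) / measured
    return 100.0 * np.sqrt(np.mean(rel ** 2))

# Example: hypothetical surface/depth dose points (mGy) for one phantom.
meas = [12.1, 10.8, 9.7, 11.5]
sim = [12.5, 10.4, 9.9, 11.0]
print(f"{percent_rms_difference(meas, sim):.1f}%")
```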

  17. An Immersed Boundary-Lattice Boltzmann Method for Simulating Particulate Flows

    NASA Astrophysics Data System (ADS)

    Zhang, Baili; Cheng, Ming; Lou, Jing

    2013-11-01

    A two-dimensional momentum exchange-based immersed boundary-lattice Boltzmann method developed by X.D. Niu et al. (2006) has been extended to three dimensions for solving fluid-particle interaction problems. This method combines the most desirable features of the lattice Boltzmann method and the immersed boundary method by using a regular Eulerian mesh for the flow domain and a Lagrangian mesh for the moving particles in the flow field. The no-slip boundary conditions for the fluid and the particles are enforced by adding a force density term into the lattice Boltzmann equation, and the forcing term is simply calculated by the momentum exchange of the boundary particle density distribution functions, which are interpolated by the Lagrangian polynomials from the underlying Eulerian mesh. This method preserves the advantages of the lattice Boltzmann method in tracking a group of particles and, at the same time, provides an alternative approach to treat solid-fluid boundary conditions. Numerical validations show that the present method is very accurate and efficient. The present method will be further developed to simulate more complex problems with particle deformation, particle-bubble and particle-droplet interactions.

  18. Nuclear sensor signal processing circuit

    DOEpatents

    Kallenbach, Gene A [Bosque Farms, NM; Noda, Frank T [Albuquerque, NM; Mitchell, Dean J [Tijeras, NM; Etzkin, Joshua L [Albuquerque, NM

    2007-02-20

    An apparatus and method are disclosed for a compact and temperature-insensitive nuclear sensor that can be calibrated with a non-hazardous radioactive sample. The nuclear sensor includes a gamma ray sensor that generates tail pulses from radioactive samples. An analog conditioning circuit conditions the tail-pulse signals from the gamma ray sensor, and a tail-pulse simulator circuit generates a plurality of simulated tail-pulse signals. A computer system processes the tail pulses from the gamma ray sensor and the simulated tail pulses from the tail-pulse simulator circuit. The nuclear sensor is calibrated under the control of the computer. The offset is adjusted using the simulated tail pulses. Since the offset is set to zero or near zero, the sensor gain can be adjusted with a non-hazardous radioactive source such as naturally occurring radiation or potassium chloride.
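
    As a loose illustration of what a tail-pulse simulator provides to a calibration routine, the sketch below generates idealized tail pulses (fast rise, exponential decay), fits the measured peak heights against the known simulated amplitudes, and reads the intercept as the electronics offset to be zeroed before gain adjustment. Pulse shapes, time constants, and the fitting approach are illustrative assumptions, not the patented circuit's behavior.

```python
import numpy as np

rng = np.random.default_rng(2)

def tail_pulse(t, t0, amplitude, tau_rise=0.05e-6, tau_decay=5e-6):
    """Idealized detector tail pulse: fast rise, exponential decay (illustrative shape)."""
    dt = np.clip(t - t0, 0.0, None)
    return amplitude * (1 - np.exp(-dt / tau_rise)) * np.exp(-dt / tau_decay)

# Simulated calibration pulses of known amplitude, plus an unknown electronics offset.
t = np.arange(0, 50e-6, 0.1e-6)
true_offset = 0.12                      # volts, unknown to the calibration routine
known_amps = np.array([0.5, 1.0, 2.0, 4.0])
peaks = np.array([
    (true_offset + tail_pulse(t, 5e-6, a) + 0.01 * rng.normal(size=t.size)).max()
    for a in known_amps
])

# Linear fit of measured peak vs. known amplitude: the intercept estimates the offset,
# which the calibration would then drive to (near) zero before adjusting the gain.
gain, offset = np.polyfit(known_amps, peaks, 1)
print(f"estimated gain = {gain:.3f}, offset = {offset:.3f} V")
```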

  19. Nondestructive evaluation of pavement structural condition for rehabilitation design : final report.

    DOT National Transportation Integrated Search

    2016-05-31

    Falling Weight Deflectometer (FWD) is the common non-destructive testing method for in-situ evaluation of pavement condition. : This study aims to develop finite element (FE) models that can simulate FWD loading on pavement system and capture the : c...

  20. A new equilibrium torus solution and GRMHD initial conditions

    NASA Astrophysics Data System (ADS)

    Penna, Robert F.; Kulkarni, Akshay; Narayan, Ramesh

    2013-11-01

    Context. General relativistic magnetohydrodynamic (GRMHD) simulations are providing influential models for black hole spin measurements, gamma ray bursts, and supermassive black hole feedback. Many of these simulations use the same initial condition: a rotating torus of fluid in hydrostatic equilibrium. A persistent concern is that simulation results sometimes depend on arbitrary features of the initial torus. For example, the Bernoulli parameter (which is related to outflows) appears to be controlled by the Bernoulli parameter of the initial torus. Aims: In this paper, we give a new equilibrium torus solution and describe two applications for the future. First, it can be used as a more physical initial condition for GRMHD simulations than earlier torus solutions. Second, it can be used in conjunction with earlier torus solutions to isolate the simulation results that depend on initial conditions. Methods: We assume axisymmetry, an ideal gas equation of state, constant entropy, and ignore self-gravity. We fix an angular momentum distribution and solve the relativistic Euler equations in the Kerr metric. Results: The Bernoulli parameter, rotation rate, and geometrical thickness of the torus can be adjusted independently. Our torus tends to be more bound and have a larger radial extent than earlier torus solutions. Conclusions: While this paper was in preparation, several GRMHD simulations appeared based on our equilibrium torus. We believe it will continue to provide a more realistic starting point for future simulations.

  1. A Cartesian cut cell method for rarefied flow simulations around moving obstacles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dechristé, G., E-mail: Guillaume.Dechriste@math.u-bordeaux1.fr; CNRS, IMB, UMR 5251, F-33400 Talence; Mieussens, L., E-mail: Luc.Mieussens@math.u-bordeaux1.fr

    2016-06-01

    For accurate simulations of rarefied gas flows around moving obstacles, we propose a cut cell method on Cartesian grids: it allows exact conservation and accurate treatment of boundary conditions. Our approach is designed to treat Cartesian cells and various kinds of cut cells by the same algorithm, with no need to identify the specific shape of each cut cell. This makes the implementation quite simple, and allows a direct extension to 3D problems. Such simulations are also made possible by using an adaptive mesh refinement technique and a hybrid parallel implementation. This is illustrated by several test cases, including a 3D unsteady simulation of the Crookes radiometer.

  2. A high precision extrapolation method in multiphase-field model for simulating dendrite growth

    NASA Astrophysics Data System (ADS)

    Yang, Cong; Xu, Qingyan; Liu, Baicheng

    2018-05-01

    The phase-field method coupled with thermodynamic data has become a widely used approach for predicting microstructure formation in technical alloys. Nevertheless, the frequent access to the thermodynamic database and the calculation of local equilibrium conditions can be time intensive. Extrapolation methods, which are derived from Taylor expansions, can provide approximate results with high computational efficiency and have proven successful in applications. This paper presents a high-precision second-order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods for solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its accuracy. The developed second-order extrapolation method, along with the M-slope approach and the first-order extrapolation method, is applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, which demonstrates the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computations, a graphics processing unit (GPU) based parallel computing scheme is developed. Application to a large-scale simulation of multi-dendrite growth in an isothermal cross-section demonstrates the ability of the GPU-accelerated second-order extrapolation approach for the multiphase-field model.
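
    To make the idea of Taylor-expansion-based extrapolation concrete, the sketch below extrapolates a stand-in scalar "driving force" function to first and second order from finite-difference derivatives and compares the errors. The function, step sizes, and differencing scheme are illustrative assumptions; this is not the paper's M-slope formulation or its CALPHAD coupling.

```python
import numpy as np

def taylor_extrapolate(f, x0, dx, order=2, h=1e-5):
    """Extrapolate f(x0 + dx) from values near x0 using finite-difference derivatives."""
    f0 = f(x0)
    d1 = (f(x0 + h) - f(x0 - h)) / (2 * h)
    est = f0 + d1 * dx
    if order >= 2:
        d2 = (f(x0 + h) - 2 * f0 + f(x0 - h)) / h**2
        est += 0.5 * d2 * dx**2
    return est

# Stand-in "driving force" as a function of composition (illustrative, not CALPHAD data).
g = lambda c: c * np.log(c) + (1 - c) * np.log(1 - c) + 2.5 * c * (1 - c)

c0, dc = 0.30, 0.05
exact = g(c0 + dc)
print("1st-order error:", abs(taylor_extrapolate(g, c0, dc, order=1) - exact))
print("2nd-order error:", abs(taylor_extrapolate(g, c0, dc, order=2) - exact))
```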

  3. Overcoming the Time Limitation in Molecular Dynamics Simulation of Crystal Nucleation: A Persistent-Embryo Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yang; Song, Huajing; Zhang, Feng

    The crystal nucleation from liquid in most cases is too rare to be accessed within the limited time scales of the conventional molecular dynamics (MD) simulation. Here, we developed a “persistent embryo” method to facilitate crystal nucleation in MD simulations by preventing small crystal embryos from melting using external spring forces. We applied this method to the pure Ni case for a moderate undercooling where no nucleation can be observed in the conventional MD simulation, and obtained nucleation rate in good agreement with the experimental data. Moreover, the method is applied to simulate an even more sluggish event: the nucleation of the B2 phase in a strong glass-forming Cu-Zr alloy. The nucleation rate was found to be 8 orders of magnitude smaller than Ni at the same undercooling, which well explains the good glass formability of the alloy. In conclusion, our work opens a new avenue to study solidification under realistic experimental conditions via atomistic computer simulation.

  4. Overcoming the Time Limitation in Molecular Dynamics Simulation of Crystal Nucleation: A Persistent-Embryo Approach

    DOE PAGES

    Sun, Yang; Song, Huajing; Zhang, Feng; ...

    2018-02-23

    The crystal nucleation from liquid in most cases is too rare to be accessed within the limited time scales of the conventional molecular dynamics (MD) simulation. Here, we developed a “persistent embryo” method to facilitate crystal nucleation in MD simulations by preventing small crystal embryos from melting using external spring forces. We applied this method to the pure Ni case for a moderate undercooling where no nucleation can be observed in the conventional MD simulation, and obtained nucleation rate in good agreement with the experimental data. Moreover, the method is applied to simulate an even more sluggish event: the nucleation of the B2 phase in a strong glass-forming Cu-Zr alloy. The nucleation rate was found to be 8 orders of magnitude smaller than Ni at the same undercooling, which well explains the good glass formability of the alloy. In conclusion, our work opens a new avenue to study solidification under realistic experimental conditions via atomistic computer simulation.

  5. Overcoming the Time Limitation in Molecular Dynamics Simulation of Crystal Nucleation: A Persistent-Embryo Approach

    NASA Astrophysics Data System (ADS)

    Sun, Yang; Song, Huajing; Zhang, Feng; Yang, Lin; Ye, Zhuo; Mendelev, Mikhail I.; Wang, Cai-Zhuang; Ho, Kai-Ming

    2018-02-01

    The crystal nucleation from liquid in most cases is too rare to be accessed within the limited time scales of the conventional molecular dynamics (MD) simulation. Here, we developed a "persistent embryo" method to facilitate crystal nucleation in MD simulations by preventing small crystal embryos from melting using external spring forces. We applied this method to the pure Ni case for a moderate undercooling where no nucleation can be observed in the conventional MD simulation, and obtained nucleation rate in good agreement with the experimental data. Moreover, the method is applied to simulate an even more sluggish event: the nucleation of the B2 phase in a strong glass-forming Cu-Zr alloy. The nucleation rate was found to be 8 orders of magnitude smaller than Ni at the same undercooling, which well explains the good glass formability of the alloy. Thus, our work opens a new avenue to study solidification under realistic experimental conditions via atomistic computer simulation.

  6. Overcoming the Time Limitation in Molecular Dynamics Simulation of Crystal Nucleation: A Persistent-Embryo Approach.

    PubMed

    Sun, Yang; Song, Huajing; Zhang, Feng; Yang, Lin; Ye, Zhuo; Mendelev, Mikhail I; Wang, Cai-Zhuang; Ho, Kai-Ming

    2018-02-23

    The crystal nucleation from liquid in most cases is too rare to be accessed within the limited time scales of the conventional molecular dynamics (MD) simulation. Here, we developed a "persistent embryo" method to facilitate crystal nucleation in MD simulations by preventing small crystal embryos from melting using external spring forces. We applied this method to the pure Ni case for a moderate undercooling where no nucleation can be observed in the conventional MD simulation, and obtained nucleation rate in good agreement with the experimental data. Moreover, the method is applied to simulate an even more sluggish event: the nucleation of the B2 phase in a strong glass-forming Cu-Zr alloy. The nucleation rate was found to be 8 orders of magnitude smaller than Ni at the same undercooling, which well explains the good glass formability of the alloy. Thus, our work opens a new avenue to study solidification under realistic experimental conditions via atomistic computer simulation.
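
    A toy sketch of the persistent-embryo idea described above: harmonic springs pull a small set of embryo atoms back toward reference lattice sites, and the resulting external forces would simply be added to the interatomic forces inside an MD loop until the embryo exceeds a critical size. System sizes, the spring constant, and the reference positions are illustrative assumptions, not the authors' simulation setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy system: N atoms in 3D; the first n_embryo atoms form the crystal embryo
# that is held near its ideal lattice positions by external springs.
N, n_embryo = 500, 32
pos = rng.uniform(0, 10.0, size=(N, 3))
embryo_ref = pos[:n_embryo].copy()       # ideal crystalline positions (illustrative)
k_spring = 5.0                           # spring constant (illustrative units)

def persistent_embryo_forces(pos):
    """External harmonic forces pulling embryo atoms back to their reference sites."""
    F = np.zeros_like(pos)
    F[:n_embryo] = -k_spring * (pos[:n_embryo] - embryo_ref)
    return F

# Inside an MD loop these forces would be added to the interatomic forces;
# once the embryo exceeds a critical size, the springs would be switched off.
F_ext = persistent_embryo_forces(pos)
print(F_ext[:3])
```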

  7. Comparison of HELIX TWT Simulation Using 2-D PIC (Magic), 2-D Modal (Gator), and 1-D Modal (Christine) Methods

    DTIC Science & Technology

    1998-05-01

    Report MRC/WDC-R-424, Mission Research Corporation: Comparison of Helix TWT Simulation Using 2-D PIC (MAGIC), 2-D Modal (GATOR), and 1-D Modal (CHRISTINE) Methods, by D. N. Smithe, H. Freund, T. M. Antonsen Jr., and co-authors (the author list is truncated in the record). Recoverable front-matter topics include a Brillouin run, the outlier electron effect in GATOR, the emission condition and nonlaminar flow in MAGIC, and radial shear; a further section heading beginning "PPM" is truncated.

  8. Numerical analysis of multicomponent responses of surface-hole transient electromagnetic method

    NASA Astrophysics Data System (ADS)

    Meng, Qing-Xin; Hu, Xiang-Yun; Pan, He-Ping; Zhou, Feng

    2017-03-01

    We calculate the multicomponent responses of the surface-hole transient electromagnetic method. Existing methods and models, which are based on regular local targets, are unsuitable as geoelectric models with conductive surrounding rocks. We therefore propose a calculation and analysis scheme based on numerical simulations of the subsurface transient electromagnetic fields. In the modeling of the electromagnetic fields, the forward modeling simulations are performed by using the finite-difference time-domain method and the discrete image method, which combines the Gaver-Stehfest inverse Laplace transform with the Prony method to solve for the initial electromagnetic fields. The precision of the iterative computations is ensured by using transmission boundary conditions. For the response analysis, we customize geoelectric models consisting of near-borehole targets and conductive wall rocks and implement forward modeling simulations. The observed electric fields are converted into induced electromotive force responses using multicomponent observation devices. By comparing the transient electric fields and multicomponent responses under different conditions, we suggest that the multicomponent induced electromotive force responses are related to the horizontal and vertical gradient variations of the transient electric field at different times. The characteristics of the response are determined by the variation of the subsurface transient electromagnetic fields (i.e., diffusion, attenuation and distortion) under different conditions, as well as by the electromagnetic fields at the observation positions. The proposed calculation and analysis scheme considers both the surrounding rocks and the anomalous field of the local targets; it can therefore account for the geological data better than conventional transient field response analysis of local targets.
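
    The forward modeling described above uses the Gaver-Stehfest inverse Laplace transform as one ingredient. A standard textbook implementation of that transform is sketched below and checked against a transform with a known inverse; the choice N = 12 and the test function are assumptions, and this sketch is unrelated to the rest of the paper's finite-difference time-domain machinery.

```python
import math
import numpy as np

def gaver_stehfest(F, t, N=12):
    """Gaver-Stehfest numerical inverse Laplace transform (N must be even)."""
    half = N // 2
    ln2 = math.log(2.0)
    # Stehfest coefficients V_k
    V = np.zeros(N + 1)
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j**half * math.factorial(2 * j)
                  / (math.factorial(half - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V[k] = (-1) ** (k + half) * s
    return ln2 / t * sum(V[k] * F(k * ln2 / t) for k in range(1, N + 1))

# Check on F(s) = 1/(s + 1), whose inverse transform is exp(-t).
for t in (0.5, 1.0, 2.0):
    print(t, gaver_stehfest(lambda s: 1.0 / (s + 1.0), t), math.exp(-t))
```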

  9. Accelerating a Particle-in-Cell Simulation Using a Hybrid Counting Sort

    NASA Astrophysics Data System (ADS)

    Bowers, K. J.

    2001-11-01

    In this article, performance limitations of the particle advance in a particle-in-cell (PIC) simulation are discussed. It is shown that the memory subsystem and cache-thrashing severely limit the speed of such simulations. Methods to implement a PIC simulation under such conditions are explored. An algorithm based on a counting sort is developed which effectively eliminates PIC simulation cache thrashing. Sustained performance gains of 40 to 70 percent are measured on commodity workstations for a minimal 2d2v electrostatic PIC simulation. More complete simulations are expected to have even better results as larger simulations are usually even more memory subsystem limited.
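
    A plain NumPy sketch of the core idea, sorting particle arrays by cell index with a counting sort so that particles in the same cell become contiguous in memory, is given below. It is not the paper's in-place hybrid algorithm; array sizes and the explicit Python scatter loop are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 2D PIC layout: particles with positions and velocities, binned onto a grid.
nx, ny, n_particles = 64, 64, 50_000
x = rng.uniform(0, nx, n_particles)
y = rng.uniform(0, ny, n_particles)
v = rng.normal(size=(n_particles, 2))

cell = y.astype(np.int64) * nx + x.astype(np.int64)   # flattened cell index per particle

# Counting sort by cell index: per-cell counts, prefix-summed into offsets,
# then a scatter of particles into the new contiguous ordering.
counts = np.bincount(cell, minlength=nx * ny)
offsets = np.concatenate(([0], np.cumsum(counts)[:-1]))
order = np.empty(n_particles, dtype=np.int64)
slot = offsets.copy()
for i, c in enumerate(cell):        # O(N); in practice done in C or via np.argsort(cell, kind="stable")
    order[slot[c]] = i
    slot[c] += 1

x, y, v, cell = x[order], y[order], v[order], cell[order]
assert np.all(np.diff(cell) >= 0)   # particles in the same cell are now adjacent in memory
```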

  10. Simulation of spatially evolving turbulence and the applicability of Taylor's hypothesis in compressible flow

    NASA Technical Reports Server (NTRS)

    Lee, Sangsan; Lele, Sanjiva K.; Moin, Parviz

    1992-01-01

    For the numerical simulation of inhomogeneous turbulent flows, a method is developed for generating stochastic inflow boundary conditions with a prescribed power spectrum. Turbulence statistics from spatial simulations using this method with a low fluctuation Mach number are in excellent agreement with the experimental data, which validates the procedure. Turbulence statistics from spatial simulations are also compared to those from temporal simulations using Taylor's hypothesis. Statistics such as turbulence intensity, vorticity, and velocity derivative skewness compare favorably with the temporal simulation. However, the statistics of dilatation show a significant departure from those obtained in the temporal simulation. To directly check the applicability of Taylor's hypothesis, space-time correlations of fluctuations in velocity, vorticity, and dilatation are investigated. Convection velocities based on vorticity and velocity fluctuations are computed as functions of the spatial and temporal separations. The profile of the space-time correlation of dilatation fluctuations is explained via a wave propagation model.
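
    A much-simplified 1D analogue of generating stochastic inflow data with a prescribed power spectrum is sketched below: random Fourier phases are combined with amplitudes set by the target spectrum and inverse-transformed. The model spectrum and normalization are illustrative assumptions, not the paper's inflow procedure.

```python
import numpy as np

rng = np.random.default_rng(5)

def random_signal_with_spectrum(E, n, dx=1.0):
    """Synthesize a real 1D random signal whose power spectrum follows E(k)."""
    k = np.fft.rfftfreq(n, d=dx) * 2 * np.pi
    amp = np.sqrt(E(k))
    phase = rng.uniform(0, 2 * np.pi, size=k.size)
    spec = amp * np.exp(1j * phase)
    spec[0] = 0.0                       # enforce zero mean
    return np.fft.irfft(spec, n=n)

# Example: a von Karman-like model spectrum (illustrative constants).
E = lambda k: k**4 / (1.0 + k**2) ** (17.0 / 6.0)
u = random_signal_with_spectrum(E, n=1024)
print(u.std())
```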

  11. Stochastic simulation of karst conduit networks

    NASA Astrophysics Data System (ADS)

    Pardo-Igúzquiza, Eulogio; Dowd, Peter A.; Xu, Chaoshui; Durán-Valsero, Juan José

    2012-01-01

    Karst aquifers have very high spatial heterogeneity. Essentially, they comprise a system of pipes (i.e., the network of conduits) superimposed on rock porosity and on a network of stratigraphic surfaces and fractures. This heterogeneity strongly influences the hydraulic behavior of the karst and it must be reproduced in any realistic numerical model of the karst system that is used as input to flow and transport modeling. However, the directly observed karst conduits are only a small part of the complete karst conduit system and knowledge of the complete conduit geometry and topology remains spatially limited and uncertain. Thus, there is a special interest in the stochastic simulation of networks of conduits that can be combined with fracture and rock porosity models to provide a realistic numerical model of the karst system. Furthermore, the simulated model may be of interest per se and other uses could be envisaged. The purpose of this paper is to present an efficient method for conditional and non-conditional stochastic simulation of karst conduit networks. The method comprises two stages: generation of conduit geometry and generation of topology. The approach adopted is a combination of a resampling method for generating conduit geometries from templates and a modified diffusion-limited aggregation method for generating the network topology. The authors show that the 3D karst conduit networks generated by the proposed method are statistically similar to observed karst conduit networks or to a hypothesized network model. The statistical similarity is in the sense of reproducing the tortuosity index of conduits, the fractal dimension of the network, the direction rose of conduit orientations, the Z-histogram and Ripley's K-function of the bifurcation points (which differs from a random allocation of those bifurcation points). The proposed method (1) is very flexible, (2) incorporates any experimental data (conditioning information) and (3) can easily be modified when implemented in a hydraulic inverse modeling procedure. Several synthetic examples are given to illustrate the methodology and real conduit network data are used to generate simulated networks that mimic real geometries and topology.

  12. Characterization and Simulation of the Thermoacoustic Instability Behavior of an Advanced, Low Emissions Combustor Prototype

    NASA Technical Reports Server (NTRS)

    DeLaat, John C.; Paxson, Daniel E.

    2008-01-01

    Extensive research is being done toward the development of ultra-low-emissions combustors for aircraft gas turbine engines. However, these combustors have an increased susceptibility to thermoacoustic instabilities. This type of instability was recently observed in an advanced, low emissions combustor prototype installed in a NASA Glenn Research Center test stand. The instability produces pressure oscillations that grow with increasing fuel/air ratio, preventing full power operation. The instability behavior makes the combustor a potentially useful test bed for research into active control methods for combustion instability suppression. The instability behavior was characterized by operating the combustor at various pressures, temperatures, and fuel and air flows representative of operation within an aircraft gas turbine engine. Trends in instability behavior versus operating condition have been identified and documented, and possible explanations for the trends provided. A simulation developed at NASA Glenn captures the observed instability behavior. The physics-based simulation includes the relevant physical features of the combustor and test rig, employs a Sectored 1-D approach, includes simplified reaction equations, and provides time-accurate results. A computationally efficient method is used for area transitions, which decreases run times and allows the simulation to be used for parametric studies, including control method investigations. Simulation results show that the simulation exhibits a self-starting, self-sustained combustion instability and also replicates the experimentally observed instability trends versus operating condition. Future plans are to use the simulation to investigate active control strategies to suppress combustion instabilities and then to experimentally demonstrate active instability suppression with the low emissions combustor prototype, enabling full power, stable operation.

  13. Characterization and Simulation of Thermoacoustic Instability in a Low Emissions Combustor Prototype

    NASA Technical Reports Server (NTRS)

    DeLaat, John C.; Paxson, Daniel E.

    2008-01-01

    Extensive research is being done toward the development of ultra-low-emissions combustors for aircraft gas turbine engines. However, these combustors have an increased susceptibility to thermoacoustic instabilities. This type of instability was recently observed in an advanced, low emissions combustor prototype installed in a NASA Glenn Research Center test stand. The instability produces pressure oscillations that grow with increasing fuel/air ratio, preventing full power operation. The instability behavior makes the combustor a potentially useful test bed for research into active control methods for combustion instability suppression. The instability behavior was characterized by operating the combustor at various pressures, temperatures, and fuel and air flows representative of operation within an aircraft gas turbine engine. Trends in instability behavior vs. operating condition have been identified and documented. A simulation developed at NASA Glenn captures the observed instability behavior. The physics-based simulation includes the relevant physical features of the combustor and test rig, employs a Sectored 1-D approach, includes simplified reaction equations, and provides time-accurate results. A computationally efficient method is used for area transitions, which decreases run times and allows the simulation to be used for parametric studies, including control method investigations. Simulation results show that the simulation exhibits a self-starting, self-sustained combustion instability and also replicates the experimentally observed instability trends vs. operating condition. Future plans are to use the simulation to investigate active control strategies to suppress combustion instabilities and then to experimentally demonstrate active instability suppression with the low emissions combustor prototype, enabling full power, stable operation.

  14. Method for simulating atmospheric turbulence phase effects for multiple time slices and anisoplanatic conditions.

    PubMed

    Roggemann, M C; Welsh, B M; Montera, D; Rhoadarmer, T A

    1995-07-10

    Simulating the effects of atmospheric turbulence on optical imaging systems is an important aspect of understanding the performance of these systems. Simulations are particularly important for understanding the statistics of some adaptive-optics system performance measures, such as the mean and variance of the compensated optical transfer function, and for understanding the statistics of estimators used to reconstruct intensity distributions from turbulence-corrupted image measurements. Current methods of simulating the performance of these systems typically make use of random phase screens placed in the system pupil. Methods exist for making random draws of phase screens that have the correct spatial statistics. However, simulating temporal effects and anisoplanatism requires one or more phase screens at different distances from the aperture, possibly moving with different velocities. We describe and demonstrate a method for creating random draws of phase screens with the correct space-time statistics for arbitrary turbulence and wind-velocity profiles, which can be placed in the telescope pupil in simulations. Results are provided for both the von Kármán and the Kolmogorov turbulence spectra. We also show how to simulate anisoplanatic effects with this technique.
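
    For orientation, the sketch below draws a single random phase screen with an (approximately) Kolmogorov spectrum by FFT filtering of white noise; the space-time statistics, multiple screens, wind profiles, and von Kármán outer scale handled by the method above are not reproduced. The PSD constant and the normalization follow a common recipe and should be treated as assumptions to be checked against the theoretical structure function.

```python
import numpy as np

rng = np.random.default_rng(6)

def kolmogorov_phase_screen(n=256, dx=0.02, r0=0.1):
    """One random Kolmogorov phase screen via FFT filtering of white noise.

    The normalization follows the common frequency-spacing-scaled recipe; in
    practice the screen statistics should be verified against theory.
    """
    df = 1.0 / (n * dx)
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    f = np.hypot(FX, FY)
    f[0, 0] = np.inf                                        # suppress the undefined piston term
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)   # Kolmogorov phase PSD
    noise = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    return np.real(np.fft.ifft2(noise * np.sqrt(psd) * df)) * n * n

screen = kolmogorov_phase_screen()
print(screen.shape, screen.std())
```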

  15. Pilot Study on the Applicability of Variance Reduction Techniques to the Simulation of a Stochastic Combat Model

    DTIC Science & Technology

    1987-09-01

    Excerpt: the inverse transform method is used to obtain unit-mean exponential random variables, where Vi is the jth random number in the sequence of a stream of uniform random numbers; the inverse transform method is discussed in the simulation textbooks listed in the reference section of this thesis. The inverse transform method is again used to obtain the conditions for an interim event to occur and to induce the change in …
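
    The excerpt above mentions the inverse transform method for unit-mean exponential random variables. A minimal sketch is shown below, together with antithetic variates as one example of a variance reduction technique; whether the thesis uses antithetic variates specifically is an assumption here.

```python
import numpy as np

rng = np.random.default_rng(7)

# Inverse transform method: if U ~ Uniform(0, 1), then X = -ln(U) has the
# unit-mean exponential distribution (CDF F(x) = 1 - exp(-x)).
U = rng.uniform(size=100_000)
X = -np.log(U)
print(X.mean(), X.var())            # both should be close to 1

# Antithetic variates, a common variance reduction technique: pair U with 1 - U.
# The two draws are negatively correlated, so their average has reduced variance.
X_anti = 0.5 * (-np.log(U) - np.log(1.0 - U))
print(X.var(), X_anti.var())        # the antithetic average has noticeably lower variance
```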

  16. Oil viscosity limitation on dispersibility of crude oil under simulated at-sea conditions in a large wave tank.

    PubMed

    Trudel, Ken; Belore, Randy C; Mullin, Joseph V; Guarino, Alan

    2010-09-01

    This study determined the limiting oil viscosity for chemical dispersion of oil spills under simulated sea conditions in the large outdoor wave tank at the US National Oil Spill Response Test Facility in New Jersey. Dispersant effectiveness tests were completed using crude oils with viscosities ranging from 67 to 40,100 cP at test temperature. Tests produced an effectiveness-viscosity curve with three phases when oil was treated with Corexit 9500 at a dispersant-to-oil ratio of 1:20. The oil viscosity that limited chemical dispersion under simulated at-sea conditions was in the range of 18,690 cP to 33,400 cP. Visual observations and measurements of oil concentrations and droplet size distributions in the water under treated and control slicks correlated well with direct measurements of effectiveness. The dispersant effectiveness versus oil viscosity relationship under simulated at sea conditions at Ohmsett was most similar to those from similar tests made using the Institut Francais du Pétrole and Exxon Dispersant Effectiveness (EXDET) test methods. Copyright 2010 Elsevier Ltd. All rights reserved.

  17. Geostatistics: a new tool for describing spatially-varied surface conditions from timber harvested and burned hillslopes

    Treesearch

    Peter R. Robichaud

    1997-01-01

    Geostatistics provides a method to describe the spatial continuity of many natural phenomena. Spatial models are based upon the concept of scaling, kriging and conditional simulation. These techniques were used to describe the spatially-varied surface conditions on timber harvest and burned hillslopes. Geostatistical techniques provided estimates of the ground cover (...

  18. Probabilistic Approach to Conditional Probability of Release of Hazardous Materials from Railroad Tank Cars during Accidents

    DOT National Transportation Integrated Search

    2009-10-13

    This paper describes a probabilistic approach to estimate the conditional probability of release of hazardous materials from railroad tank cars during train accidents. Monte Carlo methods are used in developing a probabilistic model to simulate head ...

  19. Science Based Human Reliability Analysis: Using Digital Nuclear Power Plant Simulators for Human Reliability Research

    NASA Astrophysics Data System (ADS)

    Shirley, Rachel Elizabeth

    Nuclear power plant (NPP) simulators are proliferating in academic research institutions and national laboratories in response to the availability of affordable, digital simulator platforms. Accompanying the new research facilities is a renewed interest in using data collected in NPP simulators for Human Reliability Analysis (HRA) research. An experiment conducted in The Ohio State University (OSU) NPP Simulator Facility develops data collection methods and analytical tools to improve use of simulator data in HRA. In the pilot experiment, student operators respond to design basis accidents in the OSU NPP Simulator Facility. Thirty-three undergraduate and graduate engineering students participated in the research. Following each accident scenario, student operators completed a survey about perceived simulator biases and watched a video of the scenario. During the video, they periodically recorded their perceived strength of significant Performance Shaping Factors (PSFs) such as Stress. This dissertation reviews three aspects of simulator-based research using the data collected in the OSU NPP Simulator Facility: First, a qualitative comparison of student operator performance to computer simulations of expected operator performance generated by the Information Decision Action Crew (IDAC) HRA method. Areas of comparison include procedure steps, timing of operator actions, and PSFs. Second, development of a quantitative model of the simulator bias introduced by the simulator environment. Two types of bias are defined: Environmental Bias and Motivational Bias. This research examines Motivational Bias--that is, the effect of the simulator environment on an operator's motivations, goals, and priorities. A bias causal map is introduced to model motivational bias interactions in the OSU experiment. Data collected in the OSU NPP Simulator Facility are analyzed using Structural Equation Modeling (SEM). Data include crew characteristics, operator surveys, and time to recognize and diagnose the accident in the scenario. These models estimate how the effects of the scenario conditions are mediated by simulator bias, and demonstrate how to quantify the strength of the simulator bias. Third, development of a quantitative model of subjective PSFs based on objective data (plant parameters, alarms, etc.) and PSF values reported by student operators. The objective PSF model is based on the PSF network in the IDAC HRA method. The final model is a mixed effects Bayesian hierarchical linear regression model. The subjective PSF model includes three factors: The Environmental PSF, the simulator Bias, and the Context. The Environmental Bias is mediated by an operator sensitivity coefficient that captures the variation in operator reactions to plant conditions. The data collected in the pilot experiments are not expected to reflect professional NPP operator performance, because the students are still novice operators. However, the models used in this research and the methods developed to analyze them demonstrate how to consider simulator bias in experiment design and how to use simulator data to enhance the technical basis of a complex HRA method. The contributions of the research include a framework for discussing simulator bias, a quantitative method for estimating simulator bias, a method for obtaining operator-reported PSF values, and a quantitative method for incorporating the variability in operator perception into PSF models. 
The research demonstrates applications of Structural Equation Modeling and hierarchical Bayesian linear regression models in HRA. Finally, the research demonstrates the benefits of using student operators as a test platform for HRA research.

  20. Fast image-based mitral valve simulation from individualized geometry.

    PubMed

    Villard, Pierre-Frederic; Hammer, Peter E; Perrin, Douglas P; Del Nido, Pedro J; Howe, Robert D

    2018-04-01

    Common surgical procedures on the mitral valve of the heart include modifications to the chordae tendineae. Such interventions are used when there is extensive leaflet prolapse caused by chordae rupture or elongation. Understanding the role of individual chordae tendineae before operating could be helpful to predict whether the mitral valve will be competent at peak systole. Biomechanical modelling and simulation can achieve this goal. We present a method to semi-automatically build a computational model of a mitral valve from micro CT (computed tomography) scans: after manually picking chordae fiducial points, the leaflets are segmented and the boundary conditions as well as the loading conditions are automatically defined. Fast finite element method (FEM) simulation is carried out using Simulation Open Framework Architecture (SOFA) to reproduce leaflet closure at peak systole. We develop three metrics to evaluate simulation results: (i) point-to-surface error with the ground truth reference extracted from the CT image, (ii) coaptation surface area of the leaflets and (iii) an indication of whether the simulated closed leaflets leak. We validate our method on three explanted porcine hearts and show that our model predicts the closed valve surface with point-to-surface error of approximately 1 mm, a reasonable coaptation surface area, and absence of any leak at peak systole (maximum closed pressure). We also evaluate the sensitivity of our model to changes in various parameters (tissue elasticity, mesh accuracy, and the transformation matrix used for CT scan registration). We also measure the influence of the positions of the chordae tendineae on simulation results and show that marginal chordae have a greater influence on the final shape than intermediate chordae. The mitral valve simulation can help the surgeon understand valve behaviour and anticipate the outcome of a procedure. Copyright © 2018 John Wiley & Sons, Ltd.
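
    One of the evaluation metrics above is a point-to-surface error. The sketch below approximates it as the distance from each simulated point to the nearest vertex of a densely sampled reference surface using a KD-tree; the stand-in geometry and the nearest-vertex approximation (rather than an exact point-to-triangle distance) are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(8)

def point_to_surface_error(sim_points, ref_surface_points):
    """Approximate point-to-surface error as distance to the nearest reference vertex.

    A true point-to-triangle distance would be slightly smaller; the nearest-vertex
    approximation is adequate for densely sampled surfaces.
    """
    tree = cKDTree(ref_surface_points)
    d, _ = tree.query(sim_points)
    return d.mean(), d.max()

# Illustrative stand-in data: a reference sphere and a slightly perturbed simulated surface.
ref = rng.normal(size=(5000, 3))
ref /= np.linalg.norm(ref, axis=1, keepdims=True)
sim = ref[:2000] * (1.0 + 0.01 * rng.normal(size=(2000, 1)))
print(point_to_surface_error(sim, ref))
```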

  1. Prior-knowledge-based feedforward network simulation of true boiling point curve of crude oil.

    PubMed

    Chen, C W; Chen, D Z

    2001-11-01

    Theoretical results and practical experience indicate that feedforward networks can approximate a wide class of functional relationships very well. This property is exploited in modeling chemical processes. Given finite and noisy training data, it is important to encode prior knowledge in neural networks to improve the fit precision and the prediction ability of the model. In this paper, for three-layer feedforward networks subject to a monotonicity constraint, the unconstrained method, Joerding's penalty function method, the interpolation method, and the constrained optimization method are analyzed first. Two novel methods, the exponential weight method and the adaptive method, are then proposed. These methods are applied to simulating the true boiling point curve of a crude oil under an increasing-monotonicity condition. The simulation results show that the network models trained by the novel methods approximate the actual process well. Finally, all these methods are discussed and compared with each other.
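
    A hedged sketch of the exponential-weight idea, in the sense of guaranteeing monotonicity by making all effective weights positive through an exponential parameterization, is shown below for a single-input network; training is omitted, and the architecture and parameter values are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(9)

# Monotonic single-input, single-output network: all effective weights are
# exp(parameters) > 0 and tanh is increasing, so the output is nondecreasing in x.
n_hidden = 8
w1, b1 = rng.normal(size=n_hidden), rng.normal(size=n_hidden)
w2, b2 = rng.normal(size=n_hidden), 0.0

def monotone_net(x):
    h = np.tanh(np.exp(w1)[None, :] * x[:, None] + b1)   # positive input weights
    return h @ np.exp(w2) + b2                           # positive output weights

x = np.linspace(0.0, 1.0, 200)
y = monotone_net(x)
assert np.all(np.diff(y) >= -1e-12)   # monotonically nondecreasing, by construction
```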

  2. A hybrid method of estimating pulsating flow parameters in the space-time domain

    NASA Astrophysics Data System (ADS)

    Pałczyński, Tomasz

    2017-05-01

    This paper presents a method for estimating pulsating flow parameters in partially open pipes, such as pipelines, internal combustion engine inlets, exhaust pipes and piston compressors. The procedure is based on the method of characteristics, and employs a combination of measurements and simulations. An experimental test rig is described, which enables pressure, temperature and mass flow rate to be measured within a defined cross section. The second part of the paper discusses the main assumptions of a simulation algorithm elaborated in the Matlab/Simulink environment. The simulation results are shown as 3D plots in the space-time domain, and compared with proposed models of phenomena relating to wave propagation, boundary conditions, acoustics and fluid mechanics. The simulation results are finally compared with acoustic phenomena, with an emphasis on the identification of resonant frequencies.

  3. Physics-based statistical model and simulation method of RF propagation in urban environments

    DOEpatents

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.

  4. DSMC Simulations of Blunt Body Flows for Mars Entries: Mars Pathfinder and Mars Microprobe Capsules

    NASA Technical Reports Server (NTRS)

    Moss, James N.; Wilmoth, Richard G.; Price, Joseph M.

    1997-01-01

    The hypersonic transitional flow aerodynamics of the Mars Pathfinder and Mars Microprobe capsules are simulated with the direct simulation Monte Carlo method. Calculations of axial, normal, and static pitching coefficients were obtained over an angle of attack range comparable to actual flight requirements. Comparisons are made with modified Newtonian and free-molecular-flow calculations. Aerothermal results were also obtained for zero incidence entry conditions.

  5. Promoting Systems Thinking through Biology Lessons

    NASA Astrophysics Data System (ADS)

    Riess, Werner; Mischo, Christoph

    2010-04-01

    This study's goal was to analyze various teaching approaches within the context of natural science lessons, especially in biology. The main focus of the paper lies on the effectiveness of different teaching methods in promoting systems thinking in the field of Education for Sustainable Development. The following methods were incorporated into the study: special lessons designed to promote systems thinking, a computer-simulated scenario on the topic "ecosystem forest," and a combination of both special lessons and the computer simulation. These groups were then compared to a control group. A questionnaire was used to assess systems thinking skills of 424 sixth-grade students of secondary schools in Germany. The assessment differentiated between a conceptual understanding (measured as achievement score) and a reflexive justification (measured as justification score) of systems thinking. The following control variables were used: logical thinking, grades in school, memory span, and motivational goal orientation. Based on the pretest-posttest control group design, only those students who received both special instruction and worked with the computer simulation showed a significant increase in their achievement scores. The justification score increased in the computer simulation condition as well as in the combination of computer simulation and lesson condition. The possibilities and limits of promoting various forms of systems thinking by using realistic computer simulations are discussed.

  6. Simulation of magnetic particles in microfluidic channels

    NASA Astrophysics Data System (ADS)

    Gusenbauer, Markus; Schrefl, Thomas

    2018-01-01

    In the field of biomedicine the applications of magnetic beads have increased immensely in the last decade. Drug delivery, magnetic resonance imaging, bioseparation or hyperthermia are only a small excerpt of their usage. Starting from microscaled particles the research is focusing more and more on nanoscaled particles. We are investigating and validating a method for simulating magnetic beads in a microfluidic flow which will help to manipulate beads in a controlled and reproducible manner. We are using the soft-matter simulation package ESPResSo to simulate magnetic particle dynamics in a lattice Boltzmann flow and applied external magnetic fields. Laminar as well as turbulent flow conditions in microfluidic systems can be analyzed while particles tend to agglomerate due to magnetic interactions. The proposed simulation methods are validated with experiments from literature.

  7. A fast exact simulation method for a class of Markov jump processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yao, E-mail: yaoli@math.umass.edu; Hu, Lili, E-mail: lilyhu86@gmail.com

    2015-11-14

    A new method of the stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulations of a class of Markov jump processes, is presented in this paper. The HLM has a conditional constant computational cost per event, which is independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly implement a hash-table-like bucket sort algorithm for all times of occurrence covered by a time step with length τ. This paper serves as an introduction to this new SSA method. We introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show certain advantages of the HLM for large scale problems.
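
    For context, the sketch below is a minimal direct-method SSA (Gillespie) for a birth-death process, the kind of baseline whose per-event cost grows with the number of reaction channels; it is not the Hashing-Leaping method itself, and the rate constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)

def gillespie_direct(rates_fn, state, t_end):
    """Minimal direct-method SSA; its per-event cost scales with the number of
    reaction channels, which is the scaling the HLM is designed to improve on."""
    t, traj = 0.0, [(0.0, state.copy())]
    while t < t_end:
        rates, updates = rates_fn(state)
        total = rates.sum()
        if total <= 0:
            break
        t += rng.exponential(1.0 / total)            # time to next event
        j = rng.choice(len(rates), p=rates / total)  # which channel fires
        state = state + updates[j]
        traj.append((t, state.copy()))
    return traj

# Example: birth-death process with birth rate 1.0 and per-capita death rate 0.1.
def rates_fn(n):
    return np.array([1.0, 0.1 * n[0]]), [np.array([1]), np.array([-1])]

print(len(gillespie_direct(rates_fn, np.array([0]), t_end=50.0)), "events")
```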

  8. 3-D conditional hyperbolic method of moments for high-fidelity Euler-Euler simulations of particle-laden flows

    NASA Astrophysics Data System (ADS)

    Patel, Ravi; Kong, Bo; Capecelatro, Jesse; Fox, Rodney; Desjardins, Olivier

    2017-11-01

    Particle-laden turbulent flows are important features of many environmental and industrial processes. Euler-Euler (EE) simulations of these flows are more computationally efficient than Euler-Lagrange (EL) simulations. However, traditional EE methods, such as the two-fluid model, cannot faithfully capture dilute regions of flow with finite Stokes number particles. For this purpose, the multi-valued nature of the particle velocity field must be treated with a polykinetic description. Various quadrature-based moment methods (QBMM) can be used to approximate the full kinetic description by solving for a set of moments of the particle velocity distribution function (VDF) and providing closures for the higher-order moments. Early QBMM fail to maintain the strict hyperbolicity of the kinetic equations, producing unphysical delta shocks (i.e., mass accumulation at a point). In previous work, a 2-D conditional hyperbolic quadrature method of moments (CHyQMOM) was proposed as a fourth-order QBMM closure that maintains strict hyperbolicity. Here, we present the 3-D extension of CHyQMOM. We compare results from CHyQMOM to other QBMM and EL in the context of particle trajectory crossing, cluster-induced turbulence, and particle-laden channel flow. NSF CBET-1437903.

  9. Simulation of confined magnetohydrodynamic flows with Dirichlet boundary conditions using a pseudo-spectral method with volume penalization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morales, Jorge A.; Leroy, Matthieu; Bos, Wouter J.T.

    A volume penalization approach to simulate magnetohydrodynamic (MHD) flows in confined domains is presented. Here the incompressible visco-resistive MHD equations are solved using parallel pseudo-spectral solvers in Cartesian geometries. The volume penalization technique is an immersed boundary method which is characterized by a high flexibility for the geometry of the considered flow. In the present case, it allows the use of boundary conditions other than periodic ones in a Fourier pseudo-spectral approach. The numerical method is validated and its convergence is assessed for two- and three-dimensional hydrodynamic (HD) and MHD flows, by comparing the numerical results with results from the literature and analytical solutions. The test cases considered are two-dimensional Taylor–Couette flow, the z-pinch configuration, three-dimensional Orszag–Tang flow, Ohmic decay in a periodic cylinder, three-dimensional Taylor–Couette flow with and without axial magnetic field, and three-dimensional Hartmann instabilities in a cylinder with an imposed helical magnetic field. Finally, we present a magnetohydrodynamic flow simulation in toroidal geometry with non-symmetric cross section and an imposed helical magnetic field to illustrate the potential of the method.
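
    The penalization idea can be seen on a much simpler problem: the sketch below adds a -(chi/eta)*(u - u_wall) term to a 1D periodic diffusion solver so that a Dirichlet value is enforced inside the masked "solid" region. The grid, the penalization parameter, and the semi-implicit treatment of the stiff term are illustrative choices, not the paper's pseudo-spectral MHD solver.

```python
import numpy as np

# 1D heat equation u_t = nu * u_xx on a periodic grid; the region |x| > L is
# treated as a penalized solid that enforces u = u_wall (Dirichlet) as eta -> 0.
n, nu, eta, u_wall, L = 256, 0.01, 1e-3, 0.0, 1.0
x = np.linspace(-np.pi, np.pi, n, endpoint=False)
chi = (np.abs(x) > L).astype(float)          # mask: 1 in the solid, 0 in the fluid
u = np.exp(-x**2)                            # initial condition
dx = x[1] - x[0]
dt = 0.2 * dx**2 / nu

for _ in range(2000):
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u_star = u + dt * nu * lap
    # Penalization treated implicitly to avoid the stiff 1/eta stability limit.
    u = (u_star + dt * chi / eta * u_wall) / (1.0 + dt * chi / eta)

print(np.abs(u[chi == 1] - u_wall).max())    # small: the wall value is enforced in the solid
```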

  10. Formulation and Implementation of Inflow/Outflow Boundary Conditions to Simulate Propulsive Effects

    NASA Technical Reports Server (NTRS)

    Rodriguez, David L.; Aftosmis, Michael J.; Nemec, Marian

    2018-01-01

    Boundary conditions appropriate for simulating flow entering or exiting the computational domain to mimic propulsion effects have been implemented in an adaptive Cartesian simulation package. A robust iterative algorithm to control mass flow rate through an outflow boundary surface is presented, along with a formulation to explicitly specify mass flow rate through an inflow boundary surface. The boundary conditions have been applied within a mesh adaptation framework based on the method of adjoint-weighted residuals. This allows for proper adaptive mesh refinement when modeling propulsion systems. The new boundary conditions are demonstrated on several notional propulsion systems operating in flow regimes ranging from low subsonic to hypersonic. The examples show that the prescribed boundary state is more properly imposed as the mesh is refined. The mass flow rate steering algorithm is shown to be an efficient approach in each example. To demonstrate the boundary conditions on a realistic complex aircraft geometry, two of the new boundary conditions are also applied to a modern low-boom supersonic demonstrator design with multiple flow inlets and outlets.

  11. Data fusion of multi-scale representations for structural damage detection

    NASA Astrophysics Data System (ADS)

    Guo, Tian; Xu, Zili

    2018-01-01

    Despite extensive research into structural health monitoring (SHM) in the past decades, there are few methods that can detect multiple instances of slight damage in noisy environments. Here, we introduce a new hybrid method that utilizes multi-scale space theory and a data fusion approach for multiple damage detection in beams and plates. A cascade filtering approach provides a multi-scale space for noisy mode shapes and filters the fluctuations caused by measurement noise. In multi-scale space, a series of amplification and data fusion algorithms are utilized to search for the damage features across all possible scales. We verify the effectiveness of the method by numerical simulation using damaged beams and plates with various types of boundary conditions. Monte Carlo simulations are conducted to illustrate the effectiveness and noise immunity of the proposed method. The applicability is further validated via laboratory case studies focusing on different damage scenarios. Both results demonstrate that the proposed method has a superior noise tolerant ability, as well as damage sensitivity, without knowing material properties or boundary conditions.

  12. The Seepage Simulation of Single Hole and Composite Gas Drainage Based on LB Method

    NASA Astrophysics Data System (ADS)

    Chen, Yanhao; Zhong, Qiu; Gong, Zhenzhao

    2018-01-01

    Gas drainage is the most effective method for preventing and mitigating coal mine gas disasters, so it is important to understand the seepage law of gas flow in fissured coal. The lattice Boltzmann (LB) method is a simplified, micro-scale computational model that is well suited to seepage problems. Based on a mathematical model of fracture seepage during single-hole gas drainage, this paper uses the LB method to simulate gas flow during drainage for single-hole, symmetric-slot, asymmetric-slot, and combined drainage configurations with different slot widths. Gas pressure contours, flow path diagrams, and flow velocity vector plots are mapped for each working condition, the influence of these conditions on the gas seepage field is analyzed, and an effective drainage scheme with a center hole slotted on both sides is discussed, together with a preliminary exploration of combined gas drainage.

  13. Patient-individualized boundary conditions for CFD simulations using time-resolved 3D angiography.

    PubMed

    Boegel, Marco; Gehrisch, Sonja; Redel, Thomas; Rohkohl, Christopher; Hoelter, Philip; Doerfler, Arnd; Maier, Andreas; Kowarschik, Markus

    2016-06-01

    Hemodynamic simulations are of increasing interest for the assessment of aneurysmal rupture risk and treatment planning. Achievement of accurate simulation results requires the usage of several patient-individual boundary conditions, such as a geometric model of the vasculature but also individualized inflow conditions. We propose the automatic estimation of various parameters for boundary conditions for computational fluid dynamics (CFD) based on a single 3D rotational angiography scan, also showing contrast agent inflow. First the data are reconstructed, and a patient-specific vessel model can be generated in the usual way. For this work, we optimize the inflow waveform based on two parameters, the mean velocity and pulsatility. We use statistical analysis of the measurable velocity distribution in the vessel segment to estimate the mean velocity. An iterative optimization scheme based on CFD and virtual angiography is utilized to estimate the inflow pulsatility. Furthermore, we present methods to automatically determine the heart rate and synchronize the inflow waveform to the patient's heart beat, based on time-intensity curves extracted from the rotational angiogram. This results in a patient-individualized inflow velocity curve. The proposed methods were evaluated on two clinical datasets. Based on the vascular geometries, synthetic rotational angiography data were generated to allow a quantitative validation of our approach against ground truth data. We observed an average error of approximately [Formula: see text] for the mean velocity and [Formula: see text] for the pulsatility. The heart rate was estimated very precisely, with an average error of about [Formula: see text], which corresponds to about 6 ms error for the duration of one cardiac cycle. Furthermore, a qualitative comparison of measured time-intensity curves from the real data and patient-specific simulated ones shows an excellent match. The presented methods have the potential to accurately estimate patient-specific boundary conditions from a single dedicated rotational scan.
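
    As a toy version of the heart-rate estimation step, the sketch below locates the dominant spectral peak of a synthetic time-intensity curve within a physiological frequency band. The frame rate, the shape of the curve, and the band limits are assumptions; the paper's actual algorithm may differ.

```python
import numpy as np

rng = np.random.default_rng(11)

def estimate_heart_rate(tic, fs, band=(0.7, 3.0)):
    """Estimate heart rate (beats/min) from a time-intensity curve sampled at fs Hz
    by locating the dominant spectral peak within a physiological band."""
    tic = (tic - np.mean(tic)) * np.hanning(len(tic))   # detrend and window to limit leakage
    spec = np.abs(np.fft.rfft(tic))
    freqs = np.fft.rfftfreq(len(tic), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[in_band][np.argmax(spec[in_band])]

# Synthetic time-intensity curve: contrast inflow trend plus cardiac modulation at 72 bpm.
fs, duration = 30.0, 5.0                    # 30 frames/s acquisition (illustrative)
t = np.arange(0, duration, 1.0 / fs)
tic = 1 - np.exp(-t) + 0.1 * np.sin(2 * np.pi * 1.2 * t) + 0.02 * rng.normal(size=t.size)
print(estimate_heart_rate(tic, fs))         # ~72 beats/min
```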

  14. Pre-compression volume on flow ripple reduction of a piston pump

    NASA Astrophysics Data System (ADS)

    Xu, Bing; Song, Yuechao; Yang, Huayong

    2013-11-01

    Axial piston pumps with a pre-compression volume (PCV) have a lower flow ripple over a large range of operating conditions than traditional pumps. However, there is no precise simulation model of the axial piston pump with PCV, so the PCV parameters are difficult to determine. A finite element simulation model for a piston pump with PCV is built by considering the piston movement, the fluid characteristics (including fluid compressibility and viscosity) and the leakage flow rate. Then a test of the pump flow ripple, called the secondary source method, is implemented to validate the simulation model. Thirdly, by comparing the simulation results, test results and results from other publications at the same operating condition, the simulation model is validated and used in optimizing the axial piston pump with PCV. According to the pump flow ripples obtained by the simulation model with different PCV parameters, the flow ripple is smallest when the PCV angle is 13° and the PCV volume is 1.3×10⁻⁴ m³, at an operating condition where the pump suction pressure is 2 MPa, the pump delivery pressure 15 MPa, the pump speed 1 000 r/min, and the swash plate angle 13°. At the same time, the flow ripple can be reduced when the pump suction pressure is 2 MPa, the pump delivery pressure is 5 MPa, 15 MPa, 22 MPa, the pump speed is 400 r/min, 1 000 r/min, 1 500 r/min, and the swash plate angle is 11°, 13°, 15° and 17°, respectively. The proposed finite element simulation model provides a method for optimizing the PCV structure and guidance for designing a quieter axial piston pump.

  15. Optimal Measurement Conditions for Spatiotemporal EEG/MEG Source Analysis.

    ERIC Educational Resources Information Center

    Huizenga, Hilde M.; Heslenfeld, Dirk J.; Molenaar, Peter C. M.

    2002-01-01

    Developed a method to determine the required number and position of sensors for human brain electromagnetic source analysis. Studied the method through a simulation study and an empirical study on visual evoked potentials in one adult male. Results indicate the method is fast and reliable and improves source precision. (SLD)

  16. A strategy to couple the material point method (MPM) and smoothed particle hydrodynamics (SPH) computational techniques

    NASA Astrophysics Data System (ADS)

    Raymond, Samuel J.; Jones, Bruce; Williams, John R.

    2018-01-01

    A strategy is introduced to allow coupling of the material point method (MPM) and smoothed particle hydrodynamics (SPH) for numerical simulations. This new strategy partitions the domain into SPH and MPM regions; particles carry all state variables, and as such no special treatment is required for the transition between regions. The aim of this work is to derive and validate the coupling methodology between MPM and SPH. Such coupling allows general boundary conditions to be used in an SPH simulation without further augmentation. Additionally, since SPH is a purely particle-based method while MPM combines particles with a mesh, this coupling also permits a smooth transition from particle methods to mesh methods, where further coupling to mesh methods could in future provide an effective far-field boundary treatment for the SPH method. The coupling technique is introduced and described alongside a number of simulations in 1D and 2D to validate and contextualize the potential of using these two methods in a single simulation. The strategy shown here is capable of fully coupling the two methods without any complicated algorithms to transform information from one method to another.

  17. On the simulation and mitigation of anisoplanatic optical turbulence for long range imaging

    NASA Astrophysics Data System (ADS)

    Hardie, Russell C.; LeMaster, Daniel A.

    2017-05-01

    We describe a numerical wave propagation method for simulating long range imaging of an extended scene under anisoplanatic conditions. Our approach computes an array of point spread functions (PSFs) for a 2D grid on the object plane. The PSFs are then used in a spatially varying weighted sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. To validate the simulation, we compare simulated outputs with the theoretical anisoplanatic tilt correlation and differential tilt variance, in addition to comparing the long- and short-exposure PSFs and the isoplanatic angle. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. The simulation tool is also used here to quantitatively evaluate a recently proposed block-matching and Wiener filtering (BMWF) method for turbulence mitigation. In this method, a block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged and processed with a Wiener filter for restoration. A novel aspect of the proposed BMWF method is that the PSF model used for restoration takes into account the level of geometric correction achieved during image registration. This way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. The BMWF method is relatively simple computationally and yet has excellent performance in comparison to state-of-the-art benchmark methods.
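
    The degradation step described above, a spatially varying weighted sum of per-grid-point PSF convolutions applied to an ideal image, can be sketched as follows; the PSF normalization, the blending weights and the toy inputs are assumptions of this illustration rather than the authors' implementation.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def degrade(ideal, psfs, weights):
        """Blend several locally valid blurs into one anisoplanatically degraded image.

        ideal   : 2D ideal image
        psfs    : list of 2D PSFs, one per grid point on the object plane
        weights : list of 2D weight maps (same shape as ideal) summing to 1 per pixel
        """
        out = np.zeros_like(ideal, dtype=float)
        for psf, w in zip(psfs, weights):
            blurred = fftconvolve(ideal, psf / psf.sum(), mode="same")
            out += w * blurred                       # spatially varying weighted sum
        return out

    # toy usage: two PSFs blended left to right across the image
    ny, nx = 64, 64
    ideal = np.zeros((ny, nx)); ideal[::8, ::8] = 1.0        # grid of point sources
    psf_a = np.outer(np.hanning(7), np.hanning(7))           # mild blur
    psf_b = np.outer(np.hanning(15), np.hanning(15))         # stronger blur
    ramp = np.tile(np.linspace(0.0, 1.0, nx), (ny, 1))
    degraded = degrade(ideal, [psf_a, psf_b], [1.0 - ramp, ramp])
    ```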

  18. New parsimonious simulation methods and tools to assess future food and environmental security of farm populations

    PubMed Central

    Antle, John M.; Stoorvogel, Jetse J.; Valdivia, Roberto O.

    2014-01-01

    This article presents conceptual and empirical foundations for new parsimonious simulation models that are being used to assess future food and environmental security of farm populations. The conceptual framework integrates key features of the biophysical and economic processes on which the farming systems are based. The approach represents a methodological advance by coupling important behavioural processes, for example, self-selection in adaptive responses to technological and environmental change, with aggregate processes, such as changes in market supply and demand conditions or environmental conditions such as climate. Suitable biophysical and economic data are a critical limiting factor in modelling these complex systems, particularly for the characterization of out-of-sample counterfactuals in ex ante analyses. Parsimonious, population-based simulation methods are described that exploit available observational, experimental, modelled and expert data. The analysis makes use of a new scenario design concept called representative agricultural pathways. A case study illustrates how these methods can be used to assess food and environmental security. The concluding section addresses generalizations of parametric forms and linkages of regional models to global models. PMID:24535388

  19. New parsimonious simulation methods and tools to assess future food and environmental security of farm populations.

    PubMed

    Antle, John M; Stoorvogel, Jetse J; Valdivia, Roberto O

    2014-04-05

    This article presents conceptual and empirical foundations for new parsimonious simulation models that are being used to assess future food and environmental security of farm populations. The conceptual framework integrates key features of the biophysical and economic processes on which the farming systems are based. The approach represents a methodological advance by coupling important behavioural processes, for example, self-selection in adaptive responses to technological and environmental change, with aggregate processes, such as changes in market supply and demand conditions or environmental conditions such as climate. Suitable biophysical and economic data are a critical limiting factor in modelling these complex systems, particularly for the characterization of out-of-sample counterfactuals in ex ante analyses. Parsimonious, population-based simulation methods are described that exploit available observational, experimental, modelled and expert data. The analysis makes use of a new scenario design concept called representative agricultural pathways. A case study illustrates how these methods can be used to assess food and environmental security. The concluding section addresses generalizations of parametric forms and linkages of regional models to global models.

  20. An immersed boundary-simplified sphere function-based gas kinetic scheme for simulation of 3D incompressible flows

    NASA Astrophysics Data System (ADS)

    Yang, L. M.; Shu, C.; Yang, W. M.; Wang, Y.; Wu, J.

    2017-08-01

    In this work, an immersed boundary-simplified sphere function-based gas kinetic scheme (SGKS) is presented for the simulation of 3D incompressible flows with curved and moving boundaries. First, the SGKS [Yang et al., "A three-dimensional explicit sphere function-based gas-kinetic flux solver for simulation of inviscid compressible flows," J. Comput. Phys. 295, 322 (2015) and Yang et al., "Development of discrete gas kinetic scheme for simulation of 3D viscous incompressible and compressible flows," J. Comput. Phys. 319, 129 (2016)], which is usually applied to the simulation of compressible flows, is simplified to improve the computational efficiency for incompressible flows. In the original SGKS, the integral domain along the spherical surface for computing conservative variables and numerical fluxes is usually not symmetric at the cell interface, which makes the expression for the numerical fluxes at the cell interface relatively complicated. For incompressible flows, the sphere at the cell interface can be approximately considered symmetric, as shown in this work. Besides that, the energy equation is usually not needed for the simulation of incompressible isothermal flows. With all these simplifications, simple and explicit formulations for the conservative variables and numerical fluxes at the cell interface can be obtained. Second, to effectively implement the no-slip boundary condition for fluid flow problems with complex geometry as well as moving boundaries, the implicit boundary condition-enforced immersed boundary method [Wu and Shu, "Implicit velocity correction-based immersed boundary-lattice Boltzmann method and its applications," J. Comput. Phys. 228, 1963 (2009)] is introduced into the simplified SGKS. That is, the flow field is solved by the simplified SGKS without considering the presence of an immersed body, and the no-slip boundary condition is implemented by the immersed boundary method. The accuracy and efficiency of the present scheme are validated by simulating decaying vortex flow, flow past a stationary and a rotating sphere, flow past a stationary torus, and flows over dragonfly flight.

  1. A Perturbation Analysis of Harmonics Generation from Saturated Elements in Power Systems

    NASA Astrophysics Data System (ADS)

    Kumano, Teruhisa

    Nonlinear phenomena such as saturation in magnetic flux have considerable effects in power system analysis. It is reported that a failure in a real 500 kV system triggered islanding operation, where the resultant even harmonics caused malfunctions in protective relays. It is also reported that the major origin of this wave distortion is nothing but unidirectional magnetization of the transformer iron core. Time simulation is widely used today to analyze this type of phenomenon, but it has basically two shortcomings. One is that the time simulation takes too much computing time in the vicinity of inflection points of the saturation characteristic curve, because an iterative procedure such as Newton-Raphson (N-R) must be used and such methods tend to get caught in ill-conditioned numerical hunting. The other is that such simulation methods sometimes do not help intuitive understanding of the studied phenomenon, because the whole set of nonlinear equations is treated in matrix form and not properly divided into understandable parts as is done for linear systems. This paper proposes a new computation scheme based on the so-called perturbation method. Magnetic saturation in the iron cores of a generator and a transformer is taken into account. The proposed method addresses the first shortcoming of the N-R based time simulation stated above: no iterative process is used to reduce the equation residual; instead, a perturbation series is used, which makes the method free from the ill-conditioning problem. Users only have to calculate the perturbation terms one by one until the necessary accuracy is reached. In the numerical example treated in the present paper, the first-order perturbation already gives reasonably high accuracy, which means very fast computation. In the numerical study, three nonlinear elements are considered. The calculated results are almost identical to those of the conventional Newton-Raphson based time simulation, which shows the validity of the method. The proposed method would be effectively used in screening studies where many cases must be analyzed.
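
    A minimal sketch of the idea, using a scalar saturation-type equation x + ε·x³ = b in place of the full power-system model: the perturbation terms are computed one by one with no iteration on the residual, and the result is checked against a Newton-Raphson reference. The toy equation and the coefficient values are assumptions of this illustration.

    ```python
    def newton(b, eps, tol=1e-12):
        """Reference Newton-Raphson solution of x + eps*x**3 = b."""
        x = b
        for _ in range(100):
            f = x + eps * x**3 - b
            if abs(f) < tol:
                break
            x -= f / (1.0 + 3.0 * eps * x**2)
        return x

    def perturbation(b, eps, order=2):
        """Perturbation-series solution x = x0 + eps*x1 + eps**2*x2, built term by term."""
        x0 = b              # zeroth order: the linear problem
        x1 = -x0**3         # first order balances eps*x0**3
        x2 = 3.0 * x0**5    # second order balances 3*eps**2*x0**2*x1
        terms = [x0, x1, x2][: order + 1]
        return sum(t * eps**k for k, t in enumerate(terms))

    b, eps = 0.8, 0.05      # weak saturation
    print(newton(b, eps), perturbation(b, eps, 1), perturbation(b, eps, 2))
    ```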

  2. Comparison of the methane production potential and biodegradability of kitchen waste from different sources under mesophilic and thermophilic conditions.

    PubMed

    Yang, Ziyi; Wang, Wen; Zhang, Shuyu; Ma, Zonghu; Anwar, Naveed; Liu, Guangqing; Zhang, Ruihong

    2017-04-01

    The methane production potential of kitchen waste (KW) obtained from different sources was compared through mesophilic and thermophilic anaerobic digestion. The methane yields (MYs) obtained with the same KW sample at different temperatures were similar, whereas the MYs obtained with different samples differed significantly. The highest MY, obtained in S7, was 54%-60% higher than the lowest MY, in S3. The modified Gompertz model was utilized to simulate the methane production process. The maximum methane production rate under thermophilic conditions was 2%-86% higher than that under mesophilic conditions. The characteristics of the different KW samples were studied. In the distribution of total chemical oxygen demand, the diversity of organic compounds in KW was the most dominant factor affecting the potential MYs of KW. The effect of the C/N and C/P ratios or the concentration of metal ions was insignificant. Two typical methods to calculate the theoretical MY (TMY) were compared; the organic composition method can simulate methane production more precisely than the elemental analysis method. Significant linear correlations were found between TMYorg and the MYs under mesophilic and thermophilic conditions. The organic composition method can thus be utilized as a fast technique to predict the methane production potential of KW.
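
    Since the analysis leans on the modified Gompertz model, its standard closed form is sketched below; the parameter values are illustrative, not those fitted in the study. In practice P, Rm and λ would be fitted to the measured cumulative yields, for example with scipy.optimize.curve_fit.

    ```python
    import numpy as np

    def modified_gompertz(t, P, Rm, lam):
        """Cumulative methane yield M(t) from the modified Gompertz model.

        P   : methane production potential (e.g. mL CH4 per g VS)
        Rm  : maximum methane production rate (same units per day)
        lam : lag phase (days)
        """
        return P * np.exp(-np.exp(Rm * np.e / P * (lam - t) + 1.0))

    t = np.linspace(0.0, 40.0, 200)                      # digestion time in days
    M = modified_gompertz(t, P=450.0, Rm=35.0, lam=2.0)  # illustrative parameters only
    ```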

  3. Addressing multi-use issues in sustainable forest management with signal-transfer modeling

    Treesearch

    Robert J. Luxmoore; William W. Hargrove; M. Lynn Tharp; W. Mac Post; Michael W. Berry; Karen S. Minser; Wendell P. Cropper; Dale W. Johnson; Boris Zeide; Ralph L. Amateis; Harold E. Burkhart; V. Clark Baldwin; Kelly D. Peterson

    2002-01-01

    Management decisions concerning impacts of projected changes in environmental and social conditions on multi-use forest products and services, such as productivity, water supply or carbon sequestration, may be facilitated with signal-transfer modeling. This simulation method utilizes a hierarchy of simulators in which the integrated responses (signals) from smaller-...

  4. A method for ensemble wildland fire simulation

    Treesearch

    Mark A. Finney; Isaac C. Grenfell; Charles W. McHugh; Robert C. Seli; Diane Trethewey; Richard D. Stratton; Stuart Brittain

    2011-01-01

    An ensemble simulation system that accounts for uncertainty in long-range weather conditions and two-dimensional wildland fire spread is described. Fuel moisture is expressed based on the energy release component, a US fire danger rating index, and its variation throughout the fire season is modeled using time series analysis of historical weather data. This analysis...

  5. A finite-element model for simulation of two-dimensional steady-state ground-water flow in confined aquifers

    USGS Publications Warehouse

    Kuniansky, E.L.

    1990-01-01

    A computer program based on the Galerkin finite-element method was developed to simulate two-dimensional steady-state ground-water flow in either isotropic or anisotropic confined aquifers. The program may also be used for unconfined aquifers of constant saturated thickness. Constant-head, constant-flux, and head-dependent flux boundary conditions can be specified in order to approximate a variety of natural conditions, such as a river or lake boundary or a pumping well. The computer program was developed for the preliminary simulation of ground-water flow in the Edwards-Trinity Regional aquifer system as part of the Regional Aquifer-Systems Analysis Program. Results of the program compare well to analytical solutions and to simulations from published finite-difference models. A concise discussion of the Galerkin method is presented along with a description of the program. Provided in the Supplemental Data section are a listing of the computer program, definitions of selected program variables, and several examples of data input and output used in verifying the accuracy of the program.
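
    As a sketch of the Galerkin procedure that the report applies in two dimensions, the fragment below assembles and solves a one-dimensional analogue (steady confined flow with one constant-head and one specified-flux boundary); the element count, transmissivity and boundary values are illustrative assumptions, not inputs from the report.

    ```python
    import numpy as np

    def galerkin_1d_flow(n_elem, length, T, h_left, q_right):
        """Galerkin finite-element solution of steady 1D confined flow d/dx(T dh/dx) = 0
        with a constant head at the left node and a specified boundary flux q_right
        (per unit width) added to the load vector at the right node."""
        n_node = n_elem + 1
        dx = length / n_elem
        K = np.zeros((n_node, n_node))
        f = np.zeros(n_node)
        ke = (T / dx) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # linear-element matrix
        for e in range(n_elem):                                # global assembly
            K[e:e + 2, e:e + 2] += ke
        f[-1] += q_right              # natural (flux) boundary condition at the right
        K[0, :] = 0.0                 # essential (constant head) condition at the left
        K[0, 0] = 1.0
        f[0] = h_left
        return np.linalg.solve(K, f)

    # heads rise linearly from 100 m to 104 m for these illustrative values
    heads = galerkin_1d_flow(n_elem=10, length=1000.0, T=500.0, h_left=100.0, q_right=2.0)
    ```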

  6. 10 CFR 431.173 - Requirements applicable to all manufacturers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... COMMERCIAL AND INDUSTRIAL EQUIPMENT Provisions for Commercial Heating, Ventilating, Air-Conditioning and... is based on engineering or statistical analysis, computer simulation or modeling, or other analytic... method or methods used; (B) The mathematical model, the engineering or statistical analysis, computer...

  7. Enhanced Sampling of Molecular Dynamics Simulations of a Polyalanine Octapeptide: Effects of the Periodic Boundary Conditions on Peptide Conformation.

    PubMed

    Kasahara, Kota; Sakuraba, Shun; Fukuda, Ikuo

    2018-03-08

    We investigate the problem of artifacts caused by the periodic boundary conditions (PBC) used in molecular simulation studies. Despite the long history of simulations with PBCs, the existence of measurable artifacts originating from PBCs applied to inherently nonperiodic physical systems remains controversial. Specifically, these artifacts appear as differences between simulations of the same system but with different simulation-cell sizes. Earlier studies have implied that, even in the simple case of a small model peptide in water, sampling inefficiency is a major obstacle to understanding these artifacts. In this study, we have resolved the sampling issue using the replica exchange molecular dynamics (REMD) enhanced-sampling method to explore PBC artifacts. Explicitly solvated zwitterionic polyalanine octapeptides in three different cubic cells, with edge lengths of L = 30, 40, and 50 Å, were investigated with 64-replica × 500 ns REMD simulations using the AMBER parm99SB force field to elucidate the differences. The differences among them were not large overall, and the results for the L = 30 and 40 Å simulations in the conformational free energy landscape were found to be very similar at room temperature. However, a small but statistically significant difference was seen for L = 50 Å. We observed that extended conformations were slightly overstabilized in the smaller systems. The origin of these artifacts is discussed by comparison to an electrostatic calculation method without PBCs.

  8. Numerical simulation of supersonic water vapor jet impinging on a flat plate

    NASA Astrophysics Data System (ADS)

    Kuzuu, Kazuto; Aono, Junya; Shima, Eiji

    2012-11-01

    We investigated a supersonic water vapor jet impinging on a flat plate through numerical simulation. This simulation is intended to estimate the heating effect on a reusable sounding rocket during vertical landing. The jet from the rocket bottom is supersonic (M = 2 to 3), high temperature (T = 2000 K), and over-expanded. The atmospheric condition is stationary standard air. The simulation is based on the full Navier-Stokes equations, and the flow is solved numerically by an unstructured compressible flow solver, the in-house code LS-FLOW-RG. In this solver, the transport properties of the multi-species gas and the mass conservation equations of those species are considered. We employed the DDES method as a turbulence model. For verification and validation, we also carried out a simulation under air conditions and compared it with the experimental data; the agreement between our results and the experimental data is satisfactory. Using this simulation, we calculated the flow under several exit pressure conditions and discuss the effects of the pressure ratio on the flow structures, heat transfer and so on. Furthermore, we also investigated the diffusion of water vapor, and we confirmed that it is driven by the interaction with the atmospheric air and affects the heat transfer to the surrounding environment.

  9. A continuum treatment of sliding in Eulerian simulations of solid-solid and solid-fluid interfaces

    NASA Astrophysics Data System (ADS)

    Subramaniam, Akshay; Ghaisas, Niranjan; Lele, Sanjiva

    2017-11-01

    A novel treatment of sliding is developed for use in an Eulerian framework for simulating elastic-plastic deformations of solids coupled with fluids. In this method, embedded interfacial boundary conditions for perfect sliding are imposed by enforcing the interface normal to be a principal direction of the Cauchy stress, and appropriate consistency conditions ensure correct transmission and reflection of waves at the interface. This sliding treatment may be used either to simulate a solid-solid sliding interface or to incorporate an internal slip boundary condition at a solid-fluid interface. Sliding laws such as the Coulomb friction law can also be incorporated with relative ease into this framework. Simulations of sliding interfaces are conducted using a 10th-order compact finite difference scheme and a Localized Artificial Diffusivity (LAD) scheme for shock and interface capturing. 1D and 2D simulations are used to assess the accuracy of the sliding treatment. The Richtmyer-Meshkov instability between copper and aluminum is simulated with this sliding treatment as a demonstration test case. Support for this work was provided through Grant B612155 from the Lawrence Livermore National Laboratory, US Department of Energy.

  10. The simulation of microgravity conditions on the ground.

    PubMed

    Albrecht-Buehler, G

    1992-10-01

    This chapter defines weightlessness as the condition where the acceleration of an object is independent of its mass. Applying this definition to the clinostat, it argues that the clinostat is very limited as a simulator of microgravity because it (a) generates centrifugal forces, (b) generates particle oscillations with mass-dependent amplitudes of speed and phase shifts relative to the clinorotation, (c) is unable to remove globally the scalar effects of gravity such as hydrostatic pressure, which are independent of the direction of gravity in the first place, and, (d) generates more convective mixing of the gaseous or liquid environment of the test object, rather than eliminating it, as would true weightlessness. It is proposed that attempts to simulate microgravity must accept the simulation of one aspect of microgravity at a time, and urges that the suppression of convective currents be a major feature of experimental methods that simulate microgravity.

  11. Parallel Transport with Sheath and Collisional Effects in Global Electrostatic Turbulent Transport in FRCs

    NASA Astrophysics Data System (ADS)

    Bao, Jian; Lau, Calvin; Kuley, Animesh; Lin, Zhihong; Fulton, Daniel; Tajima, Toshiki; Tri Alpha Energy, Inc. Team

    2017-10-01

    Collisional and turbulent transport in a field reversed configuration (FRC) is studied in global particle simulations using GTC (gyrokinetic toroidal code). The global FRC geometry is incorporated in GTC by using a field-aligned mesh in cylindrical coordinates, which enables global simulations coupling the core and the scrape-off layer (SOL) across the separatrix. Furthermore, fully kinetic ions are implemented in GTC to treat the magnetic null point in the FRC core. Both global simulations coupling the core and SOL regions and independent SOL-region simulations have been carried out to study turbulence. In this work, the "logical sheath boundary condition" is implemented to study parallel transport in the SOL. This method relaxes the time and spatial steps by not resolving the electron plasma frequency and Debye length, which enables turbulent transport simulations with sheath effects. We will study collisional and turbulent SOL parallel transport with mirror geometry and the sheath boundary condition in the C2-W divertor.

  12. [Numerical simulation and operation optimization of biological filter].

    PubMed

    Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing

    2014-12-01

    BioWin software and two sensitivity analysis methods were used to simulate the Denitrification Biological Filter (DNBF) + Biological Aerated Filter (BAF) process in the Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, the operation data of September 2013 were used for sensitivity analysis and model calibration, and the operation data of October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate the practical DNBF + BAF process, and the most sensitive parameters were those related to biofilm, OHOs and aeration. After calibration and validation, the model was used for process optimization by simulating operation results under different conditions. The results showed that the best operating condition for discharge standard B was: reflux ratio = 50%, no methanol addition, influent C/N = 4.43; while the best operating condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg·L⁻¹ after methanol addition, influent C/N = 5.10.

  13. Large-Eddy Simulation of Waked Turbines in a Scaled Wind Farm Facility

    NASA Astrophysics Data System (ADS)

    Wang, J.; McLean, D.; Campagnolo, F.; Yu, T.; Bottasso, C. L.

    2017-05-01

    The aim of this paper is to present the numerical simulation of waked scaled wind turbines operating in a boundary layer wind tunnel. The simulation uses a LES-lifting-line numerical model. An immersed boundary method in conjunction with an adequate wall model is used to represent the effects of both the wind turbine nacelle and tower, which are shown to have a considerable effect on the wake behavior. Multi-airfoil data calibrated at different Reynolds numbers are used to account for the lift and drag characteristics at the low and varying Reynolds conditions encountered in the experiments. The present study focuses on low turbulence inflow conditions and inflow non-uniformity due to wind tunnel characteristics, while higher turbulence conditions are considered in a separate study. The numerical model is validated by using experimental data obtained during test campaigns conducted with the scaled wind farm facility. The simulation and experimental results are compared in terms of power capture, rotor thrust, downstream velocity profiles and turbulence intensity.

  14. A model-based approach for the evaluation of vagal and sympathetic activities in a newborn lamb.

    PubMed

    Le Rolle, Virginie; Ojeda, David; Beuchée, Alain; Praud, Jean-Paul; Pladys, Patrick; Hernández, Alfredo I

    2013-01-01

    This paper proposes a baroreflex model and a recursive identification method to estimate the time-varying vagal and sympathetic contributions to heart rate variability during autonomic maneuvers. The baroreflex model includes baroreceptors, cardiovascular control center, parasympathetic and sympathetic pathways. The gains of the global afferent sympathetic and vagal pathways are identified recursively. The method has been validated on data from newborn lambs, which have been acquired during the application of an autonomic maneuver, without medication and under beta-blockers. Results show a close match between experimental and simulated signals under both conditions. The vagal and sympathetic contributions have been simulated and, as expected, it is possible to observe different baroreflex responses under beta-blockers compared to baseline conditions.

  15. Tracking the global maximum power point of PV arrays under partial shading conditions

    NASA Astrophysics Data System (ADS)

    Fennich, Meryem

    This thesis presents theoretical and simulation studies of global maximum power point tracking (MPPT) for photovoltaic systems under partial shading. The main goal is to track the maximum power point of the photovoltaic module so that the maximum possible power can be extracted from the photovoltaic panels. When several panels are connected in series and some of them are partially shaded, either by clouds or by shadows from neighboring buildings, several local maxima appear in the power vs. voltage curve. A power-increment-based MPPT algorithm is effective in identifying the global maximum among these local maxima. Several existing MPPT algorithms are explored, and the state-of-the-art power increment method is simulated and tested for various partial shading conditions. The current-voltage and power-voltage characteristics of the PV model are studied under different partial shading conditions, along with five cases demonstrating how the MPPT algorithm performs when shading switches from one state to another. Each case is supplemented with simulation results. The method of tracking the global MPP is based on controlling the DC-DC converter connected to the output of the PV array. A complete system simulation including the PV array, the direct current to direct current (DC-DC) converter and the MPPT is presented and tested using MATLAB software. The simulation results show that the MPPT algorithm works very well with the buck converter, while the boost converter needs further changes and implementation work.
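
    A minimal sketch of a global MPPT search of the kind described, not the thesis's exact power-increment algorithm, is shown below: a coarse sweep samples the whole operating-voltage range so no local peak is mistaken for the global one, and a fine sweep then refines the best candidate. The measure_iv callback standing in for the converter and array is hypothetical.

    ```python
    def global_mppt_scan(measure_iv, v_min, v_max, coarse_step, fine_step):
        """Locate the global maximum power point of a partially shaded PV array.

        measure_iv(v) -> current: hypothetical callback that operates the array at
        voltage v via the DC-DC converter and returns the measured current.
        """
        best_v, best_p = v_min, 0.0
        v = v_min
        while v <= v_max:                       # coarse sweep over the full P-V curve
            p = v * measure_iv(v)
            if p > best_p:
                best_v, best_p = v, p
            v += coarse_step
        v = max(v_min, best_v - coarse_step)    # fine sweep around the best candidate
        stop = min(v_max, best_v + coarse_step)
        while v <= stop:
            p = v * measure_iv(v)
            if p > best_p:
                best_v, best_p = v, p
            v += fine_step
        return best_v, best_p
    ```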

  16. Using interprofessional simulation to improve collaborative competences for nursing, physiotherapy, and respiratory therapy students.

    PubMed

    King, Judy; Beanlands, Sarah; Fiset, Valerie; Chartrand, Louise; Clarke, Shelley; Findlay, Tarra; Morley, Michelle; Summers, Ian

    2016-09-01

    Within the care of people living with respiratory conditions, nursing, physiotherapy, and respiratory therapy healthcare professionals routinely work in interprofessional teams. To help students prepare for their future professional roles, there is a need for them to be involved in interprofessional education. The purpose of this project was to compare two different methods of patient simulation in improving interprofessional competencies for students in nursing, physiotherapy, and respiratory therapy programmes. The Canadian Interprofessional Health Collaborative competencies of communication, collaboration, conflict resolution patient/family-centred care, roles and responsibilities, and team functioning were measured. Using a quasi-experimental pre-post intervention approach two different interprofessional workshops were compared: the combination of standardised and simulated patients, and exclusively standardised patients. Students from nursing, physiotherapy, and respiratory therapy programmes worked together in these simulation-based activities to plan and implement care for a patient with a respiratory condition. Key results were that participants in both years improved in their self-reported interprofessional competencies as measured by the Interprofessional Collaborative Competencies Attainment Survey (ICCAS). Participants indicated that they found their interprofessional teams did well with communication and collaboration. But the participants felt they could have better involved the patients and their family members in the patient's care. Regardless of method of patient simulation used, mannequin or standardised patients, students found the experience beneficial and appreciated the opportunity to better understand the roles of other healthcare professionals in working together to help patients living with respiratory conditions.

  17. Simulation Study on the Deflection Response of the 921A Steel thin plate under Explosive Impact Load

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Xiang; Chen, Fang; Han, Yan

    2018-03-01

    A ship cabin is subjected to high-intensity shock wave loads when attacked by anti-ship weapons, which can damage its side plating. The time history of the deflection of a thin plate made of 921A steel under explosive impact load, for different initial conditions, is investigated by theoretical analysis and numerical simulation. Based on the theory of elastic-plastic deformation of thin plates, the dynamic response equation of the thin plate under explosive impact load is established with the energy method, and the theoretical values are compared with the results of the simulation method. The comparison shows that the theoretical calculation method has good reliability and accuracy for different boundary sizes.

  18. A novel CFS-PML boundary condition for transient electromagnetic simulation using a fictitious wave domain method

    NASA Astrophysics Data System (ADS)

    Hu, Yanpu; Egbert, Gary; Ji, Yanju; Fang, Guangyou

    2017-01-01

    In this study, we apply fictitious wave domain (FWD) methods, based on the correspondence principle for the wave and diffusion fields, to finite difference (FD) modeling of transient electromagnetic (TEM) diffusion problems for geophysical applications. A novel complex frequency shifted perfectly matched layer (PML) boundary condition is adapted to the FWD to truncate the computational domain, with the maximum electromagnetic wave propagation velocity in the FWD used to set the absorbing parameters for the boundary layers. Using domains of varying spatial extent we demonstrate that these boundary conditions offer significant improvements over simpler PML approaches, which can result in spurious reflections and large errors in the FWD solutions, especially for low frequencies and late times. In our development, resistive air layers are directly included in the FWD, allowing simulation of TEM responses in the presence of topography, as is commonly encountered in geophysical applications. We compare responses obtained by our new FD-FWD approach and with the spectral Lanczos decomposition method on 3-D resistivity models of varying complexity. The comparisons demonstrate that our absorbing boundary condition in FWD for the TEM diffusion problems works well even in complex high-contrast conductivity models.

  19. Boiling water jet outflow from a thin nozzle: spatial modeling

    NASA Astrophysics Data System (ADS)

    Bolotnova, R. Kh.; Korobchinskaya, V. A.

    2017-09-01

    This study presents a dual-temperature, two-phase model for a liquid-vapor mixture that accounts for evaporation and interphase heat transfer (taken in the single-velocity, single-pressure approximation). The simulation was performed using a shock-capturing method and moving Lagrangian grids. Simulated and experimental values of the nucleation frequency (used to refine the initial number and radius of microbubbles), which affect the evaporation rate, were analyzed. The validity of the 2D and 1D simulations was examined through comparison with experimental data. The peculiarities of water-steam formation at the initial stage of outflow through a thin nozzle were studied for different initial equilibrium states of the water, under conditions close to the chosen experimental conditions.

  20. Numerical Optimization Strategy for Determining 3D Flow Fields in Microfluidics

    NASA Astrophysics Data System (ADS)

    Eden, Alex; Sigurdson, Marin; Mezic, Igor; Meinhart, Carl

    2015-11-01

    We present a hybrid experimental-numerical method for generating 3D flow fields from 2D PIV experimental data. An optimization algorithm is applied to a theory-based simulation of an alternating current electrothermal (ACET) micromixer in conjunction with 2D PIV data to generate an improved representation of 3D steady state flow conditions. These results can be used to investigate mixing phenomena. Experimental conditions were simulated using COMSOL Multiphysics to solve the temperature and velocity fields, as well as the quasi-static electric fields. The governing equations were based on a theoretical model for ac electrothermal flows. A Nelder-Mead optimization algorithm was used to achieve a better fit by minimizing the error between 2D PIV experimental velocity data and numerical simulation results at the measurement plane. By applying this hybrid method, the normalized RMS velocity error between the simulation and experimental results was reduced by more than an order of magnitude. The optimization algorithm altered 3D fluid circulation patterns considerably, providing a more accurate representation of the 3D experimental flow field. This method can be generalized to a wide variety of flow problems. This research was supported by the Institute for Collaborative Biotechnologies through grant W911NF-09-0001 from the U.S. Army Research Office.
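
    The optimization loop can be sketched as follows, assuming a hypothetical simulate(params) wrapper around the CFD model that returns the velocity field at the PIV measurement plane; the tolerances and iteration limit are illustrative, not the settings used in the study.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def fit_simulation_to_piv(simulate, v_measured, x0):
        """Tune simulation parameters so the simulated velocities match the 2D PIV data.

        simulate(params) -> 2D velocity array at the measurement plane (hypothetical)
        v_measured       -> PIV velocity field on the same grid
        x0               -> initial guess for the parameter vector
        """
        def rms_error(params):
            v_sim = simulate(params)
            return np.sqrt(np.mean((v_sim - v_measured) ** 2))

        result = minimize(rms_error, x0, method="Nelder-Mead",
                          options={"xatol": 1e-4, "fatol": 1e-6, "maxiter": 500})
        return result.x, result.fun
    ```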

  1. Comparisons between physics-based, engineering, and statistical learning models for outdoor sound propagation.

    PubMed

    Hart, Carl R; Reznicek, Nathan J; Wilson, D Keith; Pettit, Chris L; Nykaza, Edward T

    2016-05-01

    Many outdoor sound propagation models exist, ranging from highly complex physics-based simulations to simplified engineering calculations, and more recently, highly flexible statistical learning methods. Several engineering and statistical learning models are evaluated by using a particular physics-based model, namely, a Crank-Nicolson parabolic equation (CNPE), as a benchmark. Narrowband transmission loss values predicted with the CNPE, based upon a simulated data set of meteorological, boundary, and source conditions, act as simulated observations. In the simulated data set sound propagation conditions span from downward refracting to upward refracting, for acoustically hard and soft boundaries, and low frequencies. Engineering models used in the comparisons include the ISO 9613-2 method, Harmonoise, and Nord2000 propagation models. Statistical learning methods used in the comparisons include bagged decision tree regression, random forest regression, boosting regression, and artificial neural network models. Computed skill scores are relative to sound propagation in a homogeneous atmosphere over a rigid ground. Overall skill scores for the engineering noise models are 0.6%, -7.1%, and 83.8% for the ISO 9613-2, Harmonoise, and Nord2000 models, respectively. Overall skill scores for the statistical learning models are 99.5%, 99.5%, 99.6%, and 99.6% for bagged decision tree, random forest, boosting, and artificial neural network regression models, respectively.
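
    The abstract does not spell out the skill-score definition; a common mean-squared-error form relative to the reference prediction (here assumed to be propagation over rigid ground in a homogeneous atmosphere) is sketched below. Under this convention a negative score, such as the Harmonoise result above, means the model performs worse than the reference.

    ```python
    import numpy as np

    def skill_score(predicted, observed, reference):
        """MSE-based skill score: 1.0 (100%) is a perfect match to the observations,
        0.0 is no improvement over the reference, negative is worse than the reference."""
        predicted, observed, reference = map(np.asarray, (predicted, observed, reference))
        mse_model = np.mean((predicted - observed) ** 2)
        mse_ref = np.mean((reference - observed) ** 2)
        return 1.0 - mse_model / mse_ref
    ```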

  2. Development of an Improved Irrigation Subroutine in SWAT to Simulate the Hydrology of Rice Paddy Grown under Submerged Conditions

    NASA Astrophysics Data System (ADS)

    Muraleedharan, B. V.; Kathirvel, K.; Narasimhan, B.; Nallasamy, N. D.

    2014-12-01

    The Soil and Water Assessment Tool (SWAT) is a basin-scale, distributed hydrological model commonly used to predict the effect of management decisions on the hydrologic response of watersheds. The hydrologic response is determined by the various components of the water balance. In watersheds located in south India, as well as in several other tropical countries around the world, paddy is one of the dominant crops controlling the hydrologic response of a watershed. Hence, the suitability of SWAT for replicating the hydrology of paddy fields needs to be verified. Rice paddy fields are subjected to the flooding method of irrigation, while the irrigation subroutines in SWAT were developed to simulate crops grown under non-flooded conditions. Moreover, irrigation is represented well in field-scale models but poorly within watershed models such as SWAT. Reliable simulation of flood irrigation and of field hydrology will assist in effective water resources management of rice paddy fields, which are among the major consumers of surface and ground water resources. The current study modifies the irrigation subroutine in SWAT to simulate flooded irrigation conditions. A field water balance study was conducted on representative fields located within Gadana, a subbasin located in Tamil Nadu (southern India) and dominated by rice paddy based irrigation systems. The water balance of irrigated paddy fields simulated with SWAT was compared with the water balance derived from the rice paddy based crop growth model ORYZA. The variation in water levels, along with the soil moisture variation predicted by SWAT, was evaluated with respect to the estimates derived from ORYZA. The water levels were further validated with field-based water balance measurements taken on a daily scale. It was observed that the modified irrigation subroutine was able to simulate irrigation of rice paddy within SWAT in a realistic way compared to the existing method.

  3. Methods for developing time-series climate surfaces to drive topographically distributed energy- and water-balance models

    USGS Publications Warehouse

    Susong, D.; Marks, D.; Garen, D.

    1999-01-01

    Topographically distributed energy- and water-balance models can accurately simulate both the development and melting of a seasonal snowcover in the mountain basins. To do this they require time-series climate surfaces of air temperature, humidity, wind speed, precipitation, and solar and thermal radiation. If data are available, these parameters can be adequately estimated at time steps of one to three hours. Unfortunately, climate monitoring in mountain basins is very limited, and the full range of elevations and exposures that affect climate conditions, snow deposition, and melt is seldom sampled. Detailed time-series climate surfaces have been successfully developed using limited data and relatively simple methods. We present a synopsis of the tools and methods used to combine limited data with simple corrections for the topographic controls to generate high temporal resolution time-series images of these climate parameters. Methods used include simulations, elevational gradients, and detrended kriging. The generated climate surfaces are evaluated at points and spatially to determine if they are reasonable approximations of actual conditions. Recommendations are made for the addition of critical parameters and measurement sites into routine monitoring systems in mountain basins.

  4. Two-way coupling of magnetohydrodynamic simulations with embedded particle-in-cell simulations

    NASA Astrophysics Data System (ADS)

    Makwana, K. D.; Keppens, R.; Lapenta, G.

    2017-12-01

    We describe a method for coupling an embedded domain in a magnetohydrodynamic (MHD) simulation with a particle-in-cell (PIC) method. In this two-way coupling we follow the work of Daldorff et al. (2014) [19] in which the PIC domain receives its initial and boundary conditions from MHD variables (MHD to PIC coupling) while the MHD simulation is updated based on the PIC variables (PIC to MHD coupling). This method can be useful for simulating large plasma systems, where kinetic effects captured by particle-in-cell simulations are localized but affect global dynamics. We describe the numerical implementation of this coupling, its time-stepping algorithm, and its parallelization strategy, emphasizing the novel aspects of it. We test the stability and energy/momentum conservation of this method by simulating a steady-state plasma. We test the dynamics of this coupling by propagating plasma waves through the embedded PIC domain. Coupling with MHD shows satisfactory results for the fast magnetosonic wave, but significant distortion for the circularly polarized Alfvén wave. Coupling with Hall-MHD shows excellent coupling for the whistler wave. We also apply this methodology to simulate a Geospace Environmental Modeling (GEM) challenge type of reconnection with the diffusion region simulated by PIC coupled to larger scales with MHD and Hall-MHD. In both these cases we see the expected signatures of kinetic reconnection in the PIC domain, implying that this method can be used for reconnection studies.

  5. The role of the antecedent soil moisture condition on the distributed hydrologic modelling of the Toce alpine basin floods.

    NASA Astrophysics Data System (ADS)

    Ravazzani, G.; Montaldo, N.; Mancini, M.; Rosso, R.

    2003-04-01

    Event-based hydrologic models need the antecedent soil moisture condition as a critical initial boundary condition for flood simulation. Land-surface models (LSMs) have been developed to simulate mass and energy transfers and to update the soil moisture condition through time from the solution of water and energy balance equations. They have recently been used in distributed hydrologic modeling for flood prediction systems. Recent developments have made LSMs more complex through the inclusion of more processes and controlling variables, increasing the number of parameters and the uncertainty of their estimates. This has also increased the computational burden and parameterization of the distributed hydrologic models. In this study we investigate: 1) the role of soil moisture initial conditions in the modeling of Alpine basin floods; 2) the adequate complexity level of LSMs for the distributed hydrologic modeling of Alpine basin floods. The Toce basin is the case study; it is located in northern Piedmont (Italian Alps) and has a total drainage area of 1534 km² at the Candoglia section. Three distributed hydrologic models of different levels of complexity are developed and compared: two (TDLSM and SDLSM) are continuous models, and one (FEST02) is an event model based on the simplified SCS-CN method for rainfall abstractions. In the TDLSM model, a two-layer LSM computes both saturation and infiltration excess runoff and simulates the evolution of the water table spatial distribution using the topographic index; in the SDLSM model, a simplified one-layer distributed LSM computes only Hortonian runoff and does not simulate the water table dynamics. All three hydrologic models simulate surface runoff propagation with the Muskingum-Cunge method. The TDLSM and SDLSM models have been applied for the two-year (1996 and 1997) simulation period, during which two major floods occurred, in November 1996 and June 1997. The models have been calibrated and tested by comparing simulated and observed hydrographs at Candoglia. Sensitivity analyses of the models to significant LSM parameters were also performed. The performances of the three models in the simulation of the two major floods are compared. Interestingly, the results indicate that the SDLSM model predicts the major floods of this Alpine basin sufficiently well; indeed, this model is a good compromise between the over-parameterized and overly complex TDLSM model and the over-simplified FEST02 model.

  6. Numerical simulation of two-dimensional heat transfer in composite bodies with application to de-icing of aircraft components. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Chao, D. F. K.

    1983-01-01

    Transient numerical simulations of the de-icing of composite aircraft components by electrothermal heating were performed for a two-dimensional rectangular geometry. The implicit Crank-Nicolson formulation was used to ensure stability of the finite-difference heat conduction equations, and the phase change in the ice layer was simulated using the enthalpy method. The Gauss-Seidel point iterative method was used to solve the system of difference equations. Numerical solutions illustrating de-icer performance for various composite aircraft structures and environmental conditions are presented. Comparisons are made with previous studies. The simulation can also be used to solve a variety of other heat conduction problems involving composite bodies.
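
    A minimal sketch of the enthalpy method for the phase change in the ice layer is given below; for brevity it uses an explicit step with uniform properties instead of the report's implicit Crank-Nicolson formulation for composite bodies, so it only illustrates how temperature is recovered from enthalpy across the melting plateau.

    ```python
    import numpy as np

    def enthalpy_step(H, dx, dt, rho, k, c, Lf, T_melt=0.0):
        """One explicit step of the 1D enthalpy method for a melting layer.

        H is the volumetric enthalpy (J/m^3) relative to solid at T_melt; recovering
        temperature from H handles the latent-heat plateau without tracking the front.
        Stability requires roughly dt <= rho*c*dx**2 / (2*k).
        """
        # enthalpy -> temperature: solid below 0, liquid above rho*Lf, mushy in between
        T = T_melt + np.where(H < 0.0, H / (rho * c),
                     np.where(H > rho * Lf, (H - rho * Lf) / (rho * c), 0.0))
        H_new = H.copy()
        # conductive update of interior nodes (insulated ends, for simplicity)
        H_new[1:-1] += dt * k * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
        return H_new
    ```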

  7. Bridging the Transition from Mesoscale to Microscale Turbulence in Numerical Weather Prediction Models

    NASA Astrophysics Data System (ADS)

    Muñoz-Esparza, Domingo; Kosović, Branko; Mirocha, Jeff; van Beeck, Jeroen

    2014-12-01

    With a focus towards developing multiscale capabilities in numerical weather prediction models, the specific problem of the transition from the mesoscale to the microscale is investigated. For that purpose, idealized one-way nested mesoscale to large-eddy simulation (LES) experiments were carried out using the Weather Research and Forecasting model framework. It is demonstrated that switching from one-dimensional turbulent diffusion in the mesoscale model to three-dimensional LES mixing does not necessarily result in an instantaneous development of turbulence in the LES domain. On the contrary, very large fetches are needed for the natural transition to turbulence to occur. The computational burden imposed by these long fetches necessitates the development of methods to accelerate the generation of turbulence on a nested LES domain forced by a smooth mesoscale inflow. To that end, four new methods based upon finite-amplitude perturbations of the potential temperature field along the LES inflow boundaries are developed and investigated under convective conditions. Each method accelerated the development of turbulence within the LES domain, with two of the methods resulting in a rapid generation of production- and inertial-range energy content associated with microscales that is consistent with non-nested simulations using periodic boundary conditions. The cell perturbation approach, the simplest and most efficient of the best performing methods, was investigated further under neutral and stable conditions. Successful results were obtained in all the regimes, where satisfactory agreement of mean velocity, variances and turbulent fluxes, as well as velocity and temperature spectra, was achieved with reference non-nested simulations. In contrast, the non-perturbed LES solution exhibited important energy deficits associated with a delayed establishment of fully developed turbulence. The cell perturbation method has negligible computational cost, significantly accelerates the generation of realistic turbulence, and requires minimal parameter tuning, with the necessary information relatable to mean inflow conditions provided by the mesoscale solution.
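
    A minimal sketch of the cell perturbation idea, finite-amplitude potential-temperature perturbations applied cell by cell near the nested inflow boundary, is shown below; the cell size, buffer width and amplitude are illustrative assumptions rather than the tuned values from the study.

    ```python
    import numpy as np

    def add_cell_perturbations(theta, cell_size=8, n_buffer_cells=3, amplitude=0.5, seed=0):
        """Seed turbulence on a nested LES domain by perturbing potential temperature.

        theta : 2D (y, x) potential-temperature slice; the inflow is the x = 0 side.
        Each cell of cell_size x cell_size grid points in a strip of n_buffer_cells
        cells along the inflow receives one random perturbation of up to +/- amplitude K.
        """
        rng = np.random.default_rng(seed)
        ny, nx = theta.shape
        width = min(n_buffer_cells * cell_size, nx)
        out = theta.copy()
        for j0 in range(0, ny, cell_size):
            for i0 in range(0, width, cell_size):
                out[j0:j0 + cell_size, i0:i0 + cell_size] += rng.uniform(-amplitude, amplitude)
        return out
    ```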

  8. Methods of the aerodynamical experiments with simulation of massflow-traction ratio of the power unit

    NASA Astrophysics Data System (ADS)

    Lokotko, A. V.

    2016-10-01

    Modeling the massflow-traction characteristics of the power unit (PU) is of interest both for studying the aerodynamic characteristics (ADC) of aircraft models with full dynamic similarity and for studying the interference effects of the PU. These studies require a number of methods: 1) a method for delivering the high-pressure working gas for the model-engine jets to the sensitive (metric) part of the aerodynamic balance; 2) a method for estimating the accuracy and reliability of measuring the thrust generated by the jet device; 3) a method for implementing the PU simulator while modeling the external contours of the nacelle and the conditions at the inlet and outlet; 4) a method for determining the thrust of the PU simulator; 5) a method for determining the interference effect of the operating power unit on the ADC of the model; 6) a method for producing the hot jets of jet engines. The paper examines the methodology implemented at ITAM as applied to testing in the T-313 supersonic wind tunnel.

  9. Reservoir property grids improve with geostatistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vogt, J.

    1993-09-01

    Visualization software, reservoir simulators and many other E and P software applications need reservoir property grids as input. Using geostatistics, as compared to other gridding methods, to produce these grids leads to the best output from these software programs. For the purpose stated here, geostatistics amounts to two types of gridding methods. Mathematically, these methods are based on minimizing or duplicating certain statistical properties of the input data. One geostatistical method, called kriging, is used when the highest possible point-by-point accuracy is desired. The other method, called conditional simulation, is used when one wants the statistics and texture of the resulting grid to be the same as those of the input data. In the following discussion, each method is explained, compared to other gridding methods, and illustrated through example applications. Proper use of geostatistical data in flow simulations, use of geostatistical data for history matching, and situations where geostatistics has no significant advantage over other methods will also be covered.
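
    The two gridding methods can be illustrated with a Gaussian-process analogue: GP regression with an RBF kernel behaves like simple kriging with a Gaussian variogram, and sampling from the GP posterior plays the role of conditional simulation. The data values, length scale and one-dimensional setting are assumptions of this sketch, not an oil-industry workflow.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # sparse "well" observations of a reservoir property along one coordinate (m)
    x_obs = np.array([[0.0], [120.0], [300.0], [410.0], [560.0]])
    z_obs = np.array([0.18, 0.22, 0.15, 0.20, 0.17])          # e.g. porosity

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=80.0), alpha=1e-6)
    gp.fit(x_obs, z_obs)

    x_grid = np.linspace(0.0, 600.0, 121).reshape(-1, 1)

    # kriging-style estimate: the smooth, point-by-point most accurate grid
    z_kriged = gp.predict(x_grid)

    # conditional-simulation-style realizations: honor the data while keeping
    # the short-scale texture implied by the covariance model
    z_realizations = gp.sample_y(x_grid, n_samples=3, random_state=1)
    ```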

  10. Estimating pathway-specific contributions to biodegradation in aquifers based on dual isotope analysis: theoretical analysis and reactive transport simulations.

    PubMed

    Centler, Florian; Heße, Falk; Thullner, Martin

    2013-09-01

    At field sites with varying redox conditions, different redox-specific microbial degradation pathways contribute to total contaminant degradation. The identification of pathway-specific contributions to total contaminant removal is of high practical relevance, yet difficult to achieve with current methods. Current stable-isotope-fractionation-based techniques focus on the identification of dominant biodegradation pathways under constant environmental conditions. We present an approach based on dual stable isotope data to estimate the individual contributions of two redox-specific pathways. We apply this approach to carbon and hydrogen isotope data obtained from reactive transport simulations of an organic contaminant plume in a two-dimensional aquifer cross section to test the applicability of the method. To take aspects typically encountered at field sites into account, additional simulations addressed the effects of transverse mixing, diffusion-induced stable-isotope fractionation, heterogeneities in the flow field, and mixing in sampling wells on isotope-based estimates for aerobic and anaerobic pathway contributions to total contaminant biodegradation. Results confirm the general applicability of the presented estimation method which is most accurate along the plume core and less accurate towards the fringe where flow paths receive contaminant mass and associated isotope signatures from the core by transverse dispersion. The presented method complements the stable-isotope-fractionation-based analysis toolbox. At field sites with varying redox conditions, it provides a means to identify the relative importance of individual, redox-specific degradation pathways.

  11. A penalty-based nodal discontinuous Galerkin method for spontaneous rupture dynamics

    NASA Astrophysics Data System (ADS)

    Ye, R.; De Hoop, M. V.; Kumar, K.

    2017-12-01

    Numerical simulation of dynamic rupture processes with slip is critical to understanding the earthquake source process and the generation of ground motions. However, it can be challenging due to the nonlinear friction laws interacting with seismicity, coupled with the discontinuous boundary conditions across the rupture plane. In practice, inhomogeneities in topography, fault geometry, elastic parameters and permeability add extra complexity. We develop a nodal discontinuous Galerkin method to simulate seismic wave phenomena with slipping boundary conditions, including fluid-solid boundaries and ruptures. By introducing a novel penalty flux, we avoid solving Riemann problems on interfaces, which makes our method applicable to general anisotropic and poro-elastic materials. Based on unstructured tetrahedral meshes in 3D, the code can capture various geometries in the geological model and uses polynomial expansion to achieve high-order accuracy. We treat the rate-and-state friction law in the spontaneous rupture dynamics as part of a nonlinear transmitting boundary condition, which is weakly enforced across the fault surface as a numerical flux. An iterative coupling scheme is developed based on implicit time stepping, containing a constrained optimization process that accounts for the nonlinear part. To validate the method, we prove the convergence of the coupled system with error estimates. We test our algorithm on a well-established numerical example (TPV102) of the SCEC/USGS Spontaneous Rupture Code Verification Project, and benchmark against simulations with PyLith and SPECFEM3D, with agreeable results.

  12. Program Code Generator for Cardiac Electrophysiology Simulation with Automatic PDE Boundary Condition Handling

    PubMed Central

    Punzalan, Florencio Rusty; Kunieda, Yoshitoshi; Amano, Akira

    2015-01-01

    Clinical and experimental studies involving human hearts can have certain limitations. Methods such as computer simulations can be an important alternative or supplemental tool. Physiological simulation at the tissue or organ level typically involves the handling of partial differential equations (PDEs). Boundary conditions and distributed parameters, such as those used in pharmacokinetics simulation, add to the complexity of the PDE solution. These factors can tailor PDE solutions and their corresponding program code to specific problems. Boundary condition and parameter changes in the customized code are usually prone to errors and time-consuming. We propose a general approach for handling PDEs and boundary conditions in computational models using a replacement scheme for discretization. This study is an extension of a program generator that we introduced in a previous publication. The program generator can generate code for multi-cell simulations of cardiac electrophysiology. Improvements to the system allow it to handle simultaneous equations in the biological function model as well as implicit PDE numerical schemes. The replacement scheme involves substituting all partial differential terms with numerical solution equations. Once the model and boundary equations are discretized with the numerical solution scheme, instances of the equations are generated to undergo dependency analysis. The result of the dependency analysis is then used to generate the program code. The resulting program code is in the Java or C programming language. To validate the automatic handling of boundary conditions in the program code generator, we generated simulation code using the FHN, Luo-Rudy 1, and Hund-Rudy cell models and ran cell-to-cell coupling and action potential propagation simulations. One of the simulations is based on a published experiment and simulation results are compared with the experimental data. We conclude that the proposed program code generator can be used to generate code for physiological simulations and provides a tool for studying cardiac electrophysiology. PMID:26356082
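
    As a hedged illustration of the replacement idea described above (not the generator's output), the sketch below substitutes the second-order spatial derivative of a monodomain cable equation with its central-difference stencil and handles a no-flux boundary by mirroring ghost nodes; FitzHugh-Nagumo kinetics stand in for a cell model and all parameter values are illustrative:

      import numpy as np

      nx, dx, dt, steps = 100, 0.1, 0.01, 2000
      diff, a, b, eps = 0.1, 0.7, 0.8, 0.08       # diffusivity and FHN parameters (illustrative)

      v = np.zeros(nx)                            # membrane potential
      w = np.zeros(nx)                            # recovery variable
      v[:5] = 1.0                                 # stimulate the left end

      for _ in range(steps):
          # replace d2v/dx2 by its second-order central-difference stencil
          vp = np.empty(nx + 2)
          vp[1:-1] = v
          vp[0], vp[-1] = v[1], v[-2]             # mirrored ghost nodes: no-flux boundary
          lap = (vp[:-2] - 2.0 * v + vp[2:]) / dx ** 2
          dv = diff * lap + v - v ** 3 / 3.0 - w  # FitzHugh-Nagumo kinetics
          dw = eps * (v + a - b * w)
          v += dt * dv
          w += dt * dw

      print("max membrane potential after", steps, "steps:", float(v.max()))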

  13. 3D numerical simulation of transient processes in hydraulic turbines

    NASA Astrophysics Data System (ADS)

    Cherny, S.; Chirkov, D.; Bannikov, D.; Lapin, V.; Skorospelov, V.; Eshkunova, I.; Avdushenko, A.

    2010-08-01

    An approach for numerical simulation of 3D hydraulic turbine flows in transient operating regimes is presented. The method is based on a coupled solution of incompressible RANS equations, runner rotation equation, and water hammer equations. The issue of setting appropriate boundary conditions is considered in detail. As an illustration, the simulation results for runaway process are presented. The evolution of vortex structure and its effect on computed runaway traces are analyzed.

  14. Chapter 10 - Using simulation modeling to assess historical reference conditions for vegetation and fire regimes for the LANDFIRE Prototype Project

    Treesearch

    Sarah Pratt; Lisa Holsinger; Robert E. Keane

    2006-01-01

    A critical component of the Landscape Fire and Resource Management Planning Tools Prototype Project, or LANDFIRE Prototype Project, was the development of a nationally consistent method for estimating historical reference conditions for vegetation composition and structure and wildland fire regimes. These estimates of past vegetation composition and condition are used...

  15. Evaluating performance of risk identification methods through a large-scale simulation of observational data.

    PubMed

    Ryan, Patrick B; Schuemie, Martijn J

    2013-10-01

    There has been only limited evaluation of statistical methods for identifying safety risks of drug exposure in observational healthcare data. Simulations can support empirical evaluation, but have not been shown to adequately model the real-world phenomena that challenge observational analyses. To design and evaluate a probabilistic framework (OSIM2) for generating simulated observational healthcare data, and to use this data for evaluating the performance of methods in identifying associations between drug exposure and health outcomes of interest. Seven observational designs, including case-control, cohort, self-controlled case series, and self-controlled cohort designs, were applied to 399 drug-outcome scenarios in 6 simulated datasets with no effect and injected relative risks of 1.25, 1.5, 2, 4, and 10. Longitudinal data for 10 million simulated patients were generated using a model derived from an administrative claims database, with associated demographics, periods of drug exposure derived from pharmacy dispensings, and medical conditions derived from diagnoses on medical claims. Simulation validation was performed through descriptive comparison with real source data. Method performance was evaluated using Area Under ROC Curve (AUC), bias, and mean squared error. OSIM2 replicates prevalence and types of confounding observed in real claims data. When simulated data are injected with relative risks (RR) ≥ 2, all designs have good predictive accuracy (AUC > 0.90), but when RR < 2, no method achieves perfect predictive accuracy. Each method exhibits a different bias profile, which changes with the effect size. OSIM2 can support methodological research. Results from simulation suggest method operating characteristics are far from nominal properties.
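
    A minimal sketch of the headline performance metric, the area under the ROC curve, computed with the rank-based (Mann-Whitney) estimator on synthetic scores and labels; the data below are placeholders, not OSIM2 output:

      import numpy as np

      def auc(scores, labels):
          """Rank-based (Mann-Whitney) estimate of the area under the ROC curve."""
          order = np.argsort(scores)
          ranks = np.empty(scores.size)
          ranks[order] = np.arange(1, scores.size + 1)
          pos = labels == 1
          n_pos, n_neg = pos.sum(), (~pos).sum()
          return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

      rng = np.random.default_rng(0)
      labels = np.array([1] * 50 + [0] * 349)            # 50 scenarios with an injected signal
      scores = rng.normal(loc=labels * 1.5, scale=1.0)   # hypothetical method output
      print(f"AUC = {auc(scores, labels):.3f}")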

  16. DSMC simulations of shock interactions about sharp double cones

    NASA Astrophysics Data System (ADS)

    Moss, James N.

    2001-08-01

    This paper presents the results of a numerical study of shock interactions resulting from Mach 10 flow about sharp double cones. Computations are made by using the direct simulation Monte Carlo (DSMC) method of Bird. The sensitivity and characteristics of the interactions are examined by varying flow conditions, model size, and configuration. The range of conditions investigated includes those for which experiments have been or will be performed in the ONERA R5Ch low-density wind tunnel and the Calspan-University of Buffalo Research Center (CUBRC) Large Energy National Shock (LENS) tunnel.

  17. DSMC Simulations of Shock Interactions About Sharp Double Cones

    NASA Technical Reports Server (NTRS)

    Moss, James N.

    2000-01-01

    This paper presents the results of a numerical study of shock interactions resulting from Mach 10 flow about sharp double cones. Computations are made by using the direct simulation Monte Carlo (DSMC) method of Bird. The sensitivity and characteristics of the interactions are examined by varying flow conditions, model size, and configuration. The range of conditions investigated includes those for which experiments have been or will be performed in the ONERA R5Ch low-density wind tunnel and the Calspan-University of Buffalo Research Center (CUBRC) Large Energy National Shock (LENS) tunnel.

  18. Evaluation of the Inertial Response of Variable-Speed Wind Turbines Using Advanced Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholbrock, Andrew K; Muljadi, Eduard; Gevorgian, Vahan

    In this paper, we focus on the temporary frequency support effect provided by wind turbine generators (WTGs) through the inertial response. With the implemented inertial control methods, the WTG is capable of increasing its active power output by releasing parts of the stored kinetic energy when the frequency excursion occurs. The active power can be boosted temporarily above the maximum power points, but the rotor speed deceleration follows and an active power output deficiency occurs during the restoration of rotor kinetic energy. We evaluate and compare the inertial response induced by two distinct inertial control methods using advanced simulation. In the first stage, the proposed inertial control methods are analyzed in offline simulation. Using an advanced wind turbine simulation program, FAST with TurbSim, the response of the researched wind turbine is comprehensively evaluated under turbulent wind conditions, and the impact on the turbine mechanical components is assessed. In the second stage, the inertial control is deployed on a real 600-kW wind turbine, the 3-bladed Controls Advanced Research Turbine (CART3), which further verifies the inertial control through a hardware-in-the-loop (HIL) simulation. Various inertial control methods can be effectively evaluated based on the proposed two-stage simulation platform, which combines the offline simulation and real-time HIL simulation. The simulation results also provide insight into designing inertial control for WTGs.
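
    A hypothetical sketch of one common class of inertial control for WTGs, a supplementary power command proportional to the rate of change of grid frequency; the gain, the frequency trace, and the power levels are illustrative and are not taken from the paper:

      import numpy as np

      dt = 0.02                                      # controller time step (s)
      t = np.arange(0.0, 10.0, dt)
      freq = 60.0 - 0.3 * (1.0 - np.exp(-t / 2.0))   # synthetic frequency excursion (Hz)

      k_in = 50.0                                    # inertial gain, kW per (Hz/s) -- placeholder
      p_mppt = 400.0                                 # steady maximum-power-point output (kW)

      dfdt = np.gradient(freq, dt)                   # rate of change of frequency
      p_cmd = p_mppt - k_in * dfdt                   # boost power while frequency is falling

      print(f"peak supplementary power: {p_cmd.max() - p_mppt:.1f} kW")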

  19. Enhanced conformational sampling of nucleic acids by a new Hamiltonian replica exchange molecular dynamics approach.

    PubMed

    Curuksu, Jeremy; Zacharias, Martin

    2009-03-14

    Although molecular dynamics (MD) simulations have been applied frequently to study flexible molecules, the sampling of conformational states separated by barriers is limited due to currently possible simulation time scales. Replica-exchange (Rex)MD simulations that allow for exchanges between simulations performed at different temperatures (T-RexMD) can achieve improved conformational sampling. However, in the case of T-RexMD the computational demand grows rapidly with system size. A Hamiltonian RexMD method that specifically enhances coupled dihedral angle transitions has been developed. The method employs added biasing potentials as replica parameters that destabilize available dihedral substates and was applied to study coupled dihedral transitions in nucleic acid molecules. The biasing potentials can be either fixed at the beginning of the simulation or optimized during an equilibration phase. The method was extensively tested and compared to conventional MD simulations and T-RexMD simulations on an adenine dinucleotide system and on a DNA abasic site. The biasing potential RexMD method showed improved sampling of conformational substates compared to conventional MD simulations similar to T-RexMD simulations but at a fraction of the computational demand. It is well suited to study systematically the fine structure and dynamics of large nucleic acids under realistic conditions including explicit solvent and ions and can be easily extended to other types of molecules.
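
    A hedged sketch of the Metropolis criterion that governs configuration swaps in a Hamiltonian replica exchange at a single temperature; the Gaussian-bump bias used here is purely illustrative and is not the biasing potential of the paper:

      import math
      import random

      beta = 1.0 / (0.0019872 * 300.0)    # 1/(kB*T) in (kcal/mol)^-1 at 300 K

      def bias(phi, strength):
          # illustrative biasing potential: a bump that destabilizes the substate near phi = 0
          return strength * math.exp(-phi ** 2 / 0.5)

      def accept_swap(phi_i, phi_j, k_i, k_j):
          # Metropolis criterion for exchanging configurations between replicas i and j,
          # which share the temperature but carry different bias strengths k_i and k_j
          du = (bias(phi_j, k_i) + bias(phi_i, k_j)
                - bias(phi_i, k_i) - bias(phi_j, k_j))
          return True if du <= 0.0 else random.random() < math.exp(-beta * du)

      random.seed(1)
      print(accept_swap(phi_i=0.2, phi_j=1.1, k_i=0.0, k_j=2.0))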

  20. Microscale Modeling of Porous Thermal Protection System Materials

    NASA Astrophysics Data System (ADS)

    Stern, Eric C.

    Ablative thermal protection system (TPS) materials play a vital role in the design of entry vehicles. Most simulation tools for ablative TPS in use today take a macroscopic approach to modeling, which involves heavy empiricism. Recent work has suggested improving the fidelity of the simulations by taking a multi-scale approach to the physics of ablation. In this work, a new approach for modeling ablative TPS at the microscale is proposed, and its feasibility and utility are assessed. This approach uses the Direct Simulation Monte Carlo (DSMC) method to simulate the gas flow through the microstructure, as well as the gas-surface interaction. Application of the DSMC method to this problem allows the gas phase dynamics---which are often rarefied---to be modeled to a high degree of fidelity. Furthermore, this method allows for sophisticated gas-surface interaction models to be implemented. In order to test this approach for realistic materials, a method for generating artificial microstructures which emulate those found in spacecraft TPS is developed. Additionally, a novel approach for allowing the surface to move under the influence of chemical reactions at the surface is developed. This approach is shown to be efficient and robust for performing coupled simulation of the oxidation of carbon fibers. The microscale modeling approach is first applied to simulating the steady flow of gas through the porous medium. Predictions of Darcy permeability for an idealized microstructure agree with empirical correlations from the literature, as well as with predictions from computational fluid dynamics (CFD) when the continuum assumption is valid. Expected departures are observed for conditions at which the continuum assumption no longer holds. Comparisons of simulations using a fabricated microstructure to experimental data for a real spacecraft TPS material show good agreement when similar microstructural parameters are used to build the geometry. The approach is then applied to investigating the ablation of porous materials through oxidation. A simple gas surface interaction model is described, and an approach for coupling the surface reconstruction algorithm to the DSMC method is outlined. Simulations of single carbon fibers at representative conditions suggest this approach to be feasible for simulating the ablation of porous TPS materials at scale. Additionally, the effect of various simulation parameters on in-depth morphology is investigated for random fibrous microstructures.

  1. Simulation of Structural Transformations in Heating of Alloy Steel

    NASA Astrophysics Data System (ADS)

    Kurkin, A. S.; Makarov, E. L.; Kurkin, A. B.; Rubtsov, D. E.; Rubtsov, M. E.

    2017-07-01

    A mathematical model for computer simulation of structural transformations in an alloy steel under the conditions of the thermal cycle of multipass welding is presented. The austenitic transformation under the heating and the processes of decomposition of bainite and martensite under repeated heating are considered. A method for determining the necessary temperature-time parameters of the model from the chemical composition of the steel is described. Published data are processed and the results used to derive regression models of the temperature ranges and parameters of transformation kinetics of alloy steels. The method developed is used in computer simulation of the process of multipass welding of pipes by the finite-element method.

  2. Development of a Hybrid RANS/LES Method for Compressible Mixing Layer Simulations

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nicholas J.; Alexander, J. Iwan D.; Reshotko, Eli

    2001-01-01

    A hybrid method has been developed for simulations of compressible turbulent mixing layers. Such mixing layers dominate the flows in exhaust systems of modern-day aircraft and also those of hypersonic vehicles currently under development. The hybrid method uses a Reynolds-averaged Navier-Stokes (RANS) procedure to calculate wall bounded regions entering a mixing section, and a Large Eddy Simulation (LES) procedure to calculate the mixing dominated regions. A numerical technique was developed to enable the use of the hybrid RANS/LES method on stretched, non-Cartesian grids. The hybrid RANS/LES method is applied to a benchmark compressible mixing layer experiment. Preliminary two-dimensional calculations are used to investigate the effects of axial grid density and boundary conditions. Actual LES calculations, performed in three spatial directions, indicated an initial vortex shedding followed by rapid transition to turbulence, which is in agreement with experimental observations.

  3. Simulating the IPOD, East Asian summer monsoon, and their relationships in CMIP5

    NASA Astrophysics Data System (ADS)

    Yu, Miao; Li, Jianping; Zheng, Fei; Wang, Xiaofan; Zheng, Jiayu

    2018-03-01

    This paper evaluates the simulation performance of the 37 coupled models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) with respect to the East Asian summer monsoon (EASM) and the Indo-Pacific warm pool and North Pacific Ocean dipole (IPOD) and also the interrelationships between them. The results show that the majority of the models are unable to accurately simulate the interannual variability and long-term trends of the EASM, and their simulations of the temporal and spatial variations of the IPOD are also limited. Further analysis showed that the correlation coefficients between the simulated and observed EASM index (EASMI) are proportional to those between the simulated and observed IPOD index (IPODI); that is, if a model has skill in simulating one of them, it is likely to simulate the other well. Based on the above relationship, this paper proposes a conditional multi-model ensemble method (CMME) that excludes models that lack the capability to simulate the IPOD and EASM when calculating the multi-model ensemble (MME). The analysis shows that, compared with the MME, this CMME method can significantly improve the simulations of the spatial and temporal variations of both the IPOD and EASM as well as their interrelationship, suggesting the potential for the CMME approach to be used in place of the MME method.
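
    A minimal sketch of the CMME idea under stated assumptions: models whose simulated index correlates with observations below a skill threshold are excluded before averaging. The synthetic model series and the 0.3 threshold are placeholders, not values from the paper:

      import numpy as np

      rng = np.random.default_rng(42)
      years = 30
      obs = rng.standard_normal(years)                       # observed index (synthetic)
      models = {f"model_{i:02d}": 0.6 * obs + rng.standard_normal(years)
                if i % 3 else rng.standard_normal(years)     # every third model has no skill
                for i in range(12)}

      def cmme(models, obs, r_min=0.3):
          # keep only models whose simulated index correlates with observations
          skilled = {name: sim for name, sim in models.items()
                     if np.corrcoef(sim, obs)[0, 1] >= r_min}
          return np.mean(list(skilled.values()), axis=0), sorted(skilled)

      mme = np.mean(list(models.values()), axis=0)
      cmme_mean, kept = cmme(models, obs)
      print("models retained:", len(kept))
      print("corr(MME, obs) =", round(np.corrcoef(mme, obs)[0, 1], 2),
            "| corr(CMME, obs) =", round(np.corrcoef(cmme_mean, obs)[0, 1], 2))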

  4. 14 CFR 27.725 - Limit drop test.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Each landing gear unit must be tested in the attitude simulating the landing condition that is most... rotorcraft in the most critical attitude. A rational method may be used in computing a main gear static... with the rotorcraft in the maximum nose-up attitude considered in the nose-up landing conditions. h...

  5. Cubic spline anchored grid pattern algorithm for high-resolution detection of subsurface cavities by the IR-CAT method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kassab, A.J.; Pollard, J.E.

    An algorithm is presented for the high-resolution detection of irregular-shaped subsurface cavities within irregular-shaped bodies by the IR-CAT method. The theoretical basis of the algorithm is rooted in the solution of an inverse geometric steady-state heat conduction problem. A Cauchy boundary condition is prescribed at the exposed surface, and the inverse geometric heat conduction problem is formulated by specifying the thermal condition at the inner cavity walls, whose unknown geometries are to be detected. The location of the inner cavities is initially estimated, and the domain boundaries are discretized. Linear boundary elements are used in conjunction with cubic splines for high resolution of the cavity walls. An anchored grid pattern (AGP) is established to constrain the cubic spline knots that control the inner cavity geometry to evolve along the AGP at each iterative step. A residual is defined to measure the difference between imposed and computed boundary conditions. A Newton-Raphson method with a Broyden update is used to automate the detection of inner cavity walls. During the iterative procedure, the movement of the inner cavity walls is restricted to physically realistic intermediate solutions. Numerical simulation demonstrates the superior resolution of the cubic spline AGP algorithm over the linear spline-based AGP in the detection of an irregular-shaped cavity. Numerical simulation is also used to test the sensitivity of the linear and cubic spline AGP algorithms by simulating bias and random error in measured surface temperature. The proposed AGP algorithm is shown to satisfactorily detect cavities with these simulated data.
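
    A hedged sketch of a Newton-Raphson iteration with Broyden's rank-one Jacobian update, the kind of driver used to null the residual between imposed and computed boundary conditions; the residual function here is a generic stand-in rather than the boundary-element residual of the IR-CAT algorithm:

      import numpy as np

      def fd_jacobian(residual, x, h=1e-6):
          # one-sided finite-difference Jacobian used to start the iteration
          f0 = residual(x)
          J = np.empty((x.size, x.size))
          for k in range(x.size):
              xp = x.copy()
              xp[k] += h
              J[:, k] = (residual(xp) - f0) / h
          return J

      def broyden_solve(residual, x0, tol=1e-10, max_iter=50):
          x = np.asarray(x0, dtype=float)
          B = fd_jacobian(residual, x)        # initial Jacobian approximation
          f = residual(x)
          for _ in range(max_iter):
              dx = np.linalg.solve(B, -f)     # Newton-like step
              x_new = x + dx
              f_new = residual(x_new)
              if np.linalg.norm(f_new) < tol:
                  return x_new
              # Broyden rank-one update: B <- B + (df - B dx) dx^T / (dx . dx)
              B += np.outer(f_new - f - B @ dx, dx) / (dx @ dx)
              x, f = x_new, f_new
          return x

      residual = lambda v: np.array([v[0] ** 2 - 1.0, v[0] * v[1] - 2.0])
      print(broyden_solve(residual, [2.0, 3.0]))    # converges to the root (1, 2)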

  6. Numerical simulation of controlled directional solidification under microgravity conditions

    NASA Astrophysics Data System (ADS)

    Holl, S.; Roos, D.; Wein, J.

    The computer-assisted simulation of solidification processes influenced by gravity has gained importance in recent years in both ground-based and microgravity research. Depending on the specific needs of the investigator, the simulation model ideally covers a broad spectrum of applications. These primarily include the optimization of furnace design in interaction with selected process parameters to meet the desired crystallization conditions. Different approaches concerning the complexity of the simulation models as well as their dedicated applications will be discussed in this paper. Special emphasis will be put on the potential of software tools to increase the scientific quality and cost-efficiency of microgravity experimentation. The results gained so far in the context of TEXUS, FSLP, D-1 and D-2 (preparatory program) experiments will be discussed, highlighting their simulation-supported preparation and evaluation. An outlook will then be given on the possibilities for enhancing the efficiency of pre-industrial research in the Columbus era through the incorporation of suitable simulation methods and tools.

  7. Dynamical Core in Atmospheric Model Does Matter in the Simulation of Arctic Climate

    NASA Astrophysics Data System (ADS)

    Jun, Sang-Yoon; Choi, Suk-Jin; Kim, Baek-Min

    2018-03-01

    Climate models using different dynamical cores can simulate significantly different winter Arctic climates even if equipped with virtually the same physics schemes. The current climate simulated by a global climate model using a cubed-sphere grid with the spectral element method (SE core) exhibited significantly warmer Arctic surface air temperatures than that using a latitude-longitude grid with a finite volume method core. Compared to the finite volume core, the SE core simulated additional adiabatic warming in the Arctic lower atmosphere, and this was consistent with the eddy-forced secondary circulation. Downward longwave radiation further enhanced Arctic near-surface warming, raising the surface air temperature by about 1.9 K. Furthermore, in the atmospheric response to the reduced sea ice conditions with the same physical settings, only the SE core showed a robust cooling response over North America. We emphasize that special attention is needed in selecting the dynamical core of climate models in the simulation of the Arctic climate and associated teleconnection patterns.

  8. Simulations of Coulomb systems with slab geometry using an efficient 3D Ewald summation method

    NASA Astrophysics Data System (ADS)

    dos Santos, Alexandre P.; Girotto, Matheus; Levin, Yan

    2016-04-01

    We present a new approach to efficiently simulate electrolytes confined between infinite charged walls using a 3d Ewald summation method. The optimal performance is achieved by separating the electrostatic potential produced by the charged walls from the electrostatic potential of electrolyte. The electric field produced by the 3d periodic images of the walls is constant inside the simulation cell, with the field produced by the transverse images of the charged plates canceling out. The non-neutral confined electrolyte in an external potential can be simulated using 3d Ewald summation with a suitable renormalization of the electrostatic energy, to remove a divergence, and a correction that accounts for the conditional convergence of the resulting lattice sum. The new algorithm is at least an order of magnitude more rapid than the usual simulation methods for the slab geometry and can be further sped up by adopting a particle-particle particle-mesh approach.
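
    As a hedged illustration of the kind of correction referred to above, the standard Yeh-Berkowitz-style planar dipole term for a charge-neutral slab treated with a 3D Ewald sum is shown below (Gaussian units); this is an assumption-laden stand-in, not necessarily the exact renormalization derived in the paper:

      import numpy as np

      def slab_dipole_correction(charges, z_coords, volume):
          """Energy correction 2*pi*M_z**2/V for a charge-neutral slab system (Gaussian units)."""
          m_z = np.sum(charges * z_coords)          # net dipole moment along the confined direction
          return 2.0 * np.pi * m_z ** 2 / volume

      q = np.array([+1.0, -1.0, +1.0, -1.0])        # toy charges
      z = np.array([1.0, 2.5, 7.0, 8.5])            # z positions inside the slab
      print(slab_dipole_correction(q, z, volume=10.0 * 10.0 * 30.0))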

  9. Molecular simulations of Crussard curves of detonation product mixtures at chemical equilibrium: Microscopic calculation of the Chapman-Jouguet state

    NASA Astrophysics Data System (ADS)

    Bourasseau, Emeric; Dubois, Vincent; Desbiens, Nicolas; Maillet, Jean-Bernard

    2007-06-01

    The simultaneous use of the Reaction Ensemble Monte Carlo (ReMC) method and the Adaptative Erpenbeck EOS (AE-EOS) method allows us to calculate directly the thermodynamic and chemical equilibrium of a mixture on the Hugoniot curve. The ReMC method allows the detonation products to reach chemical equilibrium, and the AE-EOS method constrains the system to satisfy the Hugoniot relation. Once the Crussard curve of detonation products has been established, CJ state properties may be calculated. An additional NPT simulation is performed at CJ conditions in order to compute derivative thermodynamic quantities such as Cp, Cv, the Grüneisen gamma, the sound velocity, and the compressibility factor. Several explosives have been studied, including PETN, nitromethane, tetranitromethane, and hexanitroethane. In these first simulations, solid carbon, where present, is treated using an EOS.

  10. Gas-injection-start and shutdown characteristics of a 2-kilowatt to 15-kilowatt Brayton power system

    NASA Technical Reports Server (NTRS)

    Cantoni, D. A.

    1972-01-01

    Two methods of starting the Brayton power system have been considered: (1) using the alternator as a motor to spin the Brayton rotating unit (BRU), and (2) spinning the BRU by forced gas injection. The first method requires the use of an auxiliary electrical power source. An alternating voltage is applied to the terminals of the alternator to drive it as an induction motor. Only gas-injection starts are discussed in this report. The gas-injection starting method requires high-pressure gas storage and valves to route the gas flow to provide correct BRU rotation. An analog computer simulation was used to size hardware and to determine safe start and shutdown procedures. The simulation was also used to define the range of conditions for successful startups. Experimental data were also obtained under various test conditions. These data verify the validity of the start and shutdown procedures.

  11. On Efficient Multigrid Methods for Materials Processing Flows with Small Particles

    NASA Technical Reports Server (NTRS)

    Thomas, James (Technical Monitor); Diskin, Boris; Harik, VasylMichael

    2004-01-01

    Multiscale modeling of materials requires simulations of multiple levels of structural hierarchy. The computational efficiency of numerical methods becomes a critical factor for simulating large physical systems with highly disparate length scales. Multigrid methods are known for their superior efficiency in representing/resolving different levels of physical details. The efficiency is achieved by interactively employing different discretizations on different scales (grids). To assist optimization of manufacturing conditions for materials processing with numerous particles (e.g., dispersion of particles, controlling flow viscosity and clusters), a new multigrid algorithm has been developed for a case of multiscale modeling of flows with small particles that have various length scales. The optimal efficiency of the algorithm is crucial for accurate predictions of the effect of processing conditions (e.g., pressure and velocity gradients) on the local flow fields that control the formation of various microstructures or clusters.

  12. A new technique for simulating composite material

    NASA Technical Reports Server (NTRS)

    Volakis, John L.

    1991-01-01

    This project dealt with the development of new methodologies and algorithms for the multi-spectrum electromagnetic characterization of large scale nonmetallic airborne vehicles and structures. A robust, low memory, and accurate methodology was developed which is particularly suited for modern machine architectures. This is a hybrid finite element method that combines two well-known numerical solution approaches: the finite element method for modeling volumes, and the boundary integral method, which yields exact boundary conditions for terminating the finite element mesh. In addition, a variety of high frequency results were generated (such as diffraction coefficients for impedance surfaces and material layers) and a class of boundary conditions was developed which holds promise for more efficient simulations. During the course of this project, nearly 25 detailed research reports were generated along with an equal number of journal papers. The reports, papers, and journal articles are listed in the appendices along with their abstracts.

  13. A Self-Alignment Algorithm for SINS Based on Gravitational Apparent Motion and Sensor Data Denoising

    PubMed Central

    Liu, Yiting; Xu, Xiaosu; Liu, Xixiang; Yao, Yiqing; Wu, Liang; Sun, Jin

    2015-01-01

    Initial alignment is always a key topic and difficult to achieve in an inertial navigation system (INS). In this paper a novel self-initial alignment algorithm is proposed using gravitational apparent motion vectors at three different moments and vector-operation. Simulation and analysis showed that this method easily suffers from the random noise contained in accelerometer measurements which are used to construct apparent motion directly. Aiming to resolve this problem, an online sensor data denoising method based on a Kalman filter is proposed and a novel reconstruction method for apparent motion is designed to avoid the collinearity among vectors participating in the alignment solution. Simulation, turntable tests and vehicle tests indicate that the proposed alignment algorithm can fulfill initial alignment of strapdown INS (SINS) under both static and swinging conditions. The accuracy can either reach or approach the theoretical values determined by sensor precision under static or swinging conditions. PMID:25923932

  14. Investigating Test Equating Methods in Small Samples through Various Factors

    ERIC Educational Resources Information Center

    Asiret, Semih; Sünbül, Seçil Ömür

    2016-01-01

    In this study, the aim was to compare equating methods for the random groups design in small samples across factors such as sample size, difference in difficulty between forms, and guessing parameter. Moreover, which method gives better results under which conditions was also investigated. In this study, 5,000 dichotomous simulated data…

  15. Large eddy simulation of transitional flow in an idealized stenotic blood vessel: evaluation of subgrid scale models.

    PubMed

    Pal, Abhro; Anupindi, Kameswararao; Delorme, Yann; Ghaisas, Niranjan; Shetty, Dinesh A; Frankel, Steven H

    2014-07-01

    In the present study, we performed large eddy simulation (LES) of axisymmetric, and 75% stenosed, eccentric arterial models with steady inflow conditions at a Reynolds number of 1000. The results obtained are compared with the direct numerical simulation (DNS) data (Varghese et al., 2007, "Direct Numerical Simulation of Stenotic Flows. Part 1. Steady Flow," J. Fluid Mech., 582, pp. 253-280). An in-house code (WenoHemo) employing high-order numerical methods for spatial and temporal terms, along with a 2nd order accurate ghost point immersed boundary method (IBM) (Mark, and Vanwachem, 2008, "Derivation and Validation of a Novel Implicit Second-Order Accurate Immersed Boundary Method," J. Comput. Phys., 227(13), pp. 6660-6680) for enforcing boundary conditions on curved geometries is used for simulations. Three subgrid scale (SGS) models, namely, the classical Smagorinsky model (Smagorinsky, 1963, "General Circulation Experiments With the Primitive Equations," Mon. Weather Rev., 91(10), pp. 99-164), recently developed Vreman model (Vreman, 2004, "An Eddy-Viscosity Subgrid-Scale Model for Turbulent Shear Flow: Algebraic Theory and Applications," Phys. Fluids, 16(10), pp. 3670-3681), and the Sigma model (Nicoud et al., 2011, "Using Singular Values to Build a Subgrid-Scale Model for Large Eddy Simulations," Phys. Fluids, 23(8), 085106) are evaluated in the present study. Evaluation of SGS models suggests that the classical constant coefficient Smagorinsky model gives best agreement with the DNS data, whereas the Vreman and Sigma models predict an early transition to turbulence in the poststenotic region. Supplementary simulations are performed using Open source field operation and manipulation (OpenFOAM) ("OpenFOAM," http://www.openfoam.org/) solver and the results are in line with those obtained with WenoHemo.
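
    A minimal sketch of the classical Smagorinsky subgrid-scale model on a uniform grid, nu_t = (Cs*Delta)^2*|S| with |S| = sqrt(2 S_ij S_ij); the velocity field and constant are illustrative and this is not the WenoHemo implementation:

      import numpy as np

      def smagorinsky_nu_t(u, v, w, dx, cs=0.17):
          """Eddy-viscosity field for a 3D velocity sample on a uniform grid of spacing dx."""
          dudx, dudy, dudz = np.gradient(u, dx)
          dvdx, dvdy, dvdz = np.gradient(v, dx)
          dwdx, dwdy, dwdz = np.gradient(w, dx)
          s11, s22, s33 = dudx, dvdy, dwdz              # diagonal strain-rate components
          s12 = 0.5 * (dudy + dvdx)
          s13 = 0.5 * (dudz + dwdx)
          s23 = 0.5 * (dvdz + dwdy)
          s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + s33**2
                                 + 2.0 * (s12**2 + s13**2 + s23**2)))
          return (cs * dx) ** 2 * s_mag

      n, dx = 32, 1.0 / 32
      rng = np.random.default_rng(0)
      u, v, w = (rng.standard_normal((n, n, n)) for _ in range(3))   # synthetic velocity field
      print("mean eddy viscosity:", smagorinsky_nu_t(u, v, w, dx).mean())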

  16. Identifiability and Identification of Trace Continuous Pollutant Source

    PubMed Central

    Qu, Hongquan; Liu, Shouwen; Pang, Liping; Hu, Tao

    2014-01-01

    Accidental pollution events often threaten people's health and lives, and identifying the pollutant source is necessary so that prompt remedial actions can be taken. In this paper, a trace continuous pollutant source identification method is developed to identify a sudden continuous emission pollutant source in an enclosed space. The location probability model is set up first, and the identification method is then realized by searching for the global optimum of the location probability. In order to discuss the identifiability performance of the presented method, the concept of a synergy degree of velocity fields is introduced to quantitatively analyze the impact of the velocity field on the identification performance. Based on this concept, some simulation cases were conducted. The application conditions of this method are obtained according to the simulation studies. In order to verify the presented method, we designed an experiment and identified an unknown source appearing in the experimental space. The result showed that the method can identify a sudden trace continuous source when the studied situation satisfies the application conditions. PMID:24892041

  17. Wet cooling towers: rule-of-thumb design and simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leeper, Stephen A.

    1981-07-01

    A survey of wet cooling tower literature was performed to develop a simplified method of cooling tower design and simulation for use in power plant cycle optimization. The theory of heat exchange in wet cooling towers is briefly summarized. The Merkel equation (the fundamental equation of heat transfer in wet cooling towers) is presented and discussed. The cooling tower fill constant (Ka) is defined and values derived. A rule-of-thumb method for the optimized design of cooling towers is presented. The rule-of-thumb design method provides information useful in power plant cycle optimization, including tower dimensions, water consumption rate, exit air temperature, power requirements and construction cost. In addition, a method for simulation of cooling tower performance at various operating conditions is presented. This information is also useful in power plant cycle evaluation. Using the information presented, it will be possible to incorporate wet cooling tower design and simulation into a procedure to evaluate and optimize power plant cycles.
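
    A hedged sketch of evaluating the Merkel integral KaV/L between the cold- and hot-water temperatures; the saturated-air enthalpy fit and all operating numbers are placeholders, and a real design calculation would use proper psychrometric property routines:

      import numpy as np

      cp_w = 4.186    # specific heat of water, kJ/(kg K)

      def h_saturated(t_c):
          # crude polynomial fit for saturated-air enthalpy (kJ/kg dry air) vs temperature (deg C)
          return 4.7926 + 2.568 * t_c - 0.029834 * t_c ** 2 + 0.0016657 * t_c ** 3

      def merkel_number(t_hot, t_cold, h_air_in, l_over_g, n=200):
          t = np.linspace(t_cold, t_hot, n)
          h_air = h_air_in + l_over_g * cp_w * (t - t_cold)           # air-side energy balance
          y = cp_w / (h_saturated(t) - h_air)                         # Merkel integrand
          return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))   # trapezoidal rule

      print("required KaV/L:", round(merkel_number(40.0, 30.0, h_air_in=60.0, l_over_g=1.0), 3))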

  18. Estimation of effective brain connectivity with dual Kalman filter and EEG source localization methods.

    PubMed

    Rajabioun, Mehdi; Nasrabadi, Ali Motie; Shamsollahi, Mohammad Bagher

    2017-09-01

    Effective connectivity is one of the most important considerations in brain functional mapping via EEG. It demonstrates the effects of a particular active brain region on others. In this paper, a new method is proposed which is based on the dual Kalman filter. In this method, a brain activity localization method (standardized low resolution brain electromagnetic tomography) is first applied to the EEG signal to extract the active regions, and an appropriate temporal model (a multivariate autoregressive model) is fitted to the extracted active sources to evaluate their activity and the time dependence between them. Then, the dual Kalman filter is used to estimate the model parameters, i.e., the effective connectivity between active regions. The advantage of this method is that the activity of different brain regions is estimated simultaneously with the calculation of the effective connectivity between them. By combining the dual Kalman filter with brain source localization methods, in addition to estimating the connectivity between regions, the source activity is updated over time. The performance of the proposed method was evaluated first by applying it to simulated EEG signals with interacting connectivity between active regions. Noisy simulated signals with different signal-to-noise ratios were used to evaluate the method's sensitivity to noise and to compare its performance with other methods. The method was then applied to real signals, and the estimation error over a sweeping window was calculated. Across these simulated and real test cases, the proposed method gives acceptable results with the lowest mean square error under both noisy and real conditions.

  19. Advanced Computational Techniques for Hypersonic Propulsion

    NASA Technical Reports Server (NTRS)

    Povinelli, Louis A.

    1996-01-01

    CFD has played a major role in the resurgence of hypersonic flight, on the premise that numerical methods will allow us to perform simulations at conditions for which no ground test capability exists. Validation of CFD methods is being established using the experimental data base available, which is below Mach 8. It is important, however, to realize the limitations involved in the extrapolation process as well as the deficiencies that exist in numerical methods at the present time. Current features of CFD codes are examined for application to propulsion system components. The shortcomings in simulation and modeling are identified and discussed.

  20. Study on unsteady hydrodynamic performance of propeller in waves

    NASA Astrophysics Data System (ADS)

    Zhao, Qingxin; Guo, Chunyu; Su, Yumin; Liu, Tian; Meng, Xiangyin

    2017-09-01

    The speed of a ship sailing in waves decreases because the efficiency of the propeller drops, so it is essential to analyze the unsteady hydrodynamic performance of the propeller in waves. This paper presents numerical simulations and experimental research on the hydrodynamic performance of a propeller under wave conditions. Open-water propeller performance in calm water is calculated by commercial codes and the results are compared to experimental values to evaluate the accuracy of the numerical simulation method. The first-order Volume of Fluid (VOF) wave method in STAR CCM+ is utilized to simulate the three-dimensional numerical wave. On this basis, the numerical calculation of the hydrodynamic performance of the propeller under wave conditions is conducted, and the results reveal that both the thrust and torque of the propeller exhibit strongly unsteady behavior under wave conditions. With the periodic variation of the waves, ventilation and even emergence of the propeller from the water appear. The calculation results indicate that, when ventilation or emergence occurs, the numerical model can capture the dynamic characteristics of the propeller accurately, thus providing a theoretical foundation for further study of the hydrodynamic performance of a propeller in waves.

  1. Numerical Simulation of Dynamic Contact Angles and Contact Lines in Multiphase Flows using Level Set Method

    NASA Astrophysics Data System (ADS)

    Pendota, Premchand

    Many physical phenomena and industrial applications involve multiphase fluid flows and hence it is of high importance to be able to simulate various aspects of these flows accurately. The Dynamic Contact Angles (DCA) and the contact lines at the wall boundaries are a couple of such important aspects. In the past few decades, many mathematical models were developed for predicting the contact angles of the interface with the wall boundary under various flow conditions. These models are used to incorporate the physics of DCA and contact line motion in numerical simulations using various interface capturing/tracking techniques. In the current thesis, a simple approach to incorporate the static and dynamic contact angle boundary conditions using the level set method is developed and implemented in multiphase CFD codes, LIT (Level set Interface Tracking) (Herrmann (2008)) and NGA (flow solver) (Desjardins et al (2008)). Various DCA models and associated boundary conditions are reviewed. In addition, numerical aspects such as the occurrence of a stress singularity at the contact lines and grid convergence of macroscopic interface shape are dealt with in the context of the level set approach.

  2. CME Simulations with Boundary Conditions Derived from Multiple Viewpoints of STEREO

    NASA Astrophysics Data System (ADS)

    Singh, T.; Yalim, M. S.; Pogorelov, N. V.

    2017-12-01

    Coronal Mass Ejections (CMEs) are major drivers of extreme space weather conditions, which is a matter of huge concern for our modern technologically dependent society. Development of numerical approaches that would reproduce CME propagation through the interplanetary space is an important step towards our capability to predict CME arrival time at Earth and their geo-effectiveness. It is also important that CMEs are propagating through a realistic, data-driven background solar wind (SW). In this study, we use a version of the flux-rope-driven Gibson-Low (GL) model to simulate CMEs. We derive inner boundary conditions for the GL flux rope model using the Graduated Cylindrical Shell (GCS) method. This method uses viewpoints from STEREO A and B, and SOHO/LASCO coronagraphs to determine the size and orientation of a CME flux rope as it starts to erupt from the Sun. A flux rope created this way is inserted into an SDO/HMI vector magnetogram driven SW background obtained with the Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS). Numerical results are compared with STEREO, SDO/AIA and SOHO/LASCO observations in particular in terms of the CME speed, acceleration and magnetic field structure.

  3. Assessment of a ground water flow model of the Bangkok Basin, Thailand, using carbon-14-based ages and paleohydrology

    USGS Publications Warehouse

    Sanford, W.E.; Buapeng, S.

    1996-01-01

    A study was undertaken to understand the groundwater flow conditions in the Bangkok Basin, Thailand, by comparing 14C-based and simulated groundwater ages. 14C measurements were made on about 50 water samples taken from wells throughout the basin. Simulated ages were obtained using 1) backward-pathline tracking based on the well locations, and 2) results from a three-dimensional groundwater flow model. Comparisons of ages at these locations reveal a large difference between 14C-based ages and ages predicted by the steady-state groundwater flow model. Mainly, 14C and 13C analyses indicate that groundwater in the Bangkok area is about 20,000 years old, whereas steady-state flow and transport simulations imply that groundwater in the Bangkok area is 50,000-100,000 years old. One potential reason for the discrepancy between simulated and 14C-based ages is the assumption in the model of steady-state flow. Groundwater velocities were probably greater in the region before about 10,000 years ago, during the last glacial maximum, because of the lower position of sea level and the absence of the surficial Bangkok Clay. Paleoflow conditions were estimated and then incorporated into a second set of simulations. The new assumption was that current steady-state flow conditions existed for the last 8,000 years but were preceded by steady-state conditions representative of flow during the last glacial maximum. This "transient" paleohydrologic simulation yielded a mean simulated age that more closely agrees with the mean 14C-based age, especially if the 14C-based age is corrected for diffusion into clay layers. Although the uncertainties in both the simulated and 14C-based ages are nontrivial, the magnitude of the improved match in the mean age using a paleohydrologic simulation instead of a steady-state simulation suggests that flow conditions in the basin have changed significantly over the last 10,000-20,000 years. Given that the valid age range of 14C-dating methods and the timing of the last glacial maximum are of similar magnitude, adjustments for paleohydrologic conditions may be required for many such studies.
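
    A simple illustration of the radiocarbon-age relation underlying the comparison, with a generic dilution factor standing in for corrections such as diffusion into clay layers; the activity values are illustrative only:

      import math

      T_HALF = 5730.0     # 14C half-life in years (Cambridge value)

      def c14_age(a_measured_pmc, a_initial_pmc=100.0, q=1.0):
          """Apparent age in years; q < 1 corrects for non-decay dilution of 14C activity."""
          return -(T_HALF / math.log(2.0)) * math.log(a_measured_pmc / (q * a_initial_pmc))

      print("uncorrected age:", round(c14_age(8.0)), "years")
      print("with dilution correction (q = 0.85):", round(c14_age(8.0, q=0.85)), "years")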

  4. The Effect of Intra- Versus Post-Interview Feedback during Simulated Practice Interviews about Child Abuse

    ERIC Educational Resources Information Center

    Powell, Martine B.; Fisher, Ronald P.; Hughes-Scholes, Carolyn H.

    2008-01-01

    Objective: This study compared the effectiveness of two types of instructor feedback (relative to no feedback) on investigative interviewers' ability to adhere to open-ended questions in simulated practice interviews about child abuse. Method: In one condition, feedback was provided at the end of each practice interview. In the other, the…

  5. Self Diagnostic Adhesive for Bonded Joints in Aircraft Structures

    DTIC Science & Technology

    2016-10-04

    validated under the fatigue/dynamic loading condition. 3) Both SEM (Spectral Element Modeling) and FEM (Finite Element Modeling) simulation of the... Parametric Study of Sensor Performance via Finite Element Simulation... The frequency range that we are interested in is around 800 kHz. Conventional linear finite element method (FEM) requires a very fine spatial

  6. Parameter Estimation for a Turbulent Buoyant Jet Using Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Christopher, Jason D.; Wimer, Nicholas T.; Hayden, Torrey R. S.; Lapointe, Caelan; Grooms, Ian; Rieker, Gregory B.; Hamlington, Peter E.

    2016-11-01

    Approximate Bayesian Computation (ABC) is a powerful tool that allows sparse experimental or other "truth" data to be used for the prediction of unknown model parameters in numerical simulations of real-world engineering systems. In this presentation, we introduce the ABC approach and then use ABC to predict unknown inflow conditions in simulations of a two-dimensional (2D) turbulent, high-temperature buoyant jet. For this test case, truth data are obtained from a simulation with known boundary conditions and problem parameters. Using spatially-sparse temperature statistics from the 2D buoyant jet truth simulation, we show that the ABC method provides accurate predictions of the true jet inflow temperature. The success of the ABC approach in the present test suggests that ABC is a useful and versatile tool for engineering fluid dynamics research.
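
    A minimal rejection-ABC sketch under stated assumptions: the forward model is a cheap analytic surrogate rather than a 2D buoyant-jet simulation, and candidate inflow temperatures whose sparse summary statistics fall within a tolerance of the "truth" data are kept as posterior samples:

      import numpy as np

      rng = np.random.default_rng(3)
      z = np.linspace(0.0, 5.0, 8)                       # sparse sensor locations

      def forward_model(t_inflow):
          """Toy surrogate: centerline temperature decaying away from the inlet."""
          return 20.0 + (t_inflow - 20.0) * np.exp(-0.4 * z)

      t_true = 80.0
      data = forward_model(t_true) + rng.normal(0.0, 0.5, z.size)   # "truth" statistics

      candidates = rng.uniform(40.0, 120.0, 20000)                  # draws from the prior
      distances = np.array([np.linalg.norm(forward_model(t) - data) for t in candidates])
      posterior = candidates[distances < 2.0]                       # tolerance epsilon

      print(f"accepted {posterior.size} samples, "
            f"posterior mean inflow T = {posterior.mean():.1f} (true {t_true})")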

  7. A Wigner Monte Carlo approach to density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sellier, J.M., E-mail: jeanmichel.sellier@gmail.com; Dimov, I.

    2014-08-01

    In order to simulate quantum N-body systems, stationary and time-dependent density functional theories rely on the capacity of calculating the single-electron wave-functions of a system from which one obtains the total electron density (Kohn–Sham systems). In this paper, we introduce the use of the Wigner Monte Carlo method in ab-initio calculations. This approach allows time-dependent simulations of chemical systems in the presence of reflective and absorbing boundary conditions. It also enables an intuitive comprehension of chemical systems in terms of the Wigner formalism based on the concept of phase-space. Finally, being based on a Monte Carlo method, it scales very well on parallel machines, paving the way towards the time-dependent simulation of very complex molecules. A validation is performed by studying the electron distribution of three different systems, a Lithium atom, a Boron atom and a hydrogenic molecule. For the sake of simplicity, we start from initial conditions not too far from equilibrium and show that the systems reach a stationary regime, as expected (although no restriction is imposed on the choice of the initial conditions). We also show a good agreement with the standard density functional theory for the hydrogenic molecule. These results demonstrate that the combination of the Wigner Monte Carlo method and Kohn–Sham systems provides a reliable computational tool which could, eventually, be applied to more sophisticated problems.

  8. Non-parametric wall model and methods of identifying boundary conditions for moments in gas flow equations

    NASA Astrophysics Data System (ADS)

    Liao, Meng; To, Quy-Dong; Léonard, Céline; Monchiet, Vincent

    2018-03-01

    In this paper, we use the molecular dynamics simulation method to study gas-wall boundary conditions. Discrete scattering information of gas molecules at the wall surface is obtained from collision simulations. The collision data can be used to identify the accommodation coefficients for parametric wall models such as the Maxwell and Cercignani-Lampis scattering kernels. Since these scattering kernels are based on a limited number of accommodation coefficients, we adopt non-parametric statistical methods to construct the kernel and overcome this limitation. Different from parametric kernels, the non-parametric kernels require no parameters (i.e., accommodation coefficients) and no predefined distribution. We also propose approaches to derive directly the Navier friction and Kapitza thermal resistance coefficients as well as other interface coefficients associated with moment equations from the non-parametric kernels. The methods are applied successfully to systems composed of CH4 or CO2 and graphite, which are of interest to the petroleum industry.
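
    A hedged sketch of how an accommodation coefficient could be identified from collision data of the kind extracted from MD: for a property phi (e.g., tangential momentum or kinetic energy), alpha = (mean(phi_in) - mean(phi_out)) / (mean(phi_in) - phi_wall), averaged over many gas-wall collision events; the collision samples below are synthetic:

      import numpy as np

      rng = np.random.default_rng(7)
      n = 5000
      phi_wall = 0.0                                   # wall-equilibrium value of phi
      phi_in = rng.normal(1.0, 0.2, n)                 # incident values (synthetic)
      true_alpha = 0.7
      # outgoing values: partial accommodation toward the wall value, plus scatter
      phi_out = (1 - true_alpha) * phi_in + true_alpha * phi_wall + rng.normal(0.0, 0.1, n)

      alpha_est = (phi_in.mean() - phi_out.mean()) / (phi_in.mean() - phi_wall)
      print(f"identified accommodation coefficient: {alpha_est:.3f}")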

  9. Method of adiabatic modes in studying problems of smoothly irregular open waveguide structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sevastianov, L. A., E-mail: sevast@sci.pfu.edu.ru; Egorov, A. A.; Sevastyanov, A. L.

    2013-02-15

    Basic steps in developing an original method of adiabatic modes that makes it possible to solve the direct and inverse problems of simulating and designing three-dimensional multilayered smoothly irregular open waveguide structures are described. A new element in the method is that an approximate solution of Maxwell's equations is made to obey 'inclined' boundary conditions at the interfaces between the media being considered. These boundary conditions take into account the obliqueness of planes tangent to nonplanar boundaries between the media and lead to new equations for coupled vector quasiwaveguide hybrid adiabatic modes. Solutions of these equations describe the phenomenon of 'entanglement' of two linear polarizations of an irregular multilayered waveguide, the appearance of a new mode in an entangled state, and the effect of rotation of the polarization plane of quasiwaveguide modes. The efficiency of the method is demonstrated by considering the example of numerically simulating a thin-film generalized waveguide Lueneburg lens.

  10. Precise method of compensating radiation-induced errors in a hot-cathode-ionization gauge with correcting electrode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saeki, Hiroshi, E-mail: saeki@spring8.or.jp; Magome, Tamotsu, E-mail: saeki@spring8.or.jp

    2014-10-06

    To compensate for pressure-measurement errors caused by a synchrotron radiation environment, a precise method using a hot-cathode-ionization-gauge head with a correcting electrode was developed and tested in a simulation experiment with excess electrons in the SPring-8 storage ring. This precise method improves the measurement accuracy by correctly reducing the pressure-measurement errors caused by electrons originating from the external environment and from the primary gauge filament as influenced by the spatial conditions of the installed vacuum-gauge head. In the simulation experiment confirming the reduction of errors caused by the external environment, the pressure-measurement error using this method was less than several percent in the pressure range from 10⁻⁵ Pa to 10⁻⁸ Pa. After the experiment, an additional experiment using a sleeve was carried out to confirm the reduction of the error caused by spatial conditions, and it showed that the improved function was effective.

  11. Generation of dense plume fingers in saturated-unsaturated homogeneous porous media

    NASA Astrophysics Data System (ADS)

    Cremer, Clemens J. M.; Graf, Thomas

    2015-02-01

    Flow under variable-density conditions is widespread, occurring in geothermal reservoirs, at waste disposal sites or due to saltwater intrusion. The migration of dense plumes typically results in the formation of vertical plume fingers which are known to be triggered by material heterogeneity or by variations in source concentration that cause the density variation. Using a numerical groundwater model, six perturbation methods are tested under saturated and unsaturated flow conditions to mimic heterogeneity and concentration variations on the pore scale in order to realistically generate dense fingers. A laboratory-scale sand tank experiment is numerically simulated, and the perturbation methods are evaluated by comparing plume fingers obtained from the laboratory experiment with numerically simulated fingers. Dense plume fingering for saturated flow can best be reproduced with a spatially random, time-constant perturbation of the solute source. For unsaturated flow, a spatially and temporally random noise of solute concentration or a random conductivity field adequately simulates plume fingering.

  12. A comparative study of turbulence decay using Navier-Stokes and a discrete particle simulation

    NASA Technical Reports Server (NTRS)

    Goswami, A.; Baganoff, D.; Lele, S.; Feiereisen, W.

    1993-01-01

    A comparative study of the two dimensional temporal decay of an initial turbulent state of flow is presented using a direct Navier-Stokes simulation and a particle method, ranging from the near continuum to more rarefied regimes. Various topics related to matching the initial conditions between the two simulations are considered. The determination of the initial velocity distribution function in the particle method was found to play an important role in the comparison. This distribution was first developed by matching the initial Navier-Stokes state of stress, but was found to be inadequate beyond the near continuum regime. An alternative approach of using the Lees two-sided Maxwellian to match the initial strain-rate is discussed. Results of the comparison of the temporal decay of mean kinetic energy are presented for a range of Knudsen numbers. As expected, good agreement was observed for the near continuum regime, but the differences found for the more rarefied conditions were unexpectedly small.

  13. Simulation in production of open rotor propellers: from optimal surface geometry to automated control of mechanical treatment

    NASA Astrophysics Data System (ADS)

    Grinyok, A.; Boychuk, I.; Perelygin, D.; Dantsevich, I.

    2018-03-01

    A comprehensive method for the simulation and production design of open rotor propellers was studied. An end-to-end scheme was proposed for evaluating, designing and experimentally testing the optimal geometry of the propeller surface, for generating the machine control path, and for simulating the force conditions in the cutting zone and their relationship with the machining accuracy, which is governed by the elastic deformation of the propeller. The simulation data enabled combined automated path control of the cutting tool.

  14. Numerical simulation for turbulent heating around the forebody fairing of H-II rocket

    NASA Astrophysics Data System (ADS)

    Nomura, Shigeaki; Yamamoto, Yukimitsu; Fukushima, Yukio

    Concerning the heat transfer distributions around the nose fairing of Japan's new launch vehicle, the H-II rocket, numerical simulations have been conducted for conditions along its nominal ascent trajectory, and some experimental tests have been conducted additionally to confirm the numerical results. The thin-layer approximated Navier-Stokes equations with the Baldwin-Lomax algebraic turbulence model were solved by a time-dependent finite difference method. Results of the numerical simulations showed that a high peak heating would occur near the stagnation point on the spherical nose portion due to the transition to turbulent flow during the period when large stagnation point heating was predicted. The experiments were conducted under the condition of M = 5 and Re = 10^6, which is similar to the flight condition where the maximum stagnation point heating would occur. The experimental results also showed a high peak heating near the stagnation point over the spherical nose portion.

  15. Semi-analytical solution for the generalized absorbing boundary condition in molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Lee, Chung-Shuo; Chen, Yan-Yu; Yu, Chi-Hua; Hsu, Yu-Chuan; Chen, Chuin-Shan

    2017-07-01

    We present a semi-analytical solution of a time-history kernel for the generalized absorbing boundary condition in molecular dynamics (MD) simulations. To facilitate the kernel derivation, the concept of virtual atoms in real space that can conform with an arbitrary boundary in an arbitrary lattice is adopted. The generalized Langevin equation is regularized using eigenvalue decomposition and, consequently, an analytical expression of an inverse Laplace transform is obtained. With construction of dynamical matrices in the virtual domain, a semi-analytical form of the time-history kernel functions for an arbitrary boundary in an arbitrary lattice can be found. The time-history kernel functions for different crystal lattices are derived to show the generality of the proposed method. Non-equilibrium MD simulations in a triangular lattice with and without the absorbing boundary condition are conducted to demonstrate the validity of the solution.

  16. Thermal Boundary Layer Effects on Line-of-Sight Tunable Diode Laser Absorption Spectroscopy (TDLAS) Gas Concentration Measurements.

    PubMed

    Qu, Zhechao; Werhahn, Olav; Ebert, Volker

    2018-06-01

    The effects of thermal boundary layers on tunable diode laser absorption spectroscopy (TDLAS) measurement results must be quantified when using line-of-sight (LOS) TDLAS under conditions with spatial temperature gradients. In this paper, a new methodology based on spectral simulation is presented that quantifies the LOS TDLAS measurement deviation under conditions with thermal boundary layers. The effects of different temperature gradients and thermal boundary layer thicknesses on spectral collisional widths and gas concentration measurements are quantified. A CO2 TDLAS spectrometer, which has two gas cells to generate the spatial temperature gradients, was employed to validate the simulation results. The measured deviations and LOS-averaged collisional widths are in very good agreement with the simulated results for conditions with different temperature gradients. We demonstrate quantification of thermal boundary layer thickness with the proposed method by exploiting the LOS-averaged collisional width of the path-integrated spectrum.

  17. Simulation of elution profiles in liquid chromatography - II: Investigation of injection volume overload under gradient elution conditions applied to second dimension separations in two-dimensional liquid chromatography.

    PubMed

    Stoll, Dwight R; Sajulga, Ray W; Voigt, Bryan N; Larson, Eli J; Jeong, Lena N; Rutan, Sarah C

    2017-11-10

    An important research direction in the continued development of two-dimensional liquid chromatography (2D-LC) is to improve the detection sensitivity of the method. This is especially important in applications where injection of large volumes of effluent from the first dimension (¹D) column into the second dimension (²D) column leads to severe ²D peak broadening and peak shape distortion. For example, this is common when coupling two reversed-phase columns and the organic solvent content of the ¹D mobile phase overwhelms the ²D column with each injection of ¹D effluent, leading to low resolution in the second dimension. In a previous study we validated a simulation approach based on the Craig distribution model and adapted from the work of Czok and Guiochon [1] that enabled accurate simulation of simple isocratic and gradient separations with very small injection volumes, and isocratic separations with mismatched injection and mobile phase solvents [2]. In the present study we have extended this simulation approach to separations relevant to 2D-LC. Specifically, we have focused on simulating ²D separations where gradient elution conditions are used, there is mismatch between the sample solvent and the starting point in the gradient elution program, injection volumes approach or even exceed the dead volume of the ²D column, and the extent of sample loop filling is varied. To validate this simulation we have compared results from simulations and experiments for 101 different conditions, including variation in injection volume (0.4-80 μL), loop filling level (25-100%), and degree of mismatch between the sample organic solvent and the starting point in the gradient elution program (-20 to +20% ACN). We find that the simulation is accurate enough (median errors in retention time and peak width of -1.0 and -4.9%, without corrections for extra-column dispersion) to be useful in guiding optimization of 2D-LC separations. However, this requires that real injection profiles obtained from 2D-LC interface valves are used to simulate the introduction of samples into the ²D column. These profiles are highly asymmetric; simulation using simple rectangular pulses leads to peak widths that are far too narrow under many conditions. We believe the simulation approach developed here will be useful for addressing practical questions in the development of 2D-LC methods. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Some theoretical considerations of longitudinal stability in power-on flight with special reference to wind-tunnel testing, November 1942

    NASA Technical Reports Server (NTRS)

    Donlan, C. J.

    1976-01-01

    Some problems relating to longitudinal stability in power-on flight are considered. A derivation is included which shows that, under certain conditions, the rate of change of the pitching-moment coefficient with lift coefficient as obtained in wind tunnel tests simulating constant-power operation is directly proportional to one of the indices of stability commonly associated with flight analysis (the slope of the curve relating the elevator angle for trim to the lift coefficient). The necessity of analyzing power-on wind tunnel data for trim conditions is emphasized, and a method is provided for converting data obtained from constant-thrust tests to simulated constant-throttle flight conditions.

  19. Laboratory Simulations of Mars Evaporite Geochemistry

    NASA Technical Reports Server (NTRS)

    Moore, Jeffrey M.; Bullock, Mark A.; Newsom, Horton; Nelson, Melissa

    2010-01-01

    Evaporite-rich sedimentary deposits on Mars were formed under chemical conditions quite different from those on Earth. Their unique chemistries record the chemical and aqueous conditions under which they were formed and possibly subsequent conditions to which they were subjected. We have produced evaporite salt mineral suites in the laboratory under two simulated Martian atmospheres: (1) present-day and (2) a model of an ancient Martian atmosphere rich in volcanic gases. The composition of these synthetic Mars evaporites depends on the atmospheres under which they were desiccated as well as the chemistries of their precursor brines. In this report, we describe a Mars analog evaporite laboratory apparatus and the experimental methods we used to produce and analyze the evaporite mineral suites.

  20. Online Condition Monitoring of Gripper Cylinder in TBM Based on EMD Method

    NASA Astrophysics Data System (ADS)

    Li, Lin; Tao, Jian-Feng; Yu, Hai-Dong; Huang, Yi-Xiang; Liu, Cheng-Liang

    2017-11-01

    The gripper cylinder that provides the bracing force for a Tunnel Boring Machine (TBM) might fail due to severe vibration when the TBM excavates in the tunnel. Early fault diagnosis of the gripper cylinder is important for the safety and efficiency of the whole tunneling project. In this paper, an online condition monitoring system based on the Empirical Mode Decomposition (EMD) method is established for fault diagnosis of the gripper cylinder while the TBM is working. Firstly, the lumped-mass parameter model of the gripper cylinder is established considering the influence of the variable stiffness at the rock interface, the equivalent stiffness of the oil, the seals, and the copper guide sleeve. The dynamic performance of the gripper cylinder is investigated to provide a basis for its health condition evaluation. Then, the EMD method is applied to identify the characteristic frequencies of the gripper cylinder for fault diagnosis, and a field test is used to verify the accuracy of the EMD method for detection of the characteristic frequencies. Furthermore, the contact stiffness at the interface between the barrel and the rod is calculated with Hertz theory, and the relationship between the natural frequency and the stiffness varying with the health condition of the cylinder is simulated based on the dynamic model. The simulation shows that the characteristic frequencies decrease with increasing clearance between the barrel and the rod; thus defects could be indicated by monitoring the natural frequency. Finally, a health condition management system of the gripper cylinder based on the vibration signal and the EMD method is established, which could ensure the safety of the TBM.
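
    As a hedged illustration of the signal-processing step described above, the sketch below decomposes a synthetic vibration signal into intrinsic mode functions and reads off the dominant frequency of each; it assumes the third-party PyEMD package, and the sampling rate, mode frequencies and noise level are made up rather than taken from the paper.

```python
# Sketch: EMD-based extraction of characteristic frequencies from a vibration
# signal, in the spirit of the monitoring scheme described above. Assumes the
# third-party PyEMD package; the signal and frequencies are synthetic.
import numpy as np
from PyEMD import EMD

fs = 2000.0                          # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1.0 / fs)
# synthetic vibration: two structural modes plus noise
signal = (np.sin(2 * np.pi * 35.0 * t)
          + 0.5 * np.sin(2 * np.pi * 120.0 * t)
          + 0.2 * np.random.randn(t.size))

imfs = EMD().emd(signal)             # decompose into intrinsic mode functions

for k, imf in enumerate(imfs):
    spectrum = np.abs(np.fft.rfft(imf))
    freqs = np.fft.rfftfreq(imf.size, d=1.0 / fs)
    peak = freqs[np.argmax(spectrum)]
    print(f"IMF {k}: dominant frequency ~ {peak:.1f} Hz")
# A drop in a tracked characteristic frequency over time would indicate
# increasing clearance between barrel and rod, as discussed in the abstract.
```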

  1. Finite time step and spatial grid effects in δf simulation of warm plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sturdevant, Benjamin J., E-mail: benjamin.j.sturdevant@gmail.com; Department of Applied Mathematics, University of Colorado at Boulder, Boulder, CO 80309; Parker, Scott E.

    2016-01-15

    This paper introduces a technique for analyzing time integration methods used with the particle weight equations in δf method particle-in-cell (PIC) schemes. The analysis applies to the simulation of warm, uniform, periodic or infinite plasmas in the linear regime and considers the collective behavior, similar to the analysis performed by Langdon for full-f PIC schemes [1,2]. We perform both a time integration analysis and a spatial grid analysis for a kinetic ion, adiabatic electron model of ion acoustic waves. An implicit time integration scheme is studied in detail for δf simulations using our weight equation analysis and for full-f simulations using the method of Langdon. It is found that the δf method exhibits a CFL-like stability condition for low temperature ions, which is independent of the parameter characterizing the implicitness of the scheme. The accuracy of the real frequency and damping rate due to the discrete time and spatial schemes is also derived using a perturbative method. The theoretical analysis of numerical error presented here may be useful for the verification of simulations and for providing intuition for the design of new implicit time integration schemes for the δf method, as well as understanding differences between δf and full-f approaches to plasma simulation.

  2. A Study of Mars Dust Environment Simulation at NASA Johnson Space Center Energy Systems Test Area Resource Conversion Test Facility

    NASA Technical Reports Server (NTRS)

    Chen, Yuan-Liang Albert

    1999-01-01

    The dust environment on Mars is planned to be simulated in a 20-foot thermal-vacuum chamber at the Johnson Space Center, Energy Systems Test Area Resource Conversion Test Facility in Houston, Texas. This vacuum chamber will be used to perform tests and study the interactions between the dust in Martian air and ISPP hardware. This project is to research, theorize, quantify, and document the Mars dust/wind environment needed for the 20-foot simulation chamber. This simulation work is to support the safety, endurance, and cost reduction of the hardware for future missions. The Martian dust environment conditions are discussed. Two issues of Martian dust are of interest: (1) hazards related to dust contamination, and (2) electrical hazards caused by dust charging. The different methods of dust particle measurement are given. Design trade-offs and feasibility were studied. A glass bell jar system is used to evaluate various concepts for the Mars dust/wind environment simulation. It was observed that external dust source injection is the best method to introduce the dust into the simulation system. A dust concentration of 30 mg/m3 should be employed to prepare for the worst possible Martian atmosphere condition. Two approaches to the thermal-panel shroud for hardware conditioning are discussed. It is suggested that the wind tunnel approach be used to study the dust charging characteristics and then be applied to the closed-system cyclone approach. To reduce operating costs, dehumidified ambient air could be used to replace the expensive CO2 mixture for some tests.

  3. On the Validity of Continuum Computational Fluid Dynamics Approach Under Very Low-Pressure Plasma Spray Conditions

    NASA Astrophysics Data System (ADS)

    Ivchenko, Dmitrii; Zhang, Tao; Mariaux, Gilles; Vardelle, Armelle; Goutier, Simon; Itina, Tatiana E.

    2018-01-01

    Plasma spray physical vapor deposition aims to substantially evaporate powders in order to produce coatings with various microstructures. This is achieved by powder vapor condensation onto the substrate and/or by deposition of fine melted powder particles and nanoclusters. The deposition process typically operates at pressures ranging between 10 and 200 Pa. In addition to the experimental works, numerical simulations are performed to better understand the process and optimize the experimental conditions. However, the combination of high temperatures and low pressure with shock waves initiated by supersonic expansion of the hot gas in the low-pressure medium makes doubtful the applicability of the continuum approach for the simulation of such a process. This work investigates (1) effects of the pressure dependence of thermodynamic and transport properties on computational fluid dynamics (CFD) predictions and (2) the validity of the continuum approach for thermal plasma flow simulation under very low-pressure conditions. The study compares the flow fields predicted with a continuum approach using CFD software with those obtained by a kinetic-based approach using a direct simulation Monte Carlo method (DSMC). It also shows how the presence of high gradients can contribute to prediction errors for typical PS-PVD conditions.
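
    The continuum-validity question above is usually framed in terms of the Knudsen number. The short sketch below evaluates Kn from the hard-sphere mean free path; the gas species, temperature, pressure and characteristic length are illustrative values within the quoted PS-PVD pressure range, not the operating point studied in the paper.

```python
# Sketch: continuum-validity check via the Knudsen number Kn = lambda / L.
# Hard-sphere mean free path for argon at PS-PVD-like low pressure; the
# numbers are illustrative, not the paper's operating conditions.
import math

k_B = 1.380649e-23      # Boltzmann constant (J/K)
d = 3.4e-10             # effective molecular diameter of Ar (m), approximate
T = 8000.0              # plasma gas temperature (K), assumed
p = 100.0               # chamber pressure (Pa), within the 10-200 Pa range
L = 0.05                # characteristic length, e.g. jet diameter (m), assumed

mean_free_path = k_B * T / (math.sqrt(2.0) * math.pi * d**2 * p)
Kn = mean_free_path / L
print(f"mean free path = {mean_free_path:.3e} m, Kn = {Kn:.3f}")
# Common rule of thumb: Kn < 0.01 continuum, 0.01-0.1 slip regime,
# 0.1-10 transitional (DSMC preferred), > 10 free molecular.
```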

  4. A Method for Generating Reduced-Order Linear Models of Multidimensional Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Chicatelli, Amy; Hartley, Tom T.

    1998-01-01

    Simulation of high speed propulsion systems may be divided into two categories, nonlinear and linear. The nonlinear simulations are usually based on multidimensional computational fluid dynamics (CFD) methodologies and tend to provide high resolution results that show the fine detail of the flow. Consequently, these simulations are large, numerically intensive, and run much slower than real-time. The linear simulations are usually based on large lumping techniques that are linearized about a steady-state operating condition. These simplistic models often run at or near real-time but do not always capture the detailed dynamics of the plant. Under a grant sponsored by the NASA Lewis Research Center, Cleveland, Ohio, a new method has been developed that can be used to generate improved linear models for control design from multidimensional steady-state CFD results. This CFD-based linear modeling technique provides a small perturbation model that can be used for control applications and real-time simulations. It is important to note the utility of the modeling procedure; all that is needed to obtain a linear model of the propulsion system is the geometry and steady-state operating conditions from a multidimensional CFD simulation or experiment. This research represents a beginning step in establishing a bridge between the controls discipline and the CFD discipline so that the control engineer is able to effectively use multidimensional CFD results in control system design and analysis.

  5. Structural acoustic control of plates with variable boundary conditions: design methodology.

    PubMed

    Sprofera, Joseph D; Cabell, Randolph H; Gibbs, Gary P; Clark, Robert L

    2007-07-01

    A method for optimizing a structural acoustic control system subject to variations in plate boundary conditions is provided. The assumed modes method is used to build a plate model with varying levels of rotational boundary stiffness to simulate the dynamics of a plate with uncertain edge conditions. A transducer placement scoring process, involving Hankel singular values, is combined with a genetic optimization routine to find spatial locations robust to boundary condition variation. Predicted frequency response characteristics are examined, and theoretically optimized results are discussed in relation to the range of boundary conditions investigated. Modeled results indicate that it is possible to minimize the impact of uncertain boundary conditions in active structural acoustic control by optimizing the placement of transducers with respect to those uncertainties.

  6. Direct simulation of groundwater age

    USGS Publications Warehouse

    Goode, Daniel J.

    1996-01-01

    A new method is proposed to simulate groundwater age directly, by use of an advection-dispersion transport equation with a distributed zero-order source of unit (1) strength, corresponding to the rate of aging. The dependent variable in the governing equation is the mean age, a mass-weighted average age. The governing equation is derived from residence-time-distribution concepts for the case of steady flow. For the more general case of transient flow, a transient governing equation for age is derived from mass-conservation principles applied to conceptual “age mass.” The age mass is the product of the water mass and its age, and age mass is assumed to be conserved during mixing. Boundary conditions include zero age mass flux across all no-flow and inflow boundaries and no age mass dispersive flux across outflow boundaries. For transient-flow conditions, the initial distribution of age must be known. The solution of the governing transport equation yields the spatial distribution of the mean groundwater age and includes diffusion, dispersion, mixing, and exchange processes that typically are considered only through tracer-specific solute transport simulation. Traditional methods have relied on advective transport to predict point values of groundwater travel time and age. The proposed method retains the simplicity and tracer-independence of advection-only models, but incorporates the effects of dispersion and mixing on volume-averaged age. Example simulations of age in two idealized regional aquifer systems, one homogeneous and the other layered, demonstrate the agreement between the proposed method and traditional particle-tracking approaches and illustrate use of the proposed method to determine the effects of diffusion, dispersion, and mixing on groundwater age.
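
    A minimal sketch of the direct-age idea for 1-D steady flow is given below: it solves v dA/dx - D d²A/dx² = 1 (the unit-strength aging source) with a zero-age inflow condition, a simplification of the zero age-mass-flux condition described above, and zero dispersive age flux at the outflow. The grid, velocity and dispersion values are illustrative.

```python
# Sketch: direct simulation of mean groundwater age in 1-D steady flow.
# Solves v dA/dx - D d2A/dx2 = 1 (zero-order "aging" source of unit strength)
# with A = 0 at the inflow and zero dispersive flux (dA/dx = 0) at the outflow.
import numpy as np

L, n = 1000.0, 201            # domain length (m) and number of nodes, assumed
v, D = 0.1, 5.0               # velocity (m/d) and dispersion coeff (m^2/d), assumed
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]

A_mat = np.zeros((n, n))
b = np.ones(n)                # unit aging source everywhere

for i in range(1, n - 1):     # interior nodes: central differences
    A_mat[i, i - 1] = -D / dx**2 - v / (2 * dx)
    A_mat[i, i]     =  2 * D / dx**2
    A_mat[i, i + 1] = -D / dx**2 + v / (2 * dx)

A_mat[0, 0] = 1.0; b[0] = 0.0                              # inflow: age = 0
A_mat[-1, -1] = 1.0; A_mat[-1, -2] = -1.0; b[-1] = 0.0     # outflow: dA/dx = 0

age = np.linalg.solve(A_mat, b)
print("mean age at outflow ~ %.1f days (advective travel time L/v = %.1f)"
      % (age[-1], L / v))
```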

  7. Application of solar energy to air conditioning systems

    NASA Technical Reports Server (NTRS)

    Nash, J. M.; Harstad, A. J.

    1976-01-01

    The results of a survey of solar energy system applications of air conditioning are summarized. Techniques discussed are both solar powered (absorption cycle and the heat engine/Rankine cycle) and solar related (heat pump). Brief descriptions of the physical implications of various air conditioning techniques, discussions of status, proposed technological improvements, methods of utilization and simulation models are presented, along with an extensive bibliography of related literature.

  8. Generalized non-equilibrium vertex correction method in coherent medium theory for quantum transport simulation of disordered nanoelectronics

    NASA Astrophysics Data System (ADS)

    Yan, Jiawei; Ke, Youqi

    In realistic nanoelectronics, disordered impurities/defects are inevitable and play important roles in electron transport. However, due to the lack of an effective quantum transport method, the important effects of disorder remain poorly understood. Here, we report a generalized non-equilibrium vertex correction (NVC) method with the coherent potential approximation to treat disorder effects in quantum transport simulation. With this generalized NVC method, any averaged product of two single-particle Green's functions can be obtained by solving a set of simple linear equations. As a result, the averaged non-equilibrium density matrix and various important transport properties, including the averaged current, disorder-induced current fluctuations and the averaged shot noise, can all be efficiently computed in a unified scheme. Moreover, a generalized form of the conditionally averaged non-equilibrium Green's function is derived and incorporated with density functional theory to enable first-principles simulation. We prove that the non-equilibrium coherent potential equals the non-equilibrium vertex correction. Our approach provides a unified, efficient and self-consistent method for simulating non-equilibrium quantum transport through disordered nanoelectronics. Shanghaitech start-up fund.

  9. A Fatigue Crack Size Evaluation Method Based on Lamb Wave Simulation and Limited Experimental Data

    PubMed Central

    He, Jingjing; Ran, Yunmeng; Liu, Bin; Yang, Jinsong; Guan, Xuefei

    2017-01-01

    This paper presents a systematic and general method for Lamb wave-based crack size quantification using finite element simulations and Bayesian updating. The method consists of construction of a baseline quantification model using finite element simulation data and Bayesian updating with limited Lamb wave data from the target structure. The baseline model correlates two proposed damage-sensitive features, namely the normalized amplitude and phase change, with the crack length through a response surface model. The two damage-sensitive features are extracted from the first received S0 mode wave package. The model parameters of the baseline model are estimated using finite element simulation data. To account for uncertainties from numerical modeling, geometry, material and manufacturing between the baseline model and the target model, a Bayesian method is employed to update the baseline model with a few measurements acquired from the actual target structure. A rigorous validation is made using in-situ fatigue testing and Lamb wave data from coupon specimens and realistic lap-joint components. The effectiveness and accuracy of the proposed method are demonstrated under different loading and damage conditions. PMID:28902148
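
    The sketch below illustrates the Bayesian-updating step in a much-reduced form: a linear response surface between one damage-sensitive feature and crack length stands in for the FE-calibrated baseline model, and its parameters are updated with a few target-structure measurements using a conjugate Gaussian update. All numbers are invented for illustration.

```python
# Sketch: Bayesian updating of a baseline crack-quantification model with a
# few target-structure measurements. The linear response surface and all
# numbers are illustrative stand-ins for the FE-calibrated model.
import numpy as np

# Baseline (prior) parameters fitted to finite element simulation data
theta_prior_mean = np.array([0.95, -0.08])     # [intercept, slope], assumed
theta_prior_cov = np.diag([0.02**2, 0.01**2])

# A few measurements from the target structure: (crack length mm, feature)
a_meas = np.array([2.0, 5.0, 8.0])
y_meas = np.array([0.81, 0.57, 0.32])
sigma_meas = 0.03                              # measurement noise std, assumed

# Conjugate Gaussian update for the linear model y = X @ theta
X = np.column_stack([np.ones_like(a_meas), a_meas])
prior_prec = np.linalg.inv(theta_prior_cov)
post_cov = np.linalg.inv(prior_prec + X.T @ X / sigma_meas**2)
post_mean = post_cov @ (prior_prec @ theta_prior_mean
                        + X.T @ y_meas / sigma_meas**2)

# Invert the updated model to estimate crack length from a new feature value
y_new = 0.45
a_est = (y_new - post_mean[0]) / post_mean[1]
print("updated parameters:", post_mean, " estimated crack length: %.2f mm" % a_est)
```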

  10. Examining Solutions to Missing Data in Longitudinal Nursing Research

    PubMed Central

    Roberts, Mary B.; Sullivan, Mary C.; Winchester, Suzy B.

    2017-01-01

    Purpose Longitudinal studies are highly valuable in pediatrics because they provide useful data about developmental patterns of child health and behavior over time. When data are missing, the value of the research is impacted. The study’s purpose was to: (1) introduce a 3-step approach to assess and address missing data; (2) illustrate this approach using categorical and continuous level variables from a longitudinal study of premature infants. Methods A three-step approach with simulations was followed to assess the amount and pattern of missing data and to determine the most appropriate imputation method for the missing data. Patterns of missingness were Missing Completely at Random, Missing at Random, and Not Missing at Random. Missing continuous-level data were imputed using mean replacement, stochastic regression, multiple imputation, and fully conditional specification. Missing categorical-level data were imputed using last value carried forward, hot-decking, stochastic regression, and fully conditional specification. Simulations were used to evaluate these imputation methods under different patterns of missingness at different levels of missing data. Results The rate of missingness was 16–23% for continuous variables and 1–28% for categorical variables. Fully conditional specification imputation provided the least difference in mean and standard deviation estimates for continuous measures. Fully conditional specification imputation was acceptable for categorical measures. Results obtained through simulation reinforced and confirmed these findings. Practice Implications Significant investments are made in the collection of longitudinal data. The prudent handling of missing data can protect these investments and potentially improve the scientific information contained in pediatric longitudinal studies. PMID:28425202
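
    Fully conditional specification (chained-equations) imputation of continuous variables, the approach the study found preferable, is available in scikit-learn as IterativeImputer. The sketch below applies it to a synthetic dataset with a Missing-At-Random pattern; the variable names and missingness rates are illustrative, not the study's data.

```python
# Sketch: fully conditional specification (chained-equations) imputation of
# continuous variables on a synthetic dataset, analogous to the approach the
# study found preferable. Uses scikit-learn's IterativeImputer.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
n = 200
birth_weight = rng.normal(2.1, 0.5, n)                 # kg, synthetic
gest_age = 30 + 4 * (birth_weight - 2.1) + rng.normal(0, 1, n)
motor_score = 80 + 10 * (gest_age - 30) / 4 + rng.normal(0, 5, n)
df = pd.DataFrame({"birth_weight": birth_weight,
                   "gest_age": gest_age,
                   "motor_score": motor_score})

# Introduce missingness in the outcome that depends on an observed covariate
# (a Missing-At-Random pattern): higher dropout for smaller infants
p_miss = 0.1 + 0.2 * (birth_weight < 2.1)
miss = rng.random(n) < p_miss
df.loc[miss, "motor_score"] = np.nan

imputer = IterativeImputer(max_iter=20, random_state=0)
completed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
print(completed.describe().loc[["mean", "std"]])
```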

  11. The Linked Neighbour List (LNL) method for fast off-lattice Monte Carlo simulations of fluids

    NASA Astrophysics Data System (ADS)

    Mazzeo, M. D.; Ricci, M.; Zannoni, C.

    2010-03-01

    We present a new algorithm, called linked neighbour list (LNL), useful to substantially speed up off-lattice Monte Carlo simulations of fluids by avoiding the computation of the molecular energy before every attempted move. We introduce a few variants of the LNL method targeted to minimise memory footprint or augment memory coherence and cache utilisation. Additionally, we present a few algorithms which drastically accelerate neighbour finding. We test our methods on the simulation of a dense off-lattice Gay-Berne fluid subjected to periodic boundary conditions observing a speedup factor of about 2.5 with respect to a well-coded implementation based on a conventional link-cell. We provide several implementation details of the different key data structures and algorithms used in this work.
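
    The sketch below illustrates the general idea of neighbour-list-accelerated off-lattice Monte Carlo, evaluating the energy change of a trial move only over a stored neighbour list with a Verlet skin. It is a simplified stand-in for, not a reproduction of, the authors' LNL data structure, and uses a truncated Lennard-Jones fluid rather than the Gay-Berne model.

```python
# Simplified sketch of neighbour-list-accelerated off-lattice Monte Carlo:
# the energy change of a trial move is evaluated only over a stored neighbour
# list (with a Verlet "skin") instead of over all particles.
import numpy as np

rng = np.random.default_rng(1)
N, box, r_cut, skin = 200, 10.0, 2.5, 0.5
pos = rng.random((N, 2)) * box

def pair_energy(r2):                       # truncated Lennard-Jones, r2 = |r|^2
    inv6 = (1.0 / r2) ** 3
    return 4.0 * (inv6 * inv6 - inv6)

def build_neighbour_lists(pos):
    lists = [[] for _ in range(N)]
    r_list2 = (r_cut + skin) ** 2
    for i in range(N):
        d = pos - pos[i]
        d -= box * np.round(d / box)       # minimum image convention
        r2 = np.einsum("ij,ij->i", d, d)
        for j in np.nonzero((r2 < r_list2) & (r2 > 0))[0]:
            lists[i].append(j)
    return lists

def local_energy(i, where, pos, nbrs):
    e = 0.0
    for j in nbrs[i]:
        d = where - pos[j]
        d -= box * np.round(d / box)
        r2 = d @ d
        if r2 < r_cut ** 2:
            e += pair_energy(r2)
    return e

nbrs = build_neighbour_lists(pos)
beta, max_disp, accepted = 1.0, 0.2, 0
for step in range(2000):
    i = rng.integers(N)
    trial = pos[i] + rng.uniform(-max_disp, max_disp, 2)
    dE = local_energy(i, trial, pos, nbrs) - local_energy(i, pos[i], pos, nbrs)
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        pos[i] = trial % box
        accepted += 1
    if step % 500 == 499:                  # periodic rebuild of the lists
        nbrs = build_neighbour_lists(pos)
print("acceptance ratio:", accepted / 2000)
```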

  12. Crack propagation of brittle rock under high geostress

    NASA Astrophysics Data System (ADS)

    Liu, Ning; Chu, Weijiang; Chen, Pingzhi

    2018-03-01

    Based on fracture mechanics and numerical methods, the characteristics and failure criteria of wall rock cracks, including initiation, propagation, and coalescence, are analyzed systematically under different conditions. In order to account for the interaction among cracks, a sliding multi-crack model is adopted to simulate the splitting failure of rock under axial compression. Reinforcement of the rock mass with bolts and shotcrete can effectively control crack propagation; both theoretical analysis and numerical simulation are used to study the mechanism of this control, and the optimal fixing angle of the bolts is calculated. ANSYS is then used to simulate the crack-arrest effect of the bolts, and the influence of different factors on the stress intensity factor is analyzed. The method offers a more scientific and rational criterion for evaluating the splitting failure of underground engineering under high geostress.

  13. Physical lumping methods for developing linear reduced models for high speed propulsion systems

    NASA Technical Reports Server (NTRS)

    Immel, S. M.; Hartley, Tom T.; Deabreu-Garcia, J. Alex

    1991-01-01

    In gasdynamic systems, information travels in one direction for supersonic flow and in both directions for subsonic flow. A shock occurs at the transition from supersonic to subsonic flow. Thus, to simulate these systems, any simulation method implemented for the quasi-one-dimensional Euler equations must have the ability to capture the shock. In this paper, a technique combining both backward and central differencing is presented. The equations are subsequently linearized about an operating point and formulated into a linear state space model. After proper implementation of the boundary conditions, the model order is reduced from 123 to less than 10 using the Schur method of balancing. Simulations comparing frequency and step response of the reduced order model and the original system models are presented.
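
    As a hedged illustration of Gramian-based order reduction, the sketch below applies square-root balanced truncation, a standard alternative formulation of the Schur balancing step mentioned above, to a random stable state-space model that stands in for the linearized propulsion system.

```python
# Sketch: balanced-truncation order reduction of a linear state-space model,
# in the spirit of the balancing step described above (order 123 -> <10 there).
# A random stable system stands in for the linearized propulsion model.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n, m, p, r = 20, 2, 2, 6                    # full order, inputs, outputs, reduced order
M = rng.standard_normal((n, n))
A = M - (np.max(np.linalg.eigvals(M).real) + 1.0) * np.eye(n)   # shift to make A stable
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

# Controllability/observability Gramians: A Wc + Wc A' = -B B',  A' Wo + Wo A = -C' C
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

def gram_factor(W):
    """Symmetric factor W = L @ L.T (robust to tiny negative round-off eigenvalues)."""
    w, V = np.linalg.eigh(W)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None)))

Lc, Lo = gram_factor(Wc), gram_factor(Wo)
U, s, Vt = np.linalg.svd(Lo.T @ Lc)         # s holds the Hankel singular values

S_inv_sqrt = np.diag(1.0 / np.sqrt(s[:r]))
T = Lc @ Vt.T[:, :r] @ S_inv_sqrt           # truncated balancing transformation
Tinv = S_inv_sqrt @ U[:, :r].T @ Lo.T

Ar, Br, Cr = Tinv @ A @ T, Tinv @ B, C @ T
print("Hankel singular values:", np.round(s[:8], 4))
print("reduced model order:", Ar.shape[0])
```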

  14. Application of the Discrete Regularization Method to the Inverse of the Chord Vibration Equation

    NASA Astrophysics Data System (ADS)

    Wang, Linjun; Han, Xu; Wei, Zhouchao

    The inverse problem of recovering the initial condition from boundary values of the chord vibration equation is ill-posed. First, we transform it into a Fredholm integral equation. Second, we discretize it by the trapezoidal rule and obtain a severely ill-conditioned linear system that is sensitive to disturbances in the data: a tiny error in the right-hand-side data causes large oscillations in the solution, and good results cannot be obtained by traditional methods. In this paper, we solve this problem by the Tikhonov regularization method, and the numerical simulations demonstrate that this method is feasible and effective.
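
    A minimal sketch of the approach, discretizing a first-kind Fredholm equation with the trapezoidal rule and comparing a naive solve with a Tikhonov-regularized solve, is given below; the kernel, exact solution, noise level and regularization parameter are illustrative choices, not those of the paper.

```python
# Sketch: Tikhonov regularization of a first-kind Fredholm equation
# discretized with the trapezoidal rule. Kernel and test solution are
# illustrative; the Gaussian kernel makes the problem severely ill-conditioned.
import numpy as np

n = 100
s = np.linspace(0.0, 1.0, n)
t = np.linspace(0.0, 1.0, n)
h = t[1] - t[0]

K = np.exp(-0.5 * ((s[:, None] - t[None, :]) / 0.1) ** 2)   # smooth kernel
w = np.full(n, h); w[0] = w[-1] = h / 2.0                   # trapezoidal weights
A = K * w                                                   # discretized operator

x_true = np.sin(np.pi * t)                                  # "unknown" function
b = A @ x_true
b_noisy = b + 1e-3 * np.random.default_rng(2).standard_normal(n)

# Naive solve amplifies the noise; Tikhonov damps it:
lam = 1e-4
x_naive = np.linalg.solve(A, b_noisy)
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b_noisy)

err = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("relative error, naive solve: %.2e" % err(x_naive))
print("relative error, Tikhonov:    %.2e" % err(x_tik))
```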

  15. Aerodynamic characterisation and trajectory simulations for the Ariane-5 booster recovery system

    NASA Astrophysics Data System (ADS)

    Meiboom, F. P.

    One of the most critical aspects of the early phases of the development of the Ariane-5 booster recovery system was the determination of the behavior of the booster during its atmospheric reentry, since this behavior determines the start conditions for the parachute system elements. A combination of wind-tunnel tests (subsonic and supersonic) and analytical methods was applied to define the aerodynamic characteristics of the booster. This aerodynamic characterization in combination with information of the ascent trajectory, atmospheric properties and booster mass and inertia were used as input for the 6-DOF trajectory simulations of the vehicle. Uncertainties in aerodynamic properties and deviations in atmospheric and booster properties were incorporated to define the range of initial conditions for the parachute system, utilizing stochastic (Monte-Carlo) methods.

  16. Spectral simulation of unsteady compressible flow past a circular cylinder

    NASA Technical Reports Server (NTRS)

    Don, Wai-Sun; Gottlieb, David

    1990-01-01

    An unsteady compressible viscous wake flow past a circular cylinder was successfully simulated using spectral methods. A new approach to using the Chebyshev collocation method for periodic problems is introduced. It was further proved that the eigenvalues associated with the differentiation matrix are purely imaginary, reflecting the periodicity of the problem. It was shown that the solution of a model problem has exponential growth in time if improper boundary conditions are used. A characteristic boundary condition, which is based on the characteristics of the Euler equations of gas dynamics, was derived for the spectral code. The primary vortex shedding frequency computed agrees well with the results in the literature for Mach = 0.4, Re = 80. No secondary frequency is observed in the power spectrum analysis of the pressure data.
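
    The building block of such a spectral code is the Chebyshev collocation differentiation matrix. The sketch below gives the standard construction on Gauss-Lobatto points and verifies spectral accuracy on a smooth test function; it is generic textbook code, not the authors' solver.

```python
# Sketch: Chebyshev collocation differentiation matrix on Gauss-Lobatto points,
# the basic building block of the spectral method described above, verified on
# a smooth test function. (Standard construction; not the authors' code.)
import numpy as np

def cheb(N):
    """Return Chebyshev points x and the (N+1)x(N+1) differentiation matrix D."""
    if N == 0:
        return np.array([1.0]), np.zeros((1, 1))
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # negative-sum trick for the diagonal
    return x, D

x, D = cheb(32)
u = np.exp(x) * np.sin(5 * x)
du_exact = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
print("max derivative error:", np.max(np.abs(D @ u - du_exact)))
```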

  17. Simulation framework for electromagnetic effects in plasmonics, filter apertures, wafer scattering, grating mirrors, and nano-crystals

    NASA Astrophysics Data System (ADS)

    Ceperley, Daniel Peter

    This thesis presents a Finite-Difference Time-Domain simulation framework as well as both scientific observations and quantitative design data for emerging optical devices. These emerging applications required the development of simulation capabilities to carefully control numerical experimental conditions, isolate and quantify specific scattering processes, and overcome memory and run-time limitations on large device structures. The framework consists of a new version 7 of TEMPEST and auxiliary tools implemented as Matlab scripts. In improving the geometry representation and absorbing boundary conditions in TEMPEST from v6, the accuracy has been sustained and key improvements have yielded application-specific speed and accuracy gains. These extensions include pulsed methods, PML for plasmon termination, and plasmon and scattered-field sources. The auxiliary tools include application-specific methods such as signal flow graphs of plasmon couplers, Bloch mode expansions of sub-wavelength grating waves, and back-propagation methods to characterize edge scattering in diffraction masks. Each application posed different numerical hurdles and physical questions for the simulation framework. The Terrestrial Planet Finder Coronagraph required accurate modeling of diffraction mask structures too large for solely FDTD analysis. This analysis was achieved through a combination of targeted TEMPEST simulations and a full system simulator based on thin-mask scalar diffraction models by Ball Aerospace for JPL. TEMPEST simulation showed that vertical sidewalls were the strongest scatterers, adding nearly 2λ of light per mask edge, which could be reduced by 20° undercuts. TEMPEST assessment of coupling in rapid thermal annealing was complicated by extremely sub-wavelength features and fine meshes. Near 100% coupling and low variability were confirmed even in the presence of unidirectional dense metal gates. Accurate analysis of surface plasmon coupling efficiency by small surface features required capabilities to isolate these features and cleanly illuminate them with plasmons and plane waves. These features were shown to have coupling cross-sections up to and slightly exceeding their physical size. Long run-times for TEMPEST simulations of finite-length gratings were overcome with a signal flow graph method. With these methods a plasmon coupler with over a 10λ 100% capture length was demonstrated. Simulation of 3D nano-particle arrays utilized TEMPEST v7's pulsed methods to minimize the number of multi-day simulations. These simulations led to the discovery that interstitial plasmons were responsible for resonant absorption and transmission but not reflection. Simulation of a sub-wavelength grating mirror using pulsed sources to map resonant spectra showed that neither coupled guided waves nor coupled isolated resonators accurately described the operation. However, a new model based on vertical propagation of lateral Bloch modes with zero phase progression efficiently characterized the device and provided principles for designing similar devices at other wavelengths.

  18. Statistical Compression for Climate Model Output

    NASA Astrophysics Data System (ADS)

    Hammerling, D.; Guinness, J.; Soh, Y. J.

    2017-12-01

    Numerical climate model simulations run at high spatial and temporal resolutions generate massive quantities of data. As our computing capabilities continue to increase, storing all of the data is not sustainable, and thus it is important to develop methods for representing the full datasets by smaller compressed versions. We propose a statistical compression and decompression algorithm based on storing a set of summary statistics as well as a statistical model describing the conditional distribution of the full dataset given the summary statistics. We decompress the data by computing conditional expectations and conditional simulations from the model given the summary statistics. Conditional expectations represent our best estimate of the original data but are subject to oversmoothing in space and time. Conditional simulations introduce realistic small-scale noise so that the decompressed fields are neither too smooth nor too rough compared with the original data. Considerable attention is paid to accurately modeling the original dataset (one year of daily mean temperature data), particularly with regard to the inherent spatial nonstationarity in global fields, and to determining the statistics to be stored, so that the variation in the original data can be closely captured while allowing for fast decompression and conditional emulation on modest computers.
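
    A toy one-dimensional Gaussian analogue of the compress/decompress cycle is sketched below: only block means (the summary statistics) and a covariance model are "stored", and the field is reconstructed via the conditional expectation plus a conditional simulation that restores small-scale variability. The covariance model and block size are illustrative.

```python
# Toy sketch of the compress/decompress idea: keep only block means of a
# Gaussian field (the "summary statistics") plus a covariance model, then
# decompress via the conditional expectation and a conditional simulation.
import numpy as np

rng = np.random.default_rng(3)
n, block = 120, 10
x = np.arange(n, dtype=float)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 15.0)    # field covariance model

y = np.linalg.cholesky(C + 1e-10 * np.eye(n)) @ rng.standard_normal(n)  # "original data"

H = np.kron(np.eye(n // block), np.full((1, block), 1.0 / block))       # block-mean operator
s_stats = H @ y                                        # stored summaries (compressed)

# Conditional distribution of the field given the summaries
C_ys = C @ H.T
C_ss = H @ C @ H.T
K = C_ys @ np.linalg.inv(C_ss)
cond_mean = K @ s_stats                                # best estimate (smooth)
cond_cov = C - K @ C_ys.T

# Conditional simulation: add realistic small-scale variability back in
w, V = np.linalg.eigh(cond_cov)
w = np.where(w > 1e-8, w, 0.0)                         # zero out constrained directions
cond_sim = cond_mean + V @ (np.sqrt(w) * rng.standard_normal(n))

print("RMSE of conditional mean:", np.sqrt(np.mean((cond_mean - y) ** 2)))
print("block means preserved:", np.allclose(H @ cond_sim, s_stats, atol=1e-6))
```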

  19. Cartesian-Grid Simulations of a Canard-Controlled Missile with a Free-Spinning Tail

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Aftosmis, Michael J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    The proposed paper presents a series of simulations of a geometrically complex, canard-controlled, supersonic missile with free-spinning tail fins. Time-dependent simulations were performed using an inviscid Cartesian-grid-based method with results compared to both experimental data and high-resolution Navier-Stokes computations. At fixed free stream conditions and canard deflections, the tail spin rate was iteratively determined such that the net rolling moment on the empennage is zero. This rate corresponds to the time-asymptotic rate of the free-to-spin fin system. After obtaining spin-averaged aerodynamic coefficients for the missile, the investigation seeks a fixed-tail approximation to the spin-averaged aerodynamic coefficients, and examines the validity of this approximation over a variety of freestream conditions.

  20. Results of a joint NOAA/NASA sounder simulation study

    NASA Technical Reports Server (NTRS)

    Phillips, N.; Susskind, Joel; Mcmillin, L.

    1988-01-01

    This paper presents the results of a joint NOAA and NASA sounder simulation study in which the accuracies of atmospheric temperature profiles and surface skin temperature measurements retrieved from two sounders were compared: (1) the currently used IR temperature sounder HIRS2 (High-resolution Infrared Radiation Sounder 2); and (2) the recently proposed high-spectral-resolution IR sounder AMTS (Advanced Moisture and Temperature Sounder). Simulations were conducted for both clear and partial cloud conditions. Data were analyzed at NASA using a physical inversion technique and at NOAA using a statistical technique. Results show significant improvement of AMTS compared to HIRS2 for both clear and cloudy conditions. The improvements are indicated by both methods of data analysis, but the physical retrievals outperform the statistical retrievals.

  1. Elevated temperature crack growth

    NASA Technical Reports Server (NTRS)

    Malik, S. N.; Vanstone, R. H.; Kim, K. S.; Laflen, J. H.

    1985-01-01

    The purpose is to determine the ability of currently available P-I integrals to correlate fatigue crack propagation under conditions that simulate the turbojet engine combustor liner environment. The utility of advanced fracture mechanics measurements will also be evaluated during the course of the program. To date, an appropriate specimen design, a crack displacement measurement method, and boundary condition simulation in the computational model of the specimen were achieved. Alloy 718 was selected as an analog material based on its ability to simulate high temperature behavior at lower temperatures. Tensile and cyclic tests were run at several strain rates so that an appropriate constitutive model could be developed. Suitable P-I integrals were programmed into a finite element post-processor for eventual comparison with experimental data.

  2. Method for Prediction of the Power Output from Photovoltaic Power Plant under Actual Operating Conditions

    NASA Astrophysics Data System (ADS)

    Obukhov, S. G.; Plotnikov, I. A.; Surzhikova, O. A.; Savkin, K. D.

    2017-04-01

    Solar photovoltaic technology is one of the most rapidly growing renewable sources of electricity and has practical application in various fields of human activity due to its high availability, huge potential and environmental compatibility. An original simulation model of a photovoltaic power plant has been developed to simulate and investigate the plant operating modes under actual operating conditions. The proposed model considers the impact of external climatic factors on the energy characteristics of the solar panels, which improves the accuracy of the power output prediction. The data obtained through simulation of photovoltaic power plant operation enable a well-reasoned choice of the required capacity of storage devices and determination of rational algorithms to control the energy complex.
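
    A minimal sketch of the kind of climate-dependent output model the abstract alludes to is shown below, using the common power-temperature-coefficient and NOCT cell-temperature relations; the module parameters are generic catalogue values and the relations are standard simplifications, not the paper's model.

```python
# Sketch: PV module output under actual operating conditions, using the common
# power-temperature-coefficient and NOCT cell-temperature relations. Module
# parameters are generic catalogue values, not the model from the paper.
def pv_power(g, t_amb, p_stc=250.0, gamma=-0.004, noct=45.0):
    """DC power of one module (no inverter losses).

    g      -- plane-of-array irradiance, W/m^2
    t_amb  -- ambient temperature, deg C
    p_stc  -- rated power at standard test conditions (1000 W/m^2, 25 C), W
    gamma  -- power temperature coefficient, 1/K
    noct   -- nominal operating cell temperature, deg C
    """
    t_cell = t_amb + g * (noct - 20.0) / 800.0
    return p_stc * (g / 1000.0) * (1.0 + gamma * (t_cell - 25.0))

# Example: same irradiance, cold winter day vs hot summer day
print("winter:", round(pv_power(800.0, -5.0), 1), "W")
print("summer:", round(pv_power(800.0, 35.0), 1), "W")
```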

  3. An optimized data fusion method and its application to improve lateral boundary conditions in winter for Pearl River Delta regional PM2.5 modeling, China

    NASA Astrophysics Data System (ADS)

    Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Zhai, Xinxin; Huang, Ran

    2018-05-01

    Lateral boundary conditions (LBCs) are essential for chemical transport models to simulate regional transport; however, they often contain large uncertainties. This study proposes an optimized data fusion approach to reduce the bias of LBCs by fusing gridded model outputs, from which the daughter domain's LBCs are derived, with ground-level measurements. The optimized data fusion approach follows the framework of a previous interpolation-based fusion method but improves it by using a bias kriging method to correct the spatial bias in gridded model outputs. Cross-validation shows that the optimized approach better estimates fused fields in areas with a large number of observations compared to the previous interpolation-based method. The optimized approach was applied to correct LBCs of PM2.5 concentrations for simulations in the Pearl River Delta (PRD) region as a case study. Evaluations show that the LBCs corrected by data fusion improve in-domain PM2.5 simulations in terms of magnitude and temporal variance. Correlation increases by 0.13-0.18 and fractional bias (FB) decreases by approximately 3%-15%. This study demonstrates the feasibility of applying data fusion to improve regional air quality modeling.

  4. A new CFD based non-invasive method for functional diagnosis of coronary stenosis.

    PubMed

    Xie, Xinzhou; Zheng, Minwen; Wen, Didi; Li, Yabing; Xie, Songyun

    2018-03-22

    Accurate functional diagnosis of coronary stenosis is vital for decision making in coronary revascularization. With recent advances in computational fluid dynamics (CFD), fractional flow reserve (FFR) can be derived non-invasively from coronary computed tomography angiography images (FFRCT) for functional measurement of stenosis. However, the accuracy of FFRCT is limited due to the approximate modeling of maximal hyperemia conditions. To overcome this problem, a new CFD-based non-invasive method is proposed. Instead of modeling the maximal hyperemia condition, a series of boundary conditions are specified and the simulated results are combined to provide a pressure-flow curve for a stenosis. Functional diagnosis of the stenosis is then assessed based on parameters derived from the obtained pressure-flow curve. The proposed method is applied to both idealized and patient-specific models, and validated with invasive FFR in six patients. Results show that additional hemodynamic information about the flow resistance of a stenosis is provided, which cannot be directly obtained from anatomical information. Parameters derived from the simulated pressure-flow curve show a linear and significant correlation with invasive FFR (r > 0.95, P < 0.05). The proposed method can assess flow resistance via the pressure-flow-curve-derived parameters without modeling of the maximal hyperemia condition, which is a promising new approach for non-invasive functional assessment of coronary stenosis.
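
    The abstract does not give the functional form of the pressure-flow curve; the sketch below assumes the common empirical stenosis relation dP = A·Q + B·Q² (viscous plus separation losses), fits it to a handful of invented CFD operating points, and evaluates an FFR-like pressure ratio at an assumed hyperemic flow.

```python
# Sketch: combining simulations run at several boundary conditions into a
# pressure-flow curve for a stenosis, then deriving a pressure-ratio index.
# The quadratic form dP = A*Q + B*Q**2 is a common empirical stenosis model;
# it is an assumption here, not necessarily the authors' form. Numbers are
# illustrative.
import numpy as np

# Simulated (flow, pressure drop) pairs from CFD runs at different
# outflow boundary conditions: Q in mL/s, dP in mmHg (illustrative values)
Q = np.array([0.5, 1.0, 2.0, 3.0, 4.0])
dP = np.array([1.2, 2.8, 7.0, 13.1, 21.0])

# Least-squares fit of dP = A*Q + B*Q^2 (no constant term)
X = np.column_stack([Q, Q ** 2])
A_coef, B_coef = np.linalg.lstsq(X, dP, rcond=None)[0]
print("A = %.2f mmHg/(mL/s), B = %.2f mmHg/(mL/s)^2" % (A_coef, B_coef))

# FFR-like index at an assumed hyperemic flow and aortic pressure
Pa, Q_hyper = 90.0, 3.5                       # mmHg, mL/s (assumed)
dP_hyper = A_coef * Q_hyper + B_coef * Q_hyper ** 2
print("predicted distal/proximal pressure ratio: %.2f" % ((Pa - dP_hyper) / Pa))
```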

  5. Signal-Detection Analyses of Conditional Discrimination and Delayed Matching-to-Sample Performance

    ERIC Educational Resources Information Center

    Alsop, Brent

    2004-01-01

    Quantitative analyses of stimulus control and reinforcer control in conditional discriminations and delayed matching-to-sample procedures often encounter a problem; it is not clear how to analyze data when subjects have not made errors. The present article examines two common methods for overcoming this problem. Monte Carlo simulations of…

  6. Over/Undervoltage and undervoltage shift of hybrid islanding detection method of distributed generation.

    PubMed

    Yingram, Manop; Premrudeepreechacharn, Suttichai

    2015-01-01

    Commonly used local islanding detection methods may be classified as active and passive methods. Passive methods do not perturb the system but have larger non-detection zones, whereas active methods have smaller non-detection zones but perturb the system. In this paper, a new hybrid method is proposed to solve this problem. An over/undervoltage detector (passive method) is used to initiate an undervoltage shift (active method), which changes the undervoltage shift of the inverter, when the passive method cannot clearly discriminate between islanding and other events in the system. Simulation results in MATLAB/SIMULINK show that the over/undervoltage and undervoltage-shift hybrid islanding detection method is very effective because it can determine the anti-islanding condition very quickly: ΔP/P > 38.41% could determine the anti-islanding condition within 0.04 s; ΔP/P < -24.39% could determine the anti-islanding condition within 0.04 s; -24.39% ≤ ΔP/P ≤ 38.41% could determine the anti-islanding condition within 0.08 s. The method perturbed the system only in the case of -24.39% ≤ ΔP/P ≤ 38.41%, in which the inverter control system injected an undervoltage-shift signal as necessary to check whether the occurring condition was an islanding condition or not.

  7. Two-scale homogenization to determine effective parameters of thin metallic-structured films

    PubMed Central

    Marigo, Jean-Jacques

    2016-01-01

    We present a homogenization method based on matched asymptotic expansion technique to derive effective transmission conditions of thin structured films. The method leads unambiguously to effective parameters of the interface which define jump conditions or boundary conditions at an equivalent zero thickness interface. The homogenized interface model is presented in the context of electromagnetic waves for metallic inclusions associated with Neumann or Dirichlet boundary conditions for transverse electric or transverse magnetic wave polarization. By comparison with full-wave simulations, the model is shown to be valid for thin interfaces up to thicknesses close to the wavelength. We also compare our effective conditions with the two-sided impedance conditions obtained in transmission line theory and to the so-called generalized sheet transition conditions. PMID:27616916

  8. Investigation of prescribed movement in fluid–structure interaction simulation for the human phonation process☆

    PubMed Central

    Zörner, S.; Kaltenbacher, M.; Döllinger, M.

    2013-01-01

    In a partitioned approach for computational fluid–structure interaction (FSI), the coupling between fluid and structure consumes substantial computational resources. Therefore, a convenient alternative is to reduce the problem to a pure flow simulation with preset movement and appropriate boundary conditions. This work investigates the impact of replacing the fully-coupled interface condition with a one-way coupling. To continue to capture structural movement and its effect on the flow field, prescribed wall movements from separate simulations and/or measurements are used. As an appropriate test case, we apply the different coupling strategies to the human phonation process, which is a highly complex interaction of airflow through the larynx and structural vibration of the vocal folds (VF). We obtain vocal fold vibrations from a fully-coupled simulation and use them as input data for the simplified simulation, i.e. just solving the fluid flow. All computations are performed with our research code CFS++, which is based on the finite element (FE) method. The presented results show that a pure fluid simulation with prescribed structural movement can substitute for the fully-coupled approach. However, caution must be used to ensure accurate boundary conditions on the interface, and we found that only a pressure-driven flow correctly responds to the physical effects when using specified motion. PMID:24204083

  9. COMSOL-Based Modeling and Simulation of SnO2/rGO Gas Sensor for Detection of NO2.

    PubMed

    Yaghouti Niyat, Farshad; Shahrokh Abadi, M H

    2018-02-01

    Despite SIESTA and COMSOL being increasingly used for the simulation of the sensing mechanism in gas sensors, there are no modeling and simulation reports in the literature for detection of NO2 with rGO/SnO2-based sensors. In the present study, we model, simulate, and characterize an NO2-sensing rGO/SnO2 gas sensor using COMSOL by solving Poisson's equations under the associated boundary conditions of mass, heat and electrical transitions. To perform the simulation, we use an exposure model to present the required NO2, a heat transfer model to obtain the reaction temperature, and an electrical model to characterize the sensor's response in the presence of the gas. We characterize the sensor's response in the presence of different concentrations of NO2 at different working temperatures and compare the results with the experimental data reported by Zhang et al. The results from the simulated sensor show good agreement with the real sensor, with some inconsistencies due to differences between the practical conditions in the real chamber and the conditions applied to the analytical equations. The results also show that the method can be used to define and predict the behavior of rGO-based gas sensors before undergoing the fabrication process.

  10. Permeability Sensitivity Functions and Rapid Simulation of Hydraulic-Testing Measurements Using Perturbation Theory

    NASA Astrophysics Data System (ADS)

    Escobar Gómez, J. D.; Torres-Verdín, C.

    2018-03-01

    Single-well pressure-diffusion simulators enable improved quantitative understanding of hydraulic-testing measurements in the presence of arbitrary spatial variations of rock properties. Simulators of this type implement robust numerical algorithms which are often computationally expensive, thereby making the solution of the forward modeling problem onerous and inefficient. We introduce a time-domain perturbation theory for anisotropic permeable media to efficiently and accurately approximate the transient pressure response of spatially complex aquifers. Although theoretically valid for any spatially dependent rock/fluid property, our single-phase flow study emphasizes arbitrary spatial variations of permeability and anisotropy, which constitute key objectives of hydraulic-testing operations. Contrary to time-honored techniques, the perturbation method invokes pressure-flow deconvolution to compute the background medium's permeability sensitivity function (PSF) with a single numerical simulation run. Subsequently, the first-order term of the perturbed solution is obtained by solving an integral equation that weighs the spatial variations of permeability with the spatial-dependent and time-dependent PSF. Finally, discrete convolution transforms the constant-flow approximation to arbitrary multirate conditions. Multidimensional numerical simulation studies for a wide range of single-well field conditions indicate that perturbed solutions can be computed in less than a few CPU seconds with relative errors in pressure of <5%, corresponding to perturbations in background permeability of up to two orders of magnitude. Our work confirms that the proposed joint perturbation-convolution (JPC) method is an efficient alternative to analytical and numerical solutions for accurate modeling of pressure-diffusion phenomena induced by Neumann or Dirichlet boundary conditions.
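
    The final step of the workflow, converting a constant-rate response to arbitrary multirate conditions by discrete convolution (superposition of rate changes), is sketched below with a line-source unit response standing in for the numerically computed one; all parameters and the rate schedule are illustrative.

```python
# Sketch: converting a constant-rate pressure response to arbitrary multirate
# conditions by discrete convolution (superposition), the last step of the
# workflow described above. The unit-rate response used here is the classical
# line-source (Theis-type) solution with illustrative parameters.
import numpy as np
from scipy.special import exp1

def unit_rate_drawdown(t, T=1e-4, S=1e-5, r=0.1):
    """Drawdown per unit pumping rate (line-source solution); t in seconds."""
    t = np.asarray(t, dtype=float)
    u = r ** 2 * S / (4.0 * T * np.maximum(t, 1e-12))
    return exp1(u) / (4.0 * np.pi * T)

# Piecewise-constant rate schedule: (start time s, rate m^3/s)
schedule = [(0.0, 2e-4), (3600.0, 5e-4), (7200.0, 0.0)]   # flow, step-up, shut-in

t_obs = np.linspace(60.0, 14400.0, 240)
drawdown = np.zeros_like(t_obs)
prev_rate = 0.0
for t_start, rate in schedule:
    dq = rate - prev_rate                      # superpose each rate change
    active = t_obs > t_start
    drawdown[active] += dq * unit_rate_drawdown(t_obs[active] - t_start)
    prev_rate = rate

print("drawdown at end of test: %.3f m" % drawdown[-1])
```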

  11. Workload Influence on Fatigue Related Psychological and Physiological Performance Changes of Aviators

    PubMed Central

    Liu, Xi-Wen; Bian, Ka; Wen, Zhi-Hong; Li, Xiao-Jing; Zhang, Zuo-Ming; Hu, Wen-Dong

    2014-01-01

    Objective We evaluated a variety of non-invasive physiological technologies and a series of test approaches for examination of aviator performance under conditions of mental workload, in order to provide a standard real-time test for physiological and psychological pilot fatigue assessment. Methods Twenty-one male aviators were selected for a simulated flight in a hypobaric cabin with artificial altitude conditions of 2400 meters above sea level. The simulated flight lasted for 1.5 h and was repeated twice with an intervening 0.5 h rest period outside the hypobaric cabin. Subjective criteria (a fatigue assessment instrument [FAI]) and objective criteria (a standing-position balance test as well as a critical flicker fusion frequency (CFF) test) were used for fatigue evaluation. Results No significant change was observed in the FAI scores before and after the simulated flight, indicating that there was no subjective feeling of fatigue among the participants. However, significant differences were observed in the standing-position balance and CFF tests among the subjects, suggesting that psychophysiological indexes can reflect mental changes caused by workload to a certain extent. The CFF test was the simplest and clearly indicated the occurrence of workload influences on pilot performance after a simulated flight. Conclusions Results showed that the CFF test was the easiest way to detect workload-caused mental changes after a simulated flight in a hypobaric cabin and reflected the psychophysiological state of aviators. We suggest that this test might be used as an effective routine method for evaluating workload influences on the mental condition of aviators. PMID:24505277

  12. Modelling of countermeasures for AFV protection against IR SACLOS systems

    NASA Astrophysics Data System (ADS)

    Walmsley, R.; Butters, B.; Ayling, R.; Richardson, M.

    2005-11-01

    Countermeasures consisting of obscurants and decoys can be used separately or in combination in attempting to defeat an attack on an Armoured Fighting Vehicle (AFV) by an IR SACLOS missile system. The engagement can occur over a wide range of conditions of wind speed, wind direction and AFV route relative to the SACLOS firing post, and the countermeasures need to be evaluated over the full set of conditions. Simulation with a man in the loop can be expensive and very time consuming. Without using a man in the loop, a fully computer-based simulation can be used to identify the scenarios in which defeat of the SACLOS system may be possible. These instances can be examined in more detail using the same simulation application or by using the conditions in a more detailed modelling and simulation facility. An IR imaging tracker is used instead of the man in the loop to simulate the SACLOS operator. The missile is guided onto the target either by the clear view of the AFV or by the AFV position predicted by the tracker while the AFV is obscured. The modelled scenarios feature a typical AFV modelled as a 3D object with a nominal 8-12 μm signature. The modelled obscurant munitions are hypothetical but represent achievable designs based on current obscurant material performance and dissemination methods. Some general results and conclusions about the method are presented, with a view to further work and the use of decoys with the obscurant to present a reappearing alternative target.

  13. Assessing the contribution of different factors in RegCM4.3 regional climate model projections using the Factor Separation method over the Med-CORDEX domain

    NASA Astrophysics Data System (ADS)

    Zsolt Torma, Csaba; Giorgi, Filippo

    2014-05-01

    A set of regional climate model (RCM) simulations applying dynamical downscaling of global climate model (GCM) simulations over the Mediterranean domain specified by the international initiative Coordinated Regional Downscaling Experiment (CORDEX) was completed with the Regional Climate Model RegCM, version RegCM4.3. Two GCMs were selected from the Coupled Model Intercomparison Project Phase 5 (CMIP5) ensemble to provide the driving fields for RegCM: HadGEM2-ES (HadGEM) and MPI-ESM-MR (MPI). The simulations consist of an ensemble including multiple physics configurations and different Representative Concentration Pathways (RCP4.5 and RCP8.5). In total 15 simulations were carried out with 7 model physics configurations with varying convection and land surface schemes. The horizontal grid spacing of the RCM simulations is 50 km and the simulated period in all cases is 1970-2100 (1970-2099 in the case of the HadGEM-driven simulations). This ensemble includes a combination of experiments in which different model components are changed individually and in combination, and thus lends itself optimally to the application of the Factor Separation (FS) method. This study applies the FS method to investigate the contributions of different factors, along with their synergy, to a set of RCM projections for the Mediterranean region. The FS method is applied to 6 projections for the period 1970-2100 performed with the regional model RegCM4.3 over the Med-CORDEX domain. Two different sets of factors are intercompared, namely the driving GCM boundary conditions (HadGEM and MPI) against two model physics settings (convection scheme and irrigation). We find that both the GCM driving conditions and the model physics provide important contributions, depending on the variable analyzed (surface air temperature and precipitation), the season (winter vs. summer) and the time horizon into the future, while the synergy term mostly tends to counterbalance the contributions of the individual factors. We demonstrate the usefulness of the FS method for assessing different sources of uncertainty in RCM-based regional climate projections.
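
    The bookkeeping of the two-factor Factor Separation decomposition is compact enough to show directly; in the sketch below, the two switched factors could be, e.g., the driving GCM and a physics setting, and the numerical values are invented.

```python
# Sketch: two-factor Factor Separation bookkeeping. f00 is the reference run,
# f10 and f01 switch on one factor each, f11 switches on both; the synergy is
# what remains after removing the pure contributions. Values are illustrative.
def factor_separation(f00, f10, f01, f11):
    pure_1 = f10 - f00                 # contribution of factor 1 alone
    pure_2 = f01 - f00                 # contribution of factor 2 alone
    synergy = f11 - f10 - f01 + f00    # joint (nonlinear) contribution
    return pure_1, pure_2, synergy

# Example with summer temperature-change signals (degrees C, made up):
p1, p2, syn = factor_separation(f00=2.0, f10=2.9, f01=2.4, f11=3.0)
print("pure factor 1: %.1f, pure factor 2: %.1f, synergy: %.1f" % (p1, p2, syn))
```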

  14. Reliability computation using fault tree analysis

    NASA Technical Reports Server (NTRS)

    Chelson, P. O.

    1971-01-01

    A method is presented for calculating event probabilities from an arbitrary fault tree. The method includes an analytical derivation of the system equation and is not a simulation program. The method can handle systems that incorporate standby redundancy and it uses conditional probabilities for computing fault trees where the same basic failure appears in more than one fault path.
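
    A toy illustration of why conditioning is needed when a basic event is shared between fault paths is given below: the top-event probability is computed by the law of total probability on the shared event and compared with a naive calculation that treats the paths as independent. The tree and probabilities are made up.

```python
# Toy sketch of fault-tree evaluation when one basic event appears in more
# than one fault path: condition on the shared event (law of total probability)
# so the two paths are independent within each branch. The tree is made up:
#   TOP = OR( AND(E_shared, A), AND(E_shared, B) )
def p_top_given(e_state, p_a, p_b):
    """P(TOP | shared event fixed to e_state)."""
    path1 = (1.0 if e_state else 0.0) * p_a
    path2 = (1.0 if e_state else 0.0) * p_b
    return 1.0 - (1.0 - path1) * (1.0 - path2)      # OR of now-independent paths

p_e, p_a, p_b = 0.1, 0.1, 0.2

exact = p_e * p_top_given(True, p_a, p_b) + (1 - p_e) * p_top_given(False, p_a, p_b)
naive = 1.0 - (1.0 - p_e * p_a) * (1.0 - p_e * p_b)  # wrongly treats paths as independent

print("with conditioning: %.6f" % exact)   # equals p_e * (p_a + p_b - p_a*p_b)
print("naive (ignores shared event): %.6f" % naive)
```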

  15. Simulated microgravity facilitates cell migration and neuroprotection after bone marrow stromal cell transplantation in spinal cord injury

    PubMed Central

    2013-01-01

    Introduction Recently, cell-based therapy has gained significant attention for the treatment of central nervous system diseases. Although bone marrow stromal cells (BMSCs) are considered to have good engraftment potential, challenges due to in vitro culturing, such as a decline in their functional potency, have been reported. Here, we investigated the efficacy of rat BMSCs (rBMSCs) cultured under simulated microgravity conditions, for transplantation into a rat model of spinal cord injury (SCI). Methods rBMSCs were cultured under two different conditions: standard gravity (1G) and simulated microgravity attained by using the 3D-clinostat. After 7 days of culture, the rBMSCs were analyzed morphologically, with RT-PCR and immunostaining, and were used for grafting. Adult rats were used for constructing SCI models by using a weight-dropping method and were grouped into three experimental groups for comparison. rBMSCs cultured under 1 g and simulated microgravity were transplanted intravenously immediately after SCI. We evaluated the hindlimb functional improvement for 3 weeks. Tissue repair after SCI was examined by calculating the cavity area ratio and immunohistochemistry. Results rBMSCs cultured under simulated microgravity expressed Oct-4 and CXCR4, in contrast to those cultured under 1 g conditions. Therefore, rBMSCs cultured under simulated microgravity were considered to be in an undifferentiated state and thus to possess high migration ability. After transplantation, grafted rBMSCs cultured under microgravity exhibited greater survival at the periphery of the lesion, and the motor functions of the rats that received these grafts improved significantly compared with the rats that received rBMSCs cultured in 1 g. In addition, rBMSCs cultured under microgravity were thought to have greater trophic effects on reestablishment and survival of host spinal neural tissues because cavity formations were reduced, and apoptosis-inhibiting factor expression was high at the periphery of the SCI lesion. Conclusions Here we show that transplantation of rBMSCs cultured under simulated microgravity facilitates functional recovery from SCI rather than those cultured under 1 g conditions. PMID:23548163

  16. Determination of Matric Suction and Saturation Degree for Unsaturated Soils, Comparative Study - Numerical Method versus Analytical Method

    NASA Astrophysics Data System (ADS)

    Chiorean, Vasile-Florin

    2017-10-01

    Matric suction is a soil parameter that influences the behaviour of unsaturated soils in terms of both shear strength and permeability. Knowing the variation of matric suction in the unsaturated soil zone is necessary for solving geotechnical problems such as the stability of unsaturated soil slopes or the bearing capacity of unsaturated foundation ground. The mathematical expression of the dependency between soil moisture content and matric suction (the soil water characteristic curve) is strongly nonlinear. This paper presents two methods to determine the variation of matric suction along the depth between the groundwater level and the ground surface. The first method is an analytical approach describing one-dimensional steady-state unsaturated infiltration between the groundwater level and the ground surface. Three different situations were simulated in terms of boundary conditions: precipitation (inflow conditions at the ground surface), evaporation (outflow conditions at the ground surface), and perfect equilibrium (no flow at the ground surface). The numerical method is a finite element analysis of steady-state, two-dimensional, unsaturated infiltration, with the same boundary conditions as in the analytical approach. In both methods, the equation proposed by van Genuchten-Mualem (1980) was adopted as the mathematical expression of the soil water characteristic curve, and the van Genuchten-Mualem model was also adopted for predicting the unsaturated soil permeability. The fitting parameters of these models were adopted from the RETC 6.02 software as a function of soil type. The analyses were performed with both methods for three major soil types: clay, silt and sand. For each soil type, analyses were conducted for three boundary conditions applied at the soil surface: inflow, outflow, and no flow. The results are presented in order to highlight the differences and similarities between the methods and the advantages and disadvantages of each one.
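
    For reference, the van Genuchten (1980) retention curve and the van Genuchten-Mualem relative permeability model mentioned above can be sketched as follows; the parameter values are only typical silt-like placeholders and are not the RETC 6.02 values used in the paper.

```python
import numpy as np

def vg_water_content(psi, theta_r, theta_s, alpha, n):
    """Van Genuchten (1980) soil water characteristic curve.
    psi: matric suction (positive, in the same length unit as 1/alpha)."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * np.abs(psi)) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * se

def vg_mualem_kr(psi, alpha, n):
    """Mualem relative permeability combined with the van Genuchten retention model."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * np.abs(psi)) ** n) ** (-m)
    return np.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

# Illustrative silt-like parameters (placeholders, not the fitted RETC values)
psi = np.linspace(0.01, 100.0, 200)                   # suction head [m]
theta = vg_water_content(psi, 0.034, 0.46, 1.6, 1.37)
kr = vg_mualem_kr(psi, 1.6, 1.37)
```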

  17. Highly accurate adaptive TOF determination method for ultrasonic thickness measurement

    NASA Astrophysics Data System (ADS)

    Zhou, Lianjie; Liu, Haibo; Lian, Meng; Ying, Yangwei; Li, Te; Wang, Yongqing

    2018-04-01

    Determining the time of flight (TOF) is critical for precise ultrasonic thickness measurement. However, the relatively low signal-to-noise ratio (SNR) of the received signals can induce significant TOF determination errors. In this paper, an adaptive time delay estimation method is developed to improve the accuracy of TOF determination. An improved variable step size adaptive algorithm with a comprehensive step size control function is proposed. Meanwhile, a cubic spline fitting approach is employed to alleviate the restriction of the finite sampling interval. Simulation experiments under different SNR conditions were conducted for performance analysis. The simulation results demonstrate the performance advantage of the proposed TOF determination method over existing methods. Compared with the conventional fixed step size algorithm and the Kwong and Aboulnasr algorithms, the steady-state mean square deviation of the proposed algorithm was generally lower, which makes it more suitable for TOF determination. Further, ultrasonic thickness measurement experiments were performed on aluminum alloy plates of various thicknesses. They indicated that the proposed TOF determination method is more robust even under low SNR conditions and that the ultrasonic thickness measurement accuracy can be significantly improved.
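
    The authors' variable step size adaptive algorithm is not reproduced here, but the following hedged sketch illustrates the second ingredient: a cubic spline fitted around the discrete cross-correlation peak to obtain a sub-sample TOF estimate and thereby alleviate the finite sampling interval. The signal parameters and noise level are invented for the example.

```python
import numpy as np
from scipy.signal import correlate
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

fs = 50e6                         # sampling rate [Hz] (illustrative)
t = np.arange(0, 20e-6, 1 / fs)
true_tof = 3.217e-6               # true delay between the two echoes [s]

pulse = np.exp(-((t - 2e-6) / 0.3e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)
rng = np.random.default_rng(0)
rx1 = pulse + 0.05 * rng.standard_normal(t.size)
rx2 = np.interp(t - true_tof, t, pulse) + 0.05 * rng.standard_normal(t.size)

# Discrete cross-correlation only gives the TOF to the nearest sample.
xc = correlate(rx2, rx1, mode="full")
lags = np.arange(-t.size + 1, t.size)
k = np.argmax(xc)

# A cubic spline around the peak refines the estimate to sub-sample accuracy.
idx = slice(k - 4, k + 5)
spline = CubicSpline(lags[idx], xc[idx])
res = minimize_scalar(lambda x: -spline(x),
                      bounds=(lags[k] - 1, lags[k] + 1), method="bounded")
tof_est = res.x / fs
print(f"coarse: {lags[k] / fs:.3e} s, refined: {tof_est:.3e} s")
```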

  18. A Monte Carlo simulation based inverse propagation method for stochastic model updating

    NASA Astrophysics Data System (ADS)

    Bao, Nuo; Wang, Chunjie

    2015-08-01

    This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected by means of the F-test and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM in combination with Monte Carlo simulation (MCS) reduces the amount of computation and makes rapid random sampling possible. The inverse uncertainty propagation is formulated as the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated simultaneously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, thus achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
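
    A minimal sketch of the same workflow on a one degree-of-freedom spring-mass surrogate is given below; a plain Nelder-Mead search stands in for the paper's hybrid particle-swarm/simplex optimizer, the polynomial response surface stands in for an expensive structural model, and all numbers are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
m = 2.0                                         # known mass [kg]
freq = lambda k: np.sqrt(k / m) / (2 * np.pi)   # natural frequency of a 1-DOF system

# "Test" statistics, generated here from an assumed true stiffness distribution.
f_test = freq(rng.normal(1.0e4, 8.0e2, 500))
f_test_mean, f_test_var = f_test.mean(), f_test.var()

# Cheap polynomial response surface of frequency vs. stiffness, standing in
# for an expensive model evaluated at design-of-experiments points.
k_doe = np.linspace(5.0e3, 2.0e4, 30)
rsm = np.polynomial.Polynomial.fit(k_doe, freq(k_doe), deg=4)

z = rng.standard_normal(2000)                   # common random numbers for the MCS

def objective(x):
    """Equally weighted mean/variance mismatch, evaluated by MCS on the RSM."""
    mu, sigma = x
    if sigma <= 0.0:
        return 1.0e9
    f_mc = rsm(mu + sigma * z)
    return ((f_mc.mean() - f_test_mean) / f_test_mean) ** 2 + \
           ((f_mc.var() - f_test_var) / f_test_var) ** 2

res = minimize(objective, x0=[8.0e3, 1.0e3], method="Nelder-Mead")
print("updated stiffness mean and std:", res.x)
```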

  19. Simulation of Triple Oxidation Ditch Wastewater Treatment Process

    NASA Astrophysics Data System (ADS)

    Yang, Yue; Zhang, Jinsong; Liu, Lixiang; Hu, Yongfeng; Xu, Ziming

    2010-11-01

    This paper presents the modeling mechanism and method for a sewage treatment system. A triple oxidation ditch process of a WWTP was simulated based on the activated sludge model ASM2D with the GPS-X software. In order to identify an adequate model structure to implement in the GPS-X environment, the oxidation ditch was divided into several completely stirred tank reactors depending on the distribution of aeration devices and dissolved oxygen concentration. The removal efficiencies of COD, ammonia nitrogen, total nitrogen, total phosphorus and SS were simulated by GPS-X with influent quality data of this WWTP from June to August 2009, in order to investigate the differences between the simulated results and the actual results. The results showed that the simulated values reflect the actual condition of the triple oxidation ditch process well. The mathematical modeling method is appropriate for predicting effluent quality and optimizing the process.

  20. Particle-In-Cell simulations of high pressure plasmas using graphics processing units

    NASA Astrophysics Data System (ADS)

    Gebhardt, Markus; Atteln, Frank; Brinkmann, Ralf Peter; Mussenbrock, Thomas; Mertmann, Philipp; Awakowicz, Peter

    2009-10-01

    Particle-In-Cell (PIC) simulations are widely used to understand the fundamental phenomena in low-temperature plasmas. Particularly plasmas at very low gas pressures are studied using PIC methods. The inherent drawback of these methods is that they are very time consuming -- certain stability conditions have to be satisfied. This holds even more for the PIC simulation of high pressure plasmas due to the very high collision rates. The simulations take a very long time to run on standard computers and require the help of computer clusters or supercomputers. Recent advances in the field of graphics processing units (GPUs) provide every personal computer with a highly parallel multiprocessor architecture for very little money. This architecture is freely programmable and can be used to implement a wide class of problems. In this paper we present the concepts of a fully parallel PIC simulation of high pressure plasmas using the benefits of GPU programming.

  1. Use of the forest vegetation simulator to quantify disturbance activities in state and transition models

    Treesearch

    Reuben Weisz; Don Vandendriesche

    2012-01-01

    The Forest Vegetation Simulator (FVS) has been used to provide rates of natural growth transitions under endemic conditions for use in State and Transition Models (STMs). This process has previously been presented. This paper expands on that work by citing the methods used to capture resultant vegetation states following disturbance activities; be it of natural causes...

  2. Simulation of Electric Propulsion Thrusters (Preprint)

    DTIC Science & Technology

    2011-02-07

    activity concerns the plumes produced by electric thrusters. Detailed information on the plumes is required for safe integration of the thruster...ground-based laboratory facilities. Device modelling also plays an important role in plume simulations by providing accurate boundary conditions at...methods used to model the flow of gas and plasma through electric propulsion devices. Discussion of the numerical analysis of other aspects of

  3. Kinetic energy definition in velocity Verlet integration for accurate pressure evaluation

    NASA Astrophysics Data System (ADS)

    Jung, Jaewoon; Kobayashi, Chigusa; Sugita, Yuji

    2018-04-01

    In molecular dynamics (MD) simulations, a proper definition of kinetic energy is essential for controlling pressure as well as temperature in the isothermal-isobaric condition. The virial theorem provides an equation that connects the average kinetic energy with the product of particle coordinate and force. In this paper, we show that the theorem is satisfied in MD simulations with a larger time step and holonomic constraints of bonds, only when a proper definition of kinetic energy is used. We provide a novel definition of kinetic energy, which is calculated from velocities at the half-time steps (t - Δt/2 and t + Δt/2) in the velocity Verlet integration method. MD simulations of a 1,2-dipalmitoyl-sn-phosphatidylcholine (DPPC) lipid bilayer and a water box using the kinetic energy definition could reproduce the physical properties in the isothermal-isobaric condition properly. We also develop a multiple time step (MTS) integration scheme with the kinetic energy definition. MD simulations with the MTS integration for the DPPC and water box systems provided the same quantities as the velocity Verlet integration method, even when the thermostat and barostat are updated less frequently.

  4. High-resolution surface analysis for extended-range downscaling with limited-area atmospheric models

    NASA Astrophysics Data System (ADS)

    Separovic, Leo; Husain, Syed Zahid; Yu, Wei; Fernig, David

    2014-12-01

    High-resolution limited-area model (LAM) simulations are frequently employed to downscale coarse-resolution objective analyses over a specified area of the globe using high-resolution computational grids. When LAMs are integrated over extended time frames, from months to years, they are prone to deviations in land surface variables that can be harmful to the quality of the simulated near-surface fields. Nudging of the prognostic surface fields toward a reference-gridded data set is therefore devised in order to prevent the atmospheric model from diverging from the expected values. This paper presents a method to generate high-resolution analyses of land-surface variables, such as surface canopy temperature, soil moisture, and snow conditions, to be used for the relaxation of lower boundary conditions in extended-range LAM simulations. The proposed method is based on performing offline simulations with an external surface model, forced with the near-surface meteorological fields derived from short-range forecast, operational analyses, and observed temperatures and humidity. Results show that the outputs of the surface model obtained in the present study have potential to improve the near-surface atmospheric fields in extended-range LAM integrations.

  5. Mean Line Pump Flow Model in Rocket Engine System Simulation

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.; Lavelle, Thomas M.

    2000-01-01

    A mean line pump flow modeling method has been developed to provide a fast capability for modeling turbopumps of rocket engines. Based on this method, a mean line pump flow code PUMPA has been written that can predict the performance of pumps at off-design operating conditions, given the loss of the diffusion system at the design point. The pump code can model axial flow inducers, mixed-flow and centrifugal pumps. The code can model multistage pumps in series. The code features rapid input setup and computer run time, and is an effective analysis and conceptual design tool. The map generation capability of the code provides the map information needed for interfacing with a rocket engine system modeling code. The off-design and multistage modeling capabilities of the code permit parametric design space exploration of candidate pump configurations and provide pump performance data for engine system evaluation. The PUMPA code has been integrated with the Numerical Propulsion System Simulation (NPSS) code and an expander rocket engine system has been simulated. The mean line pump flow code runs as an integral part of the NPSS rocket engine system simulation and provides key pump performance information directly to the system model at all operating conditions.

  6. Kinetic energy definition in velocity Verlet integration for accurate pressure evaluation.

    PubMed

    Jung, Jaewoon; Kobayashi, Chigusa; Sugita, Yuji

    2018-04-28

    In molecular dynamics (MD) simulations, a proper definition of kinetic energy is essential for controlling pressure as well as temperature in the isothermal-isobaric condition. The virial theorem provides an equation that connects the average kinetic energy with the product of particle coordinate and force. In this paper, we show that the theorem is satisfied in MD simulations with a larger time step and holonomic constraints of bonds, only when a proper definition of kinetic energy is used. We provide a novel definition of kinetic energy, which is calculated from velocities at the half-time steps (t - Δt/2 and t + Δt/2) in the velocity Verlet integration method. MD simulations of a 1,2-dipalmitoyl-sn-phosphatidylcholine (DPPC) lipid bilayer and a water box using the kinetic energy definition could reproduce the physical properties in the isothermal-isobaric condition properly. We also develop a multiple time step (MTS) integration scheme with the kinetic energy definition. MD simulations with the MTS integration for the DPPC and water box systems provided the same quantities as the velocity Verlet integration method, even when the thermostat and barostat are updated less frequently.
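
    A hedged sketch of the idea, on a one-dimensional harmonic oscillator rather than an MD system, is shown below; the particular half-step kinetic energy expression used (the product of the two bracketing half-step velocities) is an assumption for illustration and may differ from the definition derived in the paper.

```python
import numpy as np

# 1-D harmonic oscillator as a stand-in for an MD system integrated with velocity Verlet.
m, k, dt, nsteps = 1.0, 1.0, 0.2, 20000
force = lambda x: -k * x

x, v = 1.0, 0.0
f = force(x)
v_half_prev = None
ke_onstep, ke_halfstep = [], []

for _ in range(nsteps):
    v_half = v + 0.5 * dt * f / m           # v(t + dt/2)
    if v_half_prev is not None:
        ke_onstep.append(0.5 * m * v * v)                    # from the on-step v(t)
        ke_halfstep.append(0.5 * m * v_half_prev * v_half)   # from v(t - dt/2), v(t + dt/2)
    v_half_prev = v_half
    x += dt * v_half                        # x(t + dt)
    f = force(x)
    v = v_half + 0.5 * dt * f / m           # v(t + dt)

print("mean KE from on-step velocities:  ", np.mean(ke_onstep))
print("mean KE from half-step velocities:", np.mean(ke_halfstep))
```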

  7. Technical Note: A minimally invasive experimental system for pCO2 manipulation in plankton cultures using passive gas exchange (atmospheric carbon control simulator)

    NASA Astrophysics Data System (ADS)

    Love, Brooke A.; Olson, M. Brady; Wuori, Tristen

    2017-05-01

    As research into the biotic effects of ocean acidification has increased, the methods for simulating these environmental changes in the laboratory have multiplied. Here we describe the atmospheric carbon control simulator (ACCS) for the maintenance of plankton under controlled pCO2 conditions, designed for species sensitive to the physical disturbance introduced by the bubbling of cultures and for studies involving trophic interaction. The system consists of gas mixing and equilibration components coupled with large-volume atmospheric simulation chambers. These chambers allow gas exchange to counteract the changes in carbonate chemistry induced by the metabolic activity of the organisms. The system is relatively low cost, very flexible, and when used in conjunction with semi-continuous culture methods, it increases the density of organisms kept under realistic conditions, increases the allowable time interval between dilutions, and/or decreases the metabolically driven change in carbonate chemistry during these intervals. It accommodates a large number of culture vessels, which facilitate multi-trophic level studies and allow the tracking of variable responses within and across plankton populations to ocean acidification. It also includes components that increase the reliability of gas mixing systems using mass flow controllers.

  8. Smoothed Particle Hydrodynamics: A consistent model for interfacial multiphase fluid flow simulations

    NASA Astrophysics Data System (ADS)

    Krimi, Abdelkader; Rezoug, Mehdi; Khelladi, Sofiane; Nogueira, Xesús; Deligant, Michael; Ramírez, Luis

    2018-04-01

    In this work, a consistent Smoothed Particle Hydrodynamics (SPH) model to deal with the simulation of interfacial multiphase fluid flows is proposed. A modification to the Continuum Stress Surface formulation (CSS) [1] to enhance the stability near the fluid interface is developed in the framework of the SPH method. A non-conservative first-order consistency operator is used to compute the divergence of the surface stress tensor. This formulation benefits from all the advantages of the one proposed by Adami et al. [2] and, in addition, it can be applied to simulations with more than two fluid phases. Moreover, the generalized wall boundary conditions [3] are modified so as to be well adapted to multiphase fluid flows with different densities and viscosities, allowing the application of the technique to wall-bounded multiphase flows within the SPH method. We also present a particle redistribution strategy, as an extension of the damping technique presented in [3], to smooth the initial transient phase of gravitational multiphase fluid flow simulations. Several computational tests are investigated to show the accuracy, convergence and applicability of the proposed SPH interfacial multiphase model.

  9. Transport dissipative particle dynamics model for mesoscopic advection- diffusion-reaction problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhen, Li; Yazdani, Alireza; Tartakovsky, Alexandre M.

    2015-07-07

    We present a transport dissipative particle dynamics (tDPD) model for simulating mesoscopic problems involving advection-diffusion-reaction (ADR) processes, along with a methodology for implementation of the correct Dirichlet and Neumann boundary conditions in tDPD simulations. tDPD is an extension of the classic DPD framework with extra variables for describing the evolution of concentration fields. The transport of concentration is modeled by a Fickian flux and a random flux between particles, and an analytical formula is proposed to relate the mesoscopic concentration friction to the effective diffusion coefficient. To validate the present tDPD model and the boundary conditions, we perform three tDPD simulations of one-dimensional diffusion with different boundary conditions, and the results show excellent agreement with the theoretical solutions. We also perform two-dimensional simulations of ADR systems, and the tDPD results agree well with those obtained by the spectral element method. Finally, we present an application of the tDPD model to the dynamic process of blood coagulation involving 25 reacting species in order to demonstrate the potential of tDPD in simulating biological dynamics at the mesoscale. We find that the tDPD solution of this comprehensive 25-species coagulation model is only twice as computationally expensive as the DPD simulation of the hydrodynamics only, which is a significant advantage over available continuum solvers.

  10. Numerical Computation of Electric Field and Potential Along Silicone Rubber Insulators Under Contaminated and Dry Band Conditions

    NASA Astrophysics Data System (ADS)

    Arshad; Nekahi, A.; McMeekin, S. G.; Farzaneh, M.

    2016-09-01

    Electric field distribution along the insulator surface is considered one of the important parameters for the performance evaluation of outdoor insulators. In this paper, numerical simulations were carried out to investigate the electric field and potential distribution along silicone rubber insulators under various polluted and dry band conditions. Simulations were performed with the commercially available simulation package Comsol Multiphysics, based on the finite element method. Various pollution severity levels were simulated by changing the conductivity of the pollution layer. Dry bands of 2 cm width were inserted at the high voltage end, ground end, middle part, shed, sheath, and at the junction of shed and sheath to investigate the effect of dry band location and width on the electric field and potential distribution. Partial pollution conditions were simulated by applying a pollution layer on the top and bottom surfaces, respectively. The simulation results showed that the electric field intensity was higher at the metal electrode ends and at the junctions of dry bands, and that the potential distribution is nonlinear in the case of clean and partially polluted insulators and linear for a uniform pollution layer. Dry band formation affects both the potential and the electric field distribution. The power dissipated along the insulator surface and the resultant heat generation were also studied. The results of this study could be useful in the selection of polymeric insulators for contaminated environments.

  11. Evaluation of the Inertial Response of Variable-Speed Wind Turbines Using Advanced Simulation: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholbrock, Andrew K; Muljadi, Eduard; Gevorgian, Vahan

    In this paper, we focus on the temporary frequency support effect provided by wind turbine generators (WTGs) through the inertial response. With the implemented inertial control methods, the WTG is capable of increasing its active power output by releasing part of the stored kinetic energy when a frequency excursion occurs. The active power can be boosted temporarily above the maximum power points, but the rotor speed deceleration follows and an active power output deficiency occurs during the restoration of rotor kinetic energy. In this paper, we evaluate and compare the inertial response induced by two distinct inertial control methods using advanced simulation. In the first stage, the proposed inertial control methods are analyzed in offline simulation. Using an advanced wind turbine simulation program, FAST with TurbSim, the response of the researched wind turbine is comprehensively evaluated under turbulent wind conditions, and the impact on the turbine mechanical components is assessed. In the second stage, the inertial control is deployed on a real 600-kW wind turbine, the three-bladed Controls Advanced Research Turbine, which further verifies the inertial control through a hardware-in-the-loop simulation. Various inertial control methods can be effectively evaluated based on the proposed two-stage simulation platform, which combines offline simulation and real-time hardware-in-the-loop simulation. The simulation results also provide insights for designing inertial control for WTGs.

  12. Continuous estimation of evapotranspiration and gross primary productivity from an Unmanned Aerial System

    NASA Astrophysics Data System (ADS)

    Wang, S.; Bandini, F.; Jakobsen, J.; J Zarco-Tejada, P.; Liu, X.; Haugård Olesen, D.; Ibrom, A.; Bauer-Gottwein, P.; Garcia, M.

    2017-12-01

    Model prediction of evapotranspiration (ET) and gross primary productivity (GPP) using optical and thermal satellite imagery is biased towards clear-sky conditions. Unmanned Aerial Systems (UAS) can collect optical and thermal signals at unprecedented very high spatial resolution (< 1 meter) under sunny and cloudy weather conditions. However, methods to obtain model outputs between image acquisitions are still needed. This study uses UAS based optical and thermal observations to continuously estimate daily ET and GPP in a Danish willow forest for the entire growing season of 2016. A hexacopter equipped with multispectral and thermal infrared cameras and a real-time kinematic Global Navigation Satellite System was used. The Normalized Difference Vegetation Index (NDVI) and the Temperature Vegetation Dryness Index (TVDI) were used as proxies for leaf area index and soil moisture conditions, respectively. To obtain continuous daily records between UAS acquisitions, UAS surface temperature was assimilated with the ensemble Kalman filter into a prognostic land surface model (Noilhan and Planton, 1989), which relies on the force-restore method, to simulate continuous land surface temperature. NDVI was interpolated to daily time steps by the cubic spline method. Using these continuous datasets, a joint ET and GPP model, which combines the Priestley-Taylor Jet Propulsion Laboratory ET model (Fisher et al., 2008; Garcia et al., 2013) and the Light Use Efficiency GPP model (Potter et al., 1993), was applied. The simulated ET and GPP were compared with eddy covariance observations within the flux footprint. The simulated daily ET has a root mean square error (RMSE) of 14.41 W•m-2 and a correlation coefficient of 0.83. The simulated daily GPP has an RMSE of 1.56 g•C•m-2•d-1 and a correlation coefficient of 0.87. This study demonstrates the potential of UAS based multispectral and thermal mapping to continuously estimate ET and GPP for both sunny and cloudy weather conditions.
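
    The assimilation step can be illustrated with a minimal stochastic ensemble Kalman filter update for a single surface-temperature state observed directly; this sketch assumes a scalar state and invented numbers, and does not reproduce the force-restore land surface model of Noilhan and Planton (1989).

```python
import numpy as np

rng = np.random.default_rng(2)
n_ens, obs_err_std = 50, 1.0               # ensemble size, observation error [K]

# Forecast ensemble of surface temperature from a (placeholder) model step.
ens_forecast = 300.0 + 2.0 * rng.standard_normal(n_ens)
y_obs = 302.5                              # UAS-derived surface temperature [K]

def enkf_update(ens, y, r_std):
    """Stochastic EnKF update for a scalar state observed directly (H = 1)."""
    p_f = np.var(ens, ddof=1)              # forecast error variance
    gain = p_f / (p_f + r_std ** 2)        # Kalman gain
    y_pert = y + r_std * rng.standard_normal(ens.size)  # perturbed observations
    return ens + gain * (y_pert - ens)

ens_analysis = enkf_update(ens_forecast, y_obs, obs_err_std)
print("forecast mean:", ens_forecast.mean(), "analysis mean:", ens_analysis.mean())
```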

  13. RuleMonkey: software for stochastic simulation of rule-based models

    PubMed Central

    2010-01-01

    Background The system-level dynamics of many molecular interactions, particularly protein-protein interactions, can be conveniently represented using reaction rules, which can be specified using model-specification languages, such as the BioNetGen language (BNGL). A set of rules implicitly defines a (bio)chemical reaction network. The reaction network implied by a set of rules is often very large, and as a result, generation of the network implied by rules tends to be computationally expensive. Moreover, the cost of many commonly used methods for simulating network dynamics is a function of network size. Together these factors have limited application of the rule-based modeling approach. Recently, several methods for simulating rule-based models have been developed that avoid the expensive step of network generation. The cost of these "network-free" simulation methods is independent of the number of reactions implied by rules. Software implementing such methods is now needed for the simulation and analysis of rule-based models of biochemical systems. Results Here, we present a software tool called RuleMonkey, which implements a network-free method for simulation of rule-based models that is similar to Gillespie's method. The method is suitable for rule-based models that can be encoded in BNGL, including models with rules that have global application conditions, such as rules for intramolecular association reactions. In addition, the method is rejection free, unlike other network-free methods that introduce null events, i.e., steps in the simulation procedure that do not change the state of the reaction system being simulated. We verify that RuleMonkey produces correct simulation results, and we compare its performance against DYNSTOC, another BNGL-compliant tool for network-free simulation of rule-based models. We also compare RuleMonkey against problem-specific codes implementing network-free simulation methods. Conclusions RuleMonkey enables the simulation of rule-based models for which the underlying reaction networks are large. It is typically faster than DYNSTOC for benchmark problems that we have examined. RuleMonkey is freely available as a stand-alone application http://public.tgen.org/rulemonkey. It is also available as a simulation engine within GetBonNie, a web-based environment for building, analyzing and sharing rule-based models. PMID:20673321
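
    RuleMonkey itself is network-free, but its simulation step is described as similar to Gillespie's method; for orientation, the sketch below implements Gillespie's direct method on a small explicit reaction network (reversible binding A + B <-> AB), with rate constants chosen arbitrarily for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Explicit (not rule-based) network: A + B <-> AB
x = np.array([100, 80, 0])                 # counts of [A, B, AB]
stoich = np.array([[-1, -1, +1],           # A + B -> AB
                   [+1, +1, -1]])          # AB -> A + B
k_on, k_off = 0.001, 0.1

t, t_end, trajectory = 0.0, 50.0, []
while t < t_end:
    a = np.array([k_on * x[0] * x[1], k_off * x[2]])   # reaction propensities
    a0 = a.sum()
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)                     # time to next reaction
    r = rng.choice(len(a), p=a / a0)                   # which reaction fires
    x = x + stoich[r]
    trajectory.append((t, *x))
```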

  14. Discrete Molecular Dynamics Approach to the Study of Disordered and Aggregating Proteins.

    PubMed

    Emperador, Agustí; Orozco, Modesto

    2017-03-14

    We present a refinement of the Coarse Grained PACSAB force field for Discrete Molecular Dynamics (DMD) simulations of proteins in aqueous conditions. Like the original version, the refined method provides a good representation of the structure and dynamics of folded proteins, but it provides much better representations of a variety of unfolded proteins, including some very large ones that are impossible to analyze by atomistic simulation methods. The PACSAB/DMD method also reproduces aggregation properties accurately, providing good pictures of the structural ensembles of proteins showing a folded core and an intrinsically disordered region. The combination of accuracy and speed makes the method presented here a good alternative for the exploration of unstructured protein systems.

  15. Efficient Voronoi volume estimation for DEM simulations of granular materials under confined conditions

    PubMed Central

    Frenning, Göran

    2015-01-01

    When the discrete element method (DEM) is used to simulate confined compression of granular materials, the need arises to estimate the void space surrounding each particle with Voronoi polyhedra. This entails recurring Voronoi tessellation with small changes in the geometry, resulting in a considerable computational overhead. To overcome this limitation, we propose a method with the following features:
    • A local determination of the polyhedron volume is used, which considerably simplifies implementation of the method.
    • A linear approximation of the polyhedron volume is utilised, with intermittent exact volume calculations when needed.
    • The method allows highly accurate volume estimates to be obtained at a considerably reduced computational cost.
    PMID:26150975
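
    The bookkeeping pattern behind the second and third features, cheap linear volume updates punctuated by exact recalculations, can be sketched as below; both callbacks (`exact_volume`, `dvolume_linear`) and the displacement-based trigger are hypothetical placeholders, not the criterion used in the paper.

```python
import numpy as np

def track_cell_volume(exact_volume, dvolume_linear, displacements, skin=0.1):
    """Bookkeeping pattern only: cheap linear volume updates between
    intermittent exact Voronoi tessellations.

    exact_volume(step)   -> exact cell volume (hypothetical callback, e.g. a
                            full Voronoi build of the DEM packing)
    dvolume_linear(step) -> first-order estimate of the volume change over one
                            step (hypothetical, e.g. face area times normal motion)
    displacements[step]  -> particle displacement magnitude at that step
    """
    v = exact_volume(0)
    moved, history = 0.0, [v]
    for step in range(1, len(displacements)):
        moved += displacements[step]
        if moved > skin:                     # drifted too far from the last exact geometry
            v = exact_volume(step)           # intermittent exact calculation
            moved = 0.0
        else:
            v += dvolume_linear(step)        # cheap linear update
        history.append(v)
    return history

# Synthetic demonstration with placeholder callbacks.
rng = np.random.default_rng(4)
disp = np.abs(0.02 * rng.standard_normal(1000))
vols = track_cell_volume(lambda s: 1.0 - 0.0004 * s,   # synthetic "exact" volume
                         lambda s: -0.0004,            # synthetic per-step linear change
                         disp)
```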

  16. Estimating current and future streamflow characteristics at ungaged sites, central and eastern Montana, with application to evaluating effects of climate change on fish populations

    USGS Publications Warehouse

    Sando, Roy; Chase, Katherine J.

    2017-03-23

    A common statistical procedure for estimating streamflow statistics at ungaged locations is to develop a relational model between streamflow and drainage basin characteristics at gaged locations using least squares regression analysis; however, least squares regression methods are parametric and make constraining assumptions about the data distribution. The random forest regression method provides an alternative nonparametric method for estimating streamflow characteristics at ungaged sites and requires that the data meet fewer statistical conditions than least squares regression methods. Random forest regression analysis was used to develop predictive models for 89 streamflow characteristics using Precipitation-Runoff Modeling System simulated streamflow data and drainage basin characteristics at 179 sites in central and eastern Montana. The predictive models were developed from streamflow data simulated for current (baseline, water years 1982–99) conditions and three future periods (water years 2021–38, 2046–63, and 2071–88) under three different climate-change scenarios. These predictive models were then used to predict streamflow characteristics for baseline conditions and three future periods at 1,707 fish sampling sites in central and eastern Montana. The average root mean square error for all predictive models was about 50 percent. When streamflow predictions at 23 fish sampling sites were compared to nearby locations with simulated data, the mean relative percent difference was about 43 percent. When predictions were compared to streamflow data recorded at 21 U.S. Geological Survey streamflow-gaging stations outside of the calibration basins, the average mean absolute percent error was about 73 percent.
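
    A minimal sketch of the statistical core, a random forest regression of a streamflow statistic on basin characteristics, is shown below using scikit-learn; the predictors, the synthetic target, and the data sizes are invented stand-ins for the PRMS-simulated data set.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)

# Synthetic stand-in for the simulated sites: basin characteristics -> a
# streamflow statistic (e.g. mean annual flow). Real predictors would include
# drainage area, precipitation, elevation, and so on.
n_sites = 179
X = rng.uniform([10, 300, 600], [5000, 900, 2500], size=(n_sites, 3))   # area, precip, elev
q_mean = 0.002 * X[:, 0] * X[:, 1] / 500 + rng.normal(0, 2, n_sites)    # synthetic target

X_train, X_test, y_train, y_test = train_test_split(X, q_mean, random_state=0)
rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X_train, y_train)

pred = rf.predict(X_test)
rmse = mean_squared_error(y_test, pred) ** 0.5
print(f"OOB R^2: {rf.oob_score_:.2f}, test RMSE: {rmse:.2f}")
```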

  17. An Integrated Approach to Swept Wing Icing Simulation

    NASA Technical Reports Server (NTRS)

    Potapczuk, Mark G.; Broeren, Andy P.

    2017-01-01

    This presentation describes the various elements of a simulation approach used to develop a database of ice shape geometries and the resulting aerodynamic performance data for a representative commercial transport wing model exposed to a variety of icing conditions. Methods for capturing full three-dimensional ice shape geometries, geometry interpolation along the span of the wing, and creation of artificial ice shapes based upon that geometric data were developed for this effort. The icing conditions used for this effort were representative of actual ice shape encounter scenarios and run the gamut from ice roughness to full three-dimensional scalloped ice shapes.

  18. Numerical simulations of the first operational conditions of the negative ion test facility SPIDER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serianni, G., E-mail: gianluigi.serianni@igi.cnr.it; Agostinetti, P.; Antoni, V.

    2016-02-15

    In view of the realization of the negative ion beam injectors for ITER, a test facility, named SPIDER, is under construction in Padova (Italy) to study and optimize production and extraction of negative ions. The present paper is devoted to the analysis of the expected first operations of SPIDER in terms of single-beamlet and multiple-beamlet simulations of the hydrogen beam optics in various operational conditions. The effectiveness of the methods adopted to compensate for the magnetic deflection of the particles is also assessed. Indications for a sequence of the experimental activities are obtained.

  19. Numerical simulations of the first operational conditions of the negative ion test facility SPIDER

    NASA Astrophysics Data System (ADS)

    Serianni, G.; Agostinetti, P.; Antoni, V.; Baltador, C.; Cavenago, M.; Chitarin, G.; Marconato, N.; Pasqualotto, R.; Sartori, E.; Toigo, V.; Veltri, P.

    2016-02-01

    In view of the realization of the negative ion beam injectors for ITER, a test facility, named SPIDER, is under construction in Padova (Italy) to study and optimize production and extraction of negative ions. The present paper is devoted to the analysis of the expected first operations of SPIDER in terms of single-beamlet and multiple-beamlet simulations of the hydrogen beam optics in various operational conditions. The effectiveness of the methods adopted to compensate for the magnetic deflection of the particles is also assessed. Indications for a sequence of the experimental activities are obtained.

  20. Topological analysis of nuclear pasta phases

    NASA Astrophysics Data System (ADS)

    Kycia, Radosław A.; Kubis, Sebastian; Wójcik, Włodzimierz

    2017-08-01

    In this article, an analysis of the results of numerical simulations of pasta phases using algebraic topology methods is presented. These considerations suggest that some phases can be further split into subphases and should therefore be treated with more refinement in numerical simulations. The results presented in this article can also be used to relate the Euler characteristic obtained from numerical simulations to the geometry of the phases. The Betti numbers are used as they provide a finer characterization of the phases. It is also shown that different boundary conditions give different outcomes.

  1. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum Chi-square estimation procedure produces unbiased…

  2. Separate versus Concurrent Calibration Methods in Vertical Scaling.

    ERIC Educational Resources Information Center

    Karkee, Thakur; Lewis, Daniel M.; Hoskens, Machteld; Yao, Lihua; Haug, Carolyn

    Two methods to establish a common scale across grades within a content area using a common item design (separate and concurrent) have previously been studied under simulated conditions. Separate estimation is accomplished through separate calibration and grade-by-grade chained linking. Concurrent calibration established the vertical scale in a…

  3. Patient-Specific Simulations of Reactivity in Models of the Pulmonary Vasculature: A 3-D Numerical Study with Fluid-Structure Interaction

    NASA Astrophysics Data System (ADS)

    Hunter, Kendall; Zhang, Yanhang; Lanning, Craig

    2005-11-01

    Insight into the progression of pulmonary hypertension may be obtained from thorough study of vascular flow during reactivity testing, an invasive diagnostic procedure which can dramatically alter vascular hemodynamics. Diagnostic imaging methods, however, are limited in their ability to provide extensive data. Here we present detailed flow and wall deformation results from simulations of pulmonary arteries undergoing this procedure. Patient-specific 3-D geometric reconstructions of the first four branches of the pulmonary vasculature were obtained clinically and meshed for use with computational software. Transient simulations in normal and reactive states were completed for four such models with patient-specific velocity inlet conditions and flow impedance exit conditions. A microstructurally based orthotropic hyperelastic model that simulates pulmonary artery mechanics under normotensive and hypoxic hypertensive conditions treated wall constitutive changes due to pressure reactivity and arterial remodeling. Pressure gradients, velocity fields, arterial deformation, and complete topography of shear stress were obtained. These models provide richer detail of hemodynamics than can be obtained from current imaging techniques, and should allow maximum characterization of vascular function in the clinical situation.

  4. Inverted initial conditions: Exploring the growth of cosmic structure and voids

    DOE PAGES

    Pontzen, Andrew; Roth, Nina; Peiris, Hiranya V.; ...

    2016-05-18

    We introduce and explore “paired” cosmological simulations. A pair consists of an A and B simulation with initial conditions related by the inversion δ_A(x, t_initial) = –δ_B(x, t_initial) (underdensities substituted for overdensities and vice versa). We argue that the technique is valuable for improving our understanding of cosmic structure formation. The A and B fields are by definition equally likely draws from ΛCDM initial conditions, and in the linear regime evolve identically up to the overall sign. As nonlinear evolution takes hold, a region that collapses to form a halo in simulation A will tend to expand to create a void in simulation B. Applications include (i) contrasting the growth of A-halos and B-voids to test excursion-set theories of structure formation, (ii) cross-correlating the density field of the A and B universes as a novel test for perturbation theory, and (iii) canceling error terms by averaging power spectra between the two boxes. Furthermore, generalizations of the method to more elaborate field transformations are suggested.
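
    Generating such a pair is straightforward once an initial overdensity field exists, as the sketch below illustrates with a toy power-law Gaussian random field (not a proper ΛCDM transfer function); the paired field is simply the sign-flipped copy.

```python
import numpy as np

rng = np.random.default_rng(6)
n, boxsize = 64, 200.0                      # grid cells per side, box size (illustrative)

# Gaussian random overdensity field with a toy power-law spectrum,
# standing in for proper LambdaCDM initial conditions.
k = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
kmag = np.sqrt(kx**2 + ky**2 + kz**2)
kmag[0, 0, 0] = np.inf                      # remove the k = 0 mode

power = kmag ** -2.0                        # toy P(k), not a real transfer function
noise = np.fft.fftn(rng.standard_normal((n, n, n)))
delta_A = np.real(np.fft.ifftn(noise * np.sqrt(power)))

# The paired simulation simply flips the sign of the initial overdensity field.
delta_B = -delta_A                          # underdensities <-> overdensities

print("corr(delta_A, delta_B) =", np.corrcoef(delta_A.ravel(), delta_B.ravel())[0, 1])
```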

  5. Filters for Improvement of Multiscale Data from Atomistic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Reynolds, Daniel R.

    Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can result in significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. Improving the accuracy of the filtered simulation data leads to dramatic computational savings by allowing shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.

  6. Filters for Improvement of Multiscale Data from Atomistic Simulations

    DOE PAGES

    Gardner, David J.; Reynolds, Daniel R.

    2017-01-05

    Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can result in significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. Improving the accuracy of the filtered simulation data leads to dramatic computational savings by allowing shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.
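
    The basic spectral filtering step can be sketched as a low-pass truncation of the Fourier coefficients of the noisy data; the cutoff below is fixed by hand, whereas the paper provides an automatic method for choosing the optimum filtering level under additive white noise.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1024
t = np.linspace(0.0, 1.0, n, endpoint=False)

# Smooth "continuum" signal plus additive white noise, mimicking noisy
# coarse-scale data returned by short atomistic runs.
clean = np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)
noisy = clean + 0.3 * rng.standard_normal(n)

def spectral_lowpass(signal, keep_modes):
    """Zero all Fourier modes above `keep_modes` and transform back."""
    coeffs = np.fft.rfft(signal)
    coeffs[keep_modes + 1:] = 0.0
    return np.fft.irfft(coeffs, n=signal.size)

filtered = spectral_lowpass(noisy, keep_modes=10)   # cutoff chosen by hand here
print("RMS error noisy:   ", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMS error filtered:", np.sqrt(np.mean((filtered - clean) ** 2)))
```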

  7. Methods for Solving Gas Damping Problems in Perforated Microstructures Using a 2D Finite-Element Solver

    PubMed Central

    Veijola, Timo; Råback, Peter

    2007-01-01

    We present a straightforward method to solve gas damping problems for perforated structures in two dimensions (2D) utilising a Perforation Profile Reynolds (PPR) solver. The PPR equation is an extended Reynolds equation that includes additional terms modelling the leakage flow through the perforations, and variable diffusivity and compressibility profiles. The solution method consists of two phases: 1) determination of the specific admittance profile and relative diffusivity (and relative compressibility) profiles due to the perforation, and 2) solution of the PPR equation with a FEM solver in 2D. Rarefied gas corrections in the slip-flow region are also included. Analytic profiles for circular and square holes with slip conditions are presented in the paper. To verify the method, square perforated dampers with 16–64 holes were simulated with a three-dimensional (3D) Navier-Stokes solver, a homogenised extended Reynolds solver, and a 2D PPR solver. Cases for both translational (in normal to the surfaces) and torsional motion were simulated. The presented method extends the region of accurate simulation of perforated structures to cases where the homogenisation method is inaccurate and the full 3D Navier-Stokes simulation is too time-consuming.

  8. Constant pressure and temperature discrete-time Langevin molecular dynamics

    NASA Astrophysics Data System (ADS)

    Grønbech-Jensen, Niels; Farago, Oded

    2014-11-01

    We present a new and improved method for simultaneous control of temperature and pressure in molecular dynamics simulations with periodic boundary conditions. The thermostat-barostat equations are built on our previously developed stochastic thermostat, which has been shown to provide correct statistical configurational sampling for any time step that yields stable trajectories. Here, we extend the method and develop a set of discrete-time equations of motion for both particle dynamics and system volume in order to seek pressure control that is insensitive to the choice of the numerical time step. The resulting method is simple, practical, and efficient. The method is demonstrated through direct numerical simulations of two characteristic model systems—a one-dimensional particle chain for which exact statistical results can be obtained and used as benchmarks, and a three-dimensional system of Lennard-Jones interacting particles simulated in both solid and liquid phases. The results, which are compared against the method of Kolb and Dünweg [J. Chem. Phys. 111, 4453 (1999)], show that the new method behaves according to the objective, namely that acquired statistical averages and fluctuations of configurational measures are accurate and robust against the chosen time step applied to the simulation.

  9. A Numerical Study on Toppling Failure of a Jointed Rock Slope by Using the Distinct Lattice Spring Model

    NASA Astrophysics Data System (ADS)

    Lian, Ji-Jian; Li, Qin; Deng, Xi-Fei; Zhao, Gao-Feng; Chen, Zu-Yu

    2018-02-01

    In this work, toppling failure of a jointed rock slope is studied by using the distinct lattice spring model (DLSM). The gravity increase method (GIM) with a sub-step loading scheme is implemented in the DLSM to mimic the loading conditions of a centrifuge test. A classical centrifuge test for a jointed rock slope, previously simulated by the finite element method and the discrete element model, is simulated by using the GIM-DLSM. Reasonable boundary conditions are obtained through detailed comparisons among existing numerical solutions with experimental records. With calibrated boundary conditions, the influences of the tensional strength of the rock block, cohesion and friction angles of the joints, as well as the spacing and inclination angles of the joints, on the flexural toppling failure of the jointed rock slope are investigated by using the GIM-DLSM, leading to some insight into evaluating the state of flexural toppling failure for a jointed slope and effectively preventing the flexural toppling failure of jointed rock slopes.

  10. Estimating zero-g flow rates in open channels having capillary pumping vanes

    NASA Astrophysics Data System (ADS)

    Srinivasan, Radhakrishnan

    2003-02-01

    In vane-type surface tension propellant management devices (PMD) commonly used in satellite fuel tanks, the propellant is transported along guiding vanes from a reservoir at the inlet of the device to a sump at the outlet from where it is pumped to the satellite engine. The pressure gradient driving this free-surface flow under zero-gravity (zero-g) conditions is generated by surface tension and is related to the differential curvatures of the propellant-gas interface at the inlet and outlet of the PMD. A new semi-analytical procedure is prescribed for accurately calculating the extremely small fuel flow rates under reasonably idealized conditions. Convergence of the algorithm is demonstrated by detailed numerical calculations. Owing to the substantial cost and the technical hurdles involved in accurately estimating these minuscule flow rates by either direct numerical simulation or by experimental methods which simulate zero-g conditions in the lab, it is expected that the proposed method will be an indispensable tool in the design and operation of satellite fuel tanks.

  11. An Investigation of a Hybrid Mixing Model for PDF Simulations of Turbulent Premixed Flames

    NASA Astrophysics Data System (ADS)

    Zhou, Hua; Li, Shan; Wang, Hu; Ren, Zhuyin

    2015-11-01

    Predictive simulations of turbulent premixed flames over a wide range of Damköhler numbers in the framework of the Probability Density Function (PDF) method still remain challenging due to the deficiency in current micro-mixing models. In this work, a hybrid micro-mixing model, valid in both the flamelet regime and the broken reaction zone regime, is proposed. A priori testing of this model is first performed by examining the conditional scalar dissipation rate and conditional scalar diffusion in a 3-D direct numerical simulation dataset of a temporally evolving turbulent slot jet flame of lean premixed H2-air in the thin reaction zone regime. Then, this new model is applied to PDF simulations of the Piloted Premixed Jet Burner (PPJB) flames, which are a set of highly sheared turbulent premixed flames that feature strong turbulence-chemistry interaction at high Reynolds and Karlovitz numbers. Supported by NSFC 51476087 and NSFC 91441202.

  12. Active illuminated space object imaging and tracking simulation

    NASA Astrophysics Data System (ADS)

    Yue, Yufang; Xie, Xiaogang; Luo, Wen; Zhang, Feizhou; An, Jianzhu

    2016-10-01

    Optical earth-imaging simulation of a space target in orbit and its extraction under laser illumination conditions are discussed. Based on the orbit and corresponding attitude of a satellite, a 3D imaging rendering of the satellite was built. A general simulation platform was developed, adaptable to different 3D satellite models and to the relative position between the satellite and the earth-based detector system. A unified parallel projection technique is proposed in this paper. Furthermore, we note that the random optical distribution under laser active illumination is a challenge for object discrimination, the strong randomness of the laser illumination speckles being the primary factor. The combined effects of a multi-frame accumulation process and several tracking methods, such as Meanshift tracking, contour poid, and filter deconvolution, were simulated. Comparison of the results illustrates that the combination of multi-frame accumulation and contour poid is recommendable for laser active illuminated images, providing high tracking precision and stability for multiple object attitudes.

  13. Identifying effective connectivity parameters in simulated fMRI: a direct comparison of switching linear dynamic system, stochastic dynamic causal, and multivariate autoregressive models

    PubMed Central

    Smith, Jason F.; Chen, Kewei; Pillai, Ajay S.; Horwitz, Barry

    2013-01-01

    The number and variety of connectivity estimation methods are likely to continue to grow over the coming decade. Comparisons between methods are necessary to prune this growth to only the most accurate and robust methods. However, the nature of connectivity is elusive, with different methods potentially attempting to identify different aspects of connectivity. Commonalities of connectivity definitions across methods, upon which to base direct comparisons, can be difficult to derive. Here, we explicitly define “effective connectivity” using a common set of observation and state equations that are appropriate for three connectivity methods: dynamic causal modeling (DCM), multivariate autoregressive modeling (MAR), and switching linear dynamic systems for fMRI (sLDSf). In addition, while deriving this set, we show how many other popular functional and effective connectivity methods are actually simplifications of these equations. We discuss implications of these connections for the practice of using one method to simulate data for another method. After mathematically connecting the three effective connectivity methods, simulated fMRI data with varying numbers of regions and task conditions is generated from the common equation. This simulated data explicitly contains the type of connectivity that the three models were intended to identify. Each method is applied to the simulated data sets and the accuracy of parameter identification is analyzed. All methods perform above chance levels at identifying correct connectivity parameters. The sLDSf method was superior in parameter estimation accuracy to both DCM and MAR for all types of comparisons. PMID:23717258
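
    Of the three methods, the multivariate autoregressive (MAR) model is the simplest to sketch; the example below simulates a first-order MAR model with a known connectivity matrix and recovers it by least squares, with the matrix and noise level chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)
n_regions, n_time = 3, 2000

# Ground-truth connectivity matrix of a stable first-order MAR model.
A_true = np.array([[ 0.5,  0.2,  0.0],
                   [-0.1,  0.4,  0.3],
                   [ 0.0,  0.1,  0.6]])

x = np.zeros((n_time, n_regions))
for t in range(1, n_time):
    x[t] = A_true @ x[t - 1] + 0.1 * rng.standard_normal(n_regions)

# Least-squares estimate of the MAR(1) coefficients: x_t approx A x_{t-1}.
X_past, X_now = x[:-1], x[1:]
A_hat = np.linalg.lstsq(X_past, X_now, rcond=None)[0].T

print(np.round(A_hat, 2))   # should be close to A_true
```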

  14. Models and Methods for Adaptive Management of Individual and Team-Based Training Using a Simulator

    NASA Astrophysics Data System (ADS)

    Lisitsyna, L. S.; Smetyuh, N. P.; Golikov, S. P.

    2017-05-01

    A review of research on adaptive individual and team-based training shows that, both in Russia and abroad, individual and team-based training and retraining of AASTM operators usually includes production training, training in general computer and office equipment skills, and simulator training, including virtual simulators that use computers to simulate real-world manufacturing situations; as a rule, the evaluation of AASTM operators’ knowledge is determined by the completeness and adequacy of their actions under the simulated conditions. Such an approach to training and retraining of AASTM operators provides only technical training of operators and testing of their knowledge based on assessing their actions in a simulated environment.

  15. Knowledge Based Cloud FE Simulation of Sheet Metal Forming Processes.

    PubMed

    Zhou, Du; Yuan, Xi; Gao, Haoxiang; Wang, Ailing; Liu, Jun; El Fakir, Omer; Politis, Denis J; Wang, Liliang; Lin, Jianguo

    2016-12-13

    The use of Finite Element (FE) simulation software to adequately predict the outcome of sheet metal forming processes is crucial to enhancing the efficiency and lowering the development time of such processes, whilst reducing costs involved in trial-and-error prototyping. Recent focus on the substitution of steel components with aluminum alloy alternatives in the automotive and aerospace sectors has increased the need to simulate the forming behavior of such alloys for ever more complex component geometries. However these alloys, and in particular their high strength variants, exhibit limited formability at room temperature, and high temperature manufacturing technologies have been developed to form them. Consequently, advanced constitutive models are required to reflect the associated temperature and strain rate effects. Simulating such behavior is computationally very expensive using conventional FE simulation techniques. This paper presents a novel Knowledge Based Cloud FE (KBC-FE) simulation technique that combines advanced material and friction models with conventional FE simulations in an efficient manner thus enhancing the capability of commercial simulation software packages. The application of these methods is demonstrated through two example case studies, namely: the prediction of a material's forming limit under hot stamping conditions, and the tool life prediction under multi-cycle loading conditions.

  16. Knowledge Based Cloud FE Simulation of Sheet Metal Forming Processes

    PubMed Central

    Zhou, Du; Yuan, Xi; Gao, Haoxiang; Wang, Ailing; Liu, Jun; El Fakir, Omer; Politis, Denis J.; Wang, Liliang; Lin, Jianguo

    2016-01-01

    The use of Finite Element (FE) simulation software to adequately predict the outcome of sheet metal forming processes is crucial to enhancing the efficiency and lowering the development time of such processes, whilst reducing costs involved in trial-and-error prototyping. Recent focus on the substitution of steel components with aluminum alloy alternatives in the automotive and aerospace sectors has increased the need to simulate the forming behavior of such alloys for ever more complex component geometries. However these alloys, and in particular their high strength variants, exhibit limited formability at room temperature, and high temperature manufacturing technologies have been developed to form them. Consequently, advanced constitutive models are required to reflect the associated temperature and strain rate effects. Simulating such behavior is computationally very expensive using conventional FE simulation techniques. This paper presents a novel Knowledge Based Cloud FE (KBC-FE) simulation technique that combines advanced material and friction models with conventional FE simulations in an efficient manner thus enhancing the capability of commercial simulation software packages. The application of these methods is demonstrated through two example case studies, namely: the prediction of a material's forming limit under hot stamping conditions, and the tool life prediction under multi-cycle loading conditions. PMID:28060298

  17. On the Performance of TCP Spoofing in Satellite Networks

    NASA Technical Reports Server (NTRS)

    Ishac, Joseph; Allman, Mark

    2001-01-01

    In this paper, we analyze the performance of Transmission Control Protocol (TCP) in a network that consists of both satellite and terrestrial components. One method, proposed by outside research, to improve the performance of data transfers over satellites is to use a performance enhancing proxy often dubbed 'spoofing.' Spoofing involves the transparent splitting of a TCP connection between the source and destination by some entity within the network path. In order to analyze the impact of spoofing, we constructed a simulation suite based around the network simulator ns-2. The simulation reflects a host with a satellite connection to the Internet and allows the option to spoof connections just prior to the satellite. The methodology used in our simulation allows us to analyze spoofing over a large range of file sizes and under various congested conditions, while prior work on this topic has primarily focused on bulk transfers with no congestion. As a result of these simulations, we find that the performance of spoofing is dependent upon a number of conditions.

  18. A numerical approach for simulating fluid structure interaction of flexible thin shells undergoing arbitrarily large deformations in complex domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilmanov, Anvar, E-mail: agilmano@umn.edu; Le, Trung Bao, E-mail: lebao002@umn.edu; Sotiropoulos, Fotis, E-mail: fotis@umn.edu

    We present a new numerical methodology for simulating fluid–structure interaction (FSI) problems involving thin flexible bodies in an incompressible fluid. The FSI algorithm uses the Dirichlet–Neumann partitioning technique. The curvilinear immersed boundary method (CURVIB) is coupled with a rotation-free finite element (FE) model for thin shells, enabling the efficient simulation of FSI problems with arbitrarily large deformation. Turbulent flow problems are handled using large-eddy simulation with the dynamic Smagorinsky model in conjunction with a wall model to reconstruct boundary conditions near immersed boundaries. The CURVIB and FE solvers are coupled together on the flexible solid–fluid interfaces where the structural nodal positions, displacements, velocities and loads are calculated and exchanged between the two solvers. Loose and strong coupling FSI schemes are employed, enhanced by the Aitken acceleration technique to ensure robust coupling and fast convergence, especially for low mass ratio problems. The coupled CURVIB-FE-FSI method is validated by applying it to simulate two FSI problems involving thin flexible structures: 1) vortex-induced vibrations of a cantilever mounted in the wake of a square cylinder at different mass ratios and at low Reynolds number; and 2) the more challenging high Reynolds number problem involving the oscillation of an inverted elastic flag. For both cases the computed results are in excellent agreement with previous numerical simulations and/or experimental measurements. Grid convergence studies are carried out for both the cantilever and inverted flag problems and demonstrate the convergence of the CURVIB-FE-FSI method. Finally, the capability of the new methodology in simulations of complex cardiovascular flows is demonstrated by applying it to simulate the FSI of a tri-leaflet, prosthetic heart valve in an anatomic aorta and under physiologic pulsatile conditions.
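    The Aitken acceleration used for the strongly coupled scheme admits a compact illustration. The following Python sketch is a hypothetical, minimal partitioned coupling loop with Aitken dynamic under-relaxation; `fluid_solve` and `structure_solve` are placeholder callables standing in for the fluid and structural solvers, not the actual CURVIB or FE interfaces.

```python
import numpy as np

def coupled_step(fluid_solve, structure_solve, d0, tol=1e-8, max_iter=50, omega0=0.5):
    """Strongly coupled FSI sub-iteration with Aitken dynamic under-relaxation.

    fluid_solve(d)     -> interface loads for a given interface displacement d
    structure_solve(f) -> interface displacement for given interface loads f
    d0                 -> initial guess for the interface displacement
    """
    d = d0.copy()
    omega = omega0
    r_prev = None
    for k in range(max_iter):
        f = fluid_solve(d)               # fluid sees the current structure motion
        d_tilde = structure_solve(f)     # structure sees the resulting fluid loads
        r = d_tilde - d                  # interface residual
        if np.linalg.norm(r) < tol:
            return d_tilde, k
        if r_prev is not None:
            dr = r - r_prev
            # Aitken update of the relaxation factor from successive residuals
            omega = -omega * np.dot(r_prev, dr) / np.dot(dr, dr)
        d = d + omega * r                # relaxed interface update
        r_prev = r
    return d, max_iter
```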

  19. Dynamically Hedging Oil and Currency Futures Using Receding Horizontal Control and Stochastic Programming

    NASA Astrophysics Data System (ADS)

    Cottrell, Paul Edward

    There is a lack of research in the area of hedging futures contracts, especially in illiquid or very volatile market conditions. It is important to understand the volatility of the oil and currency markets because reduced fluctuations in these markets could lead to better hedging performance. This study compared different hedging methods by using a hedging error metric, supplementing the Receding Horizontal Control and Stochastic Programming (RHCSP) method by utilizing the London Interbank Offered Rate with the Levy process. The RHCSP hedging method was investigated to determine if improved hedging error was accomplished compared to the Black-Scholes, Leland, and Whalley and Wilmott methods when applied to simulated, oil, and currency futures markets. A modified RHCSP method was also investigated to determine if this method could significantly reduce hedging error under extreme market illiquidity conditions when applied to simulated, oil, and currency futures markets. This quantitative study used chaos theory and emergence for its theoretical foundation. An experimental research method was utilized for this study with a sample size of 506 hedging errors pertaining to historical and simulation data. The historical data were from January 1, 2005 through December 31, 2012. The modified RHCSP method was found to significantly reduce hedging error for the oil and currency market futures by the use of a 2-way ANOVA with a t test and post hoc Tukey test. This study promotes positive social change by identifying better risk controls for investment portfolios and illustrating how to benefit from high volatility in markets. Economists, professional investment managers, and independent investors could benefit from the findings of this study.

  20. Method for Producing Non-Neoplastic, Three Dimensional, Mammalian Tissue and Cell Aggregates Under Microgravity Culture Conditions and the Products Produced Therefrom

    NASA Technical Reports Server (NTRS)

    Goodwin, Thomas J. (Inventor); Wolf, David A. (Inventor); Spaulding, Glenn F. (Inventor); Prewett, Tracey L. (Inventor)

    1996-01-01

    A culturing process has been developed for normal mammalian tissue in the three groups of organ, structural, and blood tissue. The cells are grown in vitro under microgravity culture conditions and form three-dimensional cell aggregates with normal cell function. The microgravity culture conditions may be actual microgravity or simulated microgravity created in a horizontal rotating wall culture vessel.

  1. Binomial leap methods for simulating stochastic chemical kinetics.

    PubMed

    Tian, Tianhai; Burrage, Kevin

    2004-12-01

    This paper discusses efficient simulation methods for stochastic chemical kinetics. Based on the tau-leap and midpoint tau-leap methods of Gillespie [D. T. Gillespie, J. Chem. Phys. 115, 1716 (2001)], binomial random variables are used in these leap methods rather than Poisson random variables. The motivation for this approach is to improve the efficiency of the Poisson leap methods by using larger stepsizes. Unlike Poisson random variables, whose range of sample values is from zero to infinity, binomial random variables have a finite range of sample values. This probabilistic property has been used to restrict possible reaction numbers and to avoid negative molecular numbers in stochastic simulations when larger stepsizes are used. In this approach a binomial random variable is defined for a single reaction channel in order to keep the reaction number of this channel below the number of molecules available to this reaction channel. A sampling technique is also designed for the total reaction number of a reactant species that undergoes two or more reaction channels. Samples for the total reaction number are not greater than the molecular number of this species. In addition, probability properties of the binomial random variables provide stepsize conditions for restricting reaction numbers in a chosen time interval. These stepsize conditions are important properties of robust leap control strategies. Numerical results indicate that the proposed binomial leap methods can be applied to a wide range of chemical reaction systems with very good accuracy and a significant improvement in efficiency over existing approaches. (c) 2004 American Institute of Physics.
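    As a minimal illustration of the idea of bounding the leap count by the available molecules, the following Python sketch applies a binomial leap to a single hypothetical first-order channel A -> B; it is a toy example under assumed parameters, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def binomial_leap(x, c, tau, steps):
    """Leap over a single first-order channel A -> B.

    The number of firings in each leap is drawn from a binomial distribution
    with n equal to the current copy number of A, so the count can never
    exceed the available molecules (unlike an unbounded Poisson draw).
    """
    traj = [x]
    for _ in range(steps):
        p = min(1.0, c * tau)            # per-molecule firing probability during tau
        k = rng.binomial(x, p)           # bounded reaction count
        x -= k
        traj.append(x)
    return np.array(traj)

print(binomial_leap(x=1000, c=0.1, tau=0.5, steps=20))
```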

  2. Evaluation of finite difference and FFT-based solutions of the transport of intensity equation.

    PubMed

    Zhang, Hongbo; Zhou, Wen-Jing; Liu, Ying; Leber, Donald; Banerjee, Partha; Basunia, Mahmudunnabi; Poon, Ting-Chung

    2018-01-01

    A finite difference method is proposed for solving the transport of intensity equation. Simulation results show that although slower than fast Fourier transform (FFT)-based methods, finite difference methods are able to reconstruct the phase with better accuracy due to relaxed assumptions for solving the transport of intensity equation relative to FFT methods. Finite difference methods are also more flexible than FFT methods in dealing with different boundary conditions.
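    For orientation, the following Python sketch shows the standard FFT route to the transport of intensity equation under the simplifying assumptions of uniform in-focus intensity and periodic boundaries; the inputs `dIdz`, `I0`, `k`, and `dx` are assumed, and the finite difference variant studied in the paper is not reproduced here.

```python
import numpy as np

def tie_fft(dIdz, I0, k, dx):
    """Recover phase from the transport of intensity equation by FFT.

    Assumes a uniform in-focus intensity I0 and periodic boundaries, in which
    case the TIE reduces to a Poisson equation:  laplacian(phi) = -(k/I0) dI/dz.
    """
    ny, nx = dIdz.shape
    fx = np.fft.fftfreq(nx, d=dx) * 2.0 * np.pi
    fy = np.fft.fftfreq(ny, d=dx) * 2.0 * np.pi
    KX, KY = np.meshgrid(fx, fy)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                         # avoid division by zero at the DC term
    rhs_hat = np.fft.fft2(-(k / I0) * dIdz)
    phi_hat = rhs_hat / (-k2)              # inverse Laplacian in Fourier space
    phi_hat[0, 0] = 0.0                    # phase is only defined up to a constant
    return np.real(np.fft.ifft2(phi_hat))
```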

  3. Evaluating the Impact of Classroom Education on the Management of Septic Shock Using Human Patient Simulation.

    PubMed

    Lighthall, Geoffrey K; Bahmani, Dona; Gaba, David

    2016-02-01

    Classroom lectures are the mainstay of imparting knowledge in a structured manner and have the additional goals of stimulating critical thinking, lifelong learning, and improvements in patient care. The impact of lectures on patient care is difficult to examine in critical care because of the heterogeneity in patient conditions and personnel as well as confounders such as time pressure, interruptions, fatigue, and nonstandardized observation methods. The critical care environment was recreated in a simulation laboratory, where a high-fidelity mannequin simulator running a standardized script for septic shock was presented to trainees. The reproducibility of this patient and associated conditions allowed the evaluation of "clinical performance" in the management of septic shock. In a previous study, we developed and validated tools for the quantitative analysis of house staff managing septic shock simulations. In the present analysis, we examined whether measures of clinical performance were improved in those cases where a lecture on the management of shock preceded a simulated exercise on the management of septic shock. The administration of the septic shock simulations allowed performance measurements to be calculated both for medical interns and for subsequent management by a larger resident-led team. The analysis revealed that receiving a lecture on shock before managing a simulated patient with septic shock did not produce scores higher than those of trainees who did not receive the previous lecture. This result was similar both for interns managing the patient and for subsequent management by a resident-led team. We failed to find an immediate impact on clinical performance in simulations of septic shock after a lecture on the management of this syndrome. Lectures are likely not a reliable sole method for improving clinical performance in the management of complex disease processes.

  4. Quantum ring-polymer contraction method: Including nuclear quantum effects at no additional computational cost in comparison to ab initio molecular dynamics

    NASA Astrophysics Data System (ADS)

    John, Christopher; Spura, Thomas; Habershon, Scott; Kühne, Thomas D.

    2016-04-01

    We present a simple and accurate computational method which facilitates ab initio path-integral molecular dynamics simulations, where the quantum-mechanical nature of the nuclei is explicitly taken into account, at essentially no additional computational cost in comparison to the corresponding calculation using classical nuclei. The predictive power of the proposed quantum ring-polymer contraction method is demonstrated by computing various static and dynamic properties of liquid water at ambient conditions using density functional theory. This development will enable routine inclusion of nuclear quantum effects in ab initio molecular dynamics simulations of condensed-phase systems.

  5. Conformational sampling enhancement of replica exchange molecular dynamics simulations using swarm particle intelligence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamberaj, Hiqmet, E-mail: hkamberaj@ibu.edu.mk

    In this paper, we present a new method based on swarm particle social intelligence for use in replica exchange molecular dynamics simulations. In this method, the replicas (representing the different system configurations) are allowed to communicate with each other through individual and social knowledge, in addition to being treated as a collection of real particles interacting through Newtonian forces. The new method is based on a modification of the equations of motion in such a way that the replicas are driven towards the global energy minimum. The method was tested for the Lennard-Jones clusters of N = 4, 5, and 6 atoms. Our results showed that the new method is more efficient than the conventional replica exchange method under the same practical conditions. In particular, the new method performed better at optimizing the distribution of the replicas among the thermostats with time and, in addition, ergodic convergence is observed to be faster. We also introduce a weighted histogram analysis method that allows the data from simulations to be analyzed by combining data from all of the replicas and rigorously removing the inserted bias.

  6. Conformational sampling enhancement of replica exchange molecular dynamics simulations using swarm particle intelligence

    NASA Astrophysics Data System (ADS)

    Kamberaj, Hiqmet

    2015-09-01

    In this paper, we present a new method based on swarm particle social intelligence for use in replica exchange molecular dynamics simulations. In this method, the replicas (representing the different system configurations) are allowed to communicate with each other through individual and social knowledge, in addition to being treated as a collection of real particles interacting through Newtonian forces. The new method is based on a modification of the equations of motion in such a way that the replicas are driven towards the global energy minimum. The method was tested for the Lennard-Jones clusters of N = 4, 5, and 6 atoms. Our results showed that the new method is more efficient than the conventional replica exchange method under the same practical conditions. In particular, the new method performed better at optimizing the distribution of the replicas among the thermostats with time and, in addition, ergodic convergence is observed to be faster. We also introduce a weighted histogram analysis method that allows the data from simulations to be analyzed by combining data from all of the replicas and rigorously removing the inserted bias.
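    A toy Python sketch of the general idea is given below, assuming a hypothetical force function and best-configuration bookkeeping; it adds a cognitive/social pull toward personal-best and global-best (lowest-energy) configurations to an otherwise Newtonian velocity update, and is not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def swarm_md_step(x, v, force, pbest, gbest, dt=1e-3, c1=0.5, c2=0.5, m=1.0):
    """One toy step mixing Newtonian forces with swarm 'knowledge' terms.

    x, v        : (n_replicas, n_dof) positions and velocities
    force(x)    : placeholder for the physical force on each replica
    pbest, gbest: personal-best and global-best (lowest-energy) configurations
    """
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    social = c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # cognitive + social pull
    v = v + dt * (force(x) / m + social)                     # modified equation of motion
    x = x + dt * v
    return x, v
```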

  7. Shock simulations of a single-site coarse-grain RDX model using the dissipative particle dynamics method with reactivity

    NASA Astrophysics Data System (ADS)

    Sellers, Michael S.; Lísal, Martin; Schweigert, Igor; Larentzos, James P.; Brennan, John K.

    2017-01-01

    In discrete particle simulations, when an atomistic model is coarse-grained, a tradeoff is made: a boost in computational speed for a reduction in accuracy. The Dissipative Particle Dynamics (DPD) methods help to recover the lost accuracy of the viscous and thermal properties, while sacrificing only a relatively small amount of the computational speed. Since its initial development for polymers, one of the most notable extensions of DPD has been the introduction of chemical reactivity, called DPD-RX. In 2007, Maillet, Soulard, and Stoltz introduced implicit chemical reactivity in DPD through the concept of particle reactors and simulated the decomposition of liquid nitromethane. We present an extended and generalized version of the DPD-RX method, and have applied it to solid hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX). Demonstration simulations of reacting RDX are performed under shock conditions using a recently developed single-site coarse-grain model and a reduced RDX decomposition mechanism. A description of the methods used to simulate RDX and its transition to hot product gases within DPD-RX is presented. Additionally, we discuss several examples of the effect of shock speed and microstructure on the corresponding material chemistry.

  8. Post-hoc simulation study to adopt a computerized adaptive testing (CAT) for a Korean Medical License Examination.

    PubMed

    Seo, Dong Gi; Choi, Jeongwook

    2018-05-17

    Computerized adaptive testing (CAT) has been adopted in license examinations because of its test efficiency and accuracy, and much research on CAT has been published to demonstrate this efficiency and accuracy of measurement. This simulation study investigated scoring methods and item selection methods for implementing CAT in the Korean Medical License Examination (KMLE). The study used a post-hoc (real data) simulation design. The item bank used in this study was built from all items in a 2017 KMLE. All CAT algorithms for this study were implemented with the 'catR' package in the R program. In terms of accuracy, the Rasch and 2-parameter logistic (2PL) models performed better than the 3PL model. Modal a Posteriori (MAP) or Expected a Posteriori (EAP) estimation provided more accurate estimates than MLE and WLE. Furthermore, maximum posterior weighted information (MPWI) or minimum expected posterior variance (MEPV) performed better than other item selection methods. In terms of efficiency, the Rasch model was recommended to reduce test length. A simulation study should be performed under varied test conditions before adopting a live CAT; based on this simulation study, specific scoring and item selection methods should be predetermined before implementing a live CAT.
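    The following Python sketch is a generic, hypothetical CAT loop (2PL item response model, maximum-information item selection, EAP scoring on a quadrature grid) given for orientation only; it is not the 'catR' implementation, and the item parameters are randomly generated for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
a = rng.uniform(0.8, 2.0, 300)        # hypothetical 2PL discriminations
b = rng.normal(0.0, 1.0, 300)         # hypothetical difficulties
theta_grid = np.linspace(-4, 4, 161)  # quadrature grid for EAP scoring
prior = np.exp(-0.5 * theta_grid**2)  # standard-normal prior (unnormalised)

def p2pl(theta, a_i, b_i):
    return 1.0 / (1.0 + np.exp(-a_i * (theta - b_i)))

def cat_session(theta_true, test_length=30):
    posterior = prior.copy()
    administered, theta_hat = [], 0.0
    for _ in range(test_length):
        # maximum Fisher information item selection at the current estimate
        p = p2pl(theta_hat, a, b)
        info = a**2 * p * (1 - p)
        info[administered] = -np.inf
        item = int(np.argmax(info))
        administered.append(item)
        # simulate a response and update the posterior (EAP scoring)
        resp = rng.random() < p2pl(theta_true, a[item], b[item])
        like = p2pl(theta_grid, a[item], b[item])
        posterior *= like if resp else (1 - like)
        theta_hat = np.sum(theta_grid * posterior) / np.sum(posterior)
    return theta_hat, administered

print(cat_session(theta_true=0.8)[0])
```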

  9. Sensing and Active Flow Control for Advanced BWB Propulsion-Airframe Integration Concepts

    NASA Technical Reports Server (NTRS)

    Fleming, John; Anderson, Jason; Ng, Wing; Harrison, Neal

    2005-01-01

    In order to realize the substantial performance benefits of serpentine boundary layer ingesting diffusers, this study investigated the use of enabling flow control methods to reduce engine-face flow distortion. Computational methods and novel flow control modeling techniques were utilized that allowed for rapid, accurate analysis of flow control geometries. Results were validated experimentally using the Techsburg Ejector-based wind tunnel facility; this facility is capable of simulating the high-altitude, high subsonic Mach number conditions representative of BWB cruise conditions.

  10. A statistical data assimilation method for seasonal streamflow forecasting to optimize hydropower reservoir management in data-scarce regions

    NASA Astrophysics Data System (ADS)

    Arsenault, R.; Mai, J.; Latraverse, M.; Tolson, B.

    2017-12-01

    Probabilistic ensemble forecasts generated by the ensemble streamflow prediction (ESP) methodology are subject to biases due to errors in the hydrological model's initial states. In day-to-day operations, hydrologists must compensate for discrepancies between observed and simulated states such as streamflow. However, in data-scarce regions, little to no information is available to guide the streamflow assimilation process. The manual assimilation process can then lead to more uncertainty due to the numerous options available to the forecaster. Furthermore, the model's mass balance may be compromised and could affect future forecasts. In this study we propose a data-driven approach in which specific variables that may be adjusted during assimilation are defined. The underlying principle was to identify key variables that would be the most appropriate to modify during streamflow assimilation depending on the initial conditions such as the time period of the assimilation, the snow water equivalent of the snowpack and meteorological conditions. The variables to adjust were determined by performing an automatic variational data assimilation on individual (or combinations of) model state variables and meteorological forcing. The assimilation aimed to simultaneously optimize: (1) the error between the observed and simulated streamflow at the timepoint where the forecast starts and (2) the bias between medium to long-term observed and simulated flows, which were simulated by running the model with the observed meteorological data on a hindcast period. The optimal variables were then classified according to the initial conditions at the time period where the forecast is initiated. The proposed method was evaluated by measuring the average electricity generation of a hydropower complex in Québec, Canada driven by this method. A test-bed which simulates the real-world assimilation, forecasting, water release optimization and decision-making of a hydropower cascade was developed to assess the performance of each individual process in the reservoir management chain. Here the proposed method was compared to the PF algorithm while keeping all other elements intact. Preliminary results are encouraging in terms of power generation and robustness for the proposed approach.
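    A minimal Python sketch of the variational adjustment idea is shown below, assuming a placeholder `run_model` callable that maps a vector of state/forcing multipliers to simulated streamflow over the hindcast window; the two-term cost mirrors the two objectives described above, and the weights are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def assimilate(run_model, x0_adjust, q_obs, w_start=1.0, w_bias=1.0):
    """Toy variational adjustment of selected state/forcing multipliers.

    run_model(adjust) -> simulated streamflow series over the hindcast window
    q_obs             -> observed streamflow over the same window
    The cost blends (1) the mismatch at the forecast start date (taken here as
    the last point of the window) and (2) the mean bias over the hindcast.
    """
    def cost(adjust):
        q_sim = run_model(adjust)
        start_err = (q_sim[-1] - q_obs[-1]) ** 2
        bias = (np.mean(q_sim) - np.mean(q_obs)) ** 2
        return w_start * start_err + w_bias * bias

    res = minimize(cost, x0_adjust, method="Nelder-Mead")
    return res.x
```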

  11. Blurred Star Image Processing for Star Sensors under Dynamic Conditions

    PubMed Central

    Zhang, Weina; Quan, Wei; Guo, Lei

    2012-01-01

    The precision of star point location is significant to identify the star map and to acquire the aircraft attitude for star sensors. Under dynamic conditions, star images are not only corrupted by various noises, but also blurred due to the angular rate of the star sensor. According to different angular rates under dynamic conditions, a novel method is proposed in this article, which includes a denoising method based on adaptive wavelet threshold and a restoration method based on the large angular rate. The adaptive threshold is adopted for denoising the star image when the angular rate is in the dynamic range. Then, the mathematical model of motion blur is deduced so as to restore the blurred star map due to large angular rate. Simulation results validate the effectiveness of the proposed method, which is suitable for blurred star image processing and practical for attitude determination of satellites under dynamic conditions. PMID:22778666
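    As a stand-in for the restoration step, the sketch below shows a generic frequency-domain Wiener deconvolution with a linear-motion point-spread function in plain NumPy; the PSF model, noise-to-signal ratio, and function names are assumptions for illustration, not the blur model derived in the paper.

```python
import numpy as np

def linear_motion_psf(shape, length, angle_deg):
    """Simple linear-motion point-spread function on the image grid (length in pixels)."""
    psf = np.zeros(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    ang = np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2, length / 2, 4 * length):
        y = int(round(cy + t * np.sin(ang)))
        x = int(round(cx + t * np.cos(ang)))
        if 0 <= y < shape[0] and 0 <= x < shape[1]:
            psf[y, x] = 1.0
    return psf / psf.sum()

def wiener_restore(blurred, psf, nsr=1e-2):
    """Frequency-domain Wiener filter: F_hat = H* / (|H|^2 + NSR) * G."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F_hat))
```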

  12. A Level-set based framework for viscous simulation of particle-laden supersonic flows

    NASA Astrophysics Data System (ADS)

    Das, Pratik; Sen, Oishik; Jacobs, Gustaaf; Udaykumar, H. S.

    2017-06-01

    Particle-laden supersonic flows are important in natural and industrial processes, such as volcanic eruptions, explosions, and pneumatic conveyance of particles in material processing. Numerical study of such high-speed particle-laden flows at the mesoscale calls for a numerical framework which allows simulation of supersonic flow around multiple moving solid objects. Only a few efforts have been made toward development of numerical frameworks for viscous simulation of particle-fluid interaction in the supersonic flow regime. The current work presents a Cartesian grid based sharp-interface method for viscous simulations of the interaction between supersonic flow and moving rigid particles. The no-slip boundary condition is imposed at the solid-fluid interfaces using a modified ghost fluid method (GFM). The current method is validated against the similarity solution of the compressible boundary layer over a flat plate and benchmark numerical solutions for steady supersonic flow over a cylinder. Further validation is carried out against benchmark numerical results for shock-induced lift-off of a cylinder in a shock tube. A 3D simulation of steady supersonic flow over a sphere is performed to compare the numerically obtained drag coefficient with experimental results. A particle-resolved viscous simulation of shock interaction with a cloud of particles is performed to demonstrate that the current method is suitable for large-scale particle-resolved simulations of particle-laden supersonic flows.

  13. Incremental dynamical downscaling for probabilistic analysis based on multiple GCM projections

    NASA Astrophysics Data System (ADS)

    Wakazuki, Y.

    2015-12-01

    A dynamical downscaling method for probabilistic regional scale climate change projections was developed to cover the uncertainty of multiple general circulation model (GCM) climate simulations. The climatological increments (future minus present climate states) estimated from GCM simulation results were statistically analyzed using singular value decomposition. Both positive and negative perturbations from the ensemble mean, with magnitudes equal to their standard deviations, were extracted and added to the ensemble mean of the climatological increments. The resulting multiple modal increments were used to create multiple modal lateral boundary conditions for the future-climate regional climate model (RCM) simulations by adding them to an objective analysis dataset. This treatment can be regarded as an extension of the pseudo-global-warming (PGW) method previously developed by Kimura and Kitoh (2007). The incremental handling of GCM simulations yields approximate probabilistic climate change projections with a smaller number of RCM simulations. Three values of a climatological variable simulated by the RCMs for each mode were used to estimate the response to the perturbation of that mode. For the probabilistic analysis, climatological variables of the RCMs were assumed to respond linearly to the multiple modal perturbations, although nonlinearity was seen for local-scale rainfall. The probability distribution of temperature could be estimated with two-mode perturbation simulations, where the number of RCM simulations for the future climate is five. On the other hand, local-scale rainfall needed four-mode simulations, where the number of RCM simulations is nine. The probabilistic method is expected to be used for regional scale climate change impact assessment in the future.
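    A minimal Python sketch of the modal-increment construction, under the assumption that the GCM increments are supplied as a (models x grid points) array, is given below; the scaling of each mode by its ensemble standard deviation follows the description above, while the variable names are illustrative.

```python
import numpy as np

def modal_increments(increments, n_modes=2):
    """Build perturbed climatological increments from a multi-GCM ensemble.

    increments : (n_gcm, n_grid) array of (future - present) fields
    Returns the ensemble mean plus +/- one-standard-deviation perturbations
    along the leading singular vectors of the anomaly matrix.
    """
    mean = increments.mean(axis=0)
    anomalies = increments - mean
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    out = {"mean": mean}
    n_gcm = increments.shape[0]
    for m in range(n_modes):
        # sample standard deviation of the ensemble projected on mode m
        sigma = s[m] / np.sqrt(n_gcm - 1)
        out[f"mode{m+1}_plus"] = mean + sigma * vt[m]
        out[f"mode{m+1}_minus"] = mean - sigma * vt[m]
    return out
```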

  14. A Discrete Analysis of Non-reflecting Boundary Conditions for Discontinuous Galerkin Method

    NASA Technical Reports Server (NTRS)

    Hu, Fang Q.; Atkins, Harold L.

    2003-01-01

    We present a discrete analysis of non-reflecting boundary conditions for the discontinuous Galerkin method. The boundary conditions considered in this paper include the recently proposed Perfectly Matched Layer absorbing boundary condition for the linearized Euler equation and two non-reflecting boundary conditions based on the characteristic decomposition of the flux on the boundary. The analyses for the three boundary conditions are carried out in a unified way. In each case, eigensolutions of the discrete system are obtained and applied to compute the numerical reflection coefficients of a specified out-going wave. The dependencies of the reflections at the boundary on the out-going wave angle and frequency as well as the mesh sizes are studied. Comparisons with direct numerical simulation results are also presented.

  15. A multiscale quantum mechanics/electromagnetics method for device simulations.

    PubMed

    Yam, ChiYung; Meng, Lingyi; Zhang, Yu; Chen, GuanHua

    2015-04-07

    Multiscale modeling has become a popular tool for research applying to different areas including materials science, microelectronics, biology, chemistry, etc. In this tutorial review, we describe a newly developed multiscale computational method, incorporating quantum mechanics into electronic device modeling with the electromagnetic environment included through classical electrodynamics. In the quantum mechanics/electromagnetics (QM/EM) method, the regions of the system where active electron scattering processes take place are treated quantum mechanically, while the surroundings are described by Maxwell's equations and a semiclassical drift-diffusion model. The QM model and the EM model are solved, respectively, in different regions of the system in a self-consistent manner. Potential distributions and current densities at the interface between QM and EM regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. The method is illustrated in the simulation of several realistic systems. In the case of junctionless field-effect transistors, transfer characteristics are obtained and a good agreement between experiments and simulations is achieved. Optical properties of a tandem photovoltaic cell are studied and the simulations demonstrate that multiple QM regions are coupled through the classical EM model. Finally, the study of a carbon nanotube-based molecular device shows the accuracy and efficiency of the QM/EM method.

  16. Evaluation of the groundwater-flow model for the Ohio River alluvial aquifer near Carrollton, Kentucky, updated to conditions in September 2010

    USGS Publications Warehouse

    Unthank, Michael D.

    2013-01-01

    The Ohio River alluvial aquifer near Carrollton, Ky., is an important water resource for the cities of Carrollton and Ghent, as well as for several industries in the area. The groundwater of the aquifer is the primary source of drinking water in the region and a highly valued natural resource that attracts various water-dependent industries because of its quantity and quality. This report evaluates the performance of a numerical model of the groundwater-flow system in the Ohio River alluvial aquifer near Carrollton, Ky., published by the U.S. Geological Survey in 1999. The original model simulated conditions in November 1995 and was updated to simulate groundwater conditions estimated for September 2010. The files from the calibrated steady-state model of November 1995 conditions were imported into MODFLOW-2005 to update the model to conditions in September 2010. The model input files modified as part of this update were the well and recharge files. The design of the updated model and other input files are the same as the original model. The ability of the updated model to match hydrologic conditions for September 2010 was evaluated by comparing water levels measured in wells to those computed by the model. Water-level measurements were available for 48 wells in September 2010. Overall, the updated model underestimated the water levels at 36 of the 48 measured wells. The average difference between measured water levels and model-computed water levels was 3.4 feet and the maximum difference was 10.9 feet. The root-mean-square error of the simulation was 4.45 for all 48 measured water levels. The updated steady-state model could be improved by introducing more accurate and site-specific estimates of selected field parameters, refined model geometry, and additional numerical methods. Collection of field data to better estimate hydraulic parameters, together with continued review of available data and information from area well operators, could provide the model with revised estimates of conductance values for the riverbed and valley wall, hydraulic conductivities for the model layer, and target water levels for future simulations. Additional model layers, a redesigned model grid, and revised boundary conditions could provide a better framework for more accurate simulations. Additional numerical methods would identify possible parameter estimates and determine parameter sensitivities.

  17. A wall interference assessment/correction system

    NASA Technical Reports Server (NTRS)

    Lo, Ching F.; Ulbrich, N.; Sickles, W. L.; Qian, Cathy X.

    1992-01-01

    A Wall Signature method, the Hackett method, has been selected to be adapted for the 12-ft Wind Tunnel wall interference assessment/correction (WIAC) system in the present phase. This method uses limited measurements of the static pressure at the wall, in conjunction with the solid wall boundary condition, to determine the strength and distribution of singularities representing the test article. The singularities are used in turn for estimating wall interferences at the model location. The Wall Signature method will be formulated for application to the unique geometry of the 12-ft Tunnel. The development and implementation of a working prototype will be completed, delivered and documented with a software manual. The WIAC code will be validated by conducting numerically simulated experiments rather than actual wind tunnel experiments. The simulations will be used to generate both free-air and confined wind-tunnel flow fields for each of the test articles over a range of test configurations. Specifically, the pressure signature at the test section wall will be computed for the tunnel case to provide the simulated 'measured' data. These data will serve as the input for the WIAC approach, the Wall Signature method. The performance of the WIAC method then may be evaluated by comparing the corrected parameters with those for the free-air simulation. Each set of wind tunnel/test article numerical simulations provides data to validate the WIAC method. A numerical wind tunnel test simulation is initiated to validate the WIAC methods developed in the project. In the present reported period, the blockage correction has been developed and implemented for a rectangular tunnel as well as the 12-ft Pressure Tunnel. An improved wall interference assessment and correction method for three-dimensional wind tunnel testing is presented in the appendix.

  18. Petascale turbulence simulation using a highly parallel fast multipole method on GPUs

    NASA Astrophysics Data System (ADS)

    Yokota, Rio; Barba, L. A.; Narumi, Tetsu; Yasuoka, Kenji

    2013-03-01

    This paper reports large-scale direct numerical simulations of homogeneous-isotropic fluid turbulence, achieving sustained performance of 1.08 petaflop/s on GPU hardware using single precision. The simulations use a vortex particle method to solve the Navier-Stokes equations, with a highly parallel fast multipole method (FMM) as numerical engine, and match the current record in mesh size for this application, a cube of 4096³ computational points solved with a spectral method. The standard numerical approach used in this field is the pseudo-spectral method, relying on the FFT algorithm as the numerical engine. The particle-based simulations presented in this paper quantitatively match the kinetic energy spectrum obtained with a pseudo-spectral method, using a trusted code. In terms of parallel performance, weak scaling results show the FMM-based vortex method achieving 74% parallel efficiency on 4096 processes (one GPU per MPI process, 3 GPUs per node of the TSUBAME-2.0 system). The FFT-based spectral method is able to achieve just 14% parallel efficiency on the same number of MPI processes (using only CPU cores), due to the all-to-all communication pattern of the FFT algorithm. The calculation time for one time step was 108 s for the vortex method and 154 s for the spectral method, under these conditions. Computing with 69 billion particles, this work exceeds by an order of magnitude the largest vortex-method calculations to date.

  19. Principles of magnetohydrodynamic simulation in space plasmas

    NASA Technical Reports Server (NTRS)

    Sato, T.

    1985-01-01

    Attention is given to the philosophical as well as physical principles that are essential to the establishment of MHD simulation studies for solar plasma research, assuming the capabilities of state-of-the-art computers and emphasizing the importance of 'local' MHD simulation. Solar-terrestrial plasma space is divided into several elementary regions where a macroscopic elementary energy conversion process could conceivably occur; the local MHD simulation is defined as self-contained in each of the regions. The importance of, and the difficulties associated with, the boundary condition are discussed in detail. The roles of diagnostics and of the finite difference method are noted.

  20. IRFK2D: a computer program for simulating intrinsic random functions of order k

    NASA Astrophysics Data System (ADS)

    Pardo-Igúzquiza, Eulogio; Dowd, Peter A.

    2003-07-01

    IRFK2D is an ANSI Fortran-77 program that generates realizations of an intrinsic random function of order k (with k equal to 0, 1 or 2) with a permissible polynomial generalized covariance model. The realizations may be non-conditional or conditioned to the experimental data. The turning bands method is used to generate realizations in 2D and 3D from simulations of an intrinsic random function of order k along lines that span the 2D or 3D space. The program generates two output files, the first containing the simulated values and the second containing the theoretical generalized variogram for different directions together with the theoretical model. The experimental variogram is calculated from the simulated values while the theoretical variogram is the specified generalized covariance model. The generalized variogram is used to assess the quality of the simulation as measured by the extent to which the generalized covariance is reproduced by the simulation. The examples given in this paper indicate that IRFK2D is an efficient implementation of the methodology.
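    To illustrate the line-simulation idea (not the IRFK2D code itself), the following Python sketch generates a stationary 2D Gaussian field by a spectral turning-bands construction: 1D random-phase cosine processes are simulated along random directions and summed. The spectrum and all parameters are assumptions chosen only for demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)

def turning_bands_2d(points, n_lines=64, n_harmonics=32, corr_len=10.0):
    """Toy spectral turning-bands simulation of a stationary Gaussian field.

    points : (n, 2) coordinates at which the field is simulated
    Each line carries a 1D random-phase cosine process; projecting the points
    onto many random directions and summing approximates a 2D isotropic field.
    """
    n = points.shape[0]
    field = np.zeros(n)
    for _ in range(n_lines):
        theta = rng.uniform(0.0, np.pi)
        direction = np.array([np.cos(theta), np.sin(theta)])
        t = points @ direction                       # 1D projections onto the line
        line = np.zeros(n)
        for _ in range(n_harmonics):
            w = rng.normal(0.0, 1.0 / corr_len)      # random frequency (Gaussian spectrum)
            phi = rng.uniform(0.0, 2.0 * np.pi)      # random phase
            line += np.cos(w * t + phi)
        field += np.sqrt(2.0 / n_harmonics) * line
    return field / np.sqrt(n_lines)

xx, yy = np.meshgrid(np.arange(50.0), np.arange(50.0))
pts = np.column_stack([xx.ravel(), yy.ravel()])
z = turning_bands_2d(pts).reshape(50, 50)            # one non-conditional realization
```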

  1. Curve Boxplot: Generalization of Boxplot for Ensembles of Curves.

    PubMed

    Mirzargar, Mahsa; Whitaker, Ross T; Kirby, Robert M

    2014-12-01

    In simulation science, computational scientists often study the behavior of their simulations by repeated solutions with variations in parameters and/or boundary values or initial conditions. Through such simulation ensembles, one can try to understand or quantify the variability or uncertainty in a solution as a function of the various inputs or model assumptions. In response to a growing interest in simulation ensembles, the visualization community has developed a suite of methods for allowing users to observe and understand the properties of these ensembles in an efficient and effective manner. An important aspect of visualizing simulations is the analysis of derived features, often represented as points, surfaces, or curves. In this paper, we present a novel, nonparametric method for summarizing ensembles of 2D and 3D curves. We propose an extension of a method from descriptive statistics, data depth, to curves. We also demonstrate a set of rendering and visualization strategies for showing rank statistics of an ensemble of curves, which is a generalization of traditional whisker plots or boxplots to multidimensional curves. Results are presented for applications in neuroimaging, hurricane forecasting and fluid dynamics.
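    A compact Python sketch of one common data-depth notion for curves, the modified band depth, is shown below as an illustration of how rank statistics for an ensemble of curves can be computed; it is a simplified stand-in, not the authors' method.

```python
import numpy as np
from itertools import combinations

def modified_band_depth(curves):
    """Modified band depth (MBD) for an ensemble of 1D curves.

    curves : (n_curves, n_points) array
    For every pair of curves, a curve scores the fraction of its domain that
    lies inside the pointwise band spanned by the pair; MBD is the average
    over all pairs. The deepest curve plays the role of the median.
    """
    n, p = curves.shape
    depth = np.zeros(n)
    pairs = list(combinations(range(n), 2))
    for j, k in pairs:
        lo = np.minimum(curves[j], curves[k])
        hi = np.maximum(curves[j], curves[k])
        inside = (curves >= lo) & (curves <= hi)     # broadcast over all curves
        depth += inside.mean(axis=1)
    return depth / len(pairs)

ens = np.cumsum(np.random.default_rng(3).normal(size=(20, 100)), axis=1)
print(np.argsort(modified_band_depth(ens))[::-1][:3])   # indices of the deepest curves
```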

  2. Psychophysiological Assessment in Pilots Performing Challenging Simulated and Real Flight Maneuvers.

    PubMed

    Johannes, Bernd; Rothe, Stefanie; Gens, André; Westphal, Soeren; Birkenfeld, Katja; Mulder, Edwin; Rittweger, Jörn; Ledderhos, Carla

    2017-09-01

    The objective assessment of psychophysiological arousal during challenging flight maneuvers is of great interest to aerospace medicine, but remains a challenging task. In the study presented here, a vector-methodological approach was used which integrates different psychophysiological variables, yielding an integral arousal index called the Psychophysiological Arousal Value (PAV). The arousal levels of 15 male pilots were assessed during predetermined, well-defined flight maneuvers performed under simulated and real flight conditions. The physiological data, as expected, revealed inter- and intra-individual differences for the various measurement conditions. As indicated by the PAV, air-to-air refueling (AAR) turned out to be the most challenging task. In general, arousal levels were comparable between simulator and real flight conditions. However, a distinct difference was observed when the pilots were divided by instructors into two groups based on their proficiency in AAR with AWACS (AAR-Novices vs. AAR-Professionals). AAR-Novices had on average more than 2000 flight hours on other aircrafts. They showed higher arousal reactions to AAR in real flight (contact: PAV score 8.4 ± 0.37) than under simulator conditions (7.1 ± 0.30), whereas AAR-Professionals did not (8.5 ± 0.46 vs. 8.8 ± 0.80). The psychophysiological arousal value assessment was tested in field measurements, yielding quantifiable arousal differences between proficiency groups of pilots during simulated and real flight conditions. The method used in this study allows an evaluation of the psychophysiological cost during a certain flying performance and thus is possibly a valuable tool for objectively evaluating the actual skill status of pilots.Johannes B, Rothe S, Gens A, Westphal S, Birkenfeld K, Mulder E, Rittweger J, Ledderhos C. Psychophysiological assessment in pilots performing challenging simulated and real flight maneuvers. Aerosp Med Hum Perform. 2017; 88(9):834-840.

  3. Verification of recursive probabilistic integration (RPI) method for fatigue life management using non-destructive inspections

    NASA Astrophysics Data System (ADS)

    Chen, Tzikang J.; Shiao, Michael

    2016-04-01

    This paper verified a generic and efficient assessment concept for probabilistic fatigue life management. The concept is developed based on an integration of damage tolerance methodology, simulation methods, and a probabilistic algorithm, RPI (recursive probability integration), considering maintenance for damage tolerance and risk-based fatigue life management. RPI is an efficient semi-analytical probabilistic method for risk assessment subjected to various uncertainties such as the variability in material properties including crack growth rate, initial flaw size, repair quality, random process modeling of flight loads for failure analysis, and inspection reliability represented by probability of detection (POD). In addition, unlike traditional Monte Carlo simulations (MCS), which require a rerun of the MCS when the maintenance plan is changed, RPI can repeatedly use a small set of baseline random crack growth histories, excluding maintenance-related parameters, from a single MCS for various maintenance plans. In order to fully appreciate the RPI method, a verification procedure was performed. In this study, MC simulations on the order of several hundred billion samples were conducted for various flight conditions, material properties, inspection schedules, POD curves and repair/replacement strategies. Since the MC simulations are time-consuming, they were conducted in parallel on DoD High Performance Computers (HPC) using a specialized random number generator for parallel computing. The study has shown that the RPI method is several orders of magnitude more efficient than traditional Monte Carlo simulations.
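    For context, the sketch below is a toy Monte Carlo baseline of the kind RPI is compared against: Paris-law crack growth with random material and flaw-size parameters, scheduled inspections with an assumed probability-of-detection curve, and repair on detection. All parameter values and the POD form are hypothetical, and the sketch does not implement the RPI algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(11)

def pof_with_inspections(n_mc=100_000, n_cycles=200_000,
                         insp_at=(50_000, 100_000, 150_000),
                         a_crit=25.4, ds=200.0):
    """Toy Monte Carlo probability of failure with scheduled inspections.

    Paris-law growth da/dN = C (dK)^m with lognormal C and initial flaw size;
    at each inspection a crack is found with probability POD(a) and, if found,
    the component is repaired (crack reset to a small size).
    """
    C = rng.lognormal(mean=np.log(1e-10), sigma=0.3, size=n_mc)   # m/cycle units
    m = 3.0
    a = rng.lognormal(mean=np.log(0.5), sigma=0.4, size=n_mc)     # crack size, mm
    failed = np.zeros(n_mc, dtype=bool)
    dN = 1_000
    for N in range(0, n_cycles, dN):
        dK = ds * np.sqrt(np.pi * a / 1000.0)        # crude stress-intensity range, MPa*sqrt(m)
        a += C * dK**m * dN * 1000.0                 # growth over the cycle block, in mm
        failed |= a >= a_crit
        if N in insp_at:
            pod = 1.0 - np.exp(-a / 2.0)             # assumed POD curve
            found = (~failed) & (rng.random(n_mc) < pod)
            a[found] = 0.1                           # repair: reset the crack size
    return failed.mean()

print(pof_with_inspections())
```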

  4. HYPNOTIC TACTILE ANESTHESIA: Psychophysical and Signal-Detection Analyses

    PubMed Central

    Tataryn, Douglas J.; Kihlstrom, John F.

    2017-01-01

    Two experiments that studied the effects of hypnotic suggestions on tactile sensitivity are reported. Experiment 1 found that suggestions for anesthesia, as measured by both traditional psychophysical methods and signal detection procedures, were linearly related to hypnotizability. Experiment 2 employed the same methodologies in an application of the real-simulator paradigm to examine the effects of suggestions for both anesthesia and hyperesthesia. Significant effects of hypnotic suggestion on both sensitivity and bias were found in the anesthesia condition but not for the hyperesthesia condition. A new bias parameter, C′, indicated that much of the bias found in the initial analyses was artifactual, a function of changes in sensitivity across conditions. There were no behavioral differences between reals and simulators in any of the conditions, though analyses of postexperimental interviews suggested the 2 groups had very different phenomenal experiences. PMID:28230465

  5. Bayesian model averaging using particle filtering and Gaussian mixture modeling: Theory, concepts, and simulation experiments

    NASA Astrophysics Data System (ADS)

    Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry

    2012-05-01

    Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to posterior probabilities of the models generating the forecasts, and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described with a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
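    The basic BMA mixture that both variants build on can be written in a few lines; the Python sketch below evaluates a Raftery-style Gaussian-kernel predictive pdf for assumed forecasts, weights, and spread parameters, and does not include the particle filtering or Gaussian mixture refinement introduced in the paper.

```python
import numpy as np
from scipy.stats import norm

def bma_predictive_pdf(y, forecasts, weights, sigmas):
    """BMA predictive density at values y for one time step.

    forecasts : (n_models,) bias-corrected forecasts
    weights   : (n_models,) posterior model probabilities (sum to 1)
    sigmas    : (n_models,) standard deviations of the conditional pdfs
    """
    y = np.atleast_1d(y)
    comp = norm.pdf(y[:, None], loc=forecasts[None, :], scale=sigmas[None, :])
    return comp @ weights    # weighted mixture of the per-model kernels

y = np.linspace(0.0, 20.0, 201)
pdf = bma_predictive_pdf(y, forecasts=np.array([8.0, 10.5, 9.2]),
                         weights=np.array([0.5, 0.3, 0.2]),
                         sigmas=np.array([1.5, 2.0, 1.0]))
```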

  6. Modeling the Hyperdistribution of Item Parameters To Improve the Accuracy of Recovery in Estimation Procedures.

    ERIC Educational Resources Information Center

    Matthews-Lopez, Joy L.; Hombo, Catherine M.

    The purpose of this study was to examine the recovery of item parameters in simulated Automatic Item Generation (AIG) conditions, using Markov chain Monte Carlo (MCMC) estimation methods to attempt to recover the generating distributions. To do this, variability in item and ability parameters was manipulated. Realistic AIG conditions were…

  7. Peptide kinetics from picoseconds to microseconds using boxed molecular dynamics: Power law rate coefficients in cyclisation reactions

    NASA Astrophysics Data System (ADS)

    Shalashilin, Dmitrii V.; Beddard, Godfrey S.; Paci, Emanuele; Glowacki, David R.

    2012-10-01

    Molecular dynamics (MD) methods are increasingly widespread, but simulation of rare events in complex molecular systems remains a challenge. We recently introduced the boxed molecular dynamics (BXD) method, which accelerates rare events, and simultaneously provides both kinetic and thermodynamic information. We illustrate how the BXD method may be used to obtain high-resolution kinetic data from explicit MD simulations, spanning picoseconds to microseconds. The method is applied to investigate the loop formation dynamics and kinetics of cyclisation for a range of polypeptides, and recovers a power law dependence of the instantaneous rate coefficient over six orders of magnitude in time, in good agreement with experimental observations. Analysis of our BXD results shows that this power law behaviour arises when there is a broad and nearly uniform spectrum of reaction rate coefficients. For the systems investigated in this work, where the free energy surfaces have relatively small barriers, the kinetics is very sensitive to the initial conditions: strongly non-equilibrium conditions give rise to power law kinetics, while equilibrium initial conditions result in a rate coefficient with only a weak dependence on time. These results suggest that BXD may offer us a powerful and general algorithm for describing kinetics and thermodynamics in chemical and biochemical systems.

  8. A brightness exceeding simulated Langmuir limit

    NASA Astrophysics Data System (ADS)

    Nakasuji, Mamoru

    2013-08-01

    When the excitation of the first lens is set such that the beam is parallel, a brightness 100 times higher than the Langmuir limit is measured experimentally, where the Langmuir limits are estimated using a simulated axial cathode current density that is itself based on a measured emission current. The measured brightness is comparable to the Langmuir limit when the lens excitation is such that the image position is slightly closer than the lens position. Previously measured values of brightness for cathode apical radii of curvature of 20, 60, 120, 240, and 480 μm were 8.7, 5.3, 3.3, 2.4, and 3.9 times higher than their corresponding Langmuir limits, respectively; in this experiment, the lens excitation was such that the lens and the image positions were 180 mm and 400 mm, respectively. From these brightness measurements for three different lens excitation conditions, it is concluded that the brightness depends on the first lens excitation. For an electron gun operated in a space charge limited condition, some of the electrons emitted from the cathode are returned to the cathode without having crossed a virtual cathode. Therefore, the method of defining the Langmuir limit that assumes a Maxwellian distribution of electron velocities may need to be revised. For the condition in which brightness values exceeding the Langmuir limit are measured, the simulated trajectories of electrons emitted from the cathode do not cross the optical axis at the crossover, so the law of sines may not be valid for high brightness electron beam systems.

  9. A VERSATILE SHARP INTERFACE IMMERSED BOUNDARY METHOD FOR INCOMPRESSIBLE FLOWS WITH COMPLEX BOUNDARIES

    PubMed Central

    Mittal, R.; Dong, H.; Bozkurttas, M.; Najjar, F.M.; Vargas, A.; von Loebbecke, A.

    2010-01-01

    A sharp interface immersed boundary method for simulating incompressible viscous flow past three-dimensional immersed bodies is described. The method employs a multi-dimensional ghost-cell methodology to satisfy the boundary conditions on the immersed boundary and the method is designed to handle highly complex three-dimensional, stationary, moving and/or deforming bodies. The complex immersed surfaces are represented by grids consisting of unstructured triangular elements; while the flow is computed on non-uniform Cartesian grids. The paper describes the salient features of the methodology with special emphasis on the immersed boundary treatment for stationary and moving boundaries. Simulations of a number of canonical two- and three-dimensional flows are used to verify the accuracy and fidelity of the solver over a range of Reynolds numbers. Flow past suddenly accelerated bodies are used to validate the solver for moving boundary problems. Finally two cases inspired from biology with highly complex three-dimensional bodies are simulated in order to demonstrate the versatility of the method. PMID:20216919

  10. Uncertainty in simulated groundwater-quality trends in transient flow

    USGS Publications Warehouse

    Starn, J. Jeffrey; Bagtzoglou, Amvrossios; Robbins, Gary A.

    2013-01-01

    In numerical modeling of groundwater flow, the result of a given solution method is affected by the way in which transient flow conditions and geologic heterogeneity are simulated. An algorithm is demonstrated that simulates breakthrough curves at a pumping well by convolution-based particle tracking in a transient flow field for several synthetic basin-scale aquifers. In comparison to grid-based (Eulerian) methods, the particle (Lagrangian) method is better able to capture multimodal breakthrough caused by changes in pumping at the well, although the particle method may be apparently nonlinear because of the discrete nature of particle arrival times. Trial-and-error choice of number of particles and release times can perhaps overcome the apparent nonlinearity. Heterogeneous aquifer properties tend to smooth the effects of transient pumping, making it difficult to separate their effects in parameter estimation. Porosity, a new parameter added for advective transport, can be accurately estimated using both grid-based and particle-based methods, but predictions can be highly uncertain, even in the simple, nonreactive case.

  11. The Researches on Damage Detection Method for Truss Structures

    NASA Astrophysics Data System (ADS)

    Wang, Meng Hong; Cao, Xiao Nan

    2018-06-01

    This paper presents an effective method to detect damage in truss structures. Numerical simulation and experimental analysis were carried out on a damaged truss structure under instantaneous excitation. The ideal excitation point and appropriate hammering method were determined to extract time domain signals under two working conditions. The frequency response function and principal component analysis were used for data processing, and the angle between the frequency response function vectors was selected as a damage index to ascertain the location of a damaged bar in the truss structure. In the numerical simulation, the time domain signal of all nodes was extracted to determine the location of the damaged bar. In the experimental analysis, the time domain signal of a portion of the nodes was extracted on the basis of an optimal sensor placement method based on the node strain energy coefficient. The results of the numerical simulation and experimental analysis showed that the damage detection method based on the frequency response function and principal component analysis could locate the damaged bar accurately.
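    The damage index itself is simple to compute once the frequency response functions are available; the Python sketch below evaluates the angle between a reference and a test FRF vector, with the FRF vectors assumed to be given (e.g., estimated from the hammer-test data).

```python
import numpy as np

def frf_angle_index(H_ref, H_test):
    """Damage index: angle between frequency response function vectors.

    H_ref, H_test : complex FRF vectors (same frequency lines) for the
    undamaged reference and the test condition; a larger angle indicates a
    larger change, and comparing indices across members helps locate damage.
    """
    num = np.abs(np.vdot(H_ref, H_test))
    den = np.linalg.norm(H_ref) * np.linalg.norm(H_test)
    return np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
```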

  12. Cluster Free Energies from Simple Simulations of Small Numbers of Aggregants: Nucleation of Liquid MTBE from Vapor and Aqueous Phases.

    PubMed

    Patel, Lara A; Kindt, James T

    2017-03-14

    We introduce a global fitting analysis method to obtain free energies of association of noncovalent molecular clusters using equilibrated cluster size distributions from unbiased constant-temperature molecular dynamics (MD) simulations. Because the systems simulated are small enough that the law of mass action does not describe the aggregation statistics, the method relies on iteratively determining a set of cluster free energies that, using appropriately weighted sums over all possible partitions of N monomers into clusters, produces the best-fit size distribution. The quality of these fits can be used as an objective measure of self-consistency to optimize the cutoff distance that determines how clusters are defined. To showcase the method, we have simulated a united-atom model of methyl tert-butyl ether (MTBE) in the vapor phase and in explicit water solution over a range of system sizes (up to 95 MTBE in the vapor phase and 60 MTBE in the aqueous phase) and concentrations at 273 K. The resulting size-dependent cluster free energy functions follow a form derived from classical nucleation theory (CNT) quite well over the full range of cluster sizes, although deviations are more pronounced for small cluster sizes. The CNT fit to cluster free energies yielded surface tensions that were in both cases lower than those for the simulated planar interfaces. We use a simple model to derive a condition for minimizing non-ideal effects on cluster size distributions and show that the cutoff distance that yields the best global fit is consistent with this condition.

  13. Identification of pre-impact conditions of a cyclist involved in a vehicle-bicycle accident using an optimized MADYMO reconstruction combined with motion capture.

    PubMed

    Sun, Jie; Li, Zhengdong; Pan, Shaoyou; Feng, Hao; Shao, Yu; Liu, Ningguo; Huang, Ping; Zou, Donghua; Chen, Yijiu

    2018-05-01

    The aim of the present study was to develop an improved method, using MADYMO multi-body simulation software combined with an optimization method and three-dimensional (3D) motion capture, for identifying the pre-impact conditions of a cyclist (walking or cycling) involved in a vehicle-bicycle accident. First, a 3D motion capture system was used to analyze coupled motions of a volunteer while walking and cycling. The motion capture results were used to define the posture of the human model during walking and cycling simulations. Then, cyclist, bicycle and vehicle models were developed. Pre-impact parameters of the models were treated as unknown design variables. Finally, a multi-objective genetic algorithm, the nondominated sorting genetic algorithm II, was used to find optimal solutions. The objective functions of the walk parameter set were significantly lower than those of the cycle parameter set; thus, the cyclist was more likely to have been walking with the bicycle than riding it. In the most closely matched result found, all observed contact points matched and the injury parameters correlated well with the real injuries sustained by the cyclist. Based on the real accident reconstruction, the present study indicates that MADYMO multi-body simulation software, combined with an optimization method and 3D motion capture, can be used to identify the pre-impact conditions of a cyclist involved in a vehicle-bicycle accident. Copyright © 2018. Published by Elsevier Ltd.

  14. Assigning Robust Default Values in Building Performance Simulation Software for Improved Decision-Making in the Initial Stages of Building Design.

    PubMed

    Hiyama, Kyosuke

    2015-01-01

    Applying data mining techniques to a database of BIM models could provide valuable insight into key design patterns implicitly present in these models. The architectural designer would then be able to use previous data from existing building projects as default values in building performance simulation software for the early phases of building design. The author has proposed a method to minimize the magnitude of the variation in these default values in subsequent design stages. This approach maintains the accuracy of the simulation results in the initial stages of building design. In this study, a more convincing argument is presented to demonstrate the significance of the new method. The variation in the ideal default values for different building design conditions is assessed first. Next, the influence of each condition on these variations is investigated. The space depth is found to have a large impact on the ideal default value of the window to wall ratio. In addition, the presence or absence of lighting control and natural ventilation has a significant influence on the ideal default value. These effects can be used to identify the types of building conditions that should be considered to determine the ideal default values.
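
    As a loose illustration (not the paper's implementation) of mining past projects for condition-dependent defaults, the sketch below groups a hypothetical table of BIM-derived records by the conditions the study found influential and takes the median window-to-wall ratio per group; all column names and values are invented.

    ```python
    # Illustrative sketch: derive robust default window-to-wall ratios from past projects,
    # grouped by space depth class, lighting control, and natural ventilation.
    import pandas as pd

    # Hypothetical records extracted from past BIM projects (invented values).
    projects = pd.DataFrame({
        "space_depth_m":        [4.5, 5.0, 7.2, 8.0, 11.5, 13.0, 6.5, 9.0],
        "lighting_control":     [True, False, True, False, True, False, True, False],
        "natural_ventilation":  [False, False, True, True, False, True, True, False],
        "window_to_wall_ratio": [0.35, 0.42, 0.30, 0.38, 0.25, 0.33, 0.28, 0.40],
    })

    # Bin space depth so shallow and deep plans receive separate defaults.
    projects["depth_class"] = pd.cut(projects["space_depth_m"],
                                     bins=[0, 6, 12, 30],
                                     labels=["shallow", "medium", "deep"])

    defaults = (projects
                .groupby(["depth_class", "lighting_control", "natural_ventilation"],
                         observed=True)["window_to_wall_ratio"]
                .median()              # median as a robust default value
                .rename("default_wwr"))

    print(defaults)
    ```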

  16. High fidelity simulations of infrared imagery with animated characters

    NASA Astrophysics Data System (ADS)

    Näsström, F.; Persson, A.; Bergström, D.; Berggren, J.; Hedström, J.; Allvar, J.; Karlsson, M.

    2012-06-01

    High fidelity simulations of IR signatures and imagery tend to be slow and do not have effective support for animation of characters. Simplified rendering methods based on computer graphics techniques can be used to overcome these limitations. This paper presents a method to combine these tools and produce simulated high fidelity thermal IR data of animated people in terrain. Infrared signatures for human characters have been calculated using RadThermIR. To handle multiple character models, these calculations use a simplified material model for the anatomy and clothing. Weather and temperature conditions match the IR-texture used in the terrain model. The calculated signatures are applied to the animated 3D characters that, together with the terrain model, are used to produce high fidelity IR imagery of people or crowds. For high level animation control and crowd simulations, HLAS (High Level Animation System) has been developed. There are tools available to create and visualize skeleton-based animations, but tools that allow control of the animated characters on a higher level, e.g., for crowd simulation, are usually expensive and closed source. We need the flexibility of HLAS to add animation into an HLA-enabled sensor system simulation framework.
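
    The temperature-to-radiance step implied above can be illustrated with a short, self-contained sketch (not taken from the paper): Planck's law integrated over a long-wave IR band converts a surface temperature into in-band radiance for texturing. The band limits, emissivity, and temperatures are illustrative assumptions.

    ```python
    # Hedged sketch: band-integrated (8-12 um) radiance from surface temperature via Planck's law.
    import numpy as np

    H = 6.62607015e-34   # Planck constant [J s]
    C = 2.99792458e8     # speed of light [m/s]
    KB = 1.380649e-23    # Boltzmann constant [J/K]

    def planck_radiance(wavelength_m, temp_k):
        """Spectral radiance B(lambda, T) in W / (m^2 sr m)."""
        return (2.0 * H * C**2 / wavelength_m**5 /
                (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0))

    def inband_radiance(temp_k, lo=8e-6, hi=12e-6, emissivity=0.98, n=200):
        """Band-integrated radiance over [lo, hi] for a grey surface (trapezoid rule)."""
        lam = np.linspace(lo, hi, n)
        b = planck_radiance(lam, temp_k)
        return emissivity * np.sum(0.5 * (b[1:] + b[:-1]) * np.diff(lam))

    # Example: skin vs. clothing vs. background temperatures (illustrative values).
    for label, t in [("skin", 306.0), ("clothing", 295.0), ("terrain", 283.0)]:
        print(f"{label:8s} {inband_radiance(t):.2f} W/(m^2 sr)")
    ```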

  17. A Parameter Identification Method for Helicopter Noise Source Identification and Physics-Based Semi-Empirical Modeling

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric, II; Schmitz, Fredric H.

    2010-01-01

    A new physics-based parameter identification method for rotor harmonic noise sources is developed using an acoustic inverse simulation technique. This new method allows individual rotor harmonic noise sources to be identified and characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. It is shown to match the parametric trends of main rotor Blade-Vortex Interaction (BVI) noise, allowing accurate estimates of BVI noise to be made across a range of operating conditions from a small number of measurements.
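
    As a hedged illustration of parameter identification in general (not the paper's acoustic inverse formulation), the sketch below fits a hypothetical semi-empirical BVI sound-level model to a handful of invented measurements with nonlinear least squares; the functional form and the governing parameters used (advance ratio, tip-path-plane angle) are assumptions.

    ```python
    # Hedged sketch: identify parameters of a simple, invented BVI sound-level model
    # from a few "measurements" at different operating conditions.
    import numpy as np
    from scipy.optimize import curve_fit

    def bvi_model(X, a0, a1, a2):
        """Hypothetical BVI sound level [dB] as a function of (advance ratio mu, TPP angle alpha)."""
        mu, alpha = X
        return a0 + a1 * np.log10(mu) + a2 * (alpha - 2.0) ** 2

    # Small set of invented measurements at different operating conditions.
    mu = np.array([0.10, 0.15, 0.15, 0.20, 0.20, 0.25])
    alpha = np.array([1.0, 2.0, 4.0, 2.0, 5.0, 3.0])          # deg
    spl = np.array([96.5, 101.0, 99.2, 103.8, 100.1, 104.6])  # dB

    params, cov = curve_fit(bvi_model, (mu, alpha), spl, p0=[100.0, 5.0, -0.5])
    print("identified parameters:", params)

    # Once identified, the model can estimate noise at unmeasured conditions:
    print("predicted SPL at mu=0.18, alpha=3 deg:", bvi_model((0.18, 3.0), *params))
    ```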

  18. Optical fiber sensor of partial discharges in High Voltage DC experiments

    NASA Astrophysics Data System (ADS)

    Búa-Núñez, I.; Azcárraga-Ramos, C. G.; Posada-Román, J. E.; Garcia-Souto, J. A.

    2014-05-01

    A setup simulating High Voltage DC (HVDC) transformer barriers was developed to demonstrate the effectiveness of an optical fiber (OF) sensor in detecting partial discharges (PD) under these peculiar conditions. Two PD detection techniques were compared: electrical and acoustic methods. Standard piezoelectric sensors (R15i-AST) and the above-mentioned OF sensors were used for acoustic detection. The OF sensor detected PD acoustically with better sensitivity than the other detection methods. The multichannel instrumentation system was tested in real HVDC conditions with the aim of analyzing the behavior of the insulation (mineral oil/pressboard).

  19. A spectral approach for discrete dislocation dynamics simulations of nanoindentation

    NASA Astrophysics Data System (ADS)

    Bertin, Nicolas; Glavas, Vedran; Datta, Dibakar; Cai, Wei

    2018-07-01

    We present a spectral approach to perform nanoindentation simulations using three-dimensional nodal discrete dislocation dynamics. The method relies on a two step approach. First, the contact problem between an indenter of arbitrary shape and an isotropic elastic half-space is solved using a spectral iterative algorithm, and the contact pressure is fully determined on the half-space surface. The contact pressure is then used as a boundary condition of the spectral solver to determine the resulting stress field produced in the simulation volume. In both stages, the mechanical fields are decomposed into Fourier modes and are efficiently computed using fast Fourier transforms. To further improve the computational efficiency, the method is coupled with a subcycling integrator and a special approach is devised to approximate the displacement field associated with surface steps. As a benchmark, the method is used to compute the response of an elastic half-space using different types of indenter. An example of a dislocation dynamics nanoindentation simulation with complex initial microstructure is presented.
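
    A minimal sketch of the Fourier-mode ingredient described above: the normal surface displacement of an isotropic elastic half-space under a prescribed periodic pressure, using the standard Fourier-space half-space response u_z(q) = 2(1 - nu^2)/(E|q|) p(q). It is not the authors' iterative contact solver or the dislocation-dynamics coupling; the elastic constants and the pressure patch are illustrative.

    ```python
    # Hedged sketch: FFT-based normal surface displacement of an elastic half-space
    # under a prescribed periodic pressure distribution.
    import numpy as np

    E, nu = 100e9, 0.3                 # elastic constants (illustrative)
    L, n = 1e-6, 256                   # periodic cell size [m] and grid points per side

    x = np.linspace(0.0, L, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")

    # Hypothetical pressure distribution: a smooth bump mimicking an indenter contact patch.
    r2 = (X - L / 2) ** 2 + (Y - L / 2) ** 2
    p = 1e9 * np.exp(-r2 / (0.1 * L) ** 2)        # Pa

    qx = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    QX, QY = np.meshgrid(qx, qx, indexing="ij")
    q = np.sqrt(QX ** 2 + QY ** 2)
    q[0, 0] = np.inf                   # suppress the zero mode (mean displacement undetermined)

    p_hat = np.fft.fft2(p)
    u_hat = 2.0 * (1.0 - nu ** 2) / (E * q) * p_hat
    u = np.real(np.fft.ifft2(u_hat))   # normal surface displacement field [m]

    print("max deflection under the bump: %.3e m" % u.max())
    ```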

  20. Numerical Simulations of Hypersonic Boundary Layer Transition

    NASA Astrophysics Data System (ADS)

    Bartkowicz, Matthew David

    Numerical schemes for supersonic flows tend to use large amounts of artificial viscosity for stability, which damps out the small-scale structures in the flow. Recently, low-dissipation methods have been proposed that selectively eliminate the artificial viscosity in regions that do not require it. This work builds upon the low-dissipation method of Subbareddy and Candler, which uses the flux vector splitting of Steger and Warming but isolates the dissipative portion of the flux so that it can be removed where it is not needed. Computing accurate fluxes typically relies on large grid stencils or coupled linear systems that are computationally expensive to solve. Unstructured grids allow CFD solutions to be obtained on complex geometries; unfortunately, it then becomes difficult to construct large stencils or the coupled linear systems, and accurate solutions require grids that quickly become too large to be feasible. In this thesis, a method is proposed to obtain more accurate solutions using relatively local data, making it suitable for unstructured grids composed of hexahedral elements: fluxes are reconstructed using local gradients to extend the effective range of data. The method is validated on several test problems, and simulations of boundary layer transition are then performed. An elliptic cone at Mach 8 is simulated based on an experiment at the Princeton Gasdynamics Laboratory; a simulated acoustic noise boundary condition is imposed to model the noisy conditions of the wind tunnel, and the transitioning boundary layer is observed. A computation of an isolated roughness element is performed based on an experiment in Purdue's Mach 6 quiet wind tunnel. The mechanism for transition is identified as an instability in the upstream separation region, and a comparison is made to experimental data. In the CFD, a fully turbulent boundary layer is observed downstream.
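
    To make the central-plus-dissipation idea concrete, the sketch below applies it to linear scalar advection (not the thesis' Steger-Warming/Euler implementation): the first-order upwind flux is split into a central part and a dissipation part, and the dissipation is re-introduced only where a simple jump sensor flags a discontinuity, with SSP-RK3 time stepping keeping the nearly dissipation-free smooth regions stable. All numerical settings are illustrative.

    ```python
    # Hedged sketch: low-dissipation flux for u_t + a u_x = 0 on a periodic grid.
    import numpy as np

    a = 1.0                                   # advection speed
    n, cfl, t_end = 400, 0.4, 0.5
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    dx = x[1] - x[0]
    dt = cfl * dx / abs(a)

    # Initial condition: a square pulse (needs dissipation) plus a smooth wave (does not).
    u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0) + 0.2 * np.sin(8 * np.pi * x)

    def rhs(u):
        up = np.roll(u, -1)                   # u_{i+1}
        f_central = 0.5 * a * (u + up)        # non-dissipative central flux at i+1/2
        f_dissip = -0.5 * abs(a) * (up - u)   # upwind dissipation at i+1/2
        scale = 0.1 * (u.max() - u.min() + 1e-12)
        alpha = np.minimum(1.0, (np.abs(up - u) / scale) ** 2)   # ~1 at jumps, ~0 when smooth
        f = f_central + alpha * f_dissip      # selectively re-introduce dissipation
        return -(f - np.roll(f, 1)) / dx      # conservative divergence, periodic BCs

    t = 0.0
    while t < t_end:                          # SSP-RK3 time integration
        u1 = u + dt * rhs(u)
        u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
        u = u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))
        t += dt
    print("solution bounds after advection:", float(u.min()), float(u.max()))
    ```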
