Sample records for scaling analysis simulation

  1. Analysis of the Empathic Concern Subscale of the Emotional Response Questionnaire in a Study Evaluating the Impact of a 3D Cultural Simulation.

    PubMed

    Everson, Naleya; Levett-Jones, Tracy; Pitt, Victoria; Lapkin, Samuel; Van Der Riet, Pamela; Rossiter, Rachel; Jones, Donovan; Gilligan, Conor; Courtney-Pratt, Helen

    2018-04-25

    Background: Empathic concern has been found to decline in health professional students. Few effective educational programs and a lack of validated scales are reported. Previous analysis of the Empathic Concern scale of the Emotional Response Questionnaire has reported both one and two latent constructs. Aim: To evaluate the impact of simulation on nursing students' empathic concern and test the psychometric properties of the Empathic Concern scale. Methods: The study used a one-group pre-test post-test design with a convenience sample of 460 nursing students. Empathic concern was measured pre- and post-simulation with the Empathic Concern scale. Factor analysis was undertaken to investigate the structure of the scale. Results: There was a statistically significant increase in Empathic Concern scores between pre-simulation, 5.57 (SD = 1.04), and post-simulation, 6.10 (SD = 0.95). Factor analysis of the Empathic Concern scale identified one latent dimension. Conclusion: Immersive simulation may promote empathic concern. The Empathic Concern scale measured a single latent construct in this cohort.
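
    A minimal sketch of the one-group pre/post comparison reported above, on synthetic scores; the means and SDs come from the abstract, while the score generation and pre/post correlation are illustrative assumptions.

    ```python
    # Paired t-test for a one-group pre-test/post-test design, on fake data
    # generated to roughly match the reported means (5.57 pre, 6.10 post).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 460
    pre = rng.normal(5.57, 1.04, n)
    post = pre + rng.normal(6.10 - 5.57, 0.5, n)  # correlated post-test scores

    t, p = stats.ttest_rel(post, pre)             # paired (within-subject) test
    print(f"t({n - 1}) = {t:.2f}, p = {p:.3g}")
    ```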

  2. Adjoint Sensitivity Analysis for Scale-Resolving Turbulent Flow Solvers

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Garai, Anirban; Diosady, Laslo; Murman, Scott

    2017-11-01

    Adjoint-based sensitivity analysis methods are powerful design tools for engineers who use computational fluid dynamics. In recent years, these engineers have started to use scale-resolving simulations like large-eddy simulations (LES) and direct numerical simulations (DNS), which resolve more scales in complex flows with unsteady separation and jets than the widely used Reynolds-averaged Navier-Stokes (RANS) methods. However, the conventional adjoint method computes large, unusable sensitivities for scale-resolving simulations, which, unlike RANS simulations, exhibit the chaotic dynamics inherent in turbulent flows. Sensitivity analysis based on least-squares shadowing (LSS) avoids the issues encountered by conventional adjoint methods, but has a high computational cost even for relatively small simulations. The following talk discusses a more computationally efficient formulation of LSS, "non-intrusive" LSS, and its application to turbulent flows simulated with a discontinuous-Galerkin spectral-element-method LES/DNS solver. Results are presented for the minimal flow unit, a turbulent channel flow with a limited streamwise and spanwise domain.
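
    The failure mode the abstract describes, conventional sensitivities becoming unusable for chaotic dynamics, can be seen on a toy system. The sketch below shows only the motivating problem, not the authors' non-intrusive LSS remedy: finite-difference estimates of the sensitivity of a long-time average for the Lorenz-63 system do not settle as the averaging horizon grows.

    ```python
    # Finite-difference estimates of d<z>/d(rho) for Lorenz-63: for a chaotic
    # system these do not converge as the averaging horizon T increases.
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, u, rho, sigma=10.0, beta=8.0 / 3.0):
        x, y, z = u
        return [sigma * (y - x), x * (rho - y) - z, x * y - beta * z]

    def mean_z(rho, T):
        sol = solve_ivp(lorenz, (0.0, T), [1.0, 1.0, 1.0], args=(rho,), rtol=1e-8)
        return sol.y[2, sol.y.shape[1] // 2:].mean()   # discard transient half

    for T in (10.0, 100.0, 1000.0):
        d = 1e-3
        grad = (mean_z(28.0 + d, T) - mean_z(28.0 - d, T)) / (2 * d)
        print(f"T = {T:6.0f}: finite-difference d<z>/drho ~ {grad:+.2f}")
    ```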

  3. Impacts of different characterizations of large-scale background on simulated regional-scale ozone over the continental United States

    NASA Astrophysics Data System (ADS)

    Hogrefe, Christian; Liu, Peng; Pouliot, George; Mathur, Rohit; Roselle, Shawn; Flemming, Johannes; Lin, Meiyun; Park, Rokjin J.

    2018-03-01

    This study analyzes simulated regional-scale ozone burdens both near the surface and aloft, estimates process contributions to these burdens, and calculates the sensitivity of the simulated regional-scale ozone burden to several key model inputs with a particular emphasis on boundary conditions derived from hemispheric or global-scale models. The Community Multiscale Air Quality (CMAQ) model simulations supporting this analysis were performed over the continental US for the year 2010 within the context of the Air Quality Model Evaluation International Initiative (AQMEII) and Task Force on Hemispheric Transport of Air Pollution (TF-HTAP) activities. CMAQ process analysis (PA) results highlight the dominant role of horizontal and vertical advection on the ozone burden in the mid-to-upper troposphere and lower stratosphere. Vertical mixing, including mixing by convective clouds, couples fluctuations in free-tropospheric ozone to ozone in lower layers. Hypothetical bounding scenarios were performed to quantify the effects of emissions, boundary conditions, and ozone dry deposition on the simulated ozone burden. Analysis of these simulations confirms that the characterization of ozone outside the regional-scale modeling domain can have a profound impact on simulated regional-scale ozone. This was further investigated by using data from four hemispheric or global modeling systems (Chemistry - Integrated Forecasting Model (C-IFS), CMAQ extended for hemispheric applications (H-CMAQ), the Goddard Earth Observing System model coupled to chemistry (GEOS-Chem), and AM3) to derive alternate boundary conditions for the regional-scale CMAQ simulations. The regional-scale CMAQ simulations using these four different boundary conditions showed that the largest ozone abundance in the upper layers was simulated when using boundary conditions from GEOS-Chem, followed by the simulations using C-IFS, AM3, and H-CMAQ boundary conditions, consistent with the analysis of the ozone fields from the global models along the CMAQ boundaries. Using boundary conditions from AM3 yielded higher springtime ozone column burdens in the middle and lower troposphere compared to boundary conditions from the other models. For surface ozone, the differences between the AM3-driven CMAQ simulations and the CMAQ simulations driven by other large-scale models are especially pronounced during spring and winter where they can reach more than 10 ppb for seasonal mean ozone mixing ratios and as much as 15 ppb for domain-averaged daily maximum 8-h average ozone on individual days. In contrast, the differences between the C-IFS-, GEOS-Chem-, and H-CMAQ-driven regional-scale CMAQ simulations are typically smaller. Comparing simulated surface ozone mixing ratios to observations and computing seasonal and regional model performance statistics revealed that boundary conditions can have a substantial impact on model performance. Further analysis showed that boundary conditions can affect model performance across the entire range of the observed distribution, although the impacts tend to be lower during summer and for the very highest observed percentiles. The results are discussed in the context of future model development and analysis opportunities.
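
    The "daily maximum 8-h average" (MDA8) ozone metric quoted above is simple to compute from hourly data; a minimal sketch, with synthetic hourly values standing in for CMAQ output or monitor records:

    ```python
    # MDA8 ozone: for each day, the maximum of the 8-h running means whose
    # windows start within that day. Input here is fake hourly ozone (ppb).
    import numpy as np

    rng = np.random.default_rng(1)
    hours = 365 * 24
    hourly = 35 + 15 * np.sin(np.linspace(0, 2 * np.pi * 365, hours)) \
             + rng.normal(0, 5, hours)

    run8 = np.convolve(hourly, np.ones(8) / 8, mode="valid")  # 8-h running means
    mda8 = np.array([run8[d * 24:d * 24 + 24].max() for d in range(364)])
    print(f"annual mean MDA8: {mda8.mean():.1f} ppb")
    ```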

  4. AQMEII3 evaluation of regional NA/EU simulations and analysis of scale, boundary conditions and emissions error-dependence

    EPA Science Inventory

    Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) hel...

  5. Using Wavelet Analysis To Assist in Identification of Significant Events in Molecular Dynamics Simulations.

    PubMed

    Heidari, Zahra; Roe, Daniel R; Galindo-Murillo, Rodrigo; Ghasemi, Jahan B; Cheatham, Thomas E

    2016-07-25

    Long time scale molecular dynamics (MD) simulations of biological systems are becoming increasingly commonplace due to the availability of both large-scale computational resources and significant advances in the underlying simulation methodologies. Therefore, it is useful to investigate and develop data mining and analysis techniques to quickly and efficiently extract the biologically relevant information from the incredible amount of generated data. Wavelet analysis (WA) is a technique that can quickly reveal significant motions during an MD simulation. Here, the application of WA on well-converged long time scale (tens of μs) simulations of a DNA helix is described. We show how WA combined with a simple clustering method can be used to identify both the physical and temporal locations of events with significant motion in MD trajectories. We also show that WA can not only distinguish and quantify the locations and time scales of significant motions, but by changing the maximum time scale of WA a more complete characterization of these motions can be obtained. This allows motions of different time scales to be identified or ignored as desired.
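
    A minimal sketch of the wavelet-analysis idea, assuming PyWavelets' continuous wavelet transform as a stand-in for the authors' implementation, and a synthetic signal in place of an MD observable:

    ```python
    # CWT of a trajectory-like signal: large |coefficients| localize an event
    # in both time and scale, as described in the abstract.
    import numpy as np
    import pywt

    rng = np.random.default_rng(2)
    t = np.linspace(0, 10, 2000)
    # slow drift, a brief "event" near t = 6, and noise
    signal = 0.02 * t + 0.8 * np.exp(-((t - 6.0) / 0.2) ** 2) \
             + 0.05 * rng.normal(size=t.size)

    coeffs, freqs = pywt.cwt(signal, np.arange(1, 64), "morl")

    s, i = np.unravel_index(np.abs(coeffs).argmax(), coeffs.shape)
    print(f"strongest motion near t = {t[i]:.2f} at scale index {s}")
    ```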

  6. Development and validation of the Simulation Learning Effectiveness Scale for nursing students.

    PubMed

    Pai, Hsiang-Chu

    2016-11-01

    To develop and validate the Simulation Learning Effectiveness Scale, which is based on Bandura's social cognitive theory. A simulation programme is a significant teaching strategy for nursing students. Nevertheless, there are few evidence-based instruments that validate the effectiveness of simulation learning in Taiwan. This is a quantitative descriptive design. In Study 1, a nonprobability convenience sample of 151 student nurses completed the Simulation Learning Effectiveness Scale. Exploratory factor analysis was used to examine the factor structure of the instrument. In Study 2, which involved 365 student nurses, confirmatory factor analysis and structural equation modelling were used to analyse the construct validity of the Simulation Learning Effectiveness Scale. In Study 1, exploratory factor analysis yielded three components: self-regulation, self-efficacy and self-motivation. The three factors explained 29·09, 27·74 and 19·32% of the variance, respectively. The final 12-item instrument with the three factors explained 76·15% of variance. Cronbach's alpha was 0·94. In Study 2, confirmatory factor analysis identified a second-order factor termed Simulation Learning Effectiveness Scale. Goodness-of-fit indices showed an acceptable fit overall with the full model (χ²/df(51) = 3·54, comparative fit index = 0·96, Tucker-Lewis index = 0·95 and standardised root-mean-square residual = 0·035). In addition, teacher's competence in encouraging learning, and self-reflection and insight, were significantly and positively associated with the Simulation Learning Effectiveness Scale. Teacher's competence in encouraging learning also was significantly and positively associated with self-reflection and insight. Overall, these variables explained 21·9% of the variance in students' learning effectiveness. The Simulation Learning Effectiveness Scale is a reliable and valid means to assess simulation learning effectiveness for nursing students. It can be used to examine nursing students' learning effectiveness and serve as a basis for improving students' learning efficiency through simulation programmes. Future implementation research that focuses on the relationship between learning effectiveness and nursing competence in nursing students is recommended. © 2016 John Wiley & Sons Ltd.
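
    The exploratory step reported above (three varimax-rotated factors from Likert items) can be sketched with scikit-learn; the synthetic responses and the block loading structure are assumptions, not the study's data.

    ```python
    # Varimax-rotated exploratory factor analysis of 12 items from 151
    # "respondents", built so that items load on three latent constructs.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(3)
    n, items = 151, 12
    latent = rng.normal(size=(n, 3))       # e.g. regulation/efficacy/motivation
    loadings = np.zeros((3, items))
    loadings[0, :4] = loadings[1, 4:8] = loadings[2, 8:] = 0.8
    X = latent @ loadings + rng.normal(0, 0.5, (n, items))

    fa = FactorAnalysis(n_components=3, rotation="varimax").fit(X)
    print(np.round(fa.components_, 2))     # rotated loadings: one block per factor
    ```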

  7. Multi-Scale Modeling of Liquid Phase Sintering Affected by Gravity: Preliminary Analysis

    NASA Technical Reports Server (NTRS)

    Olevsky, Eugene; German, Randall M.

    2012-01-01

    A multi-scale simulation concept taking into account the impact of gravity on liquid phase sintering is described. The gravity influence can be included at both the micro- and macro-scales. At the micro-scale, the diffusion mass-transport is directionally modified in the framework of kinetic Monte-Carlo simulations to include the impact of gravity. The micro-scale simulations can provide the values of the constitutive parameters for macroscopic sintering simulations. At the macro-scale, we are attempting to embed a continuum model of sintering into a finite-element framework that includes the gravity forces and substrate friction. If successful, the finite-element analysis will enable predictions relevant to space-based processing, including size, shape, and property predictions. Model experiments are underway to support the models via extraction of viscosity moduli versus composition, particle size, heating rate, temperature and time.
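
    A hypothetical sketch of the micro-scale ingredient described above: a kinetic Monte Carlo walk whose hop rates carry a gravitational energy bias. The Arrhenius rate form and every parameter value are illustrative assumptions, not the authors' sintering model.

    ```python
    # Gravity-biased kinetic Monte Carlo: upward hops pay an energy penalty,
    # so the walker drifts downward; time advances by the usual KMC increment.
    import numpy as np

    rng = np.random.default_rng(4)
    kT, dE_up = 1.0, 0.3                  # thermal energy; cost of an upward hop
    rate_up, rate_dn = np.exp(-dE_up / kT), 1.0

    z, t = 0, 0.0
    for _ in range(100_000):
        total = rate_up + rate_dn
        z += 1 if rng.uniform(0, total) < rate_up else -1  # pick event by rate
        t += rng.exponential(1.0 / total)                  # KMC time increment
    print(f"mean drift velocity ~ {z / t:+.3f} (negative = settling)")
    ```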

  8. Multimodel Simulation of Water Flow: Uncertainty Analysis

    USDA-ARS?s Scientific Manuscript database

    Simulations of soil water flow require measurements of soil hydraulic properties which are particularly difficult at the field scale. Laboratory measurements provide hydraulic properties at scales finer than the field scale, whereas pedotransfer functions (PTFs) integrate information on hydraulic pr...

  9. WarpIV: In situ visualization and analysis of ion accelerator simulations

    DOE PAGES

    Rubel, Oliver; Loring, Burlen; Vay, Jean-Luc; ...

    2016-05-09

    The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. The Warp In situ Visualization Toolkit (WarpIV) supports large-scale, parallel, in situ visualization and analysis and facilitates query- and feature-based analytics, enabling for the first time high-performance analysis of large-scale, high-fidelity particle accelerator simulations while the data is being generated by the Warp simulation suite. Furthermore, this supplemental material https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.

  10. Ascertaining Validity in the Abstract Realm of PMESII Simulation Models: An Analysis of the Peace Support Operations Model (PSOM)

    DTIC Science & Technology

    2009-06-01

    simulation is the campaign-level Peace Support Operations Model (PSOM). This thesis provides a quantitative analysis of PSOM. The results are based ... multiple potential outcomes; further development and analysis is required before the model is used for large-scale analysis.

  11. 78 FR 71785 - Passenger Train Emergency Systems II

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-29

    ... in debriefing and critique sessions following emergency situations and full-scale simulations. DATES... Session Following Emergency Situations and Full-Scale Simulations V. Section-by-Section Analysis A... and simulations. As part of these amendments, FRA is incorporating by reference three American Public...

  12. A FRAMEWORK FOR FINE-SCALE COMPUTATIONAL FLUID DYNAMICS AIR QUALITY MODELING AND ANALYSIS

    EPA Science Inventory

    Fine-scale Computational Fluid Dynamics (CFD) simulation of pollutant concentrations within roadway and building microenvironments is feasible using high performance computing. Unlike currently used regulatory air quality models, fine-scale CFD simulations are able to account rig...

  13. Quantitative analysis of voids in percolating structures in two-dimensional N-body simulations

    NASA Technical Reports Server (NTRS)

    Harrington, Patrick M.; Melott, Adrian L.; Shandarin, Sergei F.

    1993-01-01

    We present in this paper a quantitative method for defining void size in large-scale structure based on percolation threshold density. Beginning with two-dimensional gravitational clustering simulations smoothed to the threshold of nonlinearity, we perform percolation analysis to determine the large scale structure. The resulting objective definition of voids has a natural scaling property, is topologically interesting, and can be applied immediately to redshift surveys.
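
    A sketch of the thresholding-and-labeling step behind such a percolation-based void definition, with scipy.ndimage and a smoothed random field standing in for the authors' percolation analysis and N-body density fields:

    ```python
    # Label connected underdense regions ("voids") below a percolation-motivated
    # density threshold and measure their sizes.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(5)
    rho = ndimage.gaussian_filter(rng.normal(size=(256, 256)), sigma=8)

    threshold = np.percentile(rho, 30)        # e.g. the underdense 30% of cells
    voids, n_voids = ndimage.label(rho < threshold)
    sizes = ndimage.sum(np.ones_like(rho), voids, index=range(1, n_voids + 1))
    print(f"{n_voids} voids; largest covers {sizes.max() / rho.size:.1%} of area")
    ```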

  14. Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System

    NASA Astrophysics Data System (ADS)

    He, Qing; Li, Hong

    Belt conveyors are among the most important devices for transporting bulk solid material over long distances. Dynamic analysis is key to deciding whether a design is technically rational, safe and reliable in operation, and economically feasible; it is very important for studying dynamic properties, improving efficiency and productivity, and guaranteeing safe, reliable and stable conveyor running. Dynamic research on, and applications of, large-scale belt conveyors are discussed. The main research topics and the state of the art of dynamic research on belt conveyors are analyzed. Future work will focus on dynamic analysis, modeling and simulation of the main components and the whole system, and on nonlinear modeling, simulation and vibration analysis of large-scale conveyor systems.

  15. Dynamics analysis of the fast-slow hydro-turbine governing system with different time-scale coupling

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Chen, Diyi; Wu, Changzhi; Wang, Xiangyu

    2018-01-01

    Multi-time-scale modeling of the hydro-turbine governing system is crucial for precise modeling of hydropower plants and provides support for the stability analysis of the system. Considering the inertia and response time of the hydraulic servo system, the hydro-turbine governing system is transformed into the fast-slow hydro-turbine governing system. The effects of the time-scale on the dynamical behavior of the system are analyzed and the fast-slow dynamical behaviors of the system are investigated for different time-scales. Furthermore, a theoretical analysis of the stable regions is presented. The influences of the time-scale on the stable region are analyzed by simulation, and the simulation results confirm the theoretical analysis. More importantly, the methods and results of this paper provide a perspective on multi-time-scale modeling of hydro-turbine governing systems and contribute to the optimization analysis and control of the system.

  16. Mercury and methylmercury stream concentrations in a Coastal Plain watershed: A multi-scale simulation analysis

    USGS Publications Warehouse

    Knightes, Christopher D.; Golden, Heather E.; Journey, Celeste A.; Davis, Gary M.; Conrads, Paul; Marvin-DiPasquale, Mark; Brigham, Mark E.; Bradley, Paul M.

    2014-01-01

    Mercury is a ubiquitous global environmental toxicant responsible for most US fish advisories. Processes governing mercury concentrations in rivers and streams are not well understood, particularly at multiple spatial scales. We investigate how insights gained from reach-scale mercury data and model simulations can be applied at broader watershed scales using a spatially and temporally explicit watershed hydrology and biogeochemical cycling model, VELMA. We simulate fate and transport using reach-scale (0.1 km²) study data and evaluate applications to multiple watershed scales. Reach-scale VELMA parameterization was applied to two nested sub-watersheds (28 km² and 25 km²) and the encompassing watershed (79 km²). Results demonstrate that simulated flow and total mercury concentrations compare reasonably to observations at different scales, but simulated methylmercury concentrations are out-of-phase with observations. These findings suggest that intricacies of methylmercury biogeochemical cycling and transport are under-represented in VELMA and underscore the complexity of simulating mercury fate and transport.

  17. WarpIV: In situ visualization and analysis of ion accelerator simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Loring, Burlen; Vay, Jean-Luc

    The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. The Warp In situ Visualization Toolkit (WarpIV) supports large-scale, parallel, in situ visualization and analysis and facilitates query- and feature-based analytics, enabling for the first time high-performance analysis of large-scale, high-fidelity particle accelerator simulations while the data is being generated by the Warp simulation suite. Furthermore, this supplemental material https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.

  18. Development of a parallel FE simulator for modeling the whole trans-scale failure process of rock from meso- to engineering-scale

    NASA Astrophysics Data System (ADS)

    Li, Gen; Tang, Chun-An; Liang, Zheng-Zhao

    2017-01-01

    Multi-scale high-resolution modeling of the rock failure process is a powerful means in modern rock mechanics studies to reveal complex failure mechanisms and to evaluate engineering risks. However, multi-scale continuous modeling of rock, from deformation and damage to failure, places high requirements on the design, implementation scheme and computational capacity of the numerical software system. This study is aimed at developing a parallel finite element procedure, a parallel rock failure process analysis (RFPA) simulator, capable of modeling the whole trans-scale failure process of rock. Based on the statistical meso-damage mechanical method, the RFPA simulator is able to construct heterogeneous rock models with multiple mechanical properties and to deal with and represent the trans-scale propagation of cracks, in which the stress and strain fields are solved for the damage evolution analysis of the representative volume element by the parallel finite element method (FEM) solver. This paper describes the theoretical basis of the approach and provides the details of the parallel implementation on a Windows-Linux interactive platform. A numerical model is built to test the parallel performance of the FEM solver. Numerical simulations are then carried out on a laboratory-scale uniaxial compression test, and on field-scale net fracture spacing and engineering-scale rock slope examples, respectively. The simulation results indicate that relatively high speedup and computational efficiency can be achieved by the parallel FEM solver with a reasonable boot process. In the laboratory-scale simulation, well-known physical phenomena, such as the macroscopic fracture pattern and stress-strain responses, can be reproduced. In the field-scale simulation, the formation process of net fracture spacing from initiation and propagation to saturation can be revealed completely. In the engineering-scale simulation, the whole progressive failure process of the rock slope can be well modeled. It is shown that the parallel FE simulator developed in this study is an efficient tool for modeling the whole trans-scale failure process of rock from meso- to engineering-scale.

  19. HRLSim: a high performance spiking neural network simulator for GPGPU clusters.

    PubMed

    Minkovich, Kirill; Thibeault, Corey M; O'Brien, Michael John; Nogin, Aleksey; Cho, Youngkwan; Srinivasa, Narayan

    2014-02-01

    Modeling of large-scale spiking neural models is an important tool in the quest to understand brain function and subsequently create real-world applications. This paper describes a spiking neural network simulator environment called HRL Spiking Simulator (HRLSim). This simulator is suitable for implementation on a cluster of general purpose graphical processing units (GPGPUs). Novel aspects of HRLSim are described and an analysis of its performance is provided for various configurations of the cluster. With the advent of inexpensive GPGPU cards and compute power, HRLSim offers an affordable and scalable tool for design, real-time simulation, and analysis of large-scale spiking neural networks.

  20. Development and psychometric testing of the satisfaction with Cultural Simulation Experience Scale.

    PubMed

    Courtney-Pratt, Helen; Levett-Jones, Tracy; Lapkin, Samuel; Pitt, Victoria; Gilligan, Conor; Van der Riet, Pamela; Rossiter, Rachel; Jones, Donovan; Everson, Naleya

    2015-11-01

    Decreasing the numbers of adverse health events experienced by people from culturally diverse backgrounds rests, in part, on the ability of education providers to provide quality learning experiences that support nursing students in developing cultural competence, an essential professional attribute. This paper reports on the implementation and evaluation of an immersive 3D cultural empathy simulation. The Satisfaction with Cultural Simulation Experience Scale used in this study was adapted and validated as the first stage of this study. Exploratory factor analysis and confirmatory factor analysis were undertaken to investigate the psychometric properties of the scale using two randomly-split sub-samples. Cronbach's alpha was used to examine internal consistency reliability. Descriptive statistics were used for analysis of mean satisfaction scores, and qualitative comments to open-ended questions were analysed and coded. A purposive sample (n = 497) of second-year nursing students participated in the study. The overall Cronbach's alpha for the scale was 0.95 and each subscale demonstrated high internal consistency: 0.92; 0.92; 0.72 respectively. The mean satisfaction score was 4.64 (SD 0.51) out of a maximum of 5, indicating a high level of participant satisfaction with the simulation. Three themes emerged from qualitative analysis: "Becoming culturally competent", "Learning from the debrief" and "Reflecting on practice". The cultural simulation was highly regarded by students. Psychometric testing of the Satisfaction with Cultural Simulation Experience Scale demonstrated that it is a reliable instrument. However, there is room for improvement and further testing in other contexts is therefore recommended. Copyright © 2015 Elsevier Ltd. All rights reserved.
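
    Cronbach's alpha, the internal-consistency statistic quoted throughout these scale-validation records (0.95 overall here), is a one-liner given an items-by-respondents score matrix; a minimal sketch on synthetic data:

    ```python
    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
    import numpy as np

    def cronbach_alpha(X):
        """X: (n_respondents, k_items) matrix of item scores."""
        k = X.shape[1]
        return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                              / X.sum(axis=1).var(ddof=1))

    rng = np.random.default_rng(6)
    common = rng.normal(size=(497, 1))               # shared latent construct
    X = common + rng.normal(0, 0.6, size=(497, 10))  # 10 correlated items
    print(f"alpha = {cronbach_alpha(X):.2f}")
    ```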

  1. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
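
    The cost argument above, that one adjoint solve yields derivatives with respect to all inputs at once, is easiest to see for a steady linear model J = g^T u with A u = f; the 50x50 system below is purely illustrative.

    ```python
    # Adjoint sensitivity for A u = f, J = g @ u: solving A^T lam = g gives
    # dJ/df_i = lam_i for every i at the cost of one extra solve, versus one
    # extra solve per input for finite differences.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 50
    A = np.eye(n) + 0.1 * rng.normal(size=(n, n))
    f, g = rng.normal(size=n), rng.normal(size=n)

    u = np.linalg.solve(A, f)                  # forward solve: J = g @ u
    lam = np.linalg.solve(A.T, g)              # single adjoint solve
    grad = lam                                 # full gradient dJ/df

    df = np.zeros(n); df[3] = 1e-6             # spot-check one component
    fd = (g @ np.linalg.solve(A, f + df) - g @ u) / 1e-6
    print(np.allclose(grad[3], fd, rtol=1e-4))  # True
    ```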

  2. Incremental dynamical downscaling for probabilistic analysis based on multiple GCM projections

    NASA Astrophysics Data System (ADS)

    Wakazuki, Y.

    2015-12-01

    A dynamical downscaling method for probabilistic regional-scale climate change projections was developed to cover the uncertainty of multiple general circulation model (GCM) climate simulations. The climatological increments (future minus present climate states) estimated from GCM simulation results were statistically analyzed using singular value decomposition. Both positive and negative perturbations from the ensemble mean, with magnitudes of one standard deviation, were extracted and added to the ensemble mean of the climatological increments. The analyzed multiple modal increments were used to create multiple modal lateral boundary conditions for the future-climate regional climate model (RCM) simulations by adding them to an objective analysis data set. This data handling can be regarded as an advanced form of the pseudo-global-warming (PGW) method previously developed by Kimura and Kitoh (2007). The incremental handling of GCM simulations realizes approximate probabilistic climate change projections with a smaller number of RCM simulations. Three values of a climatological variable simulated by RCMs for a mode were used to estimate the response to the perturbation of that mode. For the probabilistic analysis, climatological variables of RCMs were assumed to respond linearly to the multiple modal perturbations, although non-linearity was seen for local-scale rainfall. The probability distribution of temperature could be estimated with two-mode perturbation simulations, where the number of RCM simulations for the future climate is five. On the other hand, local-scale rainfall needed four-mode simulations, where the number of RCM simulations is nine. The probabilistic method is expected to be used for regional-scale climate change impact assessment in the future.
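
    A sketch of the mode-extraction step described above: SVD of the inter-model increments, then ensemble mean plus/minus one-standard-deviation modal perturbations as RCM boundary forcings. Array shapes and data are illustrative assumptions.

    ```python
    # Leading modes of inter-model spread in GCM climatological increments,
    # and the 1 + 2*n_modes boundary-forcing members built from them.
    import numpy as np

    rng = np.random.default_rng(8)
    n_models, n_grid = 10, 500
    increments = rng.normal(size=(n_models, n_grid))  # future-minus-present fields

    mean = increments.mean(axis=0)
    U, s, Vt = np.linalg.svd(increments - mean, full_matrices=False)

    members = [mean]                                  # ensemble-mean increment
    for k in range(2):                                # two leading modes
        sigma_k = s[k] / np.sqrt(n_models - 1)        # mode-k amplitude std dev
        for sign in (+1.0, -1.0):
            members.append(mean + sign * sigma_k * Vt[k])
    print(f"{len(members)} boundary forcings (five RCM runs, as in the abstract)")
    ```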

  3. The Analysis, Numerical Simulation, and Diagnosis of Extratropical Weather Systems

    DTIC Science & Technology

    1999-09-30

    The Analysis, Numerical Simulation, and Diagnosis of Extratropical Weather Systems. Dr. Melvyn A. Shapiro, NOAA/Environmental Technology Laboratory. ... formulation, and numerical prediction of the life cycles of synoptic-scale and mesoscale extratropical weather systems, including the influence of planetary-scale inter-annual and intra-seasonal variability on their evolution. These weather systems include extratropical oceanic and land-falling cyclones

  4. SMR Re-Scaling and Modeling for Load Following Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoover, K.; Wu, Q.; Bragg-Sitton, S.

    2016-11-01

    This study investigates the creation of a new set of scaling parameters for the Oregon State University Multi-Application Small Light Water Reactor (MASLWR) scaled thermal-hydraulic test facility. As part of a study being undertaken by Idaho National Laboratory involving nuclear reactor load-following characteristics, full-power operations need to be simulated, and therefore properly scaled. Presented here are the scaling analysis and plans for RELAP5-3D simulation.

  5. Mercury and methylmercury stream concentrations in a Coastal Plain watershed: a multi-scale simulation analysis.

    PubMed

    Knightes, C D; Golden, H E; Journey, C A; Davis, G M; Conrads, P A; Marvin-DiPasquale, M; Brigham, M E; Bradley, P M

    2014-04-01

    Mercury is a ubiquitous global environmental toxicant responsible for most US fish advisories. Processes governing mercury concentrations in rivers and streams are not well understood, particularly at multiple spatial scales. We investigate how insights gained from reach-scale mercury data and model simulations can be applied at broader watershed scales using a spatially and temporally explicit watershed hydrology and biogeochemical cycling model, VELMA. We simulate fate and transport using reach-scale (0.1 km²) study data and evaluate applications to multiple watershed scales. Reach-scale VELMA parameterization was applied to two nested sub-watersheds (28 km² and 25 km²) and the encompassing watershed (79 km²). Results demonstrate that simulated flow and total mercury concentrations compare reasonably to observations at different scales, but simulated methylmercury concentrations are out-of-phase with observations. These findings suggest that intricacies of methylmercury biogeochemical cycling and transport are under-represented in VELMA and underscore the complexity of simulating mercury fate and transport. Published by Elsevier Ltd.

  6. Scaling Analysis of Alloy Solidification and Fluid Flow in a Rectangular Cavity

    NASA Astrophysics Data System (ADS)

    Plotkowski, A.; Fezi, K.; Krane, M. J. M.

    A scaling analysis was performed to predict trends in alloy solidification in a side-cooled rectangular cavity. The governing equations for energy and momentum were scaled in order to determine the dependence of various aspects of solidification on the process parameters for a uniform initial temperature and an isothermal boundary condition. This work improved on previous analyses by adding considerations for the cooling bulk fluid flow. The analysis predicted the time required to extinguish the superheat, the maximum local solidification time, and the total solidification time. The results were compared to a numerical simulation for an Al-4.5 wt.% Cu alloy with various initial and boundary conditions. Good agreement was found between the simulation results and the trends predicted by the scaling analysis.

  7. Multiscale analysis of structure development in expanded starch snacks

    NASA Astrophysics Data System (ADS)

    van der Sman, R. G. M.; Broeze, J.

    2014-11-01

    In this paper we perform a multiscale analysis of the food structuring process in the expansion of starchy snack foods like keropok, which obtain a solid foam structure. In particular, we want to investigate the validity of the hypothesis of Kokini and coworkers that expansion is optimal at the moisture content where the glass transition and boiling lines intersect. In our analysis we make use of several tools: (1) time scale analysis from the field of physical transport phenomena, (2) the scale separation map (SSM) developed within a multiscale simulation framework of complex automata, (3) the supplemented state diagram (SSD), depicting phase transition and glass transition lines, and (4) a multiscale simulation model for the bubble expansion. Results of the time scale analysis are plotted in the SSD, and give insight into the dominant physical processes involved in expansion. Furthermore, the results of the time scale analysis are used to construct the SSM, which has aided us in the construction of the multiscale simulation model. Simulation results are plotted in the SSD. This clearly shows that the hypothesis of Kokini is qualitatively true, but has to be refined. Our results show that bubble expansion is optimal at the moisture content where the boiling line for a gas pressure of 4 bar intersects the isoviscosity line of the critical viscosity of 10⁶ Pa·s, which runs parallel to the glass transition line.

  8. State-resolved Thermal/Hyperthermal Dynamics of Atmospheric Species

    DTIC Science & Technology

    2015-06-23

    ... areas: 1) Diode laser and LIF studies of hyperthermal CO2 and NO collisions at gas-room-temperature ionic liquid (RTIL) interfaces. 2) Large-scale trajectory simulations for theoretical analysis of gas-liquid scattering studies. 3) LIF data for state-resolved scattering of hyperthermal NO at

  9. Development and validation of the simulation-based learning evaluation scale.

    PubMed

    Hung, Chang-Chiao; Liu, Hsiu-Chen; Lin, Chun-Chih; Lee, Bih-O

    2016-05-01

    Existing instruments that evaluate students' perceptions of simulation training are English versions and have not been tested for reliability or validity. The aim of this study was to develop and validate a Chinese-version Simulation-Based Learning Evaluation Scale (SBLES). Four stages were conducted to develop and validate the SBLES. First, specific desired competencies were identified according to the National League for Nursing and Taiwan Nursing Accreditation Council core competencies. Next, the initial item pool was comprised of 50 items related to simulation that were drawn from the literature of core competencies. Content validity was established by use of an expert panel. Finally, exploratory factor analysis and confirmatory factor analysis were conducted for construct validity, and Cronbach's coefficient alpha determined the scale's internal consistency reliability. Two hundred and fifty students who had experienced simulation-based learning were invited to participate in this study. Two hundred and twenty-five students completed and returned questionnaires (response rate = 90%). Six items were deleted from the initial item pool and one was added after the expert panel review. Exploratory factor analysis with varimax rotation revealed 37 items remaining in five factors which accounted for 67% of the variance. The construct validity of the SBLES was substantiated in a confirmatory factor analysis that revealed a good fit of the hypothesized factor structure. The findings satisfy the criteria for convergent and discriminant validity. The range of internal consistency for the five subscales was .90 to .93. Items were rated on a 5-point scale from 1 (strongly disagree) to 5 (strongly agree). The results of this study indicate that the SBLES is valid and reliable. The authors recommend that the scale be applied in nursing schools to evaluate the effectiveness of simulation-based learning curricula. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Scaling analysis and SE simulation of the tilted cylinder-interface capillary interaction

    NASA Astrophysics Data System (ADS)

    Gao, S. Q.; Zhang, X. Y.; Zhou, Y. H.

    2018-06-01

    The capillary interaction induced by a tilted cylinder at an interface is the basic configuration of many complex systems, such as micro-pillar array clustering, the super-hydrophobicity of hairy surfaces, water-walking insects, and fiber aggregation. We systematically analyzed the scaling effects of tilt angle, contact angle, and cylinder radius on the contact-line shape by SE simulation and experiment. An in-depth analysis of the characteristic parameters of the deformed contact lines (shift, stretch and distortion) reveals the self-similar shape of the contact line. A general capillary force scaling law, based on a quite straightforward ellipse-approximation approach, is then proposed and captures all of the simulated and experimental data.

  11. Using Discrete Event Simulation for Programming Model Exploration at Extreme-Scale: Macroscale Components for the Structural Simulation Toolkit (SST).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilke, Jeremiah J; Kenny, Joseph P.

    2015-02-01

    Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, design can first iterate through a simulator. This is particularly useful when test beds cannot be used, i.e. to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the structural simulation toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics like call graphs to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.
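
    A toy discrete-event core of the kind such simulators build on; SST itself executes real code on real software stacks, so this sketch only shows the clock-plus-priority-queue mechanism, and the latency value is an arbitrary assumption.

    ```python
    # Minimal discrete-event simulation: a clock and a heap of (time, event).
    import heapq

    class Simulator:
        def __init__(self):
            self.now, self._queue, self._seq = 0.0, [], 0

        def schedule(self, delay, action):
            heapq.heappush(self._queue, (self.now + delay, self._seq, action))
            self._seq += 1  # tie-breaker keeps equal-time events ordered

        def run(self, until):
            while self._queue and self._queue[0][0] <= until:
                self.now, _, action = heapq.heappop(self._queue)
                action()

    sim = Simulator()
    def send(rank):                      # model a message hop with 5 us latency
        print(f"t = {sim.now:.1f} us: rank {rank} sends")
        if rank < 3:
            sim.schedule(5.0, lambda: send(rank + 1))
    sim.schedule(0.0, lambda: send(0))
    sim.run(until=100.0)
    ```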

  12. Scaled boundary finite element simulation and modeling of the mechanical behavior of cracked nanographene sheets

    NASA Astrophysics Data System (ADS)

    Honarmand, M.; Moradi, M.

    2018-06-01

    In this paper, perfect and cracked nanographene sheets were simulated for the first time using the scaled boundary finite element method (SBFM). In this analysis, the atomic carbon bonds were modeled by simple bar elements with circular cross-sections. Compared with molecular dynamics (MD), the results obtained from the SBFM analysis are quite acceptable for zero-degree cracks. For all angles except zero, the Griffith criterion can be applied for the relation between critical stress and crack length. Finally, despite the simplifications used in the nanographene analysis, the obtained results can reproduce the mechanical behavior with high accuracy compared with experimental and MD ones.

  13. Extended-range high-resolution dynamical downscaling over a continental-scale spatial domain with atmospheric and surface nudging

    NASA Astrophysics Data System (ADS)

    Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.

    2014-12-01

    Extended-range high-resolution mesoscale simulations with limited-area atmospheric models when applied to downscale regional analysis fields over large spatial domains can provide valuable information for many applications including the weather-dependent renewable energy industry. Long-term simulations over a continental-scale spatial domain, however, require mechanisms to control the large-scale deviations in the high-resolution simulated fields from the coarse-resolution driving fields. As enforcement of the lateral boundary conditions is insufficient to restrict such deviations, large scales in the simulated high-resolution meteorological fields are therefore spectrally nudged toward the driving fields. Different spectral nudging approaches, including the appropriate nudging length scales as well as the vertical profiles and temporal relaxations for nudging, have been investigated to propose an optimal nudging strategy. Impacts of time-varying nudging and generation of hourly analysis estimates are explored to circumvent problems arising from the coarse temporal resolution of the regional analysis fields. Although controlling the evolution of the atmospheric large scales generally improves the outputs of high-resolution mesoscale simulations within the surface layer, the prognostically evolving surface fields can nevertheless deviate from their expected values leading to significant inaccuracies in the predicted surface layer meteorology. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, toward their expected values obtained from a high-resolution offline surface scheme is therefore proposed to limit any considerable deviation. Finally, wind speed and temperature at wind turbine hub height predicted by different spectrally nudged extended-range simulations are compared against observations to demonstrate possible improvements achievable using higher spatiotemporal resolution.
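
    A hypothetical 1D sketch of the spectral nudging described above: after each time step, wavenumbers at or below a cutoff are relaxed toward the driving (coarse) field while smaller scales evolve freely. The cutoff, relaxation strength and fields are illustrative, not the study's configuration.

    ```python
    # Spectral nudging of the large-scale Fourier modes of a limited-area field
    # toward a driving analysis field.
    import numpy as np

    def spectral_nudge(field, driving, k_cut, alpha):
        """Relax modes k <= k_cut of `field` toward `driving`; alpha = dt/tau."""
        fk, dk = np.fft.rfft(field), np.fft.rfft(driving)
        large = np.arange(fk.size) <= k_cut
        fk[large] += alpha * (dk[large] - fk[large])
        return np.fft.irfft(fk, n=field.size)

    x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    driving = np.sin(2 * x)                           # large-scale analysis field
    lam = np.sin(2 * x + 0.5) + 0.3 * np.sin(25 * x)  # drifted LAM field + detail
    nudged = spectral_nudge(lam, driving, k_cut=4, alpha=0.1)
    ```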

  14. Scaling Properties of Arctic Sea Ice Deformation in a High‐Resolution Viscous‐Plastic Sea Ice Model and in Satellite Observations

    PubMed Central

    Losch, Martin; Menemenlis, Dimitris

    2018-01-01

    Sea ice models with the traditional viscous‐plastic (VP) rheology and very small horizontal grid spacing can resolve leads and deformation rates localized along Linear Kinematic Features (LKF). In a 1 km pan‐Arctic sea ice‐ocean simulation, the small‐scale sea ice deformations are evaluated with a scaling analysis in relation to satellite observations of the Envisat Geophysical Processor System (EGPS) in the Central Arctic. A new coupled scaling analysis for data on Eulerian grids is used to determine the spatial and temporal scaling and the coupling between temporal and spatial scales. The spatial scaling of the modeled sea ice deformation implies multifractality. It is also coupled to temporal scales and varies realistically by region and season. The agreement of the spatial scaling with satellite observations challenges previous results with VP models at coarser resolution, which did not reproduce the observed scaling. The temporal scaling analysis shows that the VP model, as configured in this 1 km simulation, does not fully resolve the intermittency of sea ice deformation that is observed in satellite data. PMID:29576996

  15. Scaling Properties of Arctic Sea Ice Deformation in a High-Resolution Viscous-Plastic Sea Ice Model and in Satellite Observations

    NASA Astrophysics Data System (ADS)

    Hutter, Nils; Losch, Martin; Menemenlis, Dimitris

    2018-01-01

    Sea ice models with the traditional viscous-plastic (VP) rheology and very small horizontal grid spacing can resolve leads and deformation rates localized along Linear Kinematic Features (LKF). In a 1 km pan-Arctic sea ice-ocean simulation, the small-scale sea ice deformations are evaluated with a scaling analysis in relation to satellite observations of the Envisat Geophysical Processor System (EGPS) in the Central Arctic. A new coupled scaling analysis for data on Eulerian grids is used to determine the spatial and temporal scaling and the coupling between temporal and spatial scales. The spatial scaling of the modeled sea ice deformation implies multifractality. It is also coupled to temporal scales and varies realistically by region and season. The agreement of the spatial scaling with satellite observations challenges previous results with VP models at coarser resolution, which did not reproduce the observed scaling. The temporal scaling analysis shows that the VP model, as configured in this 1 km simulation, does not fully resolve the intermittency of sea ice deformation that is observed in satellite data.

  16. Scaling Properties of Arctic Sea Ice Deformation in a High-Resolution Viscous-Plastic Sea Ice Model and in Satellite Observations.

    PubMed

    Hutter, Nils; Losch, Martin; Menemenlis, Dimitris

    2018-01-01

    Sea ice models with the traditional viscous-plastic (VP) rheology and very small horizontal grid spacing can resolve leads and deformation rates localized along Linear Kinematic Features (LKF). In a 1 km pan-Arctic sea ice-ocean simulation, the small-scale sea ice deformations are evaluated with a scaling analysis in relation to satellite observations of the Envisat Geophysical Processor System (EGPS) in the Central Arctic. A new coupled scaling analysis for data on Eulerian grids is used to determine the spatial and temporal scaling and the coupling between temporal and spatial scales. The spatial scaling of the modeled sea ice deformation implies multifractality. It is also coupled to temporal scales and varies realistically by region and season. The agreement of the spatial scaling with satellite observations challenges previous results with VP models at coarser resolution, which did not reproduce the observed scaling. The temporal scaling analysis shows that the VP model, as configured in this 1 km simulation, does not fully resolve the intermittency of sea ice deformation that is observed in satellite data.
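
    A sketch of the spatial scaling analysis used in these records: coarse-grain the deformation field to box size L, form moments <|eps|^q>(L), and fit power-law exponents beta(q); curvature of beta(q) versus q indicates multifractality. The lognormal field is synthetic, standing in for model or EGPS deformation rates.

    ```python
    # Moment-scaling analysis of a 2D deformation-rate field.
    import numpy as np

    rng = np.random.default_rng(9)
    eps = rng.lognormal(mean=-2.0, sigma=1.0, size=(512, 512))

    def moment(field, L, q):
        n = field.shape[0] // L
        coarse = field[:n * L, :n * L].reshape(n, L, n, L).mean(axis=(1, 3))
        return (coarse ** q).mean()

    scales = np.array([1, 2, 4, 8, 16, 32, 64])
    for q in (1.0, 2.0, 3.0):
        m = [moment(eps, L, q) for L in scales]
        beta = -np.polyfit(np.log(scales), np.log(m), 1)[0]  # <eps^q> ~ L^-beta
        print(f"q = {q}: scaling exponent beta = {beta:.2f}")
    ```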

  17. Propulsion simulator for magnetically-suspended wind tunnel models

    NASA Technical Reports Server (NTRS)

    Joshi, Prakash B.; Goldey, C. L.; Sacco, G. P.; Lawing, Pierce L.

    1991-01-01

    The objective of phase two of a current investigation sponsored by NASA Langley Research Center is to demonstrate the measurement of aerodynamic forces/moments, including the effects of exhaust gases, in magnetic suspension and balance system (MSBS) wind tunnels. Two propulsion simulator models are being developed: a small-scale and a large-scale unit, both employing compressed, liquified carbon dioxide as propellant. The small-scale unit was designed, fabricated, and statically-tested at Physical Sciences Inc. (PSI). The large-scale simulator is currently in the preliminary design stage. The small-scale simulator design/development is presented, and the data from its static firing on a thrust stand are discussed. The analysis of this data provides important information for the design of the large-scale unit. A description of the preliminary design of the device is also presented.

  18. Automatic Selection of Order Parameters in the Analysis of Large Scale Molecular Dynamics Simulations.

    PubMed

    Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S

    2014-12-09

    Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.
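
    A sketch of the CB-FS idea on a two-state toy data set, with scikit-learn's clustering and random-forest feature importances standing in for the authors' Markov-state-model pipeline:

    ```python
    # Cluster conformations into states, then rank which coordinates ("order
    # parameters") a supervised model uses to separate the states.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(10)
    n = 2000
    X = rng.normal(size=(n, 20))          # 20 structural features per frame
    X[: n // 2, 3] += 3.0                 # feature 3 actually separates the states

    states = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, states)
    top = np.argsort(clf.feature_importances_)[::-1][:3]
    print(f"candidate order parameters (feature indices): {top}")
    ```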

  19. Galaxy two-point covariance matrix estimation for next generation surveys

    NASA Astrophysics Data System (ADS)

    Howlett, Cullan; Percival, Will J.

    2017-12-01

    We perform a detailed analysis of the covariance matrix of the spherically averaged galaxy power spectrum and present a new, practical method for estimating this within an arbitrary survey without the need for running mock galaxy simulations that cover the full survey volume. The method uses theoretical arguments to modify the covariance matrix measured from a set of small-volume cubic galaxy simulations, which are computationally cheap to produce compared to larger simulations and match the measured small-scale galaxy clustering more accurately than is possible using theoretical modelling. We include prescriptions to analytically account for the window function of the survey, which convolves the measured covariance matrix in a non-trivial way. We also present a new method to include the effects of super-sample covariance and modes outside the small simulation volume which requires no additional simulations and still allows us to scale the covariance matrix. As validation, we compare the covariance matrix estimated using our new method to that from a brute-force calculation using 500 simulations originally created for analysis of the Sloan Digital Sky Survey Main Galaxy Sample. We find excellent agreement on all scales of interest for large-scale structure analysis, including those dominated by the effects of the survey window, and on scales where theoretical models of the clustering normally break down, but the new method produces a covariance matrix with significantly better signal-to-noise ratio. Although only formally correct in real space, we also discuss how our method can be extended to incorporate the effects of redshift space distortions.

  20. Simulation of Mesoscale Cellular Convection in Marine Stratocumulus. Part I: Drizzling Conditions

    DOE PAGES

    Zhou, Xiaoli; Ackerman, Andrew S.; Fridlind, Ann M.; ...

    2018-01-01

    This study uses eddy-permitting simulations to investigate the mechanisms that promote mesoscale variability of moisture in drizzling stratocumulus-topped marine boundary layers. Simulations show that precipitation tends to increase horizontal scales. Analysis of terms in the prognostic equation for total water mixing ratio variance indicates that moisture stratification plays a leading role in setting horizontal scales. This result is supported by simulations in which horizontal mean thermodynamic profiles are strongly nudged to their initial well-mixed state, which limits cloud scales. It is found that the spatial variability of subcloud moist cold pools surprisingly tends to respond to, rather than determine, the mesoscale variability, which may distinguish them from dry cold pools associated with deeper convection. Finally, simulations also indicate that moisture stratification increases cloud scales specifically by increasing latent heating within updrafts, which increases updraft buoyancy and favors greater horizontal scales.

  1. Simulation of Mesoscale Cellular Convection in Marine Stratocumulus. Part I: Drizzling Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Xiaoli; Ackerman, Andrew S.; Fridlind, Ann M.

    This study uses eddy-permitting simulations to investigate the mechanisms that promote mesoscale variability of moisture in drizzling stratocumulus-topped marine boundary layers. Simulations show that precipitation tends to increase horizontal scales. Analysis of terms in the prognostic equation for total water mixing ratio variance indicates that moisture stratification plays a leading role in setting horizontal scales. This result is supported by simulations in which horizontal mean thermodynamic profiles are strongly nudged to their initial well-mixed state, which limits cloud scales. It is found that the spatial variability of subcloud moist cold pools surprisingly tends to respond to, rather than determine, the mesoscale variability, which may distinguish them from dry cold pools associated with deeper convection. Finally, simulations also indicate that moisture stratification increases cloud scales specifically by increasing latent heating within updrafts, which increases updraft buoyancy and favors greater horizontal scales.

  2. A large-scale forest landscape model incorporating multi-scale processes and utilizing forest inventory data

    Treesearch

    Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian Yang

    2013-01-01

    Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10^7 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...

  3. Simulating neural systems with Xyce.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiek, Richard Louis; Thornquist, Heidi K.; Mei, Ting

    2012-12-01

    Sandia's parallel circuit simulator, Xyce, can address large-scale neuron simulations in a new way, extending the range within which one can perform high-fidelity, multi-compartment neuron simulations. This report documents the implementation of neuron devices in Xyce, and their use in the simulation and analysis of neuron systems.

  4. Predicting agricultural impacts of large-scale drought: 2012 and the case for better modeling

    USDA-ARS?s Scientific Manuscript database

    We present an example of a simulation-based forecast for the 2012 U.S. maize growing season produced as part of a high-resolution, multi-scale, predictive mechanistic modeling study designed for decision support, risk management, and counterfactual analysis. The simulations undertaken for this analy...

  5. Development of test methods for scale model simulation of aerial applications in the NASA Langley Vortex Research Facility. [agricultural aircraft

    NASA Technical Reports Server (NTRS)

    Jordan, F. L., Jr.

    1980-01-01

    As part of basic research to improve aerial applications technology, methods were developed at the Langley Vortex Research Facility to simulate and measure deposition patterns of aerially applied sprays and granular materials by means of tests with small-scale models of agricultural aircraft and dynamically scaled test particles. Interactions between the aircraft wake and the dispersed particles are being studied with the objective of modifying wake characteristics and dispersal techniques to increase swath width, improve deposition pattern uniformity, and minimize drift. The particle scaling analysis, test methods for particle dispersal from the model aircraft, visualization of particle trajectories, and measurement and computer analysis of test deposition patterns are described. An experimental validation of the scaling analysis and test results indicating improved control of chemical drift through the use of winglets are presented to demonstrate the test methods.

  6. New Approaches to Quantifying Transport Model Error in Atmospheric CO2 Simulations

    NASA Technical Reports Server (NTRS)

    Ott, L.; Pawson, S.; Zhu, Z.; Nielsen, J. E.; Collatz, G. J.; Gregg, W. W.

    2012-01-01

    In recent years, much progress has been made in observing CO2 distributions from space. However, the use of these observations to infer source/sink distributions in inversion studies continues to be complicated by difficulty in quantifying atmospheric transport model errors. We will present results from several different experiments designed to quantify different aspects of transport error using the Goddard Earth Observing System, Version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM). In the first set of experiments, an ensemble of simulations is constructed using perturbations to parameters in the model's moist physics and turbulence parameterizations that control sub-grid scale transport of trace gases. Analysis of the ensemble spread and scales of temporal and spatial variability among the simulations allows insight into how parameterized, small-scale transport processes influence simulated CO2 distributions. In the second set of experiments, atmospheric tracers representing model error are constructed using observation minus analysis statistics from NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). The goal of these simulations is to understand how errors in large scale dynamics are distributed, and how they propagate in space and time, affecting trace gas distributions. These simulations will also be compared to results from NASA's Carbon Monitoring System Flux Pilot Project that quantified the impact of uncertainty in satellite constrained CO2 flux estimates on atmospheric mixing ratios to assess the major factors governing uncertainty in global and regional trace gas distributions.

  7. In situ and in-transit analysis of cosmological simulations

    DOE PAGES

    Friesen, Brian; Almgren, Ann; Lukic, Zarija; ...

    2016-08-24

    Modern cosmological simulations have reached the trillion-element scale, rendering data storage and subsequent analysis formidable tasks. To address this circumstance, we present a new MPI-parallel approach for analysis of simulation data while the simulation runs, as an alternative to the traditional workflow consisting of periodically saving large data sets to disk for subsequent ‘offline’ analysis. We demonstrate this approach in the compressible gasdynamics/N-body code Nyx, a hybrid MPI+OpenMP code based on the BoxLib framework, used for large-scale cosmological simulations. We have enabled on-the-fly workflows in two different ways: one is a straightforward approach consisting of all MPI processes periodically halting the main simulation and analyzing each component of data that they own (‘in situ’). The other consists of partitioning processes into disjoint MPI groups, with one performing the simulation and periodically sending data to the other ‘sidecar’ group, which post-processes it while the simulation continues (‘in-transit’). The two groups execute their tasks asynchronously, stopping only to synchronize when a new set of simulation data needs to be analyzed. For both the in situ and in-transit approaches, we experiment with two different analysis suites with distinct performance behavior: one which finds dark matter halos in the simulation using merge trees to calculate the mass contained within iso-density contours, and another which calculates probability distribution functions and power spectra of various fields in the simulation. Both are common analysis tasks for cosmology, and both result in summary statistics significantly smaller than the original data set. We study the behavior of each type of analysis in each workflow in order to determine the optimal configuration for the different data analysis algorithms.
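
    The in-transit layout described above is easy to prototype: split the world communicator into a large simulation group and a small ‘sidecar’ analysis group, and have the simulation ranks periodically ship arrays to the sidecar. The sketch below uses mpi4py and a toy update loop; the group size, message schedule, and data are assumptions for illustration, not Nyx's actual C++ implementation.

```python
# Minimal sketch of "in-transit" analysis: disjoint MPI groups, one simulating,
# one post-processing. Run with, e.g., mpiexec -n 8 python sketch.py
from mpi4py import MPI
import numpy as np

world = MPI.COMM_WORLD
n_sidecar = 2                                   # assumed size of the analysis group
n_sim = world.size - n_sidecar
is_sidecar = world.rank >= n_sim

# Disjoint groups: color 0 simulates, color 1 post-processes
comm = world.Split(color=1 if is_sidecar else 0, key=world.rank)

if not is_sidecar:
    field = np.random.rand(32 ** 3)             # stand-in for this rank's grid data
    for step in range(10):
        field *= 1.01                           # stand-in for one simulation step
        if step % 5 == 0:                       # periodically ship data to a sidecar
            world.send(field, dest=n_sim + world.rank % n_sidecar, tag=step)
else:
    senders = [r for r in range(n_sim) if r % n_sidecar == comm.rank]
    for step in (0, 5):
        for src in senders:                     # analyze while the simulation runs on
            data = world.recv(source=src, tag=step)
            print(f"sidecar {comm.rank}: step {step}, mean from rank {src} = {data.mean():.4f}")
```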

  8. TURBULENCE AND PROTON–ELECTRON HEATING IN KINETIC PLASMA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthaeus, William H; Parashar, Tulasi N; Wu, P.

    2016-08-10

    Analysis of particle-in-cell simulations of kinetic plasma turbulence reveals a connection between the strength of cascade, the total heating rate, and the partitioning of dissipated energy into proton heating and electron heating. A von Karman scaling of the cascade rate explains the total heating across several families of simulations. The proton to electron heating ratio increases in proportion to total heating. We argue that the ratio of gyroperiod to nonlinear turnover time at the ion kinetic scales controls the ratio of proton and electron heating. The proposed scaling is consistent with simulations.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rizzi, Silvio; Hereld, Mark; Insley, Joseph

    In this work we perform in-situ visualization of molecular dynamics simulations, which can help scientists to visualize simulation output on-the-fly, without incurring storage overheads. We present a case study to couple LAMMPS, the large-scale molecular dynamics simulation code, with vl3, our parallel framework for large-scale visualization and analysis. Our motivation is to identify effective approaches for co-visualization and exploration of large-scale atomistic simulations at interactive frame rates. We propose a system of coupled libraries and describe its architecture, with an implementation that runs on GPU-based clusters. We present the results of strong and weak scalability experiments, as well as future research avenues based on our results.

  10. Urban Flow and Pollutant Dispersion Simulation with Multi-scale coupling of Meteorological Model with Computational Fluid Dynamic Analysis

    NASA Astrophysics Data System (ADS)

    Liu, Yushi; Poh, Hee Joo

    2014-11-01

    The Computational Fluid Dynamics (CFD) analysis has become increasingly important in modern urban planning in order to create highly livable cities. This paper presents a multi-scale modeling methodology which couples the Weather Research and Forecasting (WRF) Model with the open-source CFD simulation tool OpenFOAM. This coupling enables the simulation of wind flow and pollutant dispersion in urban built-up areas with a high-resolution mesh. In this methodology the meso-scale model WRF provides the boundary conditions for the micro-scale CFD model OpenFOAM. The advantage is that realistic weather conditions are taken into account in the CFD simulation and the complexity of the building layout can be handled with ease by the meshing utility of OpenFOAM. The result is validated against the Joint Urban 2003 Tracer Field Tests in Oklahoma City, and there is reasonably good agreement between the CFD simulation and field observations. The coupling of WRF-OpenFOAM provides urban planners with a reliable environmental modeling tool for actual urban built-up areas; it can be further extended with consideration of future weather conditions for scenario studies on climate change impact.
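
    A common way to realize this one-way meso-to-micro coupling is to post-process the WRF output into vertical profiles that the CFD inlet can interpolate. The sketch below, using netCDF4 and standard WRF output variables (U, V, PH, PHB, HGT), is an assumed pre-processing step; the file name, averaging box, and the CSV hand-off format are illustrative, not the authors' actual workflow.

```python
# Sketch: extract a mean wind profile from a WRF output file and write it as a
# table that a CFD inlet boundary condition could interpolate.
import numpy as np
from netCDF4 import Dataset

nc = Dataset("wrfout_d03_2003-07-16_12:00:00")  # hypothetical innermost domain
t, j0, j1, i0, i1 = 0, 50, 60, 50, 60           # time index and averaging box

# De-stagger WRF winds onto mass points
U = nc.variables["U"][t]                        # (lev, sn, we+1)
V = nc.variables["V"][t]                        # (lev, sn+1, we)
u = 0.5 * (U[:, :, :-1] + U[:, :, 1:])
v = 0.5 * (V[:, :-1, :] + V[:, 1:, :])

# Height of mass levels above ground from geopotential
gz = (nc.variables["PH"][t] + nc.variables["PHB"][t]) / 9.81   # (lev+1, sn, we)
z = 0.5 * (gz[:-1] + gz[1:]) - nc.variables["HGT"][t]          # height AGL

# Horizontal average over the coupling box -> inlet profile
zbar = z[:, j0:j1, i0:i1].mean(axis=(1, 2))
ubar = u[:, j0:j1, i0:i1].mean(axis=(1, 2))
vbar = v[:, j0:j1, i0:i1].mean(axis=(1, 2))
np.savetxt("inletProfile.csv", np.column_stack([zbar, ubar, vbar]),
           header="z_agl u v", comments="")
```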

  11. Large-scale three-dimensional phase-field simulations for phase coarsening at ultrahigh volume fraction on high-performance architectures

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Wang, K. G.; Jones, Jim E.

    2016-06-01

    A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics of phase coarsening in the region of ultrahigh volume fraction is found. The parallel implementation is capable of harnessing the greater computer power available from high-performance architectures. The parallelized code enables an increase in the three-dimensional simulation system size up to a 512³ grid cube. Through the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis on speed-up and scalability is presented, showing good scalability which improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows good agreement with actual run times from numerical tests.
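
    For readers unfamiliar with the governing dynamics, a minimal serial sketch of phase coarsening is the explicit Cahn-Hilliard update below (2-D, NumPy); the paper's 3-D parallel numerics, parameters, and domain decomposition are not reproduced here. The mean composition of 0.4 corresponds to roughly a 70% volume fraction of the +1 phase.

```python
# Explicit Cahn-Hilliard step: c_t = M * lap(c^3 - c - kappa * lap(c))
import numpy as np

def lap(a, h=1.0):
    """5-point periodic Laplacian."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a) / h**2

rng = np.random.default_rng(0)
c = 0.4 + 0.05 * rng.standard_normal((256, 256))  # mean 0.4 ~ 70% volume fraction
M, kappa, dt = 1.0, 1.0, 0.01

for step in range(5000):
    mu = c**3 - c - kappa * lap(c)   # chemical potential
    c += dt * M * lap(mu)            # explicit Euler update (conserves mean c)

print("volume fraction of +1 phase:", (c > 0).mean())
```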

  12. Prediction of Gas Injection Performance for Heterogeneous Reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blunt, Martin J.; Orr, Franklin M.

    This report describes research carried out in the Department of Petroleum Engineering at Stanford University from September 1997 - September 1998 under the second year of a three-year grant from the Department of Energy on the "Prediction of Gas Injection Performance for Heterogeneous Reservoirs." The research effort is an integrated study of the factors affecting gas injection, from the pore scale to the field scale, and involves theoretical analysis, laboratory experiments, and numerical simulation. The original proposal described research in four areas: (1) Pore scale modeling of three phase flow in porous media; (2) Laboratory experiments and analysis of factors influencing gas injection performance at the core scale with an emphasis on the fundamentals of three phase flow; (3) Benchmark simulations of gas injection at the field scale; and (4) Development of a streamline-based reservoir simulator. Each stage of the research is planned to provide input and insight into the next stage, such that at the end we should have an integrated understanding of the key factors affecting field scale displacements.

  13. Pedestrian simulation and distribution in urban space based on visibility analysis and agent simulation

    NASA Astrophysics Data System (ADS)

    Ying, Shen; Li, Lin; Gao, Yurong

    2009-10-01

    Spatial visibility analysis is an important aspect of pedestrian behavior because visual perception of space is the most direct way to acquire environmental information and navigate one's actions. Based on agent modeling and a bottom-up method, this paper develops a framework for analyzing pedestrian flow based on visibility. We use viewsheds in the visibility analysis and impose the resulting parameters on an agent simulation to direct agents' motion in urban space. We analyze pedestrian behavior at the micro- and macro-scales of urban open space. Individual agents use visual affordances to determine their direction of motion on micro-scale urban streets or in districts. At the macro-scale we compare the distribution of pedestrian flow with the spatial configuration of the urban environment, and mine the relationship between pedestrian flow and the distribution of urban facilities and urban functions. The paper first computes visibility at vantage points in urban open space, such as the street network, and quantifies the visibility parameters. The multiple agents use these visibility parameters to decide their direction of motion, and pedestrian flow finally reaches a stable state in the urban environment through multi-agent simulation. We compare the morphology of the visibility parameters and the pedestrian distribution with the layout of urban functions and facilities to confirm the consistency between them, which can be used for decision support in urban design.

  14. Monte Carlo capabilities of the SCALE code system

    DOE PAGES

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; ...

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  15. Scales of variability of black carbon plumes and their dependence on resolution of ECHAM6-HAM

    NASA Astrophysics Data System (ADS)

    Weigum, Natalie; Stier, Philip; Schutgens, Nick; Kipling, Zak

    2015-04-01

    Prediction of the aerosol effect on climate depends on the ability of three-dimensional numerical models to accurately estimate aerosol properties. However, a limitation of traditional grid-based models is their inability to resolve variability on scales smaller than a grid box. Past research has shown that significant aerosol variability exists on scales smaller than these grid boxes, which can lead to discrepancies between observations and aerosol models. The aim of this study is to understand how a global climate model's (GCM) inability to resolve sub-grid scale variability affects simulations of important aerosol features. This problem is addressed by comparing observed black carbon (BC) plume scales from the HIPPO aircraft campaign to those simulated by the ECHAM-HAM GCM, and testing how model resolution affects these scales. This study additionally investigates how model resolution affects BC variability in remote and near-source regions. These issues are examined using three different approaches: comparison of observed and simulated along-flight-track plume scales, two-dimensional autocorrelation analysis, and three-dimensional plume analysis. We find that the degree to which GCMs resolve variability can have a significant impact on the scales of BC plumes, and it is important for models to capture the scales of aerosol plume structures, which account for a large degree of aerosol variability. In this presentation, we will provide further results from the three analysis techniques along with a summary of the implications of these results for future aerosol model development.
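
    The along-flight-track plume-scale comparison can be illustrated with a simple estimator: the lag at which the autocorrelation of a black carbon series first drops below 1/e. The sketch below uses synthetic data in place of the HIPPO or ECHAM-HAM values, and the e-folding definition is one plausible choice rather than necessarily the study's exact metric.

```python
# Estimate an along-track plume length scale from an autocorrelation function.
import numpy as np

def efold_scale(x, dx_km):
    """Lag (km) at which the autocorrelation first drops below 1/e."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf /= acf[0]
    below = np.nonzero(acf < 1.0 / np.e)[0]
    return below[0] * dx_km if below.size else np.nan

rng = np.random.default_rng(1)
# Synthetic "plume" series: smoothed noise, 1 km sample spacing
bc = np.convolve(rng.standard_normal(5000), np.ones(40) / 40, mode="same")
print("plume scale ~", efold_scale(bc, dx_km=1.0), "km")
```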

  16. An ensemble constrained variational analysis of atmospheric forcing data and its application to evaluate clouds in CAM5: Ensemble 3DCVA and Its Application

    DOE PAGES

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2016-01-05

    Large-scale atmospheric forcing data can greatly impact the simulations of atmospheric process models including Large Eddy Simulations (LES), Cloud Resolving Models (CRMs) and Single-Column Models (SCMs), and impact the development of physical parameterizations in global climate models. This study describes the development of an ensemble variationally constrained objective analysis of atmospheric large-scale forcing data and its application to evaluate the cloud biases in the Community Atmospheric Model (CAM5). Sensitivities of the variational objective analysis to background data, error covariance matrix and constraint variables are described and used to quantify the uncertainties in the large-scale forcing data. Application of the ensemble forcing in the CAM5 SCM during the March 2000 intensive operational period (IOP) at the Southern Great Plains (SGP) of the Atmospheric Radiation Measurement (ARM) program shows systematic biases in the model simulations that cannot be explained by the uncertainty of large-scale forcing data, which points to the deficiencies of physical parameterizations. The SCM is shown to overestimate high clouds and underestimate low clouds. These biases are found to also exist in the global simulation of CAM5 when it is compared with satellite data.

  17. T-cell epitope prediction and immune complex simulation using molecular dynamics: state of the art and persisting challenges

    PubMed Central

    2010-01-01

    Atomistic Molecular Dynamics provides powerful and flexible tools for the prediction and analysis of molecular and macromolecular systems. Specifically, it provides a means by which we can measure theoretically that which cannot be measured experimentally: the dynamic time-evolution of complex systems comprising atoms and molecules. It is particularly suitable for the simulation and analysis of the otherwise inaccessible details of MHC-peptide interaction and, on a larger scale, the simulation of the immune synapse. Progress has been relatively tentative, yet the emergence of truly high-performance computing and the development of coarse-grained simulation now offer us the hope of accurately predicting thermodynamic parameters and of simulating not merely a handful of proteins but larger, longer systems comprising thousands of protein molecules and the cellular-scale structures they form. We exemplify this within the context of immunoinformatics. PMID:21067546

  18. Finite Element Simulation of the Shear Effect of Ultrasonic on Heat Exchanger Descaling

    NASA Astrophysics Data System (ADS)

    Lu, Shaolv; Wang, Zhihua; Wang, Hehui

    2018-03-01

    The shear effect at the interface between a metal plate and its attached scale, caused by the different propagation speeds of the ultrasonic wave in the two media, is an important mechanism of ultrasonic descaling. The propagation of the ultrasonic wave on the shell is simulated based on ANSYS/LS-DYNA explicit dynamic analysis. The distribution of shear stress along different paths under ultrasonic vibration is obtained through the finite element analysis, revealing the main descaling mechanism of the shear effect. The simulation results are helpful for the rational design and application of ultrasonic descaling technology on heat exchangers.

  19. Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah

    Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a scaling study that compares instrumented ROSS simulations with their noninstrumented counterparts in order to determine the amount of perturbation when running at different simulation scales.

  1. Analysis and optimization of gyrokinetic toroidal simulations on homogenous and heterogenous platforms

    DOE PAGES

    Ibrahim, Khaled Z.; Madduri, Kamesh; Williams, Samuel; ...

    2013-07-18

    The Gyrokinetic Toroidal Code (GTC) uses the particle-in-cell method to efficiently simulate plasma microturbulence. This paper presents novel analysis and optimization techniques to enhance the performance of GTC on large-scale machines. We introduce cell access analysis to better manage locality vs. synchronization tradeoffs on CPU- and GPU-based architectures. Finally, our optimized hybrid parallel implementation of GTC uses MPI, OpenMP, and NVIDIA CUDA; it achieves up to a 2× speedup over the reference Fortran version on multiple parallel systems and scales efficiently to tens of thousands of cores.

  2. Web-based Visual Analytics for Extreme Scale Climate Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A; Evans, Katherine J; Harney, John F

    In this paper, we introduce a Web-based visual analytics framework for democratizing advanced visualization and analysis capabilities pertinent to large-scale earth system simulations. We address significant limitations of present climate data analysis tools such as tightly coupled dependencies, inefficient data movements, complex user interfaces, and static visualizations. Our Web-based visual analytics framework removes critical barriers to the widespread accessibility and adoption of advanced scientific techniques. Using distributed connections to back-end diagnostics, we minimize data movements and leverage HPC platforms. We also mitigate system dependency issues by employing a RESTful interface. Our framework embraces the visual analytics paradigm via new visual navigation techniques for hierarchical parameter spaces, multi-scale representations, and interactive spatio-temporal data mining methods that retain details. Although generalizable to other science domains, the current work focuses on improving exploratory analysis of large-scale Community Land Model (CLM) and Community Atmosphere Model (CAM) simulations.

  3. Sequential use of simulation and optimization in analysis and planning

    Treesearch

    Hans R. Zuuring; Jimmie D. Chew; J. Greg Jones

    2000-01-01

    Management activities are analyzed at landscape scales employing both simulation and optimization. SIMPPLLE, a stochastic simulation modeling system, is initially applied to assess the risks associated with a specific natural process occurring on the current landscape without management treatments, but with fire suppression. These simulation results are input into...

  4. Performance Analysis, Design Considerations, and Applications of Extreme-Scale In Situ Infrastructures

    DOE PAGES

    Ayachit, Utkarsh; Bauer, Andrew; Duque, Earl P. N.; ...

    2016-11-01

    A key trend facing extreme-scale computational science is the widening gap between computational and I/O rates, and the challenge that follows is how to best gain insight from simulation data when it is increasingly impractical to save it to persistent storage for subsequent visual exploration and analysis. One approach to this challenge is centered around the idea of in situ processing, where visualization and analysis processing is performed while data is still resident in memory. Our paper examines several key design and performance issues related to the idea of in situ processing at extreme scale on modern platforms: scalability, overhead, performance measurement and analysis, comparison and contrast with a traditional post hoc approach, and interfacing with simulation codes. We illustrate these principles in practice with studies, conducted on large-scale HPC platforms, that include a miniapplication and multiple science application codes, one of which demonstrates in situ methods in use at greater than 1M-way concurrency.

  5. Perspective: Markov models for long-timescale biomolecular dynamics.

    PubMed

    Schwantes, C R; McGibbon, R T; Pande, V S

    2014-09-07

    Molecular dynamics simulations have the potential to provide atomic-level detail and insight to important questions in chemical physics that cannot be observed in typical experiments. However, simply generating a long trajectory is insufficient, as researchers must be able to transform the data in a simulation trajectory into specific scientific insights. Although this analysis step has often been taken for granted, it deserves further attention as large-scale simulations become increasingly routine. In this perspective, we discuss the application of Markov models to the analysis of large-scale biomolecular simulations. We draw attention to recent improvements in the construction of these models as well as several important open issues. In addition, we highlight recent theoretical advances that pave the way for a new generation of models of molecular kinetics.
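
    The basic construction this perspective builds on is simple to state: discretize the trajectory into states, count transitions at a lag time, row-normalize into a transition matrix, and read relaxation timescales off its eigenvalues. The sketch below shows that textbook recipe on a toy two-basin trajectory; the estimators and improvements reviewed in the paper go well beyond this.

```python
# Textbook Markov-model construction and implied timescales on a toy trajectory.
import numpy as np

def msm_timescales(dtraj, n_states, lag, dt=1.0):
    C = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):   # count transitions at the lag
        C[a, b] += 1.0
    C += C.T                                      # crude detailed-balance symmetrization
    T = C / C.sum(axis=1, keepdims=True)          # row-stochastic transition matrix
    evals = np.sort(np.linalg.eigvals(T).real)[::-1]
    return -lag * dt / np.log(np.clip(evals[1:4], 1e-12, 1 - 1e-12))

rng = np.random.default_rng(2)
# Toy two-basin trajectory: rare hops between state blocks {0,1} and {2,3}
dtraj = [0]
for _ in range(100000):
    s = dtraj[-1]
    hop = rng.random() < 0.001
    dtraj.append((s + 2) % 4 if hop else (s // 2) * 2 + rng.integers(2))
print("implied timescales:", msm_timescales(np.array(dtraj), 4, lag=10))
```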

  6. Large-scale derived flood frequency analysis based on continuous simulation

    NASA Astrophysics Data System (ADS)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km2), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input into catchment models. A long-term simulation of this combined system enables the derivation of very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only all of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial scale of 443,931 km2. 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large-scale approach overcomes several drawbacks of traditional approaches to derived flood frequency analysis and is therefore recommended for large-scale flood risk case studies.
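
    The 'derived' step at the end of such a chain reduces to pooling annual maxima from the long continuous simulation and fitting an extreme-value distribution. A minimal sketch with SciPy is shown below; the synthetic daily flows stand in for the weather-generator/SWIM output, and the GEV choice is illustrative.

```python
# Derived flood frequency: annual maxima from a long simulation, GEV quantiles.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)
q_daily = rng.gamma(shape=2.0, scale=50.0, size=(10000, 365))  # 10,000 synthetic years
annual_max = q_daily.max(axis=1)                # one flood peak per simulated year

c, loc, scale = genextreme.fit(annual_max)      # fit GEV to the annual maxima
for T in (10, 100, 1000):
    print(f"{T}-year flood: {genextreme.ppf(1 - 1/T, c, loc, scale):.0f} m3/s")
```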

  7. Simulation of nitrate reduction in groundwater - An upscaling approach from small catchments to the Baltic Sea basin

    NASA Astrophysics Data System (ADS)

    Hansen, A. L.; Donnelly, C.; Refsgaard, J. C.; Karlsson, I. B.

    2018-01-01

    This paper describes a modeling approach proposed to simulate the impact of local-scale, spatially targeted N-mitigation measures for the Baltic Sea Basin. Spatially targeted N-regulations aim at exploiting the considerable spatial differences in the natural N-reduction taking place in groundwater and surface water. While such measures can be simulated using local-scale physically-based catchment models, use of such detailed models for the 1.8 million km2 Baltic Sea basin is not feasible due to constraints on input data and computing power. Large-scale models that are able to simulate the Baltic Sea basin, on the other hand, do not have adequate spatial resolution to simulate some of the field-scale measures. Our methodology combines knowledge and results from two local-scale physically-based MIKE SHE catchment models, the large-scale and more conceptual E-HYPE model, and auxiliary data in order to enable E-HYPE to simulate how spatially targeted regulation of agricultural practices may affect N-loads to the Baltic Sea. We conclude that the use of E-HYPE with this upscaling methodology enables the simulation of the impact on N-loads of applying a spatially targeted regulation at the Baltic Sea basin scale to the correct order-of-magnitude. The E-HYPE model together with the upscaling methodology therefore provides a sound basis for large-scale policy analysis; however, we do not expect it to be sufficiently accurate to be useful for the detailed design of local-scale measures.

  8. Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messer, Bronson; Sewell, Christopher; Heitmann, Katrin

    2015-01-01

    Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Trending techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling Petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.

  9. Computational analysis of fluid dynamics in pharmaceutical freeze-drying.

    PubMed

    Alexeenko, Alina A; Ganguly, Arnab; Nail, Steven L

    2009-09-01

    Analysis of water vapor flows encountered in pharmaceutical freeze-drying systems, laboratory-scale and industrial, is presented based on computational fluid dynamics (CFD) techniques. The flows under continuum gas conditions are analyzed using the solution of the Navier-Stokes equations, whereas the rarefied flow solutions are obtained by the direct simulation Monte Carlo (DSMC) method for the Boltzmann equation. Examples of the application of CFD techniques to laboratory-scale and industrial-scale freeze-drying processes are discussed with an emphasis on the utility of CFD for improving the design and experimental characterization of pharmaceutical freeze-drying hardware and processes. The current article presents a two-dimensional simulation of a laboratory-scale dryer, with an emphasis on the importance of drying conditions and hardware design on process control, and a three-dimensional simulation of an industrial dryer, containing a comparison of the obtained results with analytical viscous flow solutions. It was found that the presence of clean-in-place (CIP)/sterilize-in-place (SIP) piping in the duct led to significant changes in the flow field characteristics. The simulation results for vapor flow rates in an industrial freeze-dryer have been compared to tunable diode laser absorption spectroscopy (TDLAS) and gravimetric measurements.

  10. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    NASA Astrophysics Data System (ADS)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins are typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.

  11. WEST-3 wind turbine simulator development

    NASA Technical Reports Server (NTRS)

    Hoffman, J. A.; Sridhar, S.

    1985-01-01

    The software developed for WEST-3, a new, all-digital, and fully programmable wind turbine simulator, is presented. The process of wind turbine simulation on WEST-3 is described in detail. The major steps are: the processing of the mathematical models, the preparation of the constant data, and the use of system-software-generated executable code for running on WEST-3. The mechanics of reformulation, normalization, and scaling of the mathematical models is discussed in detail, in particular the significance of reformulation, which leads to accurate simulations. Descriptions are given of the preprocessor computer programs used to prepare the constant data needed in the simulation. These programs, in addition to scaling and normalizing all the constants, relieve the user from having to generate a large number of constants used in the simulation. Also given are brief descriptions of the components of the WEST-3 system software: Translator, Assembler, Linker, and Loader. Also included are details of the aeroelastic rotor analysis, which is the center of a wind turbine simulation model; an analysis of the gimbal subsystem; and listings of the variables, constants, and equations used in the simulation.

  12. Weakened Magnetization and Onset of Large-scale Turbulence in the Young Solar Wind—Comparisons of Remote Sensing Observations with Simulation

    NASA Astrophysics Data System (ADS)

    Chhiber, Rohit; Usmanov, Arcadi V.; DeForest, Craig E.; Matthaeus, William H.; Parashar, Tulasi N.; Goldstein, Melvyn L.

    2018-04-01

    Recent analysis of Solar-Terrestrial Relations Observatory (STEREO) imaging observations has described the early stages of the development of turbulence in the young solar wind under solar minimum conditions. Here we extend this analysis to a global magnetohydrodynamic (MHD) simulation of the corona and solar wind based on inner boundary conditions, either dipole or magnetogram type, that emulate solar minimum. The simulations have been calibrated using Ulysses and 1 au observations, and allow, within a well-understood context, a precise determination of the location of the Alfvén critical surfaces and of the first surfaces where the plasma beta equals unity. The compatibility of the STEREO observations and the simulations is revealed by direct comparisons. Computation of the radial evolution of second-order magnetic field structure functions in the simulations indicates a shift toward more isotropic conditions at scales of a few Gm, as seen in the STEREO observations in the range 40–60 R⊙. We affirm that the isotropization occurs in the vicinity of the first beta-unity surface. The interpretation based on the early stages of in situ solar wind turbulence evolution is further elaborated, emphasizing the relationship of the observed length scales to the much smaller scales that eventually become the familiar turbulence inertial-range cascade. We argue that the observed dynamics is the very early manifestation of large-scale in situ nonlinear couplings that drive turbulence and heating in the solar wind.
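
    The scale diagnostic used here, the second-order structure function, is straightforward to compute. The sketch below evaluates D2(r) = <|B(x+r) - B(x)|^2> for a synthetic one-dimensional field with a Kolmogorov-like spectrum and checks the expected 2/3 power-law slope; the study's radially resolved computation on simulation fields is analogous.

```python
# Second-order structure function of a synthetic field with a k^(-5/3) spectrum.
import numpy as np

def d2(field, lags):
    return np.array([np.mean((np.roll(field, -r) - field) ** 2) for r in lags])

rng = np.random.default_rng(4)
# Random-phase Fourier synthesis: amplitude ~ k^(-5/6) gives E(k) ~ k^(-5/3)
n = 4096
k = np.fft.rfftfreq(n, d=1.0); k[0] = np.inf          # kill the mean mode
spec = k ** (-5.0 / 6.0) * np.exp(2j * np.pi * rng.random(k.size))
b = np.fft.irfft(spec)

lags = np.arange(1, 200)
D2 = d2(b, lags)
slope = np.polyfit(np.log(lags[5:50]), np.log(D2[5:50]), 1)[0]
print("structure-function slope ~", round(slope, 2), "(2/3 expected for Kolmogorov)")
```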

  13. Equation-free multiscale computation: algorithms and applications.

    PubMed

    Kevrekidis, Ioannis G; Samaey, Giovanni

    2009-01-01

    In traditional physicochemical modeling, one derives evolution equations at the (macroscopic, coarse) scale of interest; these are used to perform a variety of tasks (simulation, bifurcation analysis, optimization) using an arsenal of analytical and numerical techniques. For many complex systems, however, although one observes evolution at a macroscopic scale of interest, accurate models are only given at a more detailed (fine-scale, microscopic) level of description (e.g., lattice Boltzmann, kinetic Monte Carlo, molecular dynamics). Here, we review a framework for computer-aided multiscale analysis, which enables macroscopic computational tasks (over extended spatiotemporal scales) using only appropriately initialized microscopic simulation on short time and length scales. The methodology bypasses the derivation of macroscopic evolution equations when these equations conceptually exist but are not available in closed form, hence the term equation-free. We selectively discuss basic algorithms and underlying principles and illustrate the approach through representative applications. We also discuss potential difficulties and outline areas for future research.
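
    One concrete equation-free algorithm is coarse projective integration: run a short burst of the microscopic simulator, estimate the coarse time derivative from the burst, and extrapolate with a large time step. The toy sketch below uses an Ornstein-Uhlenbeck particle ensemble as the 'fine-scale' code; a real implementation would estimate the slope from the healed tail of the burst rather than its endpoints.

```python
# Coarse projective integration on a toy particle ensemble whose mean obeys du/dt = -u.
import numpy as np

rng = np.random.default_rng(5)

def micro_burst(mean, n_steps=50, n_particles=20000, dt=1e-3):
    """Fine-scale stand-in: noisy particles relaxing toward zero (an OU model)."""
    x = mean + 0.1 * rng.standard_normal(n_particles)
    for _ in range(n_steps):
        x += -x * dt + np.sqrt(dt) * 0.05 * rng.standard_normal(n_particles)
    return x.mean()

u, t, dt_micro, n_burst, dt_jump = 1.0, 0.0, 1e-3, 50, 0.2
for _ in range(20):
    u_end = micro_burst(u, n_burst)
    dudt = (u_end - u) / (n_burst * dt_micro)   # estimated coarse derivative
    u, t = u_end + dt_jump * dudt, t + n_burst * dt_micro + dt_jump
    print(f"t={t:5.2f}  coarse mean={u:6.3f}  (exact: {np.exp(-t):6.3f})")
```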

  14. A qualitative analysis of bus simulator training on transit incidents : a case study in Florida. [Summary].

    DOT National Transportation Integrated Search

    2013-01-01

    The simulator was once a very expensive, large-scale mechanical device for training military pilots or astronauts. Modern computers, linking sophisticated software and large-screen displays, have yielded simulators for the desktop or configured as sm...

  15. DEVELOPMENT AND ANALYSIS OF AIR QUALITY MODELING SIMULATIONS FOR HAZARDOUS AIR POLLUTANTS

    EPA Science Inventory

    The concentrations of five hazardous air pollutants were simulated using the Community Multi Scale Air Quality (CMAQ) modeling system. Annual simulations were performed over the continental United States for the entire year of 2001 to support human exposure estimates. Results a...

  16. A Component-Based Extension Framework for Large-Scale Parallel Simulations in NEURON

    PubMed Central

    King, James G.; Hines, Michael; Hill, Sean; Goodman, Philip H.; Markram, Henry; Schürmann, Felix

    2008-01-01

    As neuronal simulations approach larger scales with increasing levels of detail, the neurosimulator software represents only a part of a chain of tools ranging from setup and simulation to interaction with virtual environments, analysis, and visualization. Previously published approaches to abstracting simulator engines have not received widespread acceptance, which in part may be due to the fact that they tried to address the challenge of solving the model specification problem. Here, we present an approach that uses a neurosimulator, in this case NEURON, to describe and instantiate the network model in the simulator's native model language but then replaces the main integration loop with its own. Existing parallel network models are easily adapted to run in the presented framework. The presented approach is thus an extension to NEURON but uses a component-based architecture to allow for replaceable spike exchange components and pluggable components for monitoring, analysis, or control that can run in this framework alongside the simulation. PMID:19430597

  17. Simulation of parametric model towards the fixed covariate of right censored lung cancer data

    NASA Astrophysics Data System (ADS)

    Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Ridwan Olaniran, Oyebayo; Enera Amran, Syahila

    2017-09-01

    In this study, a simulation procedure was applied to measure the effect of a fixed covariate on right-censored data using a parametric survival model. The scale and shape parameters were varied to differentiate the analysis of the parametric regression survival model. Statistically, biases, mean biases and coverage probabilities were used in this analysis. Different sample sizes of 50, 100, 150 and 200 were employed to distinguish the impact of the parametric regression model on right-censored data. The R statistical software was used to develop the simulation code for right-censored data. Finally, the right-censored simulation model was compared with right-censored lung cancer data from Malaysia. It was found that different values of the shape and scale parameters with different sample sizes help to improve the simulation strategy for right-censored data, and that the Weibull regression survival model provides a suitable fit for the simulated survival data of lung cancer patients in Malaysia.
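
    A hedged reconstruction of that simulation design: draw Weibull survival times depending on a fixed covariate, impose right censoring, fit by maximum likelihood, and summarize bias over replicates (coverage would additionally require standard errors from the Hessian). The true parameter values, censoring distribution, and replicate count below are illustrative, and the paper's study used R rather than Python.

```python
# Simulate right-censored Weibull data with a fixed covariate and assess bias.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
beta0, beta1, shape = 1.0, 0.5, 1.5                    # assumed true values

def simulate(n):
    x = rng.integers(0, 2, n)                          # fixed binary covariate
    scale = np.exp(beta0 + beta1 * x)                  # log-linear scale model
    t = scale * rng.weibull(shape, n)
    c = rng.exponential(8.0, n)                        # censoring times
    return x, np.minimum(t, c), (t <= c).astype(float)

def negloglik(p, x, t, d):
    b0, b1, k = p[0], p[1], np.exp(p[2])               # log-link keeps shape k > 0
    lam = np.exp(b0 + b1 * x)
    z = t / lam
    # events contribute log f(t); censored observations contribute log S(t)
    return -np.sum(d * (np.log(k) - np.log(lam) + (k - 1) * np.log(z)) - z**k)

est = []
for _ in range(200):                                   # 200 replicates, n = 100
    x, t, d = simulate(100)
    res = minimize(negloglik, x0=[0.0, 0.0, 0.0], args=(x, t, d), method="Nelder-Mead")
    est.append(res.x[:2])
print("mean bias (beta0, beta1):", np.round(np.mean(est, axis=0) - [beta0, beta1], 3))
```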

  18. Multifractal evaluation of simulated precipitation intensities from the COSMO NWP model

    NASA Astrophysics Data System (ADS)

    Wolfensberger, Daniel; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Berne, Alexis

    2017-12-01

    The framework of universal multifractals (UM) characterizes the spatio-temporal variability in geophysical data over a wide range of scales with only a limited number of scale-invariant parameters. This work aims to clarify the link between multifractals (MFs) and more conventional weather descriptors and to show how they can be used to perform a multi-scale evaluation of model data. The first part of this work focuses on an MF analysis of the climatology of precipitation intensities simulated by the COSMO numerical weather prediction model. Analysis of the spatial structure of the MF parameters, and their correlations with external meteorological and topographical descriptors, reveals that simulated precipitation tends to be smoother at higher altitudes, and that the mean intermittency is mostly influenced by the latitude. A hierarchical clustering was performed on the external descriptors, yielding three different clusters, which correspond roughly to Alpine/continental, Mediterranean and temperate regions. Distributions of MF parameters within these three clusters are shown to be statistically significantly different, indicating that the MF signature of rain is indeed geographically dependent. The second part of this work is event-based and focuses on the smaller scales. The MF parameters of precipitation intensities at the ground are compared with those obtained from the Swiss radar composite during three events corresponding to typical synoptic conditions over Switzerland. The results of this analysis show that the COSMO simulations exhibit spatial scaling breaks that are not present in the radar data, indicating that the model is not able to simulate the observed variability at all scales. A comparison of the operational one-moment microphysical parameterization scheme of COSMO with a more advanced two-moment scheme reveals that, while no scheme systematically outperforms the other, the two-moment scheme tends to produce larger extreme values and more discontinuous precipitation fields, which agree better with the radar composite.
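
    The first step of a UM analysis, the trace-moment estimate of the moment-scaling function K(q), can be sketched compactly: coarse-grain the field by successive factors of two, take moments of order q of the normalized intensities, and fit the log-log slopes against resolution. The toy field below is not multifractal; it only illustrates the mechanics, not the study's full estimation of the UM parameters.

```python
# Trace-moment estimate of the moment-scaling exponents K(q) for a 1-D field.
import numpy as np

def Kq(field, qs):
    n = field.size
    lambdas, moments = [], []
    scale = 1
    while n // scale >= 8:
        coarse = field[: (n // scale) * scale].reshape(-1, scale).mean(axis=1)
        eps = coarse / coarse.mean()              # normalized intensity at this scale
        lambdas.append(n / scale)                 # resolution (number of boxes)
        moments.append([np.mean(eps ** q) for q in qs])
        scale *= 2
    L, M = np.log(lambdas), np.log(np.array(moments))
    return [np.polyfit(L, M[:, i], 1)[0] for i in range(len(qs))]  # M_q ~ lambda^K(q)

rng = np.random.default_rng(7)
rain = rng.lognormal(mean=0.0, sigma=1.0, size=2**14)  # toy intensity field
print("K(q) for q = 0.5, 1, 2:", np.round(Kq(rain, [0.5, 1.0, 2.0]), 3))
```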

  19. Comparative performance of different scale-down simulators of substrate gradients in Penicillium chrysogenum cultures: the need of a biological systems response analysis.

    PubMed

    Wang, Guan; Zhao, Junfei; Haringa, Cees; Tang, Wenjun; Xia, Jianye; Chu, Ju; Zhuang, Yingping; Zhang, Siliang; Deshmukh, Amit T; van Gulik, Walter; Heijnen, Joseph J; Noorman, Henk J

    2018-05-01

    In a 54 m³ large-scale penicillin fermentor, the cells experience substrate gradient cycles at the timescale of the global mixing time, about 20-40 s. Here, we used an intermittent feeding regime (IFR) and a two-compartment reactor (TCR) to mimic these substrate gradients in laboratory-scale continuous cultures. The IFR was applied to simulate the substrate dynamics experienced by the cells at full scale at timescales of tens of seconds to minutes (30 s, 3 min and 6 min), while the TCR was designed to simulate substrate gradients at an applied mean residence time (τc) of 6 min. A biological systems analysis of the response of an industrial high-yielding P. chrysogenum strain has been performed in these continuous cultures. Compared to an undisturbed continuous feeding regime in a single reactor, the penicillin productivity (qPenG) was reduced in all scale-down simulators. The dynamic metabolomics data indicated that in the IFRs, the cells accumulated high levels of the central metabolites during the feast phase to actively cope with external substrate deprivation during the famine phase. In contrast, in the TCR system, the storage pool (e.g. mannitol and arabitol) constituted a large contribution of the carbon supply in the non-feed compartment. Further, transcript analysis revealed that all scale-down simulators gave different expression levels of the glucose/hexose transporter genes and the penicillin gene clusters. The results showed that qPenG did not correlate well with exposure to the substrate regimes (excess, limitation and starvation), but there was a clear inverse relation between qPenG and the intracellular glucose level. © 2018 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.

  20. Analysis and modeling of subgrid scalar mixing using numerical data

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Zhou, YE

    1995-01-01

    Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence are used to study, analyze and, subsequently, model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large-scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to the interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in large eddy simulations of scalar mixing and reaction.

  1. Remote visualization and scale analysis of large turbulence datasets

    NASA Astrophysics Data System (ADS)

    Livescu, D.; Pulido, J.; Burns, R.; Canada, C.; Ahrens, J.; Hamann, B.

    2015-12-01

    Accurate simulations of turbulent flows require solving all the dynamically relevant scales of motion. This technique, called Direct Numerical Simulation, has been successfully applied to a variety of simple flows; however, the large-scale flows encountered in Geophysical Fluid Dynamics (GFD) would require meshes outside the range of the most powerful supercomputers for the foreseeable future. Nevertheless, the current generation of petascale computers has enabled unprecedented simulations of many types of turbulent flows which focus on various GFD aspects, from the idealized configurations extensively studied in the past to more complex flows closer to practical applications. The pace at which such simulations are performed only continues to increase; however, the simulations themselves are restricted to a small number of groups with access to large computational platforms. Yet the petabytes of turbulence data offer almost limitless information on many different aspects of the flow, from the hierarchy of turbulence moments, spectra and correlations, to structure functions, geometrical properties, etc. The ability to share such datasets with other groups can significantly reduce the time to analyze the data, help the creative process and increase the pace of discovery. Using the largest DOE supercomputing platforms, we have performed some of the biggest turbulence simulations to date, in various configurations, addressing specific aspects of turbulence production and mixing mechanisms. Until recently, the visualization and analysis of such datasets was restricted by access to large supercomputers. The public Johns Hopkins Turbulence Database simplifies access to multi-terabyte turbulence datasets and facilitates turbulence analysis through the use of commodity hardware. First, one of our datasets, which is part of the database, will be described, and then a framework that adds high-speed visualization and wavelet support for multi-resolution analysis of turbulence will be highlighted. The addition of wavelet support reduces the latency and bandwidth requirements for visualization, allowing for many concurrent users, and enables new types of analyses, including scale decomposition and coherent feature extraction.
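
    The wavelet idea can be illustrated with PyWavelets: a multi-level decomposition supports per-scale energy analysis, and a remote client can fetch only the coarse approximation at a fraction of the bandwidth. The sketch below is an assumed stand-in for the database's actual wavelet machinery.

```python
# Multi-resolution wavelet decomposition: per-scale energies and a coarse preview.
import numpy as np
import pywt

rng = np.random.default_rng(8)
u = np.cumsum(rng.standard_normal(2**16))       # toy 1-D velocity signal

coeffs = pywt.wavedec(u, "db4", level=6)        # [approx, detail_6, ..., detail_1]
energy = [float(np.sum(c**2)) for c in coeffs[1:]]
print("detail energy per scale (coarse -> fine):", np.round(energy, 1))

# Reduced-bandwidth preview: zero the details, keep only the coarse approximation
preview_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
preview = pywt.waverec(preview_coeffs, "db4")
print("preview uses", coeffs[0].size, "of", u.size, "samples")
```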

  2. Northwest Trajectory Analysis Capability: A Platform for Enhancing Computational Biophysics Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, Elena S.; Stephan, Eric G.; Corrigan, Abigail L.

    2008-07-30

    As computational resources continue to increase, the ability of computational simulations to effectively complement, and in some cases replace, experimentation in scientific exploration also increases. Today, large-scale simulations are recognized as an effective tool for scientific exploration in many disciplines including chemistry and biology. A natural side effect of this trend has been the need for an increasingly complex analytical environment. In this paper, we describe Northwest Trajectory Analysis Capability (NTRAC), an analytical software suite developed to enhance the efficiency of computational biophysics analyses. Our strategy is to layer higher-level services and introduce improved tools within the user’s familiar environment without preventing researchers from using traditional tools and methods. Our desire is to share these experiences to serve as an example for effectively analyzing data-intensive, large-scale simulation data.

  3. Psychometric testing on the NLN Student Satisfaction and Self-Confidence in Learning, Simulation Design Scale, and Educational Practices Questionnaire using a sample of pre-licensure novice nurses.

    PubMed

    Franklin, Ashley E; Burns, Paulette; Lee, Christopher S

    2014-10-01

    In 2006, the National League for Nursing published three measures related to novice nurses' beliefs about self-confidence, scenario design, and educational practices associated with simulation. Despite the extensive use of these measures, little is known about their reliability and validity. The psychometric properties of the Student Satisfaction and Self-Confidence in Learning Scale, Simulation Design Scale, and Educational Practices Questionnaire were studied using a sample of 2200 surveys completed by novice nurses from a liberal arts university in the southern United States. Psychometric tests included item analysis, confirmatory and exploratory factor analyses in randomly split subsamples, concordant and discordant validity, and internal consistency. All three measures have sufficient reliability and validity to be used in education research. There is room for improvement in content validity with the Student Satisfaction and Self-Confidence in Learning Scale and the Simulation Design Scale. This work provides robust evidence to ensure that judgments made about self-confidence after simulation, simulation design, and educational practices are valid and reliable. Copyright © 2014 Elsevier Ltd. All rights reserved.
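
    Internal consistency, one of the psychometric properties tested here, is conventionally summarized by Cronbach's alpha. The sketch below computes it on synthetic Likert-type responses; the NLN instruments' items and the study's data are not reproduced, and the 13-item count is only an assumption for illustration.

```python
# Cronbach's alpha on synthetic Likert-type scale responses.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(10)
latent = rng.normal(size=(500, 1))                       # shared trait per respondent
scores = np.clip(np.rint(3 + latent + 0.8 * rng.normal(size=(500, 13))), 1, 5)
print("alpha =", round(float(cronbach_alpha(scores)), 2))
```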

  4. Performance of Renormalization Group Algebraic Turbulence Model on Boundary Layer Transition Simulation

    NASA Technical Reports Server (NTRS)

    Ahn, Kyung H.

    1994-01-01

    The RNG-based algebraic turbulence model, with a new method of solving the cubic equation and applying new length scales, is introduced. An analysis is made of the RNG length scale which was previously reported and the resulting eddy viscosity is compared with those from other algebraic turbulence models. Subsequently, a new length scale is introduced which actually uses the two previous RNG length scales in a systematic way to improve the model performance. The performance of the present RNG model is demonstrated by simulating the boundary layer flow over a flat plate and the flow over an airfoil.

  5. Modeling Framework for Fracture in Multiscale Cement-Based Material Structures

    PubMed Central

    Qian, Zhiwei; Schlangen, Erik; Ye, Guang; van Breugel, Klaas

    2017-01-01

    Multiscale modeling for cement-based materials, such as concrete, is a relatively young subject, but there are already a number of different approaches to study different aspects of these classical materials. In this paper, the parameter-passing multiscale modeling scheme is established and applied to address the multiscale modeling problem for the integrated system of cement paste, mortar, and concrete. The block-by-block technique is employed to solve the length scale overlap challenge between the mortar level (0.1–10 mm) and the concrete level (1–40 mm). The microstructures of cement paste are simulated by the HYMOSTRUC3D model, and the material structures of mortar and concrete are simulated by the Anm material model. Afterwards, the 3D lattice fracture model is used to evaluate their mechanical performance by simulating a uniaxial tensile test. The simulated output properties at a lower scale are passed to the next higher scale to serve as input local properties. A three-level multiscale lattice fracture analysis is demonstrated, including cement paste at the micrometer scale, mortar at the millimeter scale, and concrete at the centimeter scale. PMID:28772948

  6. A reduced basis method for molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Vincent-Finley, Rachel Elisabeth

    In this dissertation, we develop a method for molecular simulation based on principal component analysis (PCA) of a molecular dynamics trajectory and least squares approximation of a potential energy function. Molecular dynamics (MD) simulation is a computational tool used to study molecular systems as they evolve through time. With respect to protein dynamics, local motions, such as bond stretching, occur within femtoseconds, while rigid-body and large-scale motions occur on time scales ranging from nanoseconds to seconds. To capture motion at all levels, time steps on the order of a femtosecond are employed when solving the equations of motion, and simulations must continue long enough to capture the desired large-scale motion. To date, simulations of solvated proteins on the order of nanoseconds have been reported. It is typically the case that simulations of a few nanoseconds do not provide adequate information for the study of large-scale motions. Thus, the development of techniques that allow longer simulation times can advance the study of protein function and dynamics. In this dissertation we use PCA to identify the dominant characteristics of an MD trajectory and to represent the coordinates with respect to these characteristics. We augment PCA with an updating scheme based on a reduced representation of a molecule and consider equations of motion with respect to the reduced representation. We apply our method to butane and BPTI and compare the results to standard MD simulations of these molecules. Our results indicate that the molecular activity in our simulation method is analogous to that observed in standard MD simulations on the order of picoseconds.
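
    A minimal sketch of the trajectory-PCA step described above, assuming an already-aligned trajectory stored as a NumPy array; the dissertation's updating scheme and reduced equations of motion are not reproduced here.

    ```python
    import numpy as np

    def pca_modes(traj: np.ndarray, n_modes: int = 10):
        """PCA of an MD trajectory.

        traj: (n_frames, 3 * n_atoms) Cartesian coordinates, already aligned
        (rigid-body motion removed). Returns the dominant components, the
        projected (reduced) coordinates, and the variance fractions."""
        centered = traj - traj.mean(axis=0)
        # SVD of the centered trajectory yields the principal components
        # (rows of vt) without forming the full covariance matrix.
        u, s, vt = np.linalg.svd(centered, full_matrices=False)
        components = vt[:n_modes]              # dominant collective modes
        projections = centered @ components.T  # reduced coordinates
        explained = s[:n_modes] ** 2 / (s ** 2).sum()
        return components, projections, explained

    # Hypothetical example: 1000 frames of a 100-atom molecule
    traj = np.random.default_rng(0).random((1000, 300))
    comps, proj, frac = pca_modes(traj, n_modes=5)
    print(frac)  # fraction of variance captured by each mode
    ```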

  7. Inventory-based landscape-scale simulation of management effectiveness and economic feasibility with BioSum

    Treesearch

    Jeremy S. Fried; Larry D. Potts; Sara M. Loreno; Glenn A. Christensen; R. Jamie Barbour

    2017-01-01

    The Forest Inventory and Analysis (FIA)-based BioSum (Bioregional Inventory Originated Simulation Under Management) is a free policy analysis framework and workflow management software solution. It addresses complex management questions concerning forest health and vulnerability for large, multimillion acre, multiowner landscapes using FIA plot data as the initial...

  8. Thermo-Oxidative Induced Damage in Polymer Composites: Microstructure Image-Based Multi-Scale Modeling and Experimental Validation

    NASA Astrophysics Data System (ADS)

    Hussein, Rafid M.; Chandrashekhara, K.

    2017-11-01

    A multi-scale modeling approach is presented to simulate and validate thermo-oxidation shrinkage and cracking damage of a high-temperature polymer composite. The approach couples transient diffusion-reaction and static structural analyses from the macro- to the micro-scale. The micro-scale shrinkage deformation and cracking damage are simulated and validated using 2D and 3D simulations. Localized shrinkage displacement boundary conditions for the micro-scale simulations are determined from the respective meso- and macro-scale simulations, conducted for a cross-ply laminate. The meso-scale geometrical domain and the micro-scale geometry and mesh are developed using the object-oriented finite element (OOF) software. The macro-scale shrinkage and weight loss are measured using unidirectional coupons and used to build the macro-shrinkage model. The cross-ply coupons are used to validate the macro-shrinkage model against shrinkage profiles acquired from scanning electron images at the cracked surface. The macro-shrinkage model deformation shows a discrepancy when the micro-scale image-based cracking is computed; the discrepancy is minimized when the local maximum shrinkage strain is assumed to be 13 times the maximum macro-shrinkage strain of 2.5 × 10⁻⁵. The microcrack damage of the composite is modeled using a static elastic analysis with extended finite elements and cohesive surfaces, taking into account the spatial evolution of the modulus. The 3D shrinkage displacements are fed to the model using node-wise boundary/domain conditions of the respective oxidized region. The simulated microcrack length, meander, and opening closely match the crack in the area of interest in the scanning electron images.

  9. Scale dependent inference in landscape genetics

    Treesearch

    Samuel A. Cushman; Erin L. Landguth

    2010-01-01

    Ecological relationships between patterns and processes are highly scale dependent. This paper reports the first formal exploration of how changing scale of research away from the scale of the processes governing gene flow affects the results of landscape genetic analysis. We used an individual-based, spatially explicit simulation model to generate patterns of genetic...

  10. Numerical Simulations of Subscale Wind Turbine Rotor Inboard Airfoils at Low Reynolds Number

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaylock, Myra L.; Maniaci, David Charles; Resor, Brian R.

    2015-04-01

    New blade designs are planned to support future research campaigns at the SWiFT facility in Lubbock, Texas. The sub-scale blades will reproduce specific aerodynamic characteristics of utility-scale rotors. Reynolds numbers for megawatt-class, utility-scale rotors are generally above 2-8 million. The thickness of inboard airfoils for these large rotors is typically as high as 35-40%. The thickness and the proximity to three-dimensional flow of these airfoils present design and analysis challenges, even at the full scale. However, more than a decade of experience with the airfoils in numerical simulation, in the wind tunnel, and in the field has generated confidence in their performance. Reynolds number regimes for the sub-scale rotor are significantly lower for the inboard blade, ranging from 0.7 to 1 million. Performance of the thick airfoils in this regime is uncertain because of the lack of wind tunnel data and the inherent challenge associated with numerical simulations. This report documents efforts to determine the most capable analysis tools to support these simulations in an effort to improve understanding of the aerodynamic properties of thick airfoils in this Reynolds number regime. Numerical results from various codes for four airfoils are verified against previously published wind tunnel results where data at those Reynolds numbers are available. Results are then computed for other Reynolds numbers of interest.

  11. Pore-Scale Simulation and Sensitivity Analysis of Apparent Gas Permeability in Shale Matrix

    PubMed Central

    Zhang, Pengwei; Hu, Liming; Meegoda, Jay N.

    2017-01-01

    Extremely low permeability due to nano-scale pores is a distinctive feature of gas transport in a shale matrix. The permeability of shale depends on pore pressure, porosity, pore throat size, and gas type. The pore network model is a practical way to explain the macroscopic flow behavior of porous media from a microscopic point of view. In this research, gas flow in a shale matrix is simulated using a previously developed three-dimensional pore network model that includes the typical bimodal pore size distribution, anisotropy, and low connectivity of the pore structure in shale. The apparent gas permeability of the shale matrix was calculated under different reservoir pressures corresponding to different gas exploitation stages. Results indicate that gas permeability is strongly related to reservoir gas pressure; hence, the apparent permeability is not a unique value during shale gas exploitation, and assuming a constant permeability in continuum-scale simulation is not accurate. The reservoir pressures at different exploitation stages should therefore be considered. In addition, a sensitivity analysis was performed to determine the contributions to the apparent permeability of a shale matrix from petro-physical properties of shale such as pore throat size and porosity. Finally, the impact of the connectivity of nano-scale pores on shale gas flux was analyzed. These results provide insight into nano/micro-scale flows of shale gas in the shale matrix. PMID:28772465
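
    The pore-network model itself is not reproduced here, but the pressure dependence of apparent permeability that the study emphasizes can be illustrated with a simple Klinkenberg-type slippage correction; the parameter values below are invented for illustration.

    ```python
    import numpy as np

    def apparent_permeability(k_inf: float, b: float, p: np.ndarray) -> np.ndarray:
        """Klinkenberg-type pressure dependence of apparent gas permeability.

        k_inf : intrinsic (liquid-equivalent) permeability [m^2]
        b     : slippage factor [Pa], larger for smaller pore throats
        p     : pore pressure [Pa]
        """
        return k_inf * (1.0 + b / p)

    # Illustrative values only: a tight matrix depressurized from 30 MPa to 5 MPa
    pressures = np.linspace(30e6, 5e6, 6)
    k_app = apparent_permeability(k_inf=1e-19, b=2e6, p=pressures)
    for p, k in zip(pressures, k_app):
        print(f"p = {p/1e6:5.1f} MPa  ->  k_app = {k:.3e} m^2")
    ```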

  12. Pore-Scale Simulation and Sensitivity Analysis of Apparent Gas Permeability in Shale Matrix.

    PubMed

    Zhang, Pengwei; Hu, Liming; Meegoda, Jay N

    2017-01-25

    Extremely low permeability due to nano-scale pores is a distinctive feature of gas transport in a shale matrix. The permeability of shale depends on pore pressure, porosity, pore throat size, and gas type. The pore network model is a practical way to explain the macroscopic flow behavior of porous media from a microscopic point of view. In this research, gas flow in a shale matrix is simulated using a previously developed three-dimensional pore network model that includes the typical bimodal pore size distribution, anisotropy, and low connectivity of the pore structure in shale. The apparent gas permeability of the shale matrix was calculated under different reservoir pressures corresponding to different gas exploitation stages. Results indicate that gas permeability is strongly related to reservoir gas pressure; hence, the apparent permeability is not a unique value during shale gas exploitation, and assuming a constant permeability in continuum-scale simulation is not accurate. The reservoir pressures at different exploitation stages should therefore be considered. In addition, a sensitivity analysis was performed to determine the contributions to the apparent permeability of a shale matrix from petro-physical properties of shale such as pore throat size and porosity. Finally, the impact of the connectivity of nano-scale pores on shale gas flux was analyzed. These results provide insight into nano/micro-scale flows of shale gas in the shale matrix.

  13. Opportunities for Breakthroughs in Large-Scale Computational Simulation and Design

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Alter, Stephen J.; Atkins, Harold L.; Bey, Kim S.; Bibb, Karen L.; Biedron, Robert T.; Carpenter, Mark H.; Cheatwood, F. McNeil; Drummond, Philip J.; Gnoffo, Peter A.

    2002-01-01

    Opportunities for breakthroughs in the large-scale computational simulation and design of aerospace vehicles are presented. Computational fluid dynamics tools to be used within multidisciplinary analysis and design methods are emphasized. The opportunities stem from speedups and robustness improvements in the underlying unit operations associated with simulation (geometry modeling, grid generation, physical modeling, analysis, etc.). Further, an improved programming environment can synergistically integrate these unit operations to leverage the gains. The speedups result from reducing the problem setup time through geometry modeling and grid generation operations, and reducing the solution time through the operation counts associated with solving the discretized equations to a sufficient accuracy. The opportunities are addressed only at a general level here, but an extensive list of references containing further details is included. The opportunities discussed are being addressed through the Fast Adaptive Aerospace Tools (FAAST) element of the Advanced Systems Concept to Test (ASCoT) and the third Generation Reusable Launch Vehicles (RLV) projects at NASA Langley Research Center. The overall goal is to enable greater inroads into the design process with large-scale simulations.

  14. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code.

    PubMed

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it were part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
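
    The following is a minimal conceptual sketch of the dry-run idea, not the actual NEST implementation: a single process executes the per-rank share of an N-rank simulation while communication is replaced by a stub, so the memory and runtime of the local work stay representative.

    ```python
    def stub_communicate(step):
        """Stands in for the MPI spike exchange; the dry run skips real
        communication while keeping the local work realistic."""
        return None

    def dry_run(n_virtual_procs, my_rank, n_neurons, sim_steps):
        """One process performs the per-rank share of an n_virtual_procs-rank
        simulation. (A conceptual sketch, not the NEST implementation.)"""
        # Round-robin distribution of neurons over virtual ranks
        local = [gid for gid in range(n_neurons)
                 if gid % n_virtual_procs == my_rank]
        for step in range(sim_steps):
            for gid in local:
                pass              # neuron state update would go here
            stub_communicate(step)
        return len(local)         # proxy for this rank's memory footprint

    print(dry_run(n_virtual_procs=1024, my_rank=0,
                  n_neurons=1_000_000, sim_steps=10))
    ```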

  15. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code

    PubMed Central

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it were part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling. PMID:28701946

  16. The development of an industrial-scale fed-batch fermentation simulation.

    PubMed

    Goldrick, Stephen; Ştefan, Andrei; Lovett, David; Montague, Gary; Lennox, Barry

    2015-01-10

    This paper describes a simulation of an industrial-scale fed-batch fermentation that can be used as a benchmark in process systems analysis and control studies. The simulation was developed using a mechanistic model and validated using historical data collected from an industrial-scale penicillin fermentation process. Each batch was carried out in a 100,000 L bioreactor that used an industrial strain of Penicillium chrysogenum. The manipulated variables recorded during each batch were used as inputs to the simulator and the predicted outputs were then compared with the on-line and off-line measurements recorded in the real process. The simulator adapted a previously published structured model to describe the penicillin fermentation and extended it to include the main environmental effects of dissolved oxygen, viscosity, temperature, pH and dissolved carbon dioxide. In addition the effects of nitrogen and phenylacetic acid concentrations on the biomass and penicillin production rates were also included. The simulated model predictions of all the on-line and off-line process measurements, including the off-gas analysis, were in good agreement with the batch records. The simulator and industrial process data are available to download at www.industrialpenicillinsimulation.com and can be used to evaluate, study and improve on the current control strategy implemented on this facility. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
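
    The published simulator uses a structured model with many environmental effects; as a much simpler illustration of the fed-batch bookkeeping involved, the sketch below integrates an unstructured Monod-type model. All parameters and rate expressions here are invented for illustration, not taken from the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative Monod-type fed-batch model (made-up parameters, far
    # simpler than the structured model in the published simulator).
    mu_max, K_s, Y_xs = 0.11, 0.05, 0.45   # 1/h, g/L, g/g
    q_p, F_in, S_in = 0.004, 50.0, 400.0   # g/(g h), L/h, g/L

    def fed_batch(t, y):
        X, S, P, V = y                      # biomass, substrate, product, volume
        mu = mu_max * S / (K_s + S)         # Monod growth kinetics
        D = F_in / V                        # dilution by the feed
        dX = mu * X - D * X
        dS = -mu * X / Y_xs + D * (S_in - S)
        dP = q_p * X - D * P
        dV = F_in
        return [dX, dS, dP, dV]

    sol = solve_ivp(fed_batch, (0.0, 200.0), [1.0, 10.0, 0.0, 60_000.0],
                    max_step=1.0)
    print(f"final penicillin titre ~ {sol.y[2, -1]:.2f} g/L")
    ```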

  17. Mesoscale Thermodynamic Analysis of Atomic-Scale Dislocation-Obstacle Interactions Simulated by Molecular Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Monet, Giath; Bacon, David J; Osetskiy, Yury N

    2010-01-01

    Given the time and length scales in molecular dynamics (MD) simulations of dislocation-defect interactions, quantitative MD results cannot be used directly in larger scale simulations or compared directly with experiment. A method to extract fundamental quantities from MD simulations is proposed here. The first quantity is a critical stress defined to characterise the obstacle resistance. This mesoscopic parameter, rather than the obstacle 'strength' designed for a point obstacle, is to be used for an obstacle of finite size. At finite temperature, our analyses of MD simulations allow the activation energy to be determined as a function of temperature. The results confirm the proportionality between activation energy and temperature that is frequently observed by experiment. By coupling the data for the activation energy and the critical stress as functions of temperature, we show how the activation energy can be deduced at a given value of the critical stress.

  18. Railway bogie vibration analysis by mathematical simulation model and a scaled four-wheel railway bogie set

    NASA Astrophysics Data System (ADS)

    Visayataksin, Noppharat; Sooklamai, Manon

    2018-01-01

    The bogie is the part that connects and transfers all the load from the vehicle body onto the railway track; the interaction between wheels and rails is the critical point for derailment of rail vehicles. However, observing or experimenting with real bogies on rail vehicles is impossible due to operational rules and safety concerns. Therefore, this research aimed to develop a vibration analysis set for a four-wheel railway bogie by constructing a four-wheel bogie at a scale of 1:4.5. The bogie structures, including wheels and axles, were made from an aluminium alloy and equipped with springs and dampers. The bogie was driven by an electric motor on four round wheels in place of two straight rails, at linear velocities between 0 and 11.22 m/s. The data collected from the vibration analysis set were compared with the mathematical simulation model to investigate the vibration behavior of the bogie, especially the hunting motion. The results showed that the vibration behavior of the scaled four-wheel railway bogie set agreed closely with the mathematical simulation model in terms of displacement and hunting frequency. The critical speed of the wheelset, found by executing the mathematical simulation model, was 13 m/s.
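
    Hunting motion of the kind measured here is often characterized, to first order, by Klingel's kinematic formula for a coned wheelset; the sketch below uses it with illustrative scale-model parameters that are not taken from the paper.

    ```python
    import math

    def klingel_hunting(v, r0, e, conicity):
        """Kinematic (Klingel) hunting of a coned wheelset.

        v        : forward speed [m/s]
        r0       : nominal rolling radius [m]
        e        : half of the lateral contact distance (~half gauge) [m]
        conicity : effective wheel conicity [-]
        Returns (wavelength [m], frequency [Hz])."""
        wavelength = 2.0 * math.pi * math.sqrt(r0 * e / conicity)
        return wavelength, v / wavelength

    # Illustrative 1:4.5-scale values (not taken from the paper)
    wl, f = klingel_hunting(v=5.0, r0=0.10, e=0.17, conicity=0.05)
    print(f"wavelength = {wl:.2f} m, hunting frequency = {f:.2f} Hz")
    ```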

  19. Spiral-arm instability: giant clump formation via fragmentation of a galactic spiral arm

    NASA Astrophysics Data System (ADS)

    Inoue, Shigeki; Yoshida, Naoki

    2018-03-01

    Fragmentation of a spiral arm is thought to drive the formation of giant clumps in galaxies. Using linear perturbation analysis for self-gravitating spiral arms, we derive an instability parameter and define the conditions for clump formation. We extend our analysis to multicomponent systems that consist of gas and stars in an external potential. We then perform numerical simulations of isolated disc galaxies with isothermal gas and compare the results with the prediction of our analytic model. Our model accurately describes the evolution of the spiral arms in our simulations, even when spiral arms dynamically interact with one another. We show that most of the giant clumps formed in the simulated disc galaxies satisfy the instability condition. The clump masses predicted by our model are in agreement with the simulation results, but the growth time-scale of unstable perturbations is overestimated by a factor of a few. We also apply our instability analysis to derive scaling relations of clump properties. The expected scaling relation between the clump size, velocity dispersion, and circular velocity is slightly different from that given by the Toomre instability analyses, but neither is inconsistent with currently available observations. We argue that the spiral-arm instability is a viable formation mechanism of giant clumps in gas-rich disc galaxies.

  20. Mercury and methylmercury stream concentrations in a Coastal Plain watershed: A multi-scale simulation analysis

    EPA Science Inventory

    Mercury is a ubiquitous global environmental toxicant responsible for most US fish advisories. Processes governing mercury concentrations in rivers and streams are not well understood, particularly at multiple spatial scales. We investigate how insights gained from reach-scale me...

  1. Simulation and scaling analysis of a spherical particle-laden blast wave

    NASA Astrophysics Data System (ADS)

    Ling, Y.; Balachandar, S.

    2018-02-01

    A spherical particle-laden blast wave, generated by a sudden release of a sphere of compressed gas-particle mixture, is investigated by numerical simulation. The present problem is a multiphase extension of the classic finite-source spherical blast-wave problem. The gas-particle flow can be fully determined by the initial radius of the spherical mixture and the properties of gas and particles. In many applications, the key dimensionless parameters, such as the initial pressure and density ratios between the compressed gas and the ambient air, can vary over a wide range. Parametric studies are thus performed to investigate the effects of these parameters on the characteristic time and spatial scales of the particle-laden blast wave, such as the maximum radius the contact discontinuity can reach and the time when the particle front crosses the contact discontinuity. A scaling analysis is conducted to establish a scaling relation between the characteristic scales and the controlling parameters. A length scale that incorporates the initial pressure ratio is proposed, which is able to approximately collapse the simulation results for the gas flow for a wide range of initial pressure ratios. This indicates that an approximate similarity solution for a spherical blast wave exists, which is independent of the initial pressure ratio. The approximate scaling is also valid for the particle front if the particles are small and closely follow the surrounding gas.
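
    The exact form of the proposed length scale is not reproduced here; the sketch below shows the classical finite-source choice, in which the scale grows with the cube root of the initial pressure ratio, as one plausible non-dimensionalization of the kind the abstract describes.

    ```python
    # Hedged sketch: non-dimensionalizing a finite-source blast wave with a
    # pressure-ratio-dependent length scale. The classical choice makes the
    # stored energy ~ p0 * r0^3 comparable to the ambient p_inf * L^3, giving
    # L = r0 * (p0 / p_inf) ** (1/3). (The paper's exact definition may differ.)

    def length_scale(r0: float, p0: float, p_inf: float) -> float:
        return r0 * (p0 / p_inf) ** (1.0 / 3.0)

    r0 = 0.05                                # initial sphere radius [m]
    for ratio in (10.0, 100.0, 1000.0):      # initial pressure ratios p0/p_inf
        L = length_scale(r0, ratio * 101325.0, 101325.0)
        print(f"p0/p_inf = {ratio:6.0f}  ->  L = {L:.3f} m")
    ```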

  2. Simulation and scaling analysis of a spherical particle-laden blast wave

    NASA Astrophysics Data System (ADS)

    Ling, Y.; Balachandar, S.

    2018-05-01

    A spherical particle-laden blast wave, generated by a sudden release of a sphere of compressed gas-particle mixture, is investigated by numerical simulation. The present problem is a multiphase extension of the classic finite-source spherical blast-wave problem. The gas-particle flow can be fully determined by the initial radius of the spherical mixture and the properties of gas and particles. In many applications, the key dimensionless parameters, such as the initial pressure and density ratios between the compressed gas and the ambient air, can vary over a wide range. Parametric studies are thus performed to investigate the effects of these parameters on the characteristic time and spatial scales of the particle-laden blast wave, such as the maximum radius the contact discontinuity can reach and the time when the particle front crosses the contact discontinuity. A scaling analysis is conducted to establish a scaling relation between the characteristic scales and the controlling parameters. A length scale that incorporates the initial pressure ratio is proposed, which is able to approximately collapse the simulation results for the gas flow for a wide range of initial pressure ratios. This indicates that an approximate similarity solution for a spherical blast wave exists, which is independent of the initial pressure ratio. The approximate scaling is also valid for the particle front if the particles are small and closely follow the surrounding gas.

  3. Data Intensive Analysis of Biomolecular Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straatsma, TP; Soares, Thereza A.

    2007-12-01

    The advances in biomolecular modeling and simulation made possible by increasingly powerful high-performance computing resources are extending molecular simulations to biologically more relevant system sizes and time scales. At the same time, advances in simulation methodologies are allowing more complex processes to be described more accurately. These developments make a systems approach to computational structural biology feasible, but this will require a focused emphasis on the comparative analysis of the increasing number of molecular simulations being carried out for biomolecular systems with more realistic models, multi-component environments, and longer simulation times. Just as in the case of the analysis of the large data sources created by the new high-throughput experimental technologies, biomolecular computer simulations contribute to the progress in biology through comparative analysis. The continuing increase in available protein structures allows the comparative analysis of the role of structure and conformational flexibility in protein function, and is the foundation of the discipline of structural bioinformatics. This creates the opportunity to derive general findings from the comparative analysis of molecular dynamics simulations of a wide range of proteins, protein-protein complexes, and other complex biological systems. Because of the importance of protein conformational dynamics for protein function, it is essential that the analysis of molecular trajectories be carried out using a more integrative and systematic approach. We are developing a much-needed rigorous computer-science-based framework for the efficient analysis of the increasingly large data sets resulting from molecular simulations. Such a suite of capabilities will also provide the tools required for access to and analysis of a distributed library of generated trajectories. Our research focuses on the following areas: (1) the development of an efficient analysis framework for very large scale trajectories on massively parallel architectures, (2) the development of novel methodologies that allow automated detection of events in these very large data sets, and (3) the efficient comparative analysis of multiple trajectories. The goal of the presented work is the development of new algorithms that will allow biomolecular simulation studies to become an integral tool in addressing the challenges of post-genomic biological research. The strategy for delivering data-intensive computing applications that can effectively deal with the volume of simulation data that will become available is to take advantage of large globally addressable memory architectures. The first requirement is the design of a flexible underlying data structure for single large trajectories that will form an adaptable framework for a wide range of analysis capabilities. The typical approach to trajectory analysis is to process trajectories sequentially, time frame by time frame. This is the implementation found in molecular simulation codes such as NWChem, and it was designed this way to run on workstation computers and other architectures whose aggregate memory cannot hold entire trajectories in core. The consequence of this approach is an I/O-dominated solution that scales very poorly on parallel machines.
We are instead developing tools specifically intended for large-scale machines with sufficient main memory to hold entire trajectories in core. This greatly reduces the cost of I/O, as trajectories are read only once during the analysis. In our current Data Intensive Analysis (DIANA) implementation, each processor determines and skips to its entries within the trajectory, which typically spans multiple files, and reads the appropriate frames independently of all other processors.
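
    A minimal mpi4py sketch of the frame-partitioning read pattern described above; the DIANA file layout and frame-index logic are assumptions made for illustration, not the actual implementation.

    ```python
    from mpi4py import MPI

    def my_frames(n_frames: int, comm: MPI.Comm):
        """Round-robin assignment of trajectory frames to MPI ranks, so each
        rank seeks to and reads only its own frames (no frame-by-frame
        broadcast). Sketch only; the real file layout is not reproduced."""
        rank, size = comm.Get_rank(), comm.Get_size()
        return range(rank, n_frames, size)

    comm = MPI.COMM_WORLD
    for frame in my_frames(n_frames=100_000, comm=comm):
        # Seek to `frame` in the (possibly multi-file) trajectory and read it
        # into this rank's in-core slice; analysis then proceeds without I/O.
        pass
    ```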

  4. Data Intensive Analysis of Biomolecular Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straatsma, TP

    2008-03-01

    The advances in biomolecular modeling and simulation made possible by increasingly powerful high-performance computing resources are extending molecular simulations to biologically more relevant system sizes and time scales. At the same time, advances in simulation methodologies are allowing more complex processes to be described more accurately. These developments make a systems approach to computational structural biology feasible, but this will require a focused emphasis on the comparative analysis of the increasing number of molecular simulations being carried out for biomolecular systems with more realistic models, multi-component environments, and longer simulation times. Just as in the case of the analysis of the large data sources created by the new high-throughput experimental technologies, biomolecular computer simulations contribute to the progress in biology through comparative analysis. The continuing increase in available protein structures allows the comparative analysis of the role of structure and conformational flexibility in protein function, and is the foundation of the discipline of structural bioinformatics. This creates the opportunity to derive general findings from the comparative analysis of molecular dynamics simulations of a wide range of proteins, protein-protein complexes, and other complex biological systems. Because of the importance of protein conformational dynamics for protein function, it is essential that the analysis of molecular trajectories be carried out using a more integrative and systematic approach. We are developing a much-needed rigorous computer-science-based framework for the efficient analysis of the increasingly large data sets resulting from molecular simulations. Such a suite of capabilities will also provide the tools required for access to and analysis of a distributed library of generated trajectories. Our research focuses on the following areas: (1) the development of an efficient analysis framework for very large scale trajectories on massively parallel architectures, (2) the development of novel methodologies that allow automated detection of events in these very large data sets, and (3) the efficient comparative analysis of multiple trajectories. The goal of the presented work is the development of new algorithms that will allow biomolecular simulation studies to become an integral tool in addressing the challenges of post-genomic biological research. The strategy for delivering data-intensive computing applications that can effectively deal with the volume of simulation data that will become available is to take advantage of large globally addressable memory architectures. The first requirement is the design of a flexible underlying data structure for single large trajectories that will form an adaptable framework for a wide range of analysis capabilities. The typical approach to trajectory analysis is to process trajectories sequentially, time frame by time frame. This is the implementation found in molecular simulation codes such as NWChem, and it was designed this way to run on workstation computers and other architectures whose aggregate memory cannot hold entire trajectories in core. The consequence of this approach is an I/O-dominated solution that scales very poorly on parallel machines.
We are instead developing tools specifically intended for large-scale machines with sufficient main memory to hold entire trajectories in core. This greatly reduces the cost of I/O, as trajectories are read only once during the analysis. In our current Data Intensive Analysis (DIANA) implementation, each processor determines and skips to its entries within the trajectory, which typically spans multiple files, and reads the appropriate frames independently of all other processors.

  5. Grand Minima and Equatorward Propagation in a Cycling Stellar Convective Dynamo

    NASA Astrophysics Data System (ADS)

    Augustson, Kyle C.; Brun, Allan Sacha; Miesch, Mark; Toomre, Juri

    2015-08-01

    The 3-D magnetohydrodynamic (MHD) Anelastic Spherical Harmonic (ASH) code, using slope-limited diffusion, is employed to capture convective and dynamo processes in a global-scale stellar convection simulation of a model solar-mass star rotating at three times the solar rate. The dynamo-generated magnetic fields possess many time scales, with a prominent polarity cycle occurring roughly every 6.2 years. The magnetic field forms large-scale toroidal wreaths, whose formation is tied to the low Rossby number of the convection in this simulation. The polarity reversals are linked to the weakened differential rotation and a resistive collapse of the large-scale magnetic field. An equatorial migration of the magnetic field is seen, which is due to the strong modulation of the differential rotation rather than a dynamo wave. A poleward migration of magnetic flux from the equator eventually leads to the reversal of the polarity of the high-latitude magnetic field. This simulation also enters an interval with reduced magnetic energy at low latitudes lasting roughly 16 years (about 2.5 polarity cycles), during which the polarity cycles are disrupted and after which the dynamo recovers its regular polarity cycles. An analysis of this grand minimum reveals that it likely arises through the interplay of symmetric and antisymmetric dynamo families. This intermittent dynamo state potentially results from the simulation's relatively low magnetic Prandtl number. A mean-field-based analysis of this dynamo simulation demonstrates that it is of the α-Ω type. The time scales that appear to be relevant to the magnetic polarity reversal are also identified.

  6. Numerical simulation on hydromechanical coupling in porous media adopting three-dimensional pore-scale model.

    PubMed

    Liu, Jianjun; Song, Rui; Cui, Mengmeng

    2014-01-01

    A novel approach to simulating hydromechanical coupling in pore-scale models of porous media is presented in this paper. Parameters of the sandstone samples, such as the stress-strain curve, Poisson's ratio, and permeability under different pore pressures and confining pressures, are tested at laboratory scale. A micro-CT scanner is employed to acquire three-dimensional images of the samples as input for model construction. Accordingly, four physical models possessing the same pore and rock matrix characteristics as the natural sandstones are developed. Based on the micro-CT images, three-dimensional finite element models of both the rock matrix and the pore space are established on the MIMICS and ICEM software platforms. The Navier-Stokes equations and an elastic constitutive equation are used as the mathematical model for the simulation. The hydromechanical coupling in the pore-scale finite element models is simulated with the ANSYS and CFX software, and the permeability of the sandstone samples under different pore pressures and confining pressures is thereby predicted. The simulation results agree well with the benchmark data. By reproducing the stress state underground, the accuracy of pore-scale permeability prediction for porous rock is improved. Consequently, the effects of pore pressure and confining pressure on permeability are revealed from the microscopic view.

  7. Numerical Simulation on Hydromechanical Coupling in Porous Media Adopting Three-Dimensional Pore-Scale Model

    PubMed Central

    Liu, Jianjun; Song, Rui; Cui, Mengmeng

    2014-01-01

    A novel approach to simulating hydromechanical coupling in pore-scale models of porous media is presented in this paper. Parameters of the sandstone samples, such as the stress-strain curve, Poisson's ratio, and permeability under different pore pressures and confining pressures, are tested at laboratory scale. A micro-CT scanner is employed to acquire three-dimensional images of the samples as input for model construction. Accordingly, four physical models possessing the same pore and rock matrix characteristics as the natural sandstones are developed. Based on the micro-CT images, three-dimensional finite element models of both the rock matrix and the pore space are established on the MIMICS and ICEM software platforms. The Navier-Stokes equations and an elastic constitutive equation are used as the mathematical model for the simulation. The hydromechanical coupling in the pore-scale finite element models is simulated with the ANSYS and CFX software, and the permeability of the sandstone samples under different pore pressures and confining pressures is thereby predicted. The simulation results agree well with the benchmark data. By reproducing the stress state underground, the accuracy of pore-scale permeability prediction for porous rock is improved. Consequently, the effects of pore pressure and confining pressure on permeability are revealed from the microscopic view. PMID:24955384

  8. Survey of factors influencing learner engagement with simulation debriefing among nursing students.

    PubMed

    Roh, Young Sook; Jang, Kie In

    2017-12-01

    Simulation-based education has escalated worldwide, yet few studies have rigorously explored predictors of learner engagement with simulation debriefing. The purpose of this cross-sectional, descriptive survey was to identify factors that determine learner engagement with simulation debriefing among nursing students. A convenience sample of 296 Korean nursing students enrolled in the simulation-based course completed the survey. A total of five instruments were used: (i) Characteristics of Debriefing; (ii) Debriefing Assessment for Simulation in Healthcare - Student Version; (iii) The Korean version of the Simulation Design Scale; (iv) Communication Skills Scale; and (v) Clinical-Based Stress Scale. Multiple regression analysis was performed using the variables to investigate the influencing factors. The results indicated that the factors influencing learner engagement with simulation debriefing were simulation design, confidentiality, stress, and number of students. Simulation design was the most important factor. Video-assisted debriefing was not a significant factor affecting learner engagement. Educators should organize and conduct debriefing activities while considering these factors to effectively induce learner engagement. Further study is needed to identify the effects of debriefing sessions targeting learners' needs and considering situational factors on learning outcomes. © 2017 John Wiley & Sons Australia, Ltd.

  9. On the Fidelity of Semi-distributed Hydrologic Model Simulations for Large Scale Catchment Applications

    NASA Astrophysics Data System (ADS)

    Ajami, H.; Sharma, A.; Lakshmi, V.

    2017-12-01

    Application of semi-distributed hydrologic modeling frameworks is a viable alternative to fully distributed hyper-resolution hydrologic models because of its computational efficiency while still resolving the fine-scale spatial structure of hydrologic fluxes and states. However, the fidelity of semi-distributed model simulations is affected by (1) the formulation of hydrologic response units (HRUs) and (2) the aggregation of catchment properties for formulating simulation elements. Here, we evaluate the performance of the recently developed Soil Moisture and Runoff simulation Toolkit (SMART) for large catchment-scale simulations. In SMART, topologically connected HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and the simulation elements are equivalent cross sections (ECSs) representative of a hillslope in first-order sub-basins. Earlier investigations have shown that formulating ECSs at the scale of a first-order sub-basin reduces computational time significantly without compromising simulation accuracy, but the implementation of this approach had not been fully explored for catchment-scale simulations. To assess SMART performance, we set up the model over the Little Washita watershed in Oklahoma. Model evaluations using in situ soil moisture observations show satisfactory model performance. In addition, we evaluated the performance of several soil moisture disaggregation schemes recently developed to provide spatially explicit soil moisture outputs at fine-scale resolution. Our results illustrate that the statistical disaggregation scheme performs significantly better than the methods based on topographic data. Future work will focus on assessing the performance of SMART against remotely sensed soil moisture observations using spatially based model evaluation metrics.

  10. Freud: a software suite for high-throughput simulation analysis

    NASA Astrophysics Data System (ADS)

    Harper, Eric; Spellings, Matthew; Anderson, Joshua; Glotzer, Sharon

    Computer simulation is an indispensable tool for the study of a wide variety of systems. As simulations scale to fill petascale and exascale supercomputing clusters, so too does the size of the data produced, as well as the difficulty in analyzing these data. We present Freud, an analysis software suite for efficient analysis of simulation data. Freud makes no assumptions about the system being analyzed, allowing for general analysis methods to be applied to nearly any type of simulation. Freud includes standard analysis methods such as the radial distribution function, as well as new methods including the potential of mean force and torque and local crystal environment analysis. Freud combines a Python interface with fast, parallel C++ analysis routines to run efficiently on laptops, workstations, and supercomputing clusters. Data analysis on clusters reduces data transfer requirements, a prohibitive cost for petascale computing. Used in conjunction with simulation software, Freud allows for smart simulations that adapt to the current state of the system, enabling the study of phenomena such as nucleation and growth, intelligent investigation of phases and phase transitions, and determination of effective pair potentials.
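
    As a usage illustration, the snippet below computes a radial distribution function with what is, to the best of our knowledge, the freud 2.x API (pip package `freud-analysis`); the points are random stand-ins for simulation output.

    ```python
    import freud
    import numpy as np

    # Random points in a cubic box stand in for simulation output.
    L, N = 10.0, 5000
    box = freud.box.Box.cube(L)
    points = np.random.uniform(-L / 2, L / 2, size=(N, 3)).astype(np.float32)

    # Radial distribution function g(r), one of the standard analyses.
    rdf = freud.density.RDF(bins=100, r_max=4.0)
    rdf.compute(system=(box, points))
    print(rdf.bin_centers[:5], rdf.rdf[:5])
    ```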

  11. A Proposal of Monitoring and Forecasting Method for Crustal Activity in and around Japan with 3-dimensional Heterogeneous Medium Using a Large-scale High-fidelity Finite Element Simulation

    NASA Astrophysics Data System (ADS)

    Hori, T.; Agata, R.; Ichimura, T.; Fujita, K.; Yamaguchi, T.; Takahashi, N.

    2017-12-01

    Although continuous, dense surface-deformation data can now be obtained on land and, in part, on the sea floor, the data are not fully utilized for monitoring and forecasting crustal activity, such as spatio-temporal variation in slip velocity on the plate interface (including earthquakes), seismic wave propagation, and crustal deformation. To construct a system for monitoring and forecasting, it is necessary to develop a physics-based data analysis system including (1) a structural model with the 3D geometry of the plate interface and material properties such as elasticity and viscosity, (2) calculation codes for crustal deformation and seismic wave propagation using (1), and (3) inverse analysis or data assimilation codes for both structure and fault slip using (1) and (2). To accomplish this, it is at least necessary to develop highly reliable large-scale simulation codes that calculate crustal deformation and seismic wave propagation for a 3D heterogeneous structure. An unstructured finite element (FE) non-linear seismic wave simulation code has been developed, achieving physics-based urban earthquake simulation with 1.08 T degrees of freedom and 6.6 K time steps. A high-fidelity FEM simulation code with a mesh generator has also been developed to calculate crustal deformation in and around Japan, with complicated surface topography and subducting-plate geometry, at 1 km mesh resolution. The crustal deformation code has since been improved to reach 2.05 T degrees of freedom with 45 m resolution on the plate interface; this high resolution enables computation of the change in stress acting on the plate interface. Further, for inverse analyses, a waveform inversion code for modeling 3D crustal structure has been developed, and the high-fidelity FEM code has been extended with an adjoint method for estimating fault slip and asthenosphere viscosity. Hence, we have large-scale simulation and analysis tools for monitoring. We are also developing methods for forecasting the slip velocity variation on the plate interface; although the prototype assumes an elastic half-space model, we are applying it to the 3D heterogeneous structure with the high-fidelity FE model. Furthermore, the large-scale simulation codes for monitoring are being implemented on GPU clusters, and the analysis tools are being extended with other functions such as the examination of model errors.

  12. Development of carbon response trajectories using FIA plot data and FVS growth simulator: challenges of a large scale simulation project

    Treesearch

    James B. McCarter; Sean Healey

    2015-01-01

    The Forest Carbon Management Framework (ForCaMF) integrates Forest Inventory and Analysis (FIA) plot inventory data, disturbance histories, and carbon response trajectories to develop estimates of disturbance and management effects on carbon pools for the National Forest System. All appropriate FIA inventory plots are simulated using the Forest Vegetation Simulator (...

  13. Structure identification methods for atomistic simulations of crystalline materials

    DOE PAGES

    Stukowski, Alexander

    2012-05-28

    Here, we discuss existing and new computational analysis techniques to classify local atomic arrangements in large-scale atomistic computer simulations of crystalline solids. This article includes a performance comparison of typical analysis algorithms such as common neighbor analysis (CNA), centrosymmetry analysis, bond angle analysis, bond order analysis, and Voronoi analysis. In addition, we propose a simple extension to the CNA method that makes it suitable for multi-phase systems. Finally, we introduce a new structure identification algorithm, the neighbor distance analysis, which is designed to identify atomic structure units in grain boundaries.
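
    As one concrete example of the techniques compared above, the sketch below implements the standard centrosymmetry parameter for a single atom, following the usual recipe of summing the N/2 smallest pairwise sums of neighbor vectors; it is a generic illustration, not the paper's code.

    ```python
    import numpy as np
    from itertools import combinations

    def centrosymmetry(neighbor_vectors: np.ndarray) -> float:
        """Centrosymmetry parameter of one atom from the vectors to its N
        nearest neighbors (N even; 12 for FCC, 8 for BCC). Near zero in a
        perfect centrosymmetric lattice, large near defects and surfaces."""
        n = len(neighbor_vectors)
        pair_sums = sorted(
            np.sum((neighbor_vectors[i] + neighbor_vectors[j]) ** 2)
            for i, j in combinations(range(n), 2)
        )
        return float(sum(pair_sums[: n // 2]))

    # Perfect FCC first shell: +/- permutations of (1,1,0)/sqrt(2) -> CSP ~ 0
    a = 1.0 / np.sqrt(2.0)
    fcc = np.array([[s1 * a, s2 * a, 0] for s1 in (1, -1) for s2 in (1, -1)]
                   + [[s1 * a, 0, s2 * a] for s1 in (1, -1) for s2 in (1, -1)]
                   + [[0, s1 * a, s2 * a] for s1 in (1, -1) for s2 in (1, -1)])
    print(centrosymmetry(fcc))  # ~0 for the ideal lattice
    ```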

  14. Flow topologies and turbulence scales in a jet-in-cross-flow

    DOE PAGES

    Oefelein, Joseph C.; Ruiz, Anthony M.; Lacaze, Guilhem

    2015-04-03

    This study presents a detailed analysis of the flow topologies and turbulence scales in the jet-in-cross-flow experiment of [Su and Mungal JFM 2004]. The analysis is performed using the Large Eddy Simulation (LES) technique with a highly resolved grid and time-step and well controlled boundary conditions. This enables quantitative agreement with the first and second moments of turbulence statistics measured in the experiment. LES is used to perform the analysis since experimental measurements of time-resolved 3D fields are still in their infancy and because sampling periods are generally limited with direct numerical simulation. A major focal point is the comprehensive characterization of the turbulence scales and their evolution. Time-resolved probes are used with long sampling periods to obtain maps of the integral scales, Taylor microscales, and turbulent kinetic energy spectra. Scalar-fluctuation scales are also quantified. In the near-field, coherent structures are clearly identified, both in physical and spectral space. Along the jet centerline, turbulence scales grow according to a classical one-third power law. However, the derived maps of turbulence scales reveal strong inhomogeneities in the flow. From the modeling perspective, these insights are useful to design optimized grids and improve numerical predictions in similar configurations.
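
    As an illustration of how such turbulence scales are extracted from time-resolved probes, the sketch below estimates an integral time scale from a probe signal by integrating its autocorrelation up to the first zero crossing; the signal here is synthetic, not data from the study.

    ```python
    import numpy as np

    def integral_time_scale(u: np.ndarray, dt: float) -> float:
        """Integral time scale from a velocity probe signal: integrate the
        autocorrelation of the fluctuations up to its first zero crossing."""
        up = u - u.mean()                         # velocity fluctuations
        acf = np.correlate(up, up, mode="full")[up.size - 1:]
        acf /= acf[0]                             # normalize so acf[0] = 1
        zero = np.argmax(acf <= 0.0)              # first zero crossing
        if zero == 0:
            zero = acf.size                       # no crossing found
        return float(np.trapz(acf[:zero], dx=dt))

    # Synthetic probe signal standing in for an LES time series
    rng = np.random.default_rng(0)
    u = np.convolve(rng.normal(size=5000), np.ones(50) / 50, mode="same")
    print(integral_time_scale(u, dt=1e-4))
    ```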

  15. Tropospheric transport differences between models using the same large-scale meteorological fields

    NASA Astrophysics Data System (ADS)

    Orbe, Clara; Waugh, Darryn W.; Yang, Huang; Lamarque, Jean-Francois; Tilmes, Simone; Kinnison, Douglas E.

    2017-01-01

    The transport of chemicals is a major uncertainty in the modeling of tropospheric composition. A common approach is to transport gases using the winds from meteorological analyses, either using them directly in a chemical transport model or by constraining the flow in a general circulation model. Here we compare the transport of idealized tracers in several different models that use the same meteorological fields taken from the Modern-Era Retrospective analysis for Research and Applications (MERRA). We show that, even though the models use the same meteorological fields, there are substantial differences in their global-scale tropospheric transport related to large differences in parameterized convection between the simulations. Furthermore, we find that the transport differences between simulations constrained with the same large-scale flow are larger than differences between free-running simulations, which have differing large-scale flow but much more similar convective mass fluxes. Our results indicate that more attention needs to be paid to convective parameterizations in order to understand large-scale tropospheric transport in models, particularly in simulations constrained with analyzed winds.

  16. Airframe Icing Research Gaps: NASA Perspective

    NASA Technical Reports Server (NTRS)

    Potapczuk, Mark

    2009-01-01

    Current airframe icing technology gaps:
    - Development of a full 3D ice accretion simulation model.
    - Development of an improved simulation model for SLD conditions.
    - CFD modeling of stall behavior for ice-contaminated wings/tails.
    - Computational methods for simulation of stability and control parameters.
    - Analysis of thermal ice protection system performance.
    - Quantification of 3D ice shape geometric characteristics.
    - Development of accurate ground-based simulation of SLD conditions.
    - Development of scaling methods for SLD conditions.
    - Development of advanced diagnostic techniques for assessment of tunnel cloud conditions.
    - Identification of critical ice shapes for aerodynamic performance degradation.
    - Aerodynamic scaling issues associated with testing scale model ice shape geometries.
    - Development of altitude scaling methods for thermal ice protection systems.
    - Development of accurate parameter identification methods.
    - Measurement of stability and control parameters for an ice-contaminated swept wing aircraft.
    - Creation of control law modifications to prevent loss of control during icing encounters.
    - 3D ice shape geometries.
    - Collection efficiency data for ice shape geometries.
    - SLD ice shape data, in-flight and ground-based, for simulation verification.
    - Aerodynamic performance data for 3D geometries and various icing conditions.
    - Stability and control parameter data for iced aircraft configurations.
    - Thermal ice protection system data for simulation validation.

  17. Large Eddy Simulation of Gravitational Effects on Transitional and Turbulent Gas-Jet Diffusion Flames

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Jaberi, Farhad A.

    2001-01-01

    The basic objective of this work is to assess the influence of gravity on the compositional and spatial structures of transitional and turbulent diffusion flames via large eddy simulation (LES) and direct numerical simulation (DNS). The DNS is conducted for appraisal of the various closures employed in LES and to study the effect of buoyancy on the small-scale flow features. The LES is based on our "filtered mass density function" (FMDF) model. The novelty of the methodology is that it allows for reliable simulations with inclusion of realistic physics. It also allows for detailed analysis of the unsteady large-scale flow evolution and compositional flame structure, which is not usually possible via Reynolds-averaged simulations.

  18. Realism of Indian Summer Monsoon Simulation in a Quarter Degree Global Climate Model

    NASA Astrophysics Data System (ADS)

    Salunke, P.; Mishra, S. K.; Sahany, S.; Gupta, K.

    2017-12-01

    This study assesses the fidelity of Indian Summer Monsoon (ISM) simulations using a global model at an ultra-high horizontal resolution (UHR) of 0.25°. The model used was the atmospheric component of the Community Earth System Model version 1.2.0 (CESM 1.2.0) developed at the National Center for Atmospheric Research (NCAR). Precipitation and temperature over the Indian region were analyzed for a wide range of space and time scales to evaluate the fidelity of the model under UHR, with special emphasis on the ISM simulations during June through September (JJAS). Comparing the UHR simulations with observed data from the India Meteorological Department (IMD) over Indian land, it was found that 0.25° resolution significantly improved spatial rainfall patterns over many regions, including the Western Ghats and the South-Eastern peninsula, as compared to the standard model resolution. Convective and large-scale rainfall components were analyzed using the European Centre for Medium-Range Weather Forecasts (ECMWF) Re-Analysis (ERA)-Interim (ERA-I) data, and it was found that at 0.25° resolution there was an overall increase in the large-scale component and an associated decrease in the convective component of rainfall as compared to the standard model resolution. Analysis of the diurnal cycle of rainfall suggests a significant improvement in the phase characteristics simulated by the UHR model as compared to the standard model resolution. Analysis of the annual cycle of rainfall, however, failed to show any significant improvement in the UHR model as compared to the standard version. Surface temperature analysis showed small improvements in the UHR model simulations as compared to the standard version. Thus, one may conclude that there are some significant improvements in the ISM simulations using a 0.25° global model, although there is still plenty of scope for further improvement in certain aspects of the annual cycle of rainfall.

  19. Scalable streaming tools for analyzing N-body simulations: Finding halos and investigating excursion sets in one pass

    NASA Astrophysics Data System (ADS)

    Ivkin, N.; Liu, Z.; Yang, L. F.; Kumar, S. S.; Lemson, G.; Neyrinck, M.; Szalay, A. S.; Braverman, V.; Budavari, T.

    2018-04-01

    Cosmological N-body simulations play a vital role in studying models for the evolution of the Universe. To compare to observations and make scientific inferences, statistical analysis of large simulation datasets, e.g., finding halos and obtaining multi-point correlation functions, is crucial. However, traditional in-memory methods for these tasks do not scale to the datasets of modern simulations, which are prohibitively large. Our prior paper (Liu et al., 2015) proposes memory-efficient streaming algorithms that can find the largest halos in a simulation with up to 10⁹ particles on a small server or desktop. However, this approach fails when directly scaled to larger datasets. This paper presents a robust streaming tool that leverages state-of-the-art techniques in GPU acceleration, sampling, and parallel I/O to significantly improve performance and scalability. Our rigorous analysis of the sketch parameters improves the previous results from finding the centers of the 10³ largest halos (Liu et al., 2015) to ∼10⁴-10⁵, and reveals the trade-offs between memory, running time, and number of halos. Our experiments show that our tool can scale to datasets with up to ∼10¹² particles while using less than an hour of running time on a single Nvidia GTX 1080 GPU.
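
    The paper's actual sketch data structure is not reproduced here; as a generic illustration of the streaming approach, the snippet below uses a Count-Min sketch (with a deliberately simplified multiplicative hash) to approximate per-cell particle counts in one pass, which is the heavy-hitter primitive behind streaming halo finding.

    ```python
    import numpy as np

    class CountMinSketch:
        """Count-Min sketch: one-pass approximate counts in fixed memory.
        Estimates can only overcount, never undercount."""
        def __init__(self, width=2**20, depth=4, seed=1):
            rng = np.random.default_rng(seed)
            self.table = np.zeros((depth, width), dtype=np.int64)
            self.salts = [int(s) for s in rng.integers(1, 2**31 - 1, size=depth)]
            self.width = width

        def add(self, key):
            for row, salt in enumerate(self.salts):
                self.table[row, (key * salt) % self.width] += 1

        def estimate(self, key):
            return min(int(self.table[row, (key * salt) % self.width])
                       for row, salt in enumerate(self.salts))

    def cell_id(x, y, z, n=100):
        """Map a position in [0, 1)^3 onto an n^3 grid of cells."""
        return int(x * n) + n * int(y * n) + n * n * int(z * n)

    # Stream particles once, counting cell occupancy; heavily occupied
    # cells are candidate halo centers.
    cms = CountMinSketch()
    for x, y, z in [(0.11, 0.22, 0.33)] * 1000:     # stand-in particle stream
        cms.add(cell_id(x, y, z))
    print(cms.estimate(cell_id(0.11, 0.22, 0.33)))  # ~1000 (may overcount)
    ```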

  20. epiDMS: Data Management and Analytics for Decision-Making From Epidemic Spread Simulation Ensembles.

    PubMed

    Liu, Sicong; Poccia, Silvestro; Candan, K Selçuk; Chowell, Gerardo; Sapino, Maria Luisa

    2016-12-01

    Carefully calibrated large-scale computational models of epidemic spread represent a powerful tool to support the decision-making process during epidemic emergencies. Epidemic models are being increasingly used for generating forecasts of the spatial-temporal progression of epidemics at different spatial scales and for assessing the likely impact of different intervention strategies. However, the management and analysis of simulation ensembles stemming from large-scale computational models pose challenges, particularly when dealing with multiple interdependent parameters, spanning multiple layers and geospatial frames, affected by complex dynamic processes operating at different resolutions. We describe and illustrate with examples a novel epidemic simulation data management system, epiDMS, that was developed to address the challenges that arise from the need to generate, search, visualize, and analyze, in a scalable manner, large volumes of epidemic simulation ensembles and observations during the progression of an epidemic. epiDMS is a publicly available system that facilitates management and analysis of large epidemic simulation ensembles. epiDMS aims to fill an important gap in decision-making during healthcare emergencies by enabling critical services with significant economic and health impact.

  1. Improving the representation of clouds, radiation, and precipitation using spectral nudging in the Weather Research and Forecasting model

    NASA Astrophysics Data System (ADS)

    Spero, Tanya L.; Otte, Martin J.; Bowden, Jared H.; Nolte, Christopher G.

    2014-10-01

    Spectral nudging—a scale-selective interior constraint technique—is commonly used in regional climate models to maintain consistency with large-scale forcing while permitting mesoscale features to develop in the downscaled simulations. Several studies have demonstrated that spectral nudging improves the representation of regional climate in reanalysis-forced simulations compared with not using nudging in the interior of the domain. However, in the Weather Research and Forecasting (WRF) model, spectral nudging tends to produce degraded precipitation simulations when compared to analysis nudging—an interior constraint technique that is scale indiscriminate but also operates on moisture fields, which until now could not be altered directly by spectral nudging. Since analysis nudging is less desirable for regional climate modeling because it dampens fine-scale variability, changes are proposed to the spectral nudging methodology that capitalize on differences between the nudging techniques and aim to improve the representation of clouds, radiation, and precipitation without compromising other fields. These changes include adding spectral nudging toward moisture, limiting nudging to below the tropopause, and increasing the nudging time scale for potential temperature, all of which collectively improve the representation of mean and extreme precipitation, 2 m temperature, clouds, and radiation, as demonstrated using a model-simulated 20-year historical period. Such improvements to WRF may increase the fidelity of regional climate data used to assess the potential impacts of climate change on human health and the environment and aid in climate change mitigation and adaptation studies.
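
    The scale selectivity at the heart of spectral nudging can be written down in a few lines: transform the analysis-minus-model difference to spectral space, retain only the lowest wavenumbers, and relax the model toward that large-scale residual. The sketch below is a schematic illustration of the technique, not WRF's implementation; `nwave_keep` and the nudging coefficient `g` are illustrative placeholders.

```python
import numpy as np

def spectral_nudging_tendency(model, analysis, nwave_keep=3, g=3e-4):
    """Scale-selective nudging tendency for a periodic 2-D field:
    only wavenumbers <= nwave_keep are relaxed toward the analysis,
    leaving finer scales free to develop. g is 1/tau [1/s]."""
    diff = analysis - model
    spec = np.fft.rfft2(diff)
    ky = np.fft.fftfreq(diff.shape[0]) * diff.shape[0]   # signed wavenumbers
    kx = np.arange(spec.shape[1])                        # non-negative (rfft)
    mask = (np.abs(ky)[:, None] <= nwave_keep) & (kx[None, :] <= nwave_keep)
    return g * np.fft.irfft2(np.where(mask, spec, 0.0), s=diff.shape)

# One explicit step of nudging a moisture field q toward the analysis q_an:
# q += dt * spectral_nudging_tendency(q, q_an)
```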

  2. Exploring JWST's Capability to Constrain Habitability on Simulated Terrestrial TESS Planets

    NASA Astrophysics Data System (ADS)

    Tremblay, Luke; Britt, Amber; Batalha, Natasha; Schwieterman, Edward; Arney, Giada; Domagal-Goldman, Shawn; Mandell, Avi; Planetary Systems Laboratory; Virtual Planetary Laboratory

    2017-01-01

    In the following, we have worked to develop a flexible "observability" scale of biologically relevant molecules in the atmospheres of newly discovered exoplanets for the instruments aboard NASA's next flagship mission, the James Webb Space Telescope (JWST). We sought to create such a scale in order to provide the community with a tool with which to optimize target selection for JWST observations based on detections by the upcoming Transiting Exoplanet Survey Satellite (TESS). Current literature has laid the groundwork for defining both biologically relevant molecules and the characteristics that would make a new world "habitable", but it has so far lacked a cohesive analysis of JWST's capabilities to observe these molecules in exoplanet atmospheres and thereby constrain habitability. In developing our Observability Scale, we utilized a range of hypothetical planets (over planetary radii and stellar insolation) and generated three self-consistent atmospheric models (of different molecular compositions) for each of our simulated planets. With these planets and their corresponding atmospheres, we utilized the most accurate JWST instrument simulator, created specifically to process transiting exoplanet spectra. Through careful analysis of these simulated outputs, we were able to determine the relevant parameters that affected JWST's ability to constrain each individual molecular band with statistical accuracy and thereby generate a scale based on those key parameters. As a preliminary test of our Observability Scale, we have also applied it to the list of TESS candidate stars in order to determine JWST's observational capabilities for any soon-to-be-detected planet in those planetary systems.
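
    Conceptually, such an observability score compares the amplitude of a molecular feature in the transmission spectrum against the attainable photometric precision. The toy function below captures only that structure; every constant in it (the five-scale-height feature amplitude, the 50 ppm noise floor at J = 9, the Earth-like gravity) is a made-up illustrative assumption, not a value from this work.

```python
import numpy as np

def band_observability(rp_earth, rstar_sun, teq_k, mmw, jmag, n_transits=10):
    """Toy score: transmission-feature amplitude over per-transit noise."""
    k_b, m_p, g = 1.38e-23, 1.67e-27, 9.8        # SI constants; Earth-like gravity
    r_earth, r_sun = 6.371e6, 6.957e8
    H = k_b * teq_k / (mmw * m_p * g)            # atmospheric scale height [m]
    rp, rs = rp_earth * r_earth, rstar_sun * r_sun
    signal = 2.0 * 5.0 * rp * H / rs**2          # ~5 scale heights of annulus area
    noise = 50e-6 * 10 ** (0.2 * (jmag - 9.0))   # crude photon-noise model
    return signal * np.sqrt(n_transits) / noise  # higher = easier to observe
```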

  3. A new framework for the analysis of continental-scale convection-resolving climate simulations

    NASA Astrophysics Data System (ADS)

    Leutwyler, D.; Charpilloz, C.; Arteaga, A.; Ban, N.; Di Girolamo, S.; Fuhrer, O.; Hoefler, T.; Schulthess, T. C.; Christoph, S.

    2017-12-01

    High-resolution climate simulations at horizontal resolutions of O(1-4 km) allow explicit treatment of deep convection (thunderstorms and rain showers). Explicitly treating convection in the governing equations reduces uncertainties associated with parametrization schemes and allows a model formulation closer to physical first principles [1,2]. But kilometer-scale climate simulations with long integration periods and large computational domains are expensive, and data storage becomes unbearably voluminous. Hence, new approaches to performing analysis are required. In the crCLIM project we propose a new climate modeling framework that allows scientists to conduct analysis at high spatial and temporal resolution. We tackle the computational cost by using the largest available supercomputers, such as hybrid CPU-GPU architectures; for this, the COSMO model has been adapted to run on such architectures [2]. We then alleviate the I/O bottleneck by employing a simulation data-virtualizer (SDaVi) that allows storage (space) to be traded off against computational effort (time). This is achieved by caching the simulation outputs and efficiently launching re-simulations in case of cache misses, all transparently to the analysis applications [3]. For the re-runs this approach requires a bit-reproducible version of COSMO, that is, a model that produces identical results on different architectures, to ensure coherent recomputation of the requested data [4]. In this contribution we present a version of SDaVi, a first performance model, and a strategy to obtain bit-reproducibility across hardware architectures. [1] N. Ban, J. Schmidli, C. Schär. Evaluation of the convection-resolving regional climate modeling approach in decade-long simulations. J. Geophys. Res. Atmos., 7889-7907, 2014. [2] D. Leutwyler, O. Fuhrer, X. Lapillonne, D. Lüthi, C. Schär. Towards European-scale convection-resolving climate simulations with GPUs: a study with COSMO 4.19. Geosci. Model Dev., 3393-3412, 2016. [3] S. Di Girolamo, P. Schmid, T. Schulthess, T. Hoefler. Virtualized Big Data: Reproducing Simulation Output on Demand. Submitted to the 23rd ACM Symposium on PPoPP'18, Vienna, Austria. [4] A. Arteaga, O. Fuhrer, T. Hoefler. Designing Bit-Reproducible Portable High-Performance Applications. IEEE 28th IPDPS, 2014.
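
    The cache-or-recompute pattern behind such a data virtualizer can be sketched compactly: analysis code requests a field at a time step; on a miss, the virtualizer re-runs the model from the nearest stored restart and caches what it produces. This is a minimal sketch under stated assumptions: the class and parameter names (`SimulationDataVirtualizer`, `resimulate`, `restart_every`) are hypothetical, and bit-reproducibility of the re-run is assumed rather than enforced.

```python
import os
import pickle

class SimulationDataVirtualizer:
    """Cache-or-recompute layer: analysis code asks for (variable, step);
    a cache miss triggers a re-simulation from the nearest stored restart.
    Bit-reproducible runs are assumed, so recomputed data match the
    original output exactly."""
    def __init__(self, cache_dir, restart_every, resimulate):
        self.cache_dir = cache_dir
        self.restart_every = restart_every   # spacing of stored restart files
        self.resimulate = resimulate         # f(restart_step, step) -> {var: array}
        os.makedirs(cache_dir, exist_ok=True)

    def _path(self, var, step):
        return os.path.join(self.cache_dir, f"{var}_{step:08d}.pkl")

    def get(self, var, step):
        path = self._path(var, step)
        if os.path.exists(path):                     # cache hit
            with open(path, "rb") as f:
                return pickle.load(f)
        restart = (step // self.restart_every) * self.restart_every
        fields = self.resimulate(restart, step)      # cache miss: re-run model
        for name, data in fields.items():            # cache everything produced
            with open(self._path(name, step), "wb") as f:
                pickle.dump(data, f)
        return fields[var]
```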

  4. The Contribution of Human Factors in Military System Development: Methodological Considerations

    DTIC Science & Technology

    1980-07-01

    Indexed topics: risk/uncertainty analysis; project scoring; utility scales; relevance tree techniques (reverse factor analysis); computer simulation; effectiveness of mathematical models for R&D project selection; proficiency test scores; radiation effects on aircrew performance; reaction time.

  5. Multiple shooting shadowing for sensitivity analysis of chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick J.; Wang, Qiqi

    2018-02-01

    Sensitivity analysis methods are important tools for research and design with simulations. Many important simulations exhibit chaotic dynamics, including scale-resolving turbulent fluid flow simulations. Unfortunately, conventional sensitivity analysis methods are unable to compute useful gradient information for long-time-averaged quantities in chaotic dynamical systems. Sensitivity analysis with least squares shadowing (LSS) can compute useful gradient information for a number of chaotic systems, including simulations of chaotic vortex shedding and homogeneous isotropic turbulence. However, this gradient information comes at a very high computational cost. This paper presents multiple shooting shadowing (MSS), a more computationally efficient shadowing approach than the original LSS approach. Through an analysis of the convergence rate of MSS, it is shown that MSS can have lower memory usage and run time than LSS.

  6. Parameter Uncertainty Analysis Using Monte Carlo Simulations for a Regional-Scale Groundwater Model

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Pohlmann, K.

    2016-12-01

    Regional-scale grid-based groundwater models for flow and transport often contain multiple types of parameters that can intensify the challenge of parameter uncertainty analysis. We propose a Monte Carlo approach to systematically quantify the influence of various types of model parameters on groundwater flux and contaminant travel times. The Monte Carlo simulations were conducted based on the steady-state conversion of the original transient model, which was then combined with the PEST sensitivity analysis tool SENSAN and particle tracking software MODPATH. Results identified hydrogeologic units whose hydraulic conductivity can significantly affect groundwater flux, and thirteen out of 173 model parameters that can cause large variation in travel times for contaminant particles originating from given source zones.
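
    The core Monte Carlo loop described here is simple to sketch: draw hydraulic conductivities for each hydrogeologic unit from assumed distributions, run the flow-and-particle-tracking model, and summarize the spread of travel times. In the sketch, `run_model` stands in for the actual flow/particle-tracking tool chain, and the unit names and log-conductivity statistics are hypothetical.

```python
import numpy as np

def monte_carlo_travel_times(run_model, n_real=1000, seed=1):
    """Propagate hydraulic-conductivity uncertainty to particle travel times."""
    rng = np.random.default_rng(seed)
    log10_k_mean = {"alluvium": -3.0, "tuff": -6.0, "carbonate": -4.5}
    log10_k_sd = 0.5
    times = np.empty(n_real)
    for i in range(n_real):
        # One realization: lognormal conductivity per hydrogeologic unit.
        k = {u: 10.0 ** rng.normal(m, log10_k_sd) for u, m in log10_k_mean.items()}
        times[i] = run_model(k)          # returns a particle travel time [yr]
    return {p: float(np.percentile(times, p)) for p in (5, 50, 95)}
```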

  7. Filtering analysis of a direct numerical simulation of the turbulent Rayleigh-Benard problem

    NASA Technical Reports Server (NTRS)

    Eidson, T. M.; Hussaini, M. Y.; Zang, T. A.

    1990-01-01

    A filtering analysis of a turbulent flow was developed which provides details of the path of the kinetic energy of the flow from its creation via thermal production to its dissipation. A low-pass spatial filter is used to split the velocity and the temperature field into a filtered component (composed mainly of scales larger than a specific size, nominally the filter width) and a fluctuation component (scales smaller than a specific size). Variables derived from these fields can fall into one of the above two ranges or be composed of a mixture of scales dominated by scales near the specific size. The filter is used to split the kinetic energy equation into three equations corresponding to the three scale ranges described above. The data from a direct simulation of the Rayleigh-Benard problem for conditions where the flow is turbulent are used to calculate the individual terms in the three kinetic energy equations. This is done for a range of filter widths. These results are used to study the spatial location and the scale range of the thermal energy production, the cascading of kinetic energy, the diffusion of kinetic energy, and the energy dissipation. These results are used to evaluate two subgrid models typically used in large-eddy simulations of turbulence. Subgrid models attempt to model the energy below the filter width that is removed by a low-pass filter.
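
    The basic operation in this analysis, splitting a field into filtered and fluctuation components, can be reproduced with a sharp spectral low-pass filter, sketched below for a periodic field (the study's filter need not be sharp; a Gaussian or box filter works the same way). The decomposition satisfies u = u_bar + u_prime by construction.

```python
import numpy as np

def filter_split(u, filter_width, dx=1.0):
    """Split a periodic field into large-scale (filtered) and subfilter parts."""
    k = [2 * np.pi * np.fft.fftfreq(n, d=dx) for n in u.shape]
    kmag = np.sqrt(sum(ki**2 for ki in np.meshgrid(*k, indexing="ij")))
    kc = np.pi / filter_width                     # cutoff wavenumber
    u_hat = np.fft.fftn(u)
    u_bar = np.real(np.fft.ifftn(np.where(kmag <= kc, u_hat, 0.0)))
    return u_bar, u - u_bar

# Kinetic-energy content of each range for one velocity component u:
# u_bar, u_p = filter_split(u, filter_width=8 * dx)
# E_large, E_small = 0.5 * np.mean(u_bar**2), 0.5 * np.mean(u_p**2)
```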

  8. Regional climates in the GISS global circulation model - Synoptic-scale circulation

    NASA Technical Reports Server (NTRS)

    Hewitson, B.; Crane, R. G.

    1992-01-01

    A major weakness of current general circulation models (GCMs) is their perceived inability to predict reliably the regional consequences of a global-scale change, and it is these regional-scale predictions that are necessary for studies of human-environmental response. For large areas of the extratropics, the local climate is controlled by the synoptic-scale atmospheric circulation, and it is the purpose of this paper to evaluate the synoptic-scale circulation of the Goddard Institute for Space Studies (GISS) GCM. A methodology for validating the daily synoptic circulation using Principal Component Analysis is described, and the methodology is then applied to the GCM simulation of sea level pressure over the continental United States (excluding Alaska). The analysis demonstrates that the GISS 4 x 5 deg GCM Model II effectively simulates the synoptic-scale atmospheric circulation over the United States. The modes of variance describing the atmospheric circulation of the model are comparable to those found in the observed data, and these modes explain similar amounts of variance in their respective datasets. The temporal behavior of these circulation modes in the synoptic time frame is also comparable.
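
    The validation methodology reduces to a standard empirical orthogonal function analysis: compute the leading spatial modes of daily sea-level-pressure anomalies and compare the modes and their explained variances between the GCM and observations. A minimal sketch, assuming `slp` arrives as a (days x gridpoints) matrix:

```python
import numpy as np

def circulation_modes(slp, n_modes=5):
    """PCA of daily sea-level-pressure maps via SVD of the anomaly matrix."""
    anom = slp - slp.mean(axis=0)             # remove the time-mean field
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    var_frac = s**2 / np.sum(s**2)            # explained-variance fractions
    modes = vt[:n_modes]                      # spatial patterns (EOFs)
    scores = u[:, :n_modes] * s[:n_modes]     # daily amplitudes (PCs)
    return modes, var_frac[:n_modes], scores

# Compare modes/variances from model and observed SLP on the same grid:
# m_gcm, v_gcm, _ = circulation_modes(slp_gcm)
# m_obs, v_obs, _ = circulation_modes(slp_obs)
```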

  9. Simulating Nationwide Pandemics: Applying the Multi-scale Epidemiologic Simulation and Analysis System to Human Infectious Diseases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dombroski, M; Melius, C; Edmunds, T

    2008-09-24

    This study uses the Multi-scale Epidemiologic Simulation and Analysis (MESA) system developed for foreign animal diseases to assess consequences of nationwide human infectious disease outbreaks. A literature review identified the state of the art in both small-scale regional models and large-scale nationwide models and characterized key aspects of a nationwide epidemiological model. The MESA system offers computational advantages over existing epidemiological models and enables a broader array of stochastic analyses of model runs to be conducted because of those computational advantages. However, it has only been demonstrated on foreign animal diseases. This paper applied the MESA modeling methodology to human epidemiology. The methodology divided 2000 US Census data at the census tract level into school-bound children, work-bound workers, elderly, and stay-at-home individuals. The model simulated mixing among these groups by incorporating schools, workplaces, households, and long-distance travel via airports. A baseline scenario with fixed input parameters was run for a nationwide influenza outbreak using relatively simple social distancing countermeasures. Analysis from the baseline scenario showed one of three possible results: (1) the outbreak burned itself out before it had a chance to spread regionally, (2) the outbreak spread regionally and lasted a relatively long time, although constrained geography enabled it to eventually be contained without affecting a disproportionately large number of people, or (3) the outbreak spread through air travel and lasted a long time with unconstrained geography, becoming a nationwide pandemic. These results are consistent with empirical influenza outbreak data. The results showed that simply scaling up a regional small-scale model is unlikely to account for all the complex variables and their interactions involved in a nationwide outbreak. There are several limitations of the methodology that should be explored in future work, including validating the model against reliable historical disease data; improving contact rates, spread methods, and disease parameters through discussions with epidemiological experts; and incorporating realistic behavioral assumptions.
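
    The population-mixing structure described here can be caricatured as a stochastic metapopulation SIR model in which a single matrix stands in for school, workplace, household, and airport mixing. The sketch below is a toy illustration of that structure, not the MESA system; `beta`, `gamma`, and the mixing matrix are placeholders.

```python
import numpy as np

def metapop_sir(pop, mix, beta=0.3, gamma=0.2, i0=10, days=300, seed=0):
    """Toy stochastic metapopulation SIR. Row i of `mix` gives the daily
    contact-relevant visiting probabilities from patch i to each patch
    (schools, workplaces, households, airports collapsed into one matrix)."""
    rng = np.random.default_rng(seed)
    n = len(pop)
    S = pop.astype(float).copy()
    I, R = np.zeros(n), np.zeros(n)
    S[0] -= i0; I[0] += i0                       # seed the outbreak in patch 0
    for _ in range(days):
        prev = mix @ (I / pop)                   # prevalence each patch is exposed to
        new_inf = rng.binomial(S.astype(int), 1.0 - np.exp(-beta * prev))
        new_rec = rng.binomial(I.astype(int), 1.0 - np.exp(-gamma))
        S -= new_inf; I += new_inf - new_rec; R += new_rec
    return R.sum() / pop.sum()                   # final attack rate
```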

  10. Scaling of the velocity fluctuations in turbulent channels up to Reτ=2003

    NASA Astrophysics Data System (ADS)

    Hoyas, Sergio; Jiménez, Javier

    2006-01-01

    A new numerical simulation of a turbulent channel in a large box at Reτ=2003 is described and briefly compared with simulations at lower Reynolds numbers and with experiments. Some of the fluctuation intensities, especially the streamwise velocity, do not scale well in wall units, both near and away from the wall. Spectral analysis traces the near-wall scaling failure to the interaction of the logarithmic layer with the wall. The present statistics can be downloaded from http://torroja.dmt.upm.es/ftp/channels. Further ones will be added to the site as they become available.

  11. Results of Small-scale Solid Rocket Combustion Simulator testing at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Goldberg, Benjamin E.; Cook, Jerry

    1993-01-01

    The Small-scale Solid Rocket Combustion Simulator (SSRCS) program was established at the Marshall Space Flight Center (MSFC) and used a government/industry team consisting of Hercules Aerospace Corporation, Aerotherm Corporation, United Technology Chemical Systems Division, Thiokol Corporation, and MSFC personnel to study the feasibility of simulating the combustion species, temperatures, and flow fields of a conventional solid rocket motor (SRM) with a versatile simulator system. The SSRCS design is based on hybrid rocket motor principles: the simulator uses a solid fuel and a gaseous oxidizer. Verification of the feasibility of an SSRCS system as a test bed was completed using flow field and system analyses, as well as empirical test data. A total of 27 hot firings of a subscale SSRCS motor were conducted at MSFC, and testing under the SSRCS program was completed in October 1992. This paper, a compilation of reports from the team members above and additional analysis of the instrumentation results, discusses the final results of the analyses and test programs.

  12. Torsional Oscillations in a Global Solar Dynamo

    NASA Astrophysics Data System (ADS)

    Beaudoin, P.; Charbonneau, P.; Racine, E.; Smolarkiewicz, P. K.

    2013-02-01

    We characterize and analyze rotational torsional oscillations developing in a large-eddy magnetohydrodynamical simulation of solar convection (Ghizaru, Charbonneau, and Smolarkiewicz, Astrophys. J. Lett. 715, L133, 2010; Racine et al., Astrophys. J. 735, 46, 2011) producing an axisymmetric, large-scale, magnetic field undergoing periodic polarity reversals. Motivated by the many solar-like features exhibited by these oscillations, we carry out an analysis of the large-scale zonal dynamics. We demonstrate that simulated torsional oscillations are not driven primarily by the periodically varying large-scale magnetic torque, as one might have expected, but rather via the magnetic modulation of angular-momentum transport by the large-scale meridional flow. This result is confirmed by a straightforward energy analysis. We also detect a fairly sharp transition in rotational dynamics taking place as one moves from the base of the convecting layers to the base of the thin tachocline-like shear layer formed in the stably stratified fluid layers immediately below. We conclude by discussing the implications of our analyses with regard to the mechanism of amplitude saturation in the global dynamo operating in the simulation, and speculate on the possible precursor value of torsional oscillations for the forecast of solar-cycle characteristics.

  13. Searching for the right scale in catchment hydrology: the effect of soil spatial variability in simulated states and fluxes

    NASA Astrophysics Data System (ADS)

    Baroni, Gabriele; Zink, Matthias; Kumar, Rohini; Samaniego, Luis; Attinger, Sabine

    2017-04-01

    The advances in computer science and the availability of new detailed datasets have led to a growing number of distributed hydrological models applied at finer and finer grid resolutions over larger and larger catchment areas. It has been argued, however, that this trend does not necessarily guarantee a better understanding of the hydrological processes, nor is it always necessary for specific modelling applications. In the present study, this topic is further discussed in relation to soil spatial heterogeneity and its effect on simulated hydrological states and fluxes. To this end, three methods are developed and used for the characterization of the soil heterogeneity at different spatial scales. The methods are applied to the soil map of the upper Neckar catchment (Germany) as an example. The different soil realizations are assessed regarding their impact on simulated states and fluxes using the distributed hydrological model mHM. The results are analysed by aggregating the model outputs at different spatial scales based on the Representative Elementary Scale (RES) concept proposed by Refsgaard et al. (2016). The analysis is further extended in the present study by aggregating the model output also at different temporal scales. The results show that small-scale soil variabilities are not relevant when integrated hydrological responses are considered, e.g., simulated streamflow or average soil moisture over sub-catchments. On the contrary, these small-scale soil variabilities strongly affect locally simulated states and fluxes, i.e., soil moisture and evapotranspiration simulated at the grid resolution. A clear trade-off is also detected by aggregating the model output at spatial and temporal scales. Although the scale at which the soil variabilities are (or are not) relevant is not universal, the RES concept provides a simple and effective framework to quantify the predictive capability of distributed models and to identify the need for further model improvements, e.g., finer-resolution input. For this reason, the integration in this analysis of all the relevant input factors (e.g., precipitation, vegetation, geology) could provide strong support for the definition of the right scale for each specific model application. In this context, however, the main challenge for a proper model assessment will be the correct characterization of the spatio-temporal variability of each input factor. Refsgaard, J.C., Højberg, A.L., He, X., Hansen, A.L., Rasmussen, S.H., Stisen, S., 2016. Where are the limits of model predictive capabilities?: Representative Elementary Scale - RES. Hydrol. Process. doi:10.1002/hyp.11029
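
    The RES-style aggregation used here is easy to reproduce: average two model outputs over increasingly large blocks and watch how the difference between them decays with aggregation scale; the scale at which it drops below an acceptance threshold is the representative elementary scale for that output. A minimal sketch for 2-D fields:

```python
import numpy as np

def res_curve(field_a, field_b, block_sizes=(1, 2, 4, 8, 16, 32)):
    """RMS difference between two gridded outputs at several aggregation scales."""
    out = {}
    for b in block_sizes:
        ny, nx = (field_a.shape[0] // b) * b, (field_a.shape[1] // b) * b
        def agg(f, b=b, ny=ny, nx=nx):
            # Block-average the field over b x b cells.
            return f[:ny, :nx].reshape(ny // b, b, nx // b, b).mean(axis=(1, 3))
        out[b] = float(np.sqrt(np.mean((agg(field_a) - agg(field_b)) ** 2)))
    return out  # e.g. {1: 0.08, 2: 0.07, ..., 32: 0.01}
```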

  14. Hybrid Parallelization of Adaptive MHD-Kinetic Module in Multi-Scale Fluid-Kinetic Simulation Suite

    DOE PAGES

    Borovikov, Sergey; Heerikhuisen, Jacob; Pogorelov, Nikolai

    2013-04-01

    The Multi-Scale Fluid-Kinetic Simulation Suite has a computational tool set for solving partially ionized flows. In this paper we focus on recent developments of the kinetic module which solves the Boltzmann equation using the Monte-Carlo method. The module has been recently redesigned to utilize intra-node hybrid parallelization. We describe in detail the redesign process, implementation issues, and modifications made to the code. Finally, we conduct a performance analysis.

  15. A behavioral-level HDL description of SFQ logic circuits for quantitative performance analysis of large-scale SFQ digital systems

    NASA Astrophysics Data System (ADS)

    Matsuzaki, F.; Yoshikawa, N.; Tanaka, M.; Fujimaki, A.; Takai, Y.

    2003-10-01

    Recently, many single flux quantum (SFQ) logic circuits containing several thousand Josephson junctions have been designed successfully by using digital-domain simulation based on a hardware description language (HDL). In the present HDL-based design of SFQ circuits, a structure-level HDL description has been used, where circuits are built up from basic gate cells. However, in order to analyze large-scale SFQ digital systems, such as a microprocessor, a higher level of circuit abstraction is necessary to reduce the circuit simulation time. In this paper we investigate how to describe the functionality of large-scale SFQ digital circuits with a behavioral-level HDL description. In this method, the functionality and the timing of a circuit block are defined directly by describing its behavior in the HDL. Using this method, we can dramatically reduce the simulation time of large-scale SFQ digital circuits.
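
    To make the structural-versus-behavioral distinction concrete, the toy model below describes an SFQ D flip-flop purely behaviorally: pulses are timed events and the gate is a small state machine, with no Josephson-junction-level detail. This is a schematic illustration in Python rather than an HDL, and the 5 ps delay is an arbitrary placeholder.

```python
def simulate_sfq_dff(events, delay_ps=5):
    """Behavioral model of an SFQ D flip-flop: a 'data' pulse stores one
    flux quantum; the next 'clock' pulse releases it as an output pulse
    after a fixed delay. Events are (time_ps, kind) tuples."""
    stored, out = False, []
    for t, kind in sorted(events):
        if kind == "data":
            stored = True                # flux quantum trapped in the loop
        elif kind == "clock" and stored:
            out.append(t + delay_ps)     # one output pulse per stored quantum
            stored = False
    return out

# simulate_sfq_dff([(0, "data"), (10, "clock"), (20, "clock")]) -> [15]
```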

  16. 3D PIC SIMULATIONS OF COLLISIONLESS SHOCKS AT LUNAR MAGNETIC ANOMALIES AND THEIR ROLE IN FORMING LUNAR SWIRLS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bamford, R. A.; Kellett, B. J.; Alves, E. P.

    Investigation of the lunar crustal magnetic anomalies offers a comprehensive long-term data set of observations of small-scale magnetic fields and their interaction with the solar wind. In this paper a review of the observations of lunar mini-magnetospheres is compared quantitatively with theoretical kinetic-scale plasma physics and 3D particle-in-cell simulations. The aim of this paper is to provide a complete picture of all the aspects of the phenomena and to show how the observations from all the different international missions interrelate. The analysis shows that the simulations are consistent with the formation of miniature (smaller than the ion Larmor orbit) collisionless shocks and miniature magnetospheric cavities, which has not been demonstrated previously. The simulations reproduce the finesse and form of the differential proton patterns that are believed to be responsible for the creation of both the “lunar swirls” and “dark lanes.” Using a mature plasma physics code like OSIRIS allows us, for the first time, to make a side-by-side comparison between model and space observations. This is shown for all of the key plasma parameters observed to date by spacecraft, including the spectral imaging data of the lunar swirls. The analysis of miniature magnetic structures offers insight into multi-scale mechanisms and kinetic-scale aspects of planetary magnetospheres.

  17. Large-scale drivers of local precipitation extremes in convection-permitting climate simulations

    NASA Astrophysics Data System (ADS)

    Chan, Steven C.; Kendon, Elizabeth J.; Roberts, Nigel M.; Fowler, Hayley J.; Blenkinsop, Stephen

    2016-04-01

    The Met Office 1.5-km UKV convection-permitting model (CPM) is used to downscale present-climate and RCP8.5 60-km HadGEM3 GCM simulations. Extreme UK hourly precipitation intensities increase with local near-surface temperatures and humidity; for temperature, the simulated increase rate in the present-climate simulation is about 6.5% K^-1, which is consistent with observations and theoretical expectations. While extreme intensities are higher in the RCP8.5 simulation as higher temperatures are sampled, there is a decline at the highest temperatures due to circulation and relative humidity changes. Extending the analysis to the broader synoptic scale, it is found that circulation patterns, as diagnosed by MSLP or circulation type, play an increased role in the probability of extreme precipitation in the RCP8.5 simulation. Nevertheless, for both CPM simulations, vertical instability is the principal driver of extreme precipitation.
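
    The temperature scaling quoted here is typically estimated by binning wet-hour precipitation by coincident near-surface temperature, taking a high percentile in each bin, and fitting an exponential rate. A minimal sketch; the bin width, wet-hour threshold, and minimum sample count are illustrative choices:

```python
import numpy as np

def precip_temp_scaling(temp_c, precip_mmh, pctl=99.0, bin_w=1.0, min_n=100):
    """Fit the exponential rate (% per K) of extreme precipitation with
    temperature; Clausius-Clapeyron scaling is ~6-7% per K."""
    wet = precip_mmh > 0.1                        # wet hours only
    t, p = temp_c[wet], precip_mmh[wet]
    centers, extremes = [], []
    for lo in np.arange(t.min(), t.max(), bin_w):
        sel = (t >= lo) & (t < lo + bin_w)
        if sel.sum() >= min_n:                    # require enough wet hours per bin
            centers.append(lo + bin_w / 2)
            extremes.append(np.percentile(p[sel], pctl))
    slope, _ = np.polyfit(centers, np.log(extremes), 1)
    return 100.0 * (np.exp(slope) - 1.0)
```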

  18. Assessing Performance in Shoulder Arthroscopy: The Imperial Global Arthroscopy Rating Scale (IGARS).

    PubMed

    Bayona, Sofia; Akhtar, Kash; Gupte, Chinmay; Emery, Roger J H; Dodds, Alexander L; Bello, Fernando

    2014-07-02

    Surgical training is undergoing major changes with reduced resident work hours and an increasing focus on patient safety and surgical aptitude. The aim of this study was to create a valid, reliable method for an assessment of arthroscopic skills that is independent of time and place and is designed for both real and simulated settings. The validity of the scale was tested using a virtual reality shoulder arthroscopy simulator. The study consisted of two parts. In the first part, an Imperial Global Arthroscopy Rating Scale for assessing technical performance was developed using a Delphi method. Application of this scale required installing a dual-camera system to synchronously record the simulator screen and body movements of trainees to allow an assessment that is independent of time and place. The scale includes aspects such as efficient portal positioning, angles of instrument insertion, proficiency in handling the arthroscope and adequately manipulating the camera, and triangulation skills. In the second part of the study, a validation study was conducted. Two experienced arthroscopic surgeons, blinded to the identities and experience of the participants, each assessed forty-nine subjects performing three different tests using the Imperial Global Arthroscopy Rating Scale. Results were analyzed using two-way analysis of variance with measures of absolute agreement. The intraclass correlation coefficient was calculated for each test to assess inter-rater reliability. The scale demonstrated high internal consistency (Cronbach alpha, 0.918). The intraclass correlation coefficient demonstrated high agreement between the assessors: 0.91 (p < 0.001). Construct validity was evaluated using Kruskal-Wallis one-way analysis of variance (chi-square test, 29.826; p < 0.001), demonstrating that the Imperial Global Arthroscopy Rating Scale distinguishes significantly between subjects with different levels of experience utilizing a virtual reality simulator. The Imperial Global Arthroscopy Rating Scale has a high internal consistency and excellent inter-rater reliability and offers an approach for assessing technical performance in basic arthroscopy on a virtual reality simulator. The Imperial Global Arthroscopy Rating Scale provides detailed information on surgical skills. Although it requires further validation in the operating room, this scale, which is independent of time and place, offers a robust and reliable method for assessing arthroscopic technical skills.

  19. AQMEII3 evaluation of regional NA/EU simulations and ...

    EPA Pesticide Factsheets

    Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) helping to detect causes of model error, and iii) identifying the processes and scales most urgently requiring dedicated investigations. The analysis is conducted within the framework of the third phase of the Air Quality Model Evaluation International Initiative (AQMEII) and tackles model performance gauging through measurement-to-model comparison, error decomposition, and time series analysis of the model biases for several fields (ozone, CO, SO2, NO, NO2, PM10, PM2.5, wind speed, and temperature). The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while apportioning the error to its constituent parts (bias, variance and covariance) can help to assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the former phases of AQMEII. The application of the error apportionment method to the AQMEII Phase 3 simulations provides several key insights. In addition to reaffirming the strong impac
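
    The error-apportionment machinery referenced here combines two standard decompositions: a Kolmogorov-Zurbenko (KZ) filter splits an hourly series into long-term, synoptic, diurnal, and intra-day components, and the mean squared error splits into bias, variance, and covariance terms. A compact sketch follows; the KZ window lengths are illustrative choices, not the exact AQMEII settings.

```python
import numpy as np

def kz(x, m, k):
    """Kolmogorov-Zurbenko filter: k passes of a centered m-point moving average."""
    y = np.asarray(x, dtype=float)
    for _ in range(k):
        y = np.convolve(y, np.ones(m) / m, mode="same")
    return y

def timescale_components(x):
    """Split an hourly series into long-term, synoptic, diurnal, intra-day parts."""
    lt = kz(x, 103, 5)                    # periods longer than ~10 days
    sy = kz(x, 13, 5) - lt                # ~1.2-10 days
    dn = kz(x, 3, 3) - kz(x, 13, 5)       # diurnal band (~5-29 h)
    return {"LT": lt, "SY": sy, "DN": dn, "ID": x - kz(x, 3, 3)}

def mse_parts(model, obs):
    """Bias, variance, and covariance shares of the mean squared error."""
    bias2 = (model.mean() - obs.mean()) ** 2
    var = (model.std() - obs.std()) ** 2
    cov = 2 * model.std() * obs.std() * (1 - np.corrcoef(model, obs)[0, 1])
    return bias2, var, cov
```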

  20. The German VR Simulation Realism Scale--psychometric construction for virtual reality applications with virtual humans.

    PubMed

    Poeschl, Sandra; Doering, Nicola

    2013-01-01

    Virtual training applications with high levels of immersion or fidelity (for example, for social phobia treatment) produce high levels of presence and therefore belong to the most successful Virtual Reality developments. Whereas display and interaction fidelity (as sub-dimensions of immersion) and their influence on presence are well researched, realism of the displayed simulation depends on the specific application and is therefore difficult to measure. We propose to measure simulation realism by using a self-report questionnaire. The German VR Simulation Realism Scale for VR training applications was developed based on a translation of scene realism items from the Witmer-Singer Presence Questionnaire. Items for realism of virtual humans (for example, for social phobia training applications) were supplemented. A sample of N = 151 students rated the simulation realism of a Fear of Public Speaking application. Four factors were derived by item analysis and principal component analysis (Varimax rotation), representing Scene Realism, Audience Behavior, Audience Appearance, and Sound Realism. The scale developed can be used as a starting point for future research and measurement of simulation realism for applications including virtual humans.

  1. High-resolution simulation of deep pencil beam surveys - analysis of quasi-periodicity

    NASA Astrophysics Data System (ADS)

    Weiss, A. G.; Buchert, T.

    1993-07-01

    We carry out pencil beam constructions in a high-resolution simulation of the large-scale structure of galaxies. The initial density fluctuations are taken to have a truncated power spectrum. All the models have Ω = 1. As an example we present the results for the case of "Hot Dark Matter" (HDM) initial conditions with a scale-free n = 1 power index on large scales as a representative of models with sufficient large-scale power. We use an analytic approximation for particle trajectories of a self-gravitating dust continuum and apply a local dynamical biasing of volume elements to identify luminous matter in the model. Using this method, we are able to formally resolve a simulation box of 1200 h^-1 Mpc (e.g., for HDM initial conditions) down to the scale of galactic halos using 2160^3 particles. We consider this the minimal resolution necessary for a sensible simulation of deep pencil beam data. Pencil beam probes are taken for a given epoch using the parameters of observed beams. In particular, our analysis concentrates on the detection of a quasi-periodicity in the beam probes using several different methods. The resulting beam ensembles are analyzed statistically using number distributions, pair-count histograms, unnormalized pair-counts, power spectrum analysis, and trial-period folding. Periodicities are classified according to their significance level in the power spectrum of the beams. The simulation is designed for application to parameter studies which prepare future observational projects. We find that a large percentage of the beams show quasi-periodicities with periods which cluster at a certain length scale. The periods found range between one and eight times the cutoff length in the initial fluctuation spectrum. At significance levels similar to those of the data of Broadhurst et al. (1990), we find about 15% of the pencil beams to show periodicities, about 30% of which are around the mean separation of rich clusters, while the distribution of scales reaches values of more than 200 h^-1 Mpc. The detection of periodicities larger than the typical void size need not be due to missing "walls" (like the so-called "Great Wall" seen in the CfA catalogue of galaxies), but can be due to different clustering properties of galaxies along the beams.
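
    Of the periodicity tests listed, the power-spectrum method is the most compact to sketch: bin galaxy positions along the beam, Fourier transform the count fluctuations, and flag peaks that stand well above the mean power. The bin count and significance threshold below are illustrative.

```python
import numpy as np

def beam_periodicity(distances, depth, nbins=512, nsig=3.0):
    """Flag quasi-periodic peaks in the 1-D power spectrum of galaxy
    counts binned along a pencil beam of the given depth [Mpc/h]."""
    counts, _ = np.histogram(distances, bins=nbins, range=(0.0, depth))
    delta = counts - counts.mean()
    power = np.abs(np.fft.rfft(delta)) ** 2
    freqs = np.fft.rfftfreq(nbins, d=depth / nbins)   # cycles per (Mpc/h)
    mean_p = power[1:].mean()
    return [(1.0 / f, p / mean_p)                     # (period, relative power)
            for f, p in zip(freqs[1:], power[1:]) if p > nsig * mean_p]
```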

  2. Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware.

    PubMed

    Rast, Alexander; Galluppi, Francesco; Davies, Sergio; Plana, Luis; Patterson, Cameron; Sharp, Thomas; Lester, David; Furber, Steve

    2011-11-01

    Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience.

  3. WESTPA: An interoperable, highly scalable software package for weighted ensemble simulation and analysis

    PubMed Central

    Zwier, Matthew C.; Adelman, Joshua L.; Kaus, Joseph W.; Pratt, Adam J.; Wong, Kim F.; Rego, Nicholas B.; Suárez, Ernesto; Lettieri, Steven; Wang, David W.; Grabe, Michael; Zuckerman, Daniel M.; Chong, Lillian T.

    2015-01-01

    The weighted ensemble (WE) path sampling approach orchestrates an ensemble of parallel calculations with intermittent communication to enhance the sampling of rare events, such as molecular associations or conformational changes in proteins or peptides. Trajectories are replicated and pruned in a way that focuses computational effort on under-explored regions of configuration space while maintaining rigorous kinetics. To enable the simulation of rare events at any scale (e.g. atomistic, cellular), we have developed an open-source, interoperable, and highly scalable software package for the execution and analysis of WE simulations: WESTPA (The Weighted Ensemble Simulation Toolkit with Parallelization and Analysis). WESTPA scales to thousands of CPU cores and includes a suite of analysis tools that have been implemented in a massively parallel fashion. The software has been designed to interface conveniently with any dynamics engine and has already been used with a variety of molecular dynamics (e.g. GROMACS, NAMD, OpenMM, AMBER) and cell-modeling packages (e.g. BioNetGen, MCell). WESTPA has been in production use for over a year, and its utility has been demonstrated for a broad set of problems, ranging from atomically detailed host-guest associations to non-spatial chemical kinetics of cellular signaling networks. The following describes the design and features of WESTPA, including the facilities it provides for running WE simulations, storing and analyzing WE simulation data, as well as examples of input and output. PMID:26392815
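
    The replication-and-pruning step at the core of the weighted ensemble method can be sketched independently of any dynamics engine: within each progress-coordinate bin, heavy walkers are split and light ones merged so that each bin holds a target number of walkers while total weight is conserved. A minimal sketch, not WESTPA's implementation; the bin edges and per-bin target are illustrative.

```python
import numpy as np

def we_resample(states, weights, pcoords, bin_edges, target_per_bin=4, seed=0):
    """One weighted-ensemble resampling step over progress-coordinate bins.
    Splitting halves a walker's weight; merging keeps one walker with the
    combined weight, chosen with probability proportional to its weight,
    so total weight (and hence the kinetics) stays unbiased."""
    rng = np.random.default_rng(seed)
    bins = np.digitize(pcoords, bin_edges)
    new_states, new_weights = [], []
    for b in np.unique(bins):
        idx = np.where(bins == b)[0]
        x = [states[i] for i in idx]
        w = [weights[i] for i in idx]
        while len(w) < target_per_bin:            # split the heaviest walker
            j = int(np.argmax(w))
            w[j] /= 2.0
            w.append(w[j]); x.append(x[j])
        while len(w) > target_per_bin:            # merge the two lightest
            j, k = map(int, np.argsort(w)[:2])
            keep, drop = (j, k) if rng.random() < w[j] / (w[j] + w[k]) else (k, j)
            w[keep] = w[j] + w[k]                 # conserve total weight
            del w[drop], x[drop]
        new_states += x; new_weights += w
    return new_states, new_weights
```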

  4. Investigating the dependence of SCM simulated precipitation and clouds on the spatial scale of large-scale forcing at SGP

    DOE PAGES

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2017-08-05

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors in these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM-simulated precipitation and clouds. A gridded large-scale forcing dataset from the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance of capturing the timing of the frontal propagation and the small-scale systems. As a result, other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.

  5. Using multi-scale entropy and principal component analysis to monitor gears degradation via the motor current signature analysis

    NASA Astrophysics Data System (ADS)

    Aouabdi, Salim; Taibi, Mahmoud; Bouras, Slimane; Boutasseta, Nadir

    2017-06-01

    This paper describes an approach for identifying localized gear tooth defects, such as pitting, using phase currents measured from an induction machine driving the gearbox. A new anomaly detection tool is based on the multi-scale entropy (MSE) algorithm SampEn, which allows correlations in signals to be identified over multiple time scales. Motor current signature analysis (MCSA) is used in conjunction with principal component analysis (PCA) and the comparison of observed values with those predicted from a model built using nominally healthy data. Simulation results show that the proposed method is able to detect gear tooth pitting in current signals.
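
    Multi-scale entropy itself is compact: coarse-grain the signal at successive scales and compute the sample entropy (SampEn) of each coarse-grained series; healthy and faulty conditions then separate by how entropy varies across scales. A compact O(N^2) reference sketch; the template length `m` and tolerance fraction `r_frac` are conventional defaults.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn: -log of the conditional probability that sequences matching
    for m points (within tolerance r) also match for m+1 points."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def match_count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        return (np.sum(d <= r) - len(templ)) / 2        # exclude self-matches
    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, max_scale=10):
    """Coarse-grain the signal at each scale, then compute SampEn."""
    out = []
    for s in range(1, max_scale + 1):
        n = (len(x) // s) * s
        coarse = np.asarray(x[:n]).reshape(-1, s).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out
```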

  6. Methods for Improving Fine-Scale Applications of the WRF-CMAQ Modeling System

    EPA Science Inventory

    Presentation on the work in AMAD to improve fine-scale (e.g. 4km and 1km) WRF-CMAQ simulations. Includes iterative analysis, updated sea surface temperature and snow cover fields, and inclusion of impervious surface information (urban parameterization).

  7. A statistical analysis of the elastic distortion and dislocation density fields in deformed crystals

    DOE PAGES

    Mohamed, Mamdouh S.; Larson, Bennett C.; Tischler, Jonathan Z.; ...

    2015-05-18

    The statistical properties of the elastic distortion fields of dislocations in deforming crystals are investigated using the method of discrete dislocation dynamics to simulate dislocation structures and dislocation density evolution under tensile loading. Probability distribution functions (PDF) and pair correlation functions (PCF) of the simulated internal elastic strains and lattice rotations are generated for tensile strain levels up to 0.85%. The PDFs of simulated lattice rotation are compared with sub-micrometer resolution three-dimensional X-ray microscopy measurements of rotation magnitudes and deformation length scales in 1.0% and 2.3% compression strained Cu single crystals to explore the linkage between experiment and the theoretical analysis. The statistical properties of the deformation simulations are analyzed through determinations of the Nye and Kröner dislocation density tensors. The significance of the magnitudes and the length scales of the elastic strain and the rotation parts of dislocation density tensors are demonstrated, and their relevance to understanding the fundamental aspects of deformation is discussed.

  8. The Numerical Propulsion System Simulation: An Overview

    NASA Technical Reports Server (NTRS)

    Lytle, John K.

    2000-01-01

    Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, data bases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.

  9. Interstitial and Interlayer Ion Diffusion Geometry Extraction in Graphitic Nanosphere Battery Materials.

    PubMed

    Gyulassy, Attila; Knoll, Aaron; Lau, Kah Chun; Wang, Bei; Bremer, Peer-Timo; Papka, Michael E; Curtiss, Larry A; Pascucci, Valerio

    2016-01-01

    Large-scale molecular dynamics (MD) simulations are commonly used for simulating the synthesis and ion diffusion of battery materials. A good battery anode material is determined by its capacity to store ion or other diffusers. However, modeling of ion diffusion dynamics and transport properties at large length and long time scales would be impossible with current MD codes. To analyze the fundamental properties of these materials, therefore, we turn to geometric and topological analysis of their structure. In this paper, we apply a novel technique inspired by discrete Morse theory to the Delaunay triangulation of the simulated geometry of a thermally annealed carbon nanosphere. We utilize our computed structures to drive further geometric analysis to extract the interstitial diffusion structure as a single mesh. Our results provide a new approach to analyze the geometry of the simulated carbon nanosphere, and new insights into the role of carbon defect size and distribution in determining the charge capacity and charge dynamics of these carbon based battery materials.

  10. Interstitial and Interlayer Ion Diffusion Geometry Extraction in Graphitic Nanosphere Battery Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gyulassy, Attila; Knoll, Aaron; Lau, Kah Chun

    2016-01-01

    Large-scale molecular dynamics (MD) simulations are commonly used for simulating the synthesis and ion diffusion of battery materials. A good battery anode material is determined by its capacity to store ion or other diffusers. However, modeling of ion diffusion dynamics and transport properties at large length and long time scales would be impossible with current MD codes. To analyze the fundamental properties of these materials, therefore, we turn to geometric and topological analysis of their structure. In this paper, we apply a novel technique inspired by discrete Morse theory to the Delaunay triangulation of the simulated geometry of a thermally annealed carbon nanosphere. We utilize our computed structures to drive further geometric analysis to extract the interstitial diffusion structure as a single mesh. Our results provide a new approach to analyze the geometry of the simulated carbon nanosphere, and new insights into the role of carbon defect size and distribution in determining the charge capacity and charge dynamics of these carbon based battery materials.

  11. Interstitial and interlayer ion diffusion geometry extraction in graphitic nanosphere battery materials

    DOE PAGES

    Gyulassy, Attila; Knoll, Aaron; Lau, Kah Chun; ...

    2016-01-31

    Large-scale molecular dynamics (MD) simulations are commonly used for simulating the synthesis and ion diffusion of battery materials. A good battery anode material is determined by its capacity to store ion or other diffusers. However, modeling of ion diffusion dynamics and transport properties at large length and long time scales would be impossible with current MD codes. To analyze the fundamental properties of these materials, therefore, we turn to geometric and topological analysis of their structure. In this paper, we apply a novel technique inspired by discrete Morse theory to the Delaunay triangulation of the simulated geometry of a thermally annealed carbon nanosphere. We utilize our computed structures to drive further geometric analysis to extract the interstitial diffusion structure as a single mesh. Lastly, our results provide a new approach to analyze the geometry of the simulated carbon nanosphere, and new insights into the role of carbon defect size and distribution in determining the charge capacity and charge dynamics of these carbon based battery materials.

  12. Cascaded analysis of signal and noise propagation through a heterogeneous breast model.

    PubMed

    Mainprize, James G; Yaffe, Martin J

    2010-10-01

    The detectability of lesions in radiographic images can be impaired by patterns caused by the surrounding anatomic structures. The presence of such patterns is often referred to as anatomic noise. Others have previously extended signal and noise propagation theory to include variable background structure as an additional noise term and used in simulations for analysis by human and ideal observers. Here, the analytic forms of the signal and noise transfer are derived to obtain an exact expression for any input random distribution and the "power law" filter used to generate the texture of the tissue distribution. A cascaded analysis of propagation through a heterogeneous model is derived for x-ray projection through simulated heterogeneous backgrounds. This is achieved by considering transmission through the breast as a correlated amplification point process. The analytic forms of the cascaded analysis were compared to monoenergetic Monte Carlo simulations of x-ray propagation through power law structured backgrounds. As expected, it was found that although the quantum noise power component scales linearly with the x-ray signal, the anatomic noise will scale with the square of the x-ray signal. There was a good agreement between results obtained using analytic expressions for the noise power and those from Monte Carlo simulations for different background textures, random input functions, and x-ray fluence. Analytic equations for the signal and noise properties of heterogeneous backgrounds were derived. These may be used in direct analysis or as a tool to validate simulations in evaluating detectability.
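
    The two scaling behaviors derived in the paper, quantum noise linear in the x-ray signal and anatomic noise quadratic in it, can be checked numerically by projecting Poisson counts through a power-law-textured thickness map. The generator below is a crude illustration (single energy, unit pixel size, arbitrary texture amplitude), not the paper's cascaded model.

```python
import numpy as np

def power_law_background(n=256, beta=3.0, seed=0):
    """2-D random texture with power spectrum ~ 1/f**beta (anatomic structure)."""
    rng = np.random.default_rng(seed)
    f = np.sqrt(np.add.outer(np.fft.fftfreq(n) ** 2, np.fft.fftfreq(n) ** 2))
    f[0, 0] = f[0, 1]                              # avoid the DC singularity
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    img = np.real(np.fft.ifft2(f ** (-beta / 2.0) * phase))
    return (img - img.mean()) / img.std()

def projected_counts(fluence, mu_t, seed=0):
    """Poisson transmission image: quantum noise scales with the signal,
    anatomic (structured) noise with its square."""
    rng = np.random.default_rng(seed)
    return rng.poisson(fluence * np.exp(-mu_t))

# mu_t = 1.0 + 0.05 * power_law_background()       # mean attenuation + texture
# img = projected_counts(1e4, mu_t)
```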

  13. Phase and vortex correlations in superconducting Josephson-junction arrays at irrational magnetic frustration.

    PubMed

    Granato, Enzo

    2008-07-11

    Phase coherence and vortex order in a Josephson-junction array at irrational frustration are studied by extensive Monte Carlo simulations using the parallel-tempering method. A scaling analysis of the correlation length of phase variables in the fully equilibrated system shows that the critical temperature vanishes with a power-law divergent correlation length and critical exponent ν_ph, in agreement with recent results from resistivity scaling analysis. A similar scaling analysis for vortex variables reveals a different critical exponent ν_v, suggesting that there are two distinct correlation lengths associated with a decoupled zero-temperature phase transition.

  14. Assessing and mapping spatial associations among oral cancer mortality rates, concentrations of heavy metals in soil, and land use types based on multiple scale data.

    PubMed

    Lin, Wei-Chih; Lin, Yu-Pin; Wang, Yung-Chieh; Chang, Tsun-Kuo; Chiang, Li-Chi

    2014-02-21

    In this study, a deconvolution procedure was used to create a variogram of oral cancer (OC) rates. Based on the variogram, area-to-point (ATP) Poisson kriging and p-field simulation were used to downscale and simulate, respectively, the OC rate data for Taiwan from the district scale to a 1 km × 1 km grid scale. Local cluster analysis (LCA) of OC mortality rates was then performed to identify OC mortality rate hot spots based on the downscaled and the p-field-simulated OC mortality maps. The relationship between OC mortality and land use was studied by overlapping the maps of the downscaled OC mortality, the LCA results, and the land uses. One thousand simulations were performed to quantify local and spatial uncertainties in the LCA to identify OC mortality hot spots. The scatter plots and Spearman's rank correlation yielded the relationship between OC mortality and concentrations of the seven metals in the 1 km cell grid. The correlation analysis results for the 1 km scale revealed a weak correlation between OC mortality rate and concentrations of the seven studied heavy metals in soil. Accordingly, the heavy metal concentrations in soil are not major determinants of OC mortality rates at the 1 km scale at which soils were sampled. The LCA statistical results for local indicator of spatial association (LISA) revealed that the sites with high probability of high-high (high value surrounded by high values) OC mortality at the 1 km grid scale were clustered in southern, eastern, and mid-western Taiwan. The number of such sites was also significantly higher on agricultural land and in urban regions than on land with other uses. The proposed approach can be used to downscale and evaluate uncertainty in mortality data from a coarse scale to a fine scale at which useful additional information can be obtained for assessing and managing land use and risk.
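
    For readers unfamiliar with the geostatistics used here, ordinary kriging is the base ingredient: weights for neighboring observations are obtained by solving a linear system built from the variogram. The sketch below implements plain ordinary kriging with an exponential variogram; it is a simplified stand-in for the paper's area-to-point Poisson kriging, which additionally accounts for areal supports and Poisson noise in the rates. The variogram parameters are illustrative.

```python
import numpy as np

def ordinary_kriging(xy, z, grid_xy, sill=1.0, rng_par=20.0, nugget=0.0):
    """Interpolate observations z at points xy onto grid_xy by solving the
    ordinary-kriging system (semivariogram form, weights sum to one)."""
    def gamma(h):                                 # exponential variogram
        return nugget + sill * (1.0 - np.exp(-3.0 * h / rng_par))
    n = len(xy)
    d = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)                          # variogram block
    A[n, n] = 0.0                                 # Lagrange-multiplier corner
    out = np.empty(len(grid_xy))
    for i, g in enumerate(grid_xy):
        h = np.linalg.norm(xy - g, axis=1)
        b = np.append(gamma(h), 1.0)              # unbiasedness constraint
        w = np.linalg.solve(A, b)[:n]
        out[i] = w @ z
    return out
```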

  15. Integrating neuroinformatics tools in TheVirtualBrain.

    PubMed

    Woodman, M Marmaduke; Pezard, Laurent; Domide, Lia; Knock, Stuart A; Sanz-Leon, Paula; Mersmann, Jochen; McIntosh, Anthony R; Jirsa, Viktor

    2014-01-01

    TheVirtualBrain (TVB) is a neuroinformatics Python package representing the convergence of clinical, systems, and theoretical neuroscience in the analysis, visualization and modeling of neural and neuroimaging dynamics. TVB is composed of a flexible simulator for neural dynamics measured across scales from local populations to large-scale dynamics measured by electroencephalography (EEG), magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), and core analytic and visualization functions, all accessible through a web browser user interface. A datatype system modeling neuroscientific data ties together these pieces with persistent data storage, based on a combination of SQL and HDF5. These datatypes combine with adapters allowing TVB to integrate other algorithms or computational systems. TVB provides infrastructure for multiple projects and multiple users, possibly participating under multiple roles. For example, a clinician might import patient data to identify several potential lesion points in the patient's connectome. A modeler, working on the same project, tests these points for viability through whole brain simulation, based on the patient's connectome, and subsequent analysis of dynamical features. TVB also drives research forward: the simulator itself represents the culmination of several simulation frameworks in the modeling literature. The availability of the numerical methods, set of neural mass models and forward solutions allows for the construction of a wide range of brain-scale simulation scenarios. This paper briefly outlines the history and motivation for TVB, describing the framework and simulator, giving usage examples in the web UI and Python scripting.

  16. The reliability and validity of three questionnaires: The Student Satisfaction and Self-Confidence in Learning Scale, Simulation Design Scale, and Educational Practices Questionnaire.

    PubMed

    Unver, Vesile; Basak, Tulay; Watts, Penni; Gaioso, Vanessa; Moss, Jacqueline; Tastan, Sevinc; Iyigun, Emine; Tosun, Nuran

    2017-02-01

    The purpose of this study was to adapt the "Student Satisfaction and Self-Confidence in Learning Scale" (SCLS), "Simulation Design Scale" (SDS), and "Educational Practices Questionnaire" (EPQ) developed by Jeffries and Rizzolo into Turkish and establish the reliability and the validity of these translated scales. A sample of 87 nursing students participated in this study. These scales were cross-culturally adapted through a process including translation, comparison with the original version, back translation, and pretesting. Construct validity was evaluated by factor analysis, and criterion validity was evaluated using the Perceived Learning Scale, Patient Intervention Self-confidence/Competency Scale, and Educational Belief Scale. Cronbach's alpha values were found to be 0.77-0.85 for the SCLS, 0.73-0.86 for the SDS, and 0.61-0.86 for the EPQ. The results of this study show that the Turkish versions of all three scales are valid and reliable measurement tools.

  17. Methods for High-Order Multi-Scale and Stochastic Problems Analysis, Algorithms, and Applications

    DTIC Science & Technology

    2016-10-17

    finite volume schemes, discontinuous Galerkin finite element methods, and related methods for solving computational fluid dynamics (CFD) problems and...approximation for finite element methods. (3) The development of methods of simulation and analysis for the study of large-scale stochastic systems of...laws. Keywords: finite element method, Bernstein-Bezier finite elements, weakly interacting particle systems, accelerated Monte Carlo, stochastic networks

  18. Ranges of Applicability for the Continuum-beam Model in the Constitutive Analysis of Carbon Nanotubes: Nanotubes or Nano-beams?

    NASA Technical Reports Server (NTRS)

    Harik, Vasyl Michael; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    Ranges of validity for the continuum-beam model, the length-scale effects and continuum assumptions are analyzed in the framework of scaling analysis of NT structure. Two coupled criteria for the applicability of the continuum model are presented. Scaling analysis of NT buckling and geometric parameters (e.g., diameter and length) is carried out to determine the key non-dimensional parameters that control the buckling strains and modes of NT buckling. A model applicability map, which represents two classes of NTs, is constructed in the space of non-dimensional parameters. In an analogy with continuum mechanics, a mechanical law of geometric similitude is presented for two classes of beam-like NTs having different geometries. Expressions for the critical buckling loads and strains are tailored for the distinct groups of NTs and compared with the data provided by the molecular dynamics simulations. Implications for molecular dynamics simulations and the NT-based scanning probes are discussed.

  19. A multi-scale model for geared transmission aero-thermodynamics

    NASA Astrophysics Data System (ADS)

    McIntyre, Sean M.

    A multi-scale, multi-physics computational tool for the simulation of high-performance gearbox aero-thermodynamics was developed and applied to equilibrium and pathological loss-of-lubrication performance simulation. The physical processes at play in these systems include multiphase compressible flow of the air and lubricant within the gearbox, meshing kinematics and tribology, as well as heat transfer by conduction, and free and forced convection. These physics are coupled across their representative space and time scales in the computational framework developed in this dissertation. These scales span eight orders of magnitude, from the thermal response of the full gearbox O(10^0 m; 10^2 s), through effects at the tooth passage time scale O(10^-2 m; 10^-4 s), down to tribological effects on the meshing gear teeth O(10^-6 m; 10^-6 s). Direct numerical simulation of these coupled physics and scales is intractable. Accordingly, a scale-segregated simulation strategy was developed by partitioning and treating the contributing physical mechanisms as sub-problems, each with associated space and time scales, and appropriate coupling mechanisms. These are: (1) the long time scale thermal response of the system, (2) the multiphase (air, droplets, and film) aerodynamic flow and convective heat transfer within the gearbox, (3) the high-frequency, time-periodic thermal effects of gear tooth heating while in mesh and its subsequent cooling through the rest of rotation, (4) meshing effects including tribology and contact mechanics. The overarching goal of this dissertation was to develop software and analysis procedures for gearbox loss-of-lubrication performance. To accommodate these four physical effects and their coupling, each is treated in the CFD code as a sub-problem. These physics modules are coupled algorithmically. Specifically, the high-frequency conduction analysis derives its local heat transfer coefficient and near-wall air temperature boundary conditions from a quasi-steady cyclic-symmetric simulation of the internal flow. This high-frequency conduction solution is coupled directly with a model for the meshing friction, developed by a collaborator, which was adapted for use in a finite-volume CFD code. The local surface heat flux on solid surfaces is calculated by time-averaging the heat flux in the high-frequency analysis. This serves as a fixed-flux boundary condition in the long time scale conduction module. The temperature distribution from this long time scale heat transfer calculation serves as a boundary condition for the internal convection simulation, and as the initial condition for the high-frequency heat transfer module. Using this multi-scale model, simulations were performed for equilibrium and loss-of-lubrication operation of the NASA Glenn Research Center test stand. Results were compared with experimental measurements. In addition to the multi-scale model itself, several other specific contributions were made. Eulerian models for droplets and wall-films were developed and implemented in the CFD code. A novel approach to retaining liquid film on the solid surfaces, and strategies for its mass exchange with droplets, were developed and verified. Models for interfacial transfer between droplets and wall-film were implemented, and include the effects of droplet deposition, splashing, bouncing, as well as film breakup. These models were validated against airfoil data.
    To mitigate the observed slow convergence of CFD simulations of the enclosed aerodynamic flows within gearboxes, Fourier stability analysis was applied to the SIMPLE-C fractional-step algorithm. From this, recommendations to accelerate the convergence rate through enhanced pressure-velocity coupling were made. These were shown to be effective. A fast-running finite-volume reduced-order model of the gearbox aero-thermodynamics was developed, and coupled with the tribology model to investigate the sensitivity of loss-of-lubrication predictions to various model and physical parameters. This sensitivity study was instrumental in guiding efforts toward improving the accuracy of the multi-scale model without undue increase in computational cost. In addition, the reduced-order model is now used extensively by a collaborator in tribology model development and testing. Experimental measurements of high-speed gear windage in partially and fully-shrouded configurations were performed to supplement the paucity of available validation data. This measurement program provided measurements of windage loss for a gear of design-relevant size and operating speed, as well as guidance for increasing the accuracy of future measurements.

  1. Eddy Fluxes and Sensitivity of the Water Cycle to Spatial Resolution in Idealized Regional Aquaplanet Model Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagos, Samson M.; Leung, Lai-Yung R.; Gustafson, William I.

    2014-02-28

    A multi-scale moisture budget analysis is used to identify the mechanisms responsible for the sensitivity of the water cycle to spatial resolution using idealized regional aquaplanet simulations. In the higher-resolution simulations, eddy moisture fluxes dry the boundary layer, enhancing evaporation and precipitation. This effect of eddies, which is underestimated by the physics parameterizations in the low-resolution simulations, is found to be responsible for the sensitivity of the water cycle both directly and through its upscale effect on the mean circulation. Correlations among eddy moisture transport at adjacent ranges of scales provide the potential for reducing this sensitivity by representing the unresolved eddies by their marginally resolved counterparts.
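
    The budget decomposition behind this result is a Reynolds split of the transported moisture into mean and eddy parts, so the eddy flux is the covariance of the perturbations. A minimal sketch on gridded output, with hypothetical wind and humidity arrays:

      import numpy as np

      def eddy_flux(v, q, axis=-1):
          """Split the flux v*q along `axis` (e.g. time):
          mean part vbar*qbar plus eddy part mean(v'q')."""
          v_mean = v.mean(axis=axis, keepdims=True)
          q_mean = q.mean(axis=axis, keepdims=True)
          vp, qp = v - v_mean, q - q_mean           # perturbations
          mean_part = (v_mean * q_mean).squeeze(axis)
          eddy_part = (vp * qp).mean(axis=axis)     # eddy (covariance) flux
          return mean_part, eddy_part

      # toy fields: wind and specific humidity on a (lat, time) grid
      rng = np.random.default_rng(1)
      v = rng.normal(0.0, 2.0, (32, 500))
      q = 0.3 * v + rng.normal(8.0, 1.0, (32, 500))  # correlated: v'q' != 0
      _, vq_eddy = eddy_flux(v, q)
      print(vq_eddy.mean())                          # about 0.3 * var(v) = 1.2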

  2. Simulation of wave propagation in three-dimensional random media

    NASA Technical Reports Server (NTRS)

    Coles, William A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.

    1993-01-01

    Quantitative error analysis for simulation of wave propagation in three-dimensional random media assuming narrow angular scattering is presented for the plane wave and spherical wave geometries. This includes the errors resulting from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive index of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared to the spatial spectra of intensity. The numerical requirements for a simulation of given accuracy are determined for realizations of the field. The numerical requirements for accurate estimation of higher moments of the field are less stringent.
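
    The simulations analyzed here are of the split-step (multiple phase screen) type: the random medium is collapsed onto thin screens and the field is carried between them with the angular-spectrum method. A minimal single-step sketch, with a placeholder white-noise screen standing in for a properly power-law-correlated one:

      import numpy as np

      def angular_spectrum_step(field, wavelength, dx, dz):
          """Propagate a 2-D complex field a distance dz in vacuum,
          using the paraxial angular-spectrum transfer function."""
          n = field.shape[0]
          k = 2.0 * np.pi / wavelength
          kx = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
          kperp2 = kx[:, None] ** 2 + kx[None, :] ** 2
          H = np.exp(-1j * kperp2 * dz / (2.0 * k))  # Fresnel propagator
          return np.fft.ifft2(np.fft.fft2(field) * H)

      n, dx, lam, dz = 256, 0.01, 1e-6, 50.0
      field = np.ones((n, n), dtype=complex)         # unit plane wave
      rng = np.random.default_rng(2)
      screen = rng.normal(0.0, 0.3, (n, n))          # placeholder phases
      field = angular_spectrum_step(field * np.exp(1j * screen), lam, dx, dz)
      I = np.abs(field) ** 2
      print(I.var() / I.mean() ** 2)                 # scintillation index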

  3. Seafloor identification in sonar imagery via simulations of Helmholtz equations and discrete optimization

    NASA Astrophysics Data System (ADS)

    Engquist, Björn; Frederick, Christina; Huynh, Quyen; Zhou, Haomin

    2017-06-01

    We present a multiscale approach for identifying features in ocean beds by solving inverse problems in high frequency seafloor acoustics. The setting is based on Sound Navigation And Ranging (SONAR) imaging used in scientific, commercial, and military applications. The forward model incorporates multiscale simulations, by coupling Helmholtz equations and geometrical optics for a wide range of spatial scales in the seafloor geometry. This allows for detailed recovery of seafloor parameters including material type. Simulated backscattered data is generated using numerical microlocal analysis techniques. In order to lower the computational cost of the large-scale simulations in the inversion process, we take advantage of a pre-computed library of representative acoustic responses from various seafloor parameterizations.

  4. Finite Element Simulation of Three Full-Scale Crash Tests for Cessna 172 Aircraft

    NASA Technical Reports Server (NTRS)

    Mason, Brian H.; Warren, Jerry E., Jr.

    2017-01-01

    The NASA Emergency Locator Transmitter Survivability and Reliability (ELT-SAR) project was initiated in 2013 to assess the crash performance standards for the next generation of emergency locator transmitter (ELT) systems. Three Cessna 172 aircraft were acquired to perform crash testing at NASA Langley Research Center's Landing and Impact Research Facility. Full-scale crash tests were conducted in the summer of 2015 and each test article was subjected to severe, but survivable, impact conditions including a flare-to-stall during emergency landing, and two controlled-flight-into-terrain scenarios. Full-scale finite element analyses were performed using a commercial explicit solver, ABAQUS. The first test simulated impacting a concrete surface represented analytically by a rigid plane. Tests 2 and 3 simulated impacting a dirt surface represented analytically by an Eulerian grid of brick elements using a Mohr-Coulomb material model. The objective of this paper is to summarize the test and analysis results for the three full-scale crash tests. Simulation models of the airframe which correlate well with the tests are needed for future studies of alternate ELT mounting configurations.

  5. Development of an Aerothermoelastic-Acoustics Simulation Capability of Flight Vehicles

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.; Choi, S. B.; Ibrahim, A.

    2010-01-01

    A novel numerical, finite element based analysis methodology is presented in this paper, suitable for accurate and efficient simulation of practical, complex flight vehicles. An associated computer code, developed in this connection, is also described in some detail. Thermal effects of high speed flow, obtained from a heat conduction analysis, are incorporated in the modal analysis, which in turn affects the unsteady flow arising from the interaction of elastic structures with the air. Numerical examples pertaining to representative problems are given in much detail, testifying to the efficacy of the advocated techniques. This is a unique implementation of temperature effects in a finite element CFD based multidisciplinary simulation analysis capability involving large scale computations.

  6. Diagnostic Analysis of Ozone Concentrations Simulated by Two Regional-Scale Air Quality Models

    EPA Science Inventory

    Since the Community Multiscale Air Quality modeling system (CMAQ) and the Weather Research and Forecasting with Chemistry model (WRF/Chem) use different approaches to simulate the interaction of meteorology and chemistry, this study compares the CMAQ and WRF/Chem air quality simu...

  7. Scaling laws for impact fragmentation of spherical solids.

    PubMed

    Timár, G; Kun, F; Carmona, H A; Herrmann, H J

    2012-07-01

    We investigate the impact fragmentation of spherical solid bodies made of heterogeneous brittle materials by means of a discrete element model. Computer simulations are carried out for four different system sizes varying the impact velocity in a broad range. We perform a finite size scaling analysis to determine the critical exponents of the damage-fragmentation phase transition and deduce scaling relations in terms of the radius R and impact velocity v_0. The scaling analysis demonstrates that the exponent of the power-law distributed fragment mass does not depend on the impact velocity; the apparent change of the exponent predicted by recent simulations can be attributed to the shifting cutoff and to the existence of unbreakable discrete units. Our calculations reveal that the characteristic time scale of the breakup process has a power law dependence on the impact speed and on the distance from the critical speed in the damaged and fragmented states, respectively. The total amount of damage is found to have a similar behavior, which is substantially different from the logarithmic dependence on the impact velocity observed in two dimensions.
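
    The finite-size scaling step amounts to collapsing order-parameter curves measured at different radii R onto a master curve by rescaling with power laws of R. A hedged sketch of such a collapse, with the scaling form and exponent values assumed purely for illustration:

      import numpy as np

      def collapse(v0, m, R, vc, beta, nu):
          """Rescale (v0, m) data for size R onto the assumed form
          m = R**(-beta/nu) * f((v0 - vc) * R**(1/nu))."""
          x = (v0 - vc) * R ** (1.0 / nu)
          y = m * R ** (beta / nu)
          return x, y

      f = np.tanh                                # stand-in master curve
      vc, beta, nu = 120.0, 0.5, 1.3             # hypothetical values
      for R in (10.0, 20.0, 40.0, 80.0):
          v0 = np.linspace(80.0, 160.0, 9)
          m = R ** (-beta / nu) * f((v0 - vc) * R ** (1.0 / nu))
          x, y = collapse(v0, m, R, vc, beta, nu)
          assert np.allclose(y, f(x))            # all sizes fall on f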

  8. Macroturbulence in Very High Resolution Atmospheric Models: Evidence for Two Scaling Regimes

    NASA Astrophysics Data System (ADS)

    Straus, D. M.

    2010-12-01

    The macro-turbulent properties of the atmosphere's circulation are examined in a number of very high resolution seasonal simulations using the global Nonhydrostatic ICosahedral Atmospheric Model (NICAM) at 7-km horizontal resolution (40 levels), and the forecast model of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279 and T2047 spectral resolutions (90 levels). These simulations were carried out as part of an extraordinary collaborative project between the Center for Ocean-Land-Atmosphere Studies (COLA), the University of Tokyo, the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), ECMWF, and the National Institute of Computational Sciences (NICS). The goals of the analysis are to document the rotational and divergent kinetic energy spectral characteristics, to shed light on the different scaling regimes obtained and the role of non-hydrostatic dynamics, and to assess the effects of the smallest scales on the cascades of energy. Simulations with all the models show some evidence of two scaling regimes (a power law with steep slope, and a distinctly shallower slope at smaller scales) for both rotational and divergent kinetic energy. The strength of the evidence for the two regimes, as well as the wavenumber ranges in which they occur, differ between models. Analysis of the contributions of different time scales to the spectra lends insight into the energy transfer mechanism. The implications for dynamical theories of turbulent energy exchange are discussed, as well as differences in approach compared with multiplicative cascade theories.

  9. Interactive Exploration and Analysis of Large-Scale Simulations Using Topology-Based Data Segmentation.

    PubMed

    Bremer, Peer-Timo; Weber, Gunther; Tierny, Julien; Pascucci, Valerio; Day, Marcus S; Bell, John B

    2011-09-01

    Large-scale simulations are increasingly being used to study complex scientific and engineering phenomena. As a result, advanced visualization and data analysis are also becoming an integral part of the scientific process. Often, a key step in extracting insight from these large simulations involves the definition, extraction, and evaluation of features in the space and time coordinates of the solution. However, in many applications, these features involve a range of parameters and decisions that will affect the quality and direction of the analysis. Examples include particular level sets of a specific scalar field, or local inequalities between derived quantities. A critical step in the analysis is to understand how these arbitrary parameters/decisions impact the statistical properties of the features, since such a characterization will help to evaluate the conclusions of the analysis as a whole. We present a new topological framework that in a single pass extracts and encodes entire families of possible feature definitions as well as their statistical properties. For each time step we construct a hierarchical merge tree, a highly compact yet flexible feature representation. While this data structure is more than two orders of magnitude smaller than the raw simulation data, it allows us to extract a set of features for any given parameter selection in a postprocessing step. Furthermore, we augment the trees with additional attributes, making it possible to gather a large number of useful global, local, and conditional statistics that would otherwise be extremely difficult to compile. We also use this representation to create tracking graphs that describe the temporal evolution of the features over time. Our system provides a linked-view interface to explore the time-evolution of the graph interactively alongside the segmentation, thus making it possible to perform extensive data analysis in a very efficient manner. We demonstrate our framework by extracting and analyzing burning cells from a large-scale turbulent combustion simulation. In particular, we show how the statistical analysis enabled by our techniques provides new insight into the combustion process.
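
    The merge tree itself encodes, for every threshold, which superlevel-set components exist and when they merge. A much-simplified proxy for the feature-extraction step — connected components of a superlevel set, re-extractable for any parameter choice in post-processing — can be sketched with scipy:

      import numpy as np
      from scipy import ndimage

      def superlevel_features(field, threshold):
          """Label connected components of {field >= threshold} and
          return per-feature area and peak value."""
          mask = field >= threshold
          labels, n = ndimage.label(mask)
          idx = range(1, n + 1)
          areas = np.asarray(ndimage.sum(mask, labels, index=idx))
          peaks = np.asarray(ndimage.maximum(field, labels, index=idx))
          return n, areas, peaks

      rng = np.random.default_rng(3)
      field = ndimage.gaussian_filter(rng.normal(size=(128, 128)), 4)
      for thr in (0.05, 0.10, 0.15):             # parameter sweep
          n, areas, peaks = superlevel_features(field, thr)
          print(thr, n, areas.mean().round(1))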

  10. Detection of feigned mental disorders on the personality assessment inventory: a discriminant analysis.

    PubMed

    Rogers, R; Sewell, K W; Morey, L C; Ustad, K L

    1996-12-01

    Psychological assessment with multiscale inventories is largely dependent on the honesty and forthrightness of those persons evaluated. We investigated the effectiveness of the Personality Assessment Inventory (PAI) in detecting participants feigning three specific disorders: schizophrenia, major depression, and generalized anxiety disorder. With a simulation design, we tested the PAI validity scales on 166 naive (undergraduates with minimal preparation) and 80 sophisticated (doctoral psychology students with 1 week preparation) participants. We compared their results to persons with the designated disorders: schizophrenia (n = 45), major depression (n = 136), and generalized anxiety disorder (n = 40). Although moderately effective with naive simulators, the validity scales evidenced only modest positive predictive power with their sophisticated counterparts. Therefore, we performed a two-stage discriminant analysis that yielded a moderately high hit rate (> 80%) that was maintained in the cross-validation sample, irrespective of the feigned disorder or the sophistication of the simulators.
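
    The two-stage procedure can be reproduced in outline with a standard linear discriminant classifier: stage one separates honest from feigning respondents, stage two sub-classifies the flagged cases. A hedged sketch with synthetic matrices standing in for PAI scale scores (all numbers hypothetical):

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(4)
      X_honest = rng.normal(50.0, 10.0, (200, 8))   # fake scale scores
      X_feigned = rng.normal(62.0, 12.0, (150, 8))  # elevated profiles
      X = np.vstack([X_honest, X_feigned])
      y = np.r_[np.zeros(200), np.ones(150)]        # 1 = feigning

      stage1 = LinearDiscriminantAnalysis().fit(X, y)
      flagged = stage1.predict(X) == 1              # stage 1: flag feigners

      # stage 2: among flagged cases, discriminate simulator subtypes
      y2 = rng.integers(0, 2, flagged.sum())        # placeholder labels
      stage2 = LinearDiscriminantAnalysis().fit(X[flagged], y2)
      print("stage-1 hit rate:", (stage1.predict(X) == y).mean())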

  11. A Priori Analysis of Subgrid-Scale Models for Large Eddy Simulations of Supercritical Binary-Species Mixing Layers

    NASA Technical Reports Server (NTRS)

    Okong'o, Nora; Bellan, Josette

    2005-01-01

    Models for large eddy simulation (LES) are assessed on a database obtained from direct numerical simulations (DNS) of supercritical binary-species temporal mixing layers. The analysis is performed at the DNS transitional states for heptane/nitrogen, oxygen/hydrogen and oxygen/helium mixing layers. The incorporation of simplifying assumptions that are validated on the DNS database leads to a set of LES equations that requires only models for the subgrid scale (SGS) fluxes, which arise from filtering the convective terms in the DNS equations. Constant-coefficient versions of three different models for the SGS fluxes are assessed and calibrated. The Smagorinsky SGS-flux model shows poor correlations with the SGS fluxes, while the Gradient and Similarity models have high correlations, as well as good quantitative agreement with the SGS fluxes when the calibrated coefficients are used.
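
    An a priori test filters the DNS fields, computes the exact SGS flux, and correlates it with a model evaluated from the filtered fields only. A minimal incompressible, scalar-flux sketch with a box filter (the paper's compressible, Favre-filtered formulation is not reproduced); the scale-similarity construction is the one shown:

      import numpy as np
      from scipy.ndimage import gaussian_filter, uniform_filter

      def box_filter(f, width):
          return uniform_filter(f, size=width, mode="wrap")

      rng = np.random.default_rng(5)
      u = gaussian_filter(rng.normal(size=(128, 128)), 2)  # mock "DNS"
      c = gaussian_filter(rng.normal(size=(128, 128)), 2)

      w = 8                                      # filter width (cells)
      tau = box_filter(u * c, w) - box_filter(u, w) * box_filter(c, w)

      # Similarity model: repeat the construction on the filtered fields
      ub, cb = box_filter(u, w), box_filter(c, w)
      tau_sim = box_filter(ub * cb, w) - box_filter(ub, w) * box_filter(cb, w)

      r = np.corrcoef(tau.ravel(), tau_sim.ravel())[0, 1]
      print("similarity-model correlation:", round(float(r), 2))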

  12. A simulation of high energy cosmic ray propagation 1

    NASA Technical Reports Server (NTRS)

    Honda, M.; Kifune, T.; Matsubara, Y.; Mori, M.; Nishijima, K.; Teshima, M.

    1985-01-01

    High energy cosmic ray propagation in the energy region 10^14.5 - 10^18 eV is simulated in interstellar conditions. In conclusion, the diffusion process by turbulent magnetic fields is classified into several regimes by the ratio of the gyro-radius to the scale of the turbulence. When the ratio becomes larger than 10^-0.5, the analysis with the assumption of point scattering can be applied, with the mean free path scaling as E^2. However, when the ratio is smaller than 10^-0.5, a more complicated analysis or simulation is needed. Assuming the turbulence scale of the magnetic fields of the Galaxy is 10-30 pc and the mean magnetic field strength is 3 microgauss, the energy of a cosmic ray with that gyro-radius is about 10^16.5 eV.
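
    The quoted crossover energy can be sanity-checked with the relativistic gyroradius r_g = E/(qcB) of a proton. A quick arithmetic sketch:

      eV = 1.602e-19            # J
      pc = 3.086e16             # m
      E = 10 ** 16.5 * eV       # particle energy, J
      B = 3e-10                 # 3 microgauss, in tesla
      q, c = 1.602e-19, 2.998e8

      r_g = E / (q * c * B)     # ultrarelativistic: p ~ E/c, r_g = p/(qB)
      print(r_g / pc)           # ~11 pc, inside the 10-30 pc turbulence scale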

  13. Global sensitivity analysis of multiscale properties of porous materials

    NASA Astrophysics Data System (ADS)

    Um, Kimoon; Zhang, Xuan; Katsoulakis, Markos; Plechac, Petr; Tartakovsky, Daniel M.

    2018-02-01

    Ubiquitous uncertainty about pore geometry inevitably undermines the veracity of pore- and multi-scale simulations of transport phenomena in porous media. It raises two fundamental issues: sensitivity of effective material properties to pore-scale parameters and statistical parameterization of Darcy-scale models that accounts for pore-scale uncertainty. Homogenization-based maps of pore-scale parameters onto their Darcy-scale counterparts facilitate both sensitivity analysis (SA) and uncertainty quantification. We treat uncertain geometric characteristics of a hierarchical porous medium as random variables to conduct global SA and to derive probabilistic descriptors of effective diffusion coefficients and effective sorption rate. Our analysis is formulated in terms of solute transport diffusing through a fluid-filled pore space, while sorbing to the solid matrix. Yet it is sufficiently general to be applied to other multiscale porous media phenomena that are amenable to homogenization.
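
    Variance-based (Sobol) indices are the standard machinery for this kind of global SA. A hedged sketch with the SALib package, using a stand-in function for the homogenization map from geometric parameters to an effective diffusion coefficient (parameter names and bounds hypothetical):

      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      problem = {
          "num_vars": 2,
          "names": ["pore_radius", "throat_width"],
          "bounds": [[0.1, 1.0], [0.01, 0.2]],
      }

      def effective_diffusion(x):
          # placeholder for the homogenization map D_eff(geometry)
          r, w = x[:, 0], x[:, 1]
          return w ** 2 / (1.0 + r)

      X = saltelli.sample(problem, 1024)        # N * (2D + 2) samples
      Y = effective_diffusion(X)
      Si = sobol.analyze(problem, Y)
      print(Si["S1"], Si["ST"])                 # first-order, total indices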

  14. Divertor heat flux simulations in ELMy H-mode discharges of EAST

    NASA Astrophysics Data System (ADS)

    Xia, T. Y.; Xu, X. Q.; Wu, Y. B.; Huang, Y. Q.; Wang, L.; Zheng, Z.; Liu, J. B.; Zang, Q.; Li, Y. Y.; Zhao, D.; EAST Team

    2017-11-01

    This paper presents heat flux simulations for the ELMy H-mode on the Experimental Advanced Superconducting Tokamak (EAST) using a six-field two-fluid model in BOUT++. Three EAST ELMy H-mode discharges with different plasma currents I p and geometries are studied. The trend of the scrape-off layer width λq with I p is reproduced by the simulation. The simulated width is only half of that derived from the EAST scaling law, but agrees well with the international multi-machine scaling law. Note that there is no radio-frequency (RF) heating scheme in the simulations, and RF heating can change the boundary topology and increase the flux expansion. Anomalous electron transport is found to contribute to the divertor heat fluxes. A coherent mode is found in the edge region in simulations. The frequency and poloidal wave number kθ are in the range of the edge coherent mode in EAST. The magnetic fluctuations of the mode are smaller than the electric field fluctuations. Statistical analysis of the type of turbulence shows that the turbulence transport type (blobby or turbulent) does not influence the heat flux width scaling. The two-point model differs from the simulation results but the drift-based model shows good agreement with simulations.

  15. Slow dynamics of a protein backbone in molecular dynamics simulation revealed by time-structure based independent component analysis

    NASA Astrophysics Data System (ADS)

    Naritomi, Yusuke; Fuchigami, Sotaro

    2013-12-01

    We recently proposed the method of time-structure based independent component analysis (tICA) to examine the slow dynamics involved in conformational fluctuations of a protein as estimated by molecular dynamics (MD) simulation [Y. Naritomi and S. Fuchigami, J. Chem. Phys. 134, 065101 (2011)]. Our previous study focused on domain motions of the protein and examined its dynamics by using rigid-body domain analysis and tICA. However, the protein changes its conformation not only through domain motions but also by various types of motions involving its backbone and side chains. Some of these motions might occur on a slow time scale: we hypothesize that if so, we could effectively detect and characterize them using tICA. In the present study, we investigated slow dynamics of the protein backbone using MD simulation and tICA. The selected target protein was lysine-, arginine-, ornithine-binding protein (LAO), which comprises two domains and undergoes large domain motions. MD simulation of LAO in explicit water was performed for 1 μs, and the obtained trajectory of Cα atoms in the backbone was analyzed by tICA. This analysis successfully provided us with slow modes for LAO that represented either domain motions or local movements of the backbone. Further analysis elucidated the atomic details of the suggested local motions and confirmed that these motions truly occurred on the expected slow time scale.
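
    tICA extracts the slowest linear modes by solving a generalized eigenvalue problem between the time-lagged covariance C(tau) and the instantaneous covariance C(0). A minimal sketch on a trajectory matrix (frames x coordinates):

      import numpy as np
      from scipy.linalg import eigh

      def tica(X, lag):
          """Solve C_tau v = lam C_0 v; eigenvalues near 1 mark slow modes."""
          X = X - X.mean(axis=0)
          C0 = (X.T @ X) / len(X)
          Ct = (X[:-lag].T @ X[lag:]) / (len(X) - lag)
          Ct = 0.5 * (Ct + Ct.T)                # symmetrize
          lam, V = eigh(Ct, C0)
          order = np.argsort(lam)[::-1]
          return lam[order], V[:, order]

      # toy trajectory with one slow and one fast degree of freedom
      rng = np.random.default_rng(6)
      t = np.arange(5000)
      slow = np.sin(2 * np.pi * t / 2000) + 0.1 * rng.normal(size=t.size)
      fast = rng.normal(size=t.size)
      lam, V = tica(np.c_[slow, fast], lag=50)
      print(lam)                                # first eigenvalue close to 1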

  16. Testing the role of bedforms as controls on the morphodynamics of sandy braided rivers with CFD

    NASA Astrophysics Data System (ADS)

    Unsworth, C. A.; Nicholas, A. P.; Ashworth, P. J.; Best, J.; Lane, S. N.; Parsons, D. R.; Sambrook Smith, G.; Simpson, C.; Strick, R. J. P.

    2017-12-01

    Sand-bed rivers are characterised by multiple scales of topography (e.g., channels, bars and bedforms). Small scale topographic features (e.g., dunes) exert a significant influence on coherent flow structures and sediment transport processes, over distances that scale with channel depth. However, the extent to which such dune-scale effects control larger, channel- and bar-scale morphology and morphodynamics remains unknown. Moreover, such bedform effects are typically neglected in two-dimensional (depth-averaged) morphodynamic models that are used to simulate river evolution. To evaluate the significance of these issues, we report results from a combined numerical modelling and field monitoring study, undertaken in the South Saskatchewan River, Canada. Numerical simulations were carried out, using the OpenFOAM CFD code, to quantify the mean three-dimensional flow structure within a 90 x 350 m section of channel. To isolate the role of bedforms as a control on flow and sediment transport, two simulations were undertaken. The first used a high-resolution (~3 cm) bedform-resolving DEM. The second used a filtered DEM in which dunes were removed and only large scale topographic features (e.g., bars, scour pools, etc.) were resolved. The results of these simulations are compared here, in order to quantify the degree to which topographic steering by bedforms influences flow and sediment transport directions at bar and channel scales. Analysis of the CFD simulation results within a 2D morphodynamic modelling framework demonstrates that dunes exert a significant influence on sediment transport, and hence morphodynamics, and highlights important shortcomings in existing 2D model parameterisations of topographic steering.

  17. Super Massive Black Hole in Galactic Nuclei with Tidal Disruption of Stars

    NASA Astrophysics Data System (ADS)

    Zhong, Shiyan; Berczik, Peter; Spurzem, Rainer

    2014-09-01

    Tidal disruption of stars by super massive central black holes from dense star clusters is modeled by high-accuracy direct N-body simulation. The time evolution of the stellar tidal disruption rate, the effect of tidal disruption on the stellar density profile, and, for the first time, the detailed origin of tidally disrupted stars are carefully examined and compared with classic papers in the field. Up to 128k particles are used in simulation to model the star cluster around a super massive black hole, and we use the particle number and the tidal radius of the black hole as free parameters for a scaling analysis. The transition from full to empty loss-cone is analyzed in our data, and the tidal disruption rate scales with the particle number, N, in the expected way for both cases. For the first time in numerical simulations (under certain conditions) we can support the concept of a critical radius of Frank & Rees, which claims that most stars are tidally accreted on highly eccentric orbits originating from regions far outside the tidal radius. Due to the consumption of stars moving on radial orbits, a velocity anisotropy is found inside the cluster. Finally we estimate the real galactic center based on our simulation results and the scaling analysis.

  18. Small scale rainfall simulators: Challenges for a future use in soil erosion research

    NASA Astrophysics Data System (ADS)

    Ries, Johannes B.; Iserloh, Thomas; Seeger, Manuel

    2013-04-01

    Rainfall simulation on the micro-plot scale is a method used worldwide to assess the generation of overland flow, soil erosion, infiltration and interrelated processes such as soil sealing, crusting, splash and redistribution of solids and solutes. The produced data are of great significance not only for the analysis of the simulated processes, but also as a source of input data for soil erosion modelling. The reliability of the data is therefore of paramount importance, and quality management of the rainfall simulation procedure is a general responsibility of the rainfall simulation community. This was an accepted outcome of the "International Rainfall Simulator Workshop 2011" at Trier University. The challenges for the present and near-future use of small scale rainfall simulations concern the comparability of results and scales, the quality of the data for soil erosion modelling, and further technical developments to overcome physical limitations and constraints. Given the large number of research questions, the different fields of application, and the great technical creativity of researchers, many different types of rainfall simulators are available. But each of the devices produces a different rainfall, leading to different kinetic energy values influencing soil surface and erosion processes. Plot sizes are also variable, as are the experimental simulation procedures. As a consequence, differing runoff and erosion results are produced. The presentation summarises the three important aspects of rainfall simulations, following a processual order: (1) the input factor "rain" and its calibration; (2) the surface factor "plot" and its documentation; and (3) the output factors "runoff" and "sediment concentration". Finally, general considerations about the limitations and challenges for further developments and applications of rainfall simulation data are presented.

  19. PARALLEL HOP: A SCALABLE HALO FINDER FOR MASSIVE COSMOLOGICAL DATA SETS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skory, Stephen; Turk, Matthew J.; Norman, Michael L.

    2010-11-15

    Modern N-body cosmological simulations contain billions (10^9) of dark matter particles. These simulations require hundreds to thousands of gigabytes of memory and employ hundreds to tens of thousands of processing cores on many compute nodes. In order to study the distribution of dark matter in a cosmological simulation, the dark matter halos must be identified using a halo finder, which establishes the halo membership of every particle in the simulation. The resources required for halo finding are similar to the requirements for the simulation itself. In particular, simulations have become too extensive to use commonly employed halo finders, such that the computational requirements to identify halos must now be spread across multiple nodes and cores. Here, we present a scalable parallel halo finding method called Parallel HOP for large-scale cosmological simulation data. Based on the halo finder HOP, it utilizes the message passing interface and domain decomposition to distribute the halo finding workload across multiple compute nodes, enabling analysis of much larger data sets than is possible with the strictly serial or previous parallel implementations of HOP. We provide a reference implementation of this method as a part of the toolkit yt, an analysis toolkit for adaptive mesh refinement data that includes complementary analysis modules. Additionally, we discuss a suite of benchmarks that demonstrate that this method scales well up to several hundred tasks and data sets in excess of 2000^3 particles. The Parallel HOP method and our implementation can be readily applied to any kind of N-body simulation data and is therefore widely applicable.
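
    HOP assigns each particle to the density peak reached by repeatedly hopping to its densest neighbour; a faithful reimplementation is beyond a summary, but the related friends-of-friends grouping conveys the halo-membership step with a k-d tree (linking length and particle data hypothetical):

      import numpy as np
      from scipy.spatial import cKDTree
      from scipy.sparse import csr_matrix
      from scipy.sparse.csgraph import connected_components

      def fof_halos(pos, linking_length):
          """Group particles closer than the linking length into the
          connected components of the pair graph."""
          pairs = cKDTree(pos).query_pairs(linking_length,
                                           output_type="ndarray")
          n = len(pos)
          graph = csr_matrix((np.ones(len(pairs)),
                              (pairs[:, 0], pairs[:, 1])), shape=(n, n))
          return connected_components(graph, directed=False)[1]

      rng = np.random.default_rng(7)
      halo = rng.normal(0.5, 0.01, (500, 3))    # one dense planted "halo"
      field = rng.uniform(0.0, 1.0, (2000, 3))  # uniform background
      labels = fof_halos(np.vstack([halo, field]), 0.02)
      print("largest group:", np.bincount(labels).max())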

  1. Pore-scale observation and 3D simulation of wettability effects on supercritical CO2 - brine immiscible displacement in drainage

    NASA Astrophysics Data System (ADS)

    Hu, R.; Wan, J.; Chen, Y.

    2016-12-01

    Wettability is a factor controlling the fluid-fluid displacement pattern in porous media and significantly affects the flow and transport of supercritical (sc) CO2 in geologic carbon sequestration. Using a high-pressure micromodel-microscopy system, we performed drainage experiments of scCO2 invasion into brine-saturated water-wet and intermediate-wet micromodels; we visualized the scCO2 invasion morphology at pore-scale under reservoir conditions. We also performed pore-scale numerical simulations of the Navier-Stokes equations to obtain 3D details of fluid-fluid displacement processes. Simulation results are qualitatively consistent with the experiments, showing wider scCO2 fingering, higher percentage of scCO2 and more compact displacement pattern in intermediate-wet micromodel. Through quantitative analysis based on pore-scale simulation, we found that the reduced wettability reduces the displacement front velocity, promotes the pore-filling events in the longitudinal direction, delays the breakthrough time of invading fluid, and then increases the displacement efficiency. Simulated results also show that the fluid-fluid interface area follows a unified power-law relation with scCO2 saturation, and show smaller interface area in intermediate-wet case which suppresses the mass transfer between the phases. These pore-scale results provide insights for the wettability effects on CO2 - brine immiscible displacement in geologic carbon sequestration.

  2. Simulating Silvicultural Treatments Using FIA Data

    Treesearch

    Christopher W. Woodall; Carl E. Fiedler

    2005-01-01

    Potential uses of the Forest Inventory and Analysis Database (FIADB) extend far beyond descriptions and summaries of current forest resources. Silvicultural treatments, although typically conducted at the stand level, may be simulated using the FIADB for predicting future forest conditions and resources at broader scales. In this study, silvicultural prescription...

  3. A Simulated Research Problem for Undergraduate Metamorphic Petrology.

    ERIC Educational Resources Information Center

    Amenta, Roddy V.

    1984-01-01

    Presents a laboratory problem in metamorphic petrology designed to simulate a research experience. The problem deals with data on scales ranging from a geologic map to hand specimens to thin sections. Student analysis includes identifying metamorphic index minerals, locating their isograds on the map, and determining the folding sequence. (BC)

  4. Evaluating the accuracy of VEMAP daily weather data for application in crop simulations on a regional scale

    USDA-ARS?s Scientific Manuscript database

    Weather plays a critical role in eco-environmental and agricultural systems. Limited availability of meteorological records often constrains the applications of simulation models and related decision support tools. The Vegetation/Ecosystem Modeling and Analysis Project (VEMAP) provides daily weather...

  5. An Analysis of Simulated Wet Deposition of Mercury from the North American Mercury Model Intercomparison Study

    EPA Science Inventory

    A previous intercomparison of atmospheric mercury models in North America has been extended to compare simulated and observed wet deposition of mercury. Three regional-scale atmospheric mercury models were tested; CMAQ, REMSAD and TEAM. These models were each employed using thr...

  6. Simulation research on the process of large scale ship plane segmentation intelligent workshop

    NASA Astrophysics Data System (ADS)

    Xu, Peng; Liao, Liangchuang; Zhou, Chao; Xue, Rui; Fu, Wei

    2017-04-01

    The large-scale ship plane segmentation intelligent workshop is a new concept, and no prior research exists in related fields domestically or abroad. The mode of production must be transformed from the existing Industry 2.0 (or partial Industry 3.0) level, that is, from "human analysis and judgment + machine manufacturing" to "machine analysis and judgment + machine manufacturing". This transformation raises many management and technology questions, such as workshop structure evolution, the development of intelligent equipment, and changes in the business model, and with them the reformation of the whole workshop. The process simulation in this project verifies the general layout and process flow of the large-scale ship plane segmentation intelligent workshop and analyzes its working efficiency, which is significant for the next step of the transformation.

  7. Percolation analysis of nonlinear structures in scale-free two-dimensional simulations

    NASA Technical Reports Server (NTRS)

    Dominik, Kurt G.; Shandarin, Sergei F.

    1992-01-01

    Results are presented of applying percolation analysis to several two-dimensional N-body models which simulate the formation of large-scale structure. Three parameters are estimated: the total area (a_c), total mass (M_c), and percolation density (rho_c) of the percolating structure at the percolation threshold, for both unsmoothed and smoothed (with different scales L_s) nonlinear density fields with filamentary structures, confirming early speculations that this type of model has several features of filamentary-type distributions. Also, it is shown that, by properly applying smoothing techniques, many problems previously considered detrimental can be dealt with and overcome. Possible difficulties and prospects with the use of this method are discussed, specifically relating to techniques and methods already applied to CfA deep sky surveys. The success of this test in two dimensions and the potential for extrapolation to three dimensions are also discussed.
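
    Percolation analysis thresholds the density field and asks when the largest cluster first spans the volume. A minimal 2-D sketch with scipy's connected-component labelling (boundary periodicity ignored):

      import numpy as np
      from scipy import ndimage

      def percolates(density, threshold):
          """True if one cluster of {density >= threshold} touches
          both the left and right edges of the grid."""
          labels, _ = ndimage.label(density >= threshold)
          left = set(labels[:, 0]) - {0}
          right = set(labels[:, -1]) - {0}
          return bool(left & right)

      rng = np.random.default_rng(8)
      density = ndimage.gaussian_filter(rng.normal(size=(256, 256)), 3)
      # scan thresholds downward to locate the percolation transition
      for thr in np.linspace(density.max(), density.min(), 24):
          if percolates(density, thr):
              print("percolation threshold near", round(float(thr), 3))
              break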

  8. Detector Development for the MARE Neutrino Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galeazzi, M.; Bogorin, D.; Molina, R.

    2009-12-16

    The MARE experiment is designed to measure the mass of the neutrino with sub-eV sensitivity by measuring the beta decay of ^187Re with cryogenic microcalorimeters. A preliminary analysis shows that, to achieve the necessary statistics, between 10,000 and 50,000 detectors are likely necessary. We have fabricated and characterized iridium transition edge sensors with high reproducibility and uniformity for such a large scale experiment. We have also started a full scale simulation of the experimental setup for MARE, including thermalization in the absorber, detector response, and optimum filter analysis, to understand the issues related to reaching a sub-eV sensitivity and to optimize the design of the MARE experiment. We present our characterization of the Ir devices, including reproducibility, uniformity, and sensitivity, and we discuss the implementation and capabilities of our full scale simulation.

  9. Biodegradation modelling of a dissolved gasoline plume applying independent laboratory and field parameters

    NASA Astrophysics Data System (ADS)

    Schirmer, Mario; Molson, John W.; Frind, Emil O.; Barker, James F.

    2000-12-01

    Biodegradation of organic contaminants in groundwater is a microscale process which is often observed on scales of 100s of metres or larger. Unfortunately, there are no known equivalent parameters for characterizing the biodegradation process at the macroscale as there are, for example, in the case of hydrodynamic dispersion. Zero- and first-order degradation rates estimated at the laboratory scale by model fitting generally overpredict the rate of biodegradation when applied to the field scale because limited electron acceptor availability and microbial growth are not considered. On the other hand, field-estimated zero- and first-order rates are often not suitable for predicting plume development because they may oversimplify or neglect several key field scale processes, phenomena and characteristics. This study uses the numerical model BIO3D to link the laboratory and field scales by applying laboratory-derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at the Canadian Forces Base (CFB) Borden. All input parameters were derived from independent laboratory and field measurements or taken from the literature a priori to the simulations. The simulated results match the experimental results reasonably well without model calibration. A sensitivity analysis on the most uncertain input parameters showed only a minor influence on the simulation results. Furthermore, it is shown that the flow field, the amount of electron acceptor (oxygen) available, and the Monod kinetic parameters have a significant influence on the simulated results. It is concluded that laboratory-derived Monod kinetic parameters can adequately describe field scale degradation, provided all controlling factors are incorporated in the field scale model. These factors include advective-dispersive transport of multiple contaminants and electron acceptors and large-scale spatial heterogeneities.
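
    Monod kinetics ties the degradation rate to both substrate and electron-acceptor availability and to microbial growth, which is exactly why constant zero- or first-order rates transfer poorly between scales. A minimal dual-Monod sketch with scipy (all parameter values hypothetical):

      import numpy as np
      from scipy.integrate import solve_ivp

      mu_max, Ks, Ko, Y = 2.0, 1.0, 0.1, 0.5    # 1/d, mg/L, mg/L, -

      def dual_monod(t, y):
          S, O, X = y                           # substrate, oxygen, biomass
          rate = mu_max * X * (S / (Ks + S)) * (O / (Ko + O))
          return [-rate,                        # substrate consumed
                  -0.3 * rate,                  # oxygen used (stoichiometry)
                  Y * rate - 0.05 * X]          # growth minus decay

      sol = solve_ivp(dual_monod, (0.0, 30.0), [10.0, 8.0, 0.1])
      print(sol.y[:, -1].round(2))              # S, O, X after 30 days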

  10. Toward Improved Parameterization of a Meso-Scale Hydrologic Model in a Discontinuous Permafrost, Boreal Forest Ecosystem

    NASA Astrophysics Data System (ADS)

    Endalamaw, A. M.; Bolton, W. R.; Young, J. M.; Morton, D.; Hinzman, L. D.

    2013-12-01

    The sub-arctic environment can be characterized as being located in the zone of discontinuous permafrost. Although the distribution of permafrost is site specific, it dominates many of the hydrologic and ecologic responses and functions including vegetation distribution, stream flow, soil moisture, and storage processes. In this region, the boundaries that separate the major ecosystem types (deciduous dominated and coniferous dominated ecosystems) as well as permafrost (permafrost versus non-permafrost) occur over very short spatial scales. One of the goals of this research project is to improve parameterizations of meso-scale hydrologic models in this environment. Using the Caribou-Poker Creeks Research Watershed (CPCRW) as the test area, simulations of the headwater catchments of varying permafrost and vegetation distributions were performed. CPCRW, located approximately 50 km northeast of Fairbanks, Alaska, is located within the zone of discontinuous permafrost and the boreal forest ecosystem. The Variable Infiltration Capacity (VIC) model was selected as the hydrologic model. In CPCRW, permafrost and coniferous vegetation is generally found on north facing slopes and valley bottoms. Permafrost-free soils and deciduous vegetation are generally found on south facing slopes. In this study, hydrologic simulations using fine scale vegetation and soil parameterizations - based upon slope and aspect analysis at a 50 meter resolution - were conducted. Simulations were also conducted using downscaled vegetation from the Scenarios Network for Alaska and Arctic Planning (SNAP) (1 km resolution) and soil data sets from the Food and Agriculture Organization (FAO) (approximately 9 km resolution). Preliminary simulation results show that soil and vegetation parameterizations based upon fine scale slope/aspect analysis increase the R2 values (0.5 to 0.65 in the high permafrost (53%) basin; 0.43 to 0.56 in the low permafrost (2%) basin) relative to parameterizations based on coarse scale data. These results suggest that fine resolution parameterizations can improve meso-scale hydrological modeling in this region.

  11. Detecting transitions in protein dynamics using a recurrence quantification analysis based bootstrap method.

    PubMed

    Karain, Wael I

    2017-11-28

    Proteins undergo conformational transitions over different time scales. These transitions are closely intertwined with the protein's function. Numerous standard techniques such as principal component analysis are used to detect these transitions in molecular dynamics simulations. In this work, we add a new method that has the ability to detect transitions in dynamics based on the recurrences in the dynamical system. It combines bootstrapping and recurrence quantification analysis. We start from the assumption that a protein has a "baseline" recurrence structure over a given period of time. Any statistically significant deviation from this recurrence structure, as inferred from complexity measures provided by recurrence quantification analysis, is considered a transition in the dynamics of the protein. We apply this technique to a 132 ns long molecular dynamics simulation of the β-Lactamase Inhibitory Protein BLIP. We are able to detect conformational transitions in the nanosecond range in the recurrence dynamics of the BLIP protein during the simulation. The results compare favorably to those extracted using the principal component analysis technique. The recurrence quantification analysis based bootstrap technique is able to detect transitions between different dynamic states of a protein over different time scales. It is not limited to linear dynamics regimes, and can be generalized to any time scale. It also has the potential to be used to cluster frames in molecular dynamics trajectories according to the nature of their recurrence dynamics. One shortcoming of this method is the need to have large enough time windows to ensure good statistical quality for the recurrence complexity measures needed to detect the transitions.
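
    Recurrence quantification starts from the recurrence matrix R_ij = 1 whenever states x_i and x_j are closer than a radius eps; complexity measures such as the recurrence rate are then tracked per time window, and bootstrap resampling supplies a baseline spread. A simplified sketch of the recurrence-rate part (the paper's full measure set and resampling scheme are not reproduced):

      import numpy as np

      def recurrence_rate(X, eps):
          """Fraction of off-diagonal pairs (i, j) with |x_i - x_j| < eps."""
          D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
          n = len(X)
          return ((D < eps).sum() - n) / (n * (n - 1))

      rng = np.random.default_rng(9)
      traj = np.cumsum(rng.normal(size=(400, 3)), axis=0)   # toy trajectory
      windows = [traj[i:i + 100] for i in range(0, 300, 50)]
      rr = [recurrence_rate(w, eps=5.0) for w in windows]

      # crude bootstrap baseline from the first window (order-destroying)
      ref = windows[0]
      boot = [recurrence_rate(ref[rng.integers(0, 100, 100)], 5.0)
              for _ in range(200)]
      print(np.round(rr, 3), np.percentile(boot, [2.5, 97.5]).round(3))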

  12. Study of discrete-particle effects in a one-dimensional plasma simulation with the Krook type collision model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Po-Yen; Chen, Liu; Institute for Fusion Theory and Simulation, Zhejiang University, 310027 Hangzhou

    2015-09-15

    The thermal relaxation time of a one-dimensional plasma has been demonstrated to scale with N_D^2 due to discrete particle effects by collisionless particle-in-cell (PIC) simulations, where N_D is the particle number in a Debye length. The N_D^2 scaling is consistent with the theoretical analysis based on the Balescu-Lenard-Landau kinetic equation. However, it was found that the thermal relaxation time is anomalously shortened to scale with N_D when the Krook type collision model is externally introduced in the one-dimensional electrostatic PIC simulation. In order to understand the discrete particle effects enhanced by the Krook type collision model, the superposition principle of dressed test particles was applied to derive the modified Balescu-Lenard-Landau kinetic equation. The theoretical results are shown to be in good agreement with the simulation results when the collisional effects dominate the plasma system.
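
    A Krook (BGK-type) operator can be added to a PIC loop by relaxing particles toward a Maxwellian at collision frequency nu: each step, a particle is resampled with probability nu*dt. A minimal sketch of just that step (field solve and particle push omitted):

      import numpy as np

      def krook_collisions(v, nu, dt, v_th, rng):
          """Resample each velocity with probability nu*dt from a
          Maxwellian of thermal speed v_th."""
          hit = rng.random(v.size) < nu * dt
          v = v.copy()
          v[hit] = rng.normal(0.0, v_th, hit.sum())
          return v

      rng = np.random.default_rng(10)
      v = rng.normal(0.0, 2.0, 20_000)          # initially "hot" particles
      for _ in range(3000):                     # nu * t = 6 relaxation times
          v = krook_collisions(v, nu=0.1, dt=0.02, v_th=1.0, rng=rng)
      print(v.std())                            # relaxes toward v_th = 1.0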

  13. The Development of Directional Decohesion Finite Elements for Multiscale Failure Analysis of Metallic Polycrystals

    NASA Technical Reports Server (NTRS)

    Saether, Erik; Glaessgen, Edward H.

    2009-01-01

    Atomistic simulations of intergranular fracture have indicated that grain-scale crack growth in polycrystalline metals can be direction dependent. At these material length scales, the atomic environment greatly influences the nature of intergranular crack propagation, through either brittle or ductile mechanisms, that are a function of adjacent grain orientation and direction of crack propagation. Methods have been developed to obtain cohesive zone models (CZM) directly from molecular dynamics simulations. These CZMs may be incorporated into decohesion finite element formulations to simulate fracture at larger length scales. A new directional decohesion element is presented that calculates the direction of Mode I opening and incorporates a material criterion for dislocation emission based on the local crystallographic environment to automatically select the CZM that best represents crack growth. The simulation of fracture in 2-D and 3-D aluminum polycrystals is used to illustrate the effect of parameterized CZMs and the effectiveness of directional decohesion finite elements.

  14. Degradation modeling of high temperature proton exchange membrane fuel cells using dual time scale simulation

    NASA Astrophysics Data System (ADS)

    Pohl, E.; Maximini, M.; Bauschulte, A.; vom Schloß, J.; Hermanns, R. T. E.

    2015-02-01

    HT-PEM fuel cells suffer from performance losses due to degradation effects. The durability of HT-PEM is therefore currently an important focus of research and development. In this paper a novel approach is presented for an integrated short term and long term simulation of HT-PEM accelerated lifetime testing. The physical phenomena of short term and long term effects are commonly modeled separately due to the different time scales. However, in accelerated lifetime testing, long term degradation effects have a crucial impact on the short term dynamics. Our approach addresses this problem by applying a novel method for dual time scale simulation. A transient system simulation is performed for an open voltage cycle test on a HT-PEM fuel cell for a physical time of 35 days. The analysis describes the system dynamics by numerical electrochemical impedance spectroscopy. Furthermore, a performance assessment is performed in order to demonstrate the efficiency of the approach. The presented approach reduces the simulation time by approximately 73% compared to the conventional simulation approach, with little loss of accuracy. The approach promises a comprehensive perspective considering short term dynamic behavior and long term degradation effects.

  15. Cognitive simulation as a tool for cognitive task analysis.

    PubMed

    Roth, E M; Woods, D D; Pople, H E

    1992-10-01

    Cognitive simulations are runnable computer programs that represent models of human cognitive activities. We show how one cognitive simulation built as a model of some of the cognitive processes involved in dynamic fault management can be used in conjunction with small-scale empirical data on human performance to uncover the cognitive demands of a task, to identify where intention errors are likely to occur, and to point to improvements in the person-machine system. The simulation, called Cognitive Environment Simulation or CES, has been exercised on several nuclear power plant accident scenarios. Here we report one case to illustrate how a cognitive simulation tool such as CES can be used to clarify the cognitive demands of a problem-solving situation as part of a cognitive task analysis.

  16. Evaluation and error apportionment of an ensemble of atmospheric chemistry transport modeling systems: multivariable temporal and spatial breakdown

    EPA Science Inventory

    Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) hel...

  17. Dispersion Analysis Using Particle Tracking Simulations Through Heterogeneity Based on Outcrop Lidar Imagery

    NASA Astrophysics Data System (ADS)

    Klise, K. A.; Weissmann, G. S.; McKenna, S. A.; Tidwell, V. C.; Frechette, J. D.; Wawrzyniec, T. F.

    2007-12-01

    Solute plumes are believed to disperse in a non-Fickian manner due to small-scale heterogeneity and variable velocities that create preferential pathways. In order to accurately predict dispersion in naturally complex geologic media, the connection between heterogeneity and dispersion must be better understood. Since aquifer properties cannot be measured at every location, it is common to simulate small-scale heterogeneity with random field generators based on a two-point covariance (e.g., through use of sequential simulation algorithms). While these random fields can produce preferential flow pathways, it is unknown how well the results simulate solute dispersion through natural heterogeneous media. To evaluate the influence that complex heterogeneity has on dispersion, we utilize high-resolution terrestrial lidar to identify and model lithofacies from outcrop for application in particle tracking solute transport simulations using RWHet. The lidar scan data are used to produce a lab- (meter-) scale two-dimensional model that captures 2-8 mm scale natural heterogeneity. Numerical simulations utilize various methods to populate the outcrop structure captured by the lidar-based image with reasonable hydraulic conductivity values. The particle tracking simulations result in residence time distributions used to evaluate the nature of dispersion through complex media. Particle tracking simulations through conductivity fields produced from the lidar images are then compared to particle tracking simulations through hydraulic conductivity fields produced from sequential simulation algorithms. Based on this comparison, the study aims to quantify the difference in dispersion when using realistic and simplified representations of aquifer heterogeneity. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
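
    For readers unfamiliar with random-walk particle tracking, the following is a minimal sketch in the same spirit (not the RWHet algorithm itself): particles are advected through a layered, heterogeneous velocity field with a diffusive step, and the distribution of times to cross the domain gives the residence time distribution. The layered field and all parameter values are invented for illustration.

    ```python
    # Minimal random-walk particle tracking through a layered velocity field.
    import numpy as np

    rng = np.random.default_rng(0)
    n_particles, L = 5000, 1.0                              # domain length [m]
    v_layers = rng.lognormal(mean=-12, sigma=1.0, size=50)  # layer velocities [m/s]
    D = 1e-9                                                # local dispersion [m^2/s]
    dt = 1e4                                                # time step [s]

    x = np.zeros(n_particles)
    layer = rng.integers(0, v_layers.size, n_particles)     # particle's current layer
    t = np.zeros(n_particles)
    active = np.ones(n_particles, dtype=bool)

    while active.any():
        v = v_layers[layer[active]]
        # advect, add a diffusive step, accumulate travel time
        x[active] += v * dt + np.sqrt(2 * D * dt) * rng.standard_normal(active.sum())
        t[active] += dt
        # transverse mixing occasionally moves a particle to a neighboring layer
        hop = rng.random(active.sum()) < 0.01
        layer[active] = np.clip(layer[active] + hop * rng.choice([-1, 1], active.sum()),
                                0, v_layers.size - 1)
        active &= x < L                                     # freeze particles past the outlet

    print("median residence time [days]:", np.median(t) / 86400)
    ```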

  18. Modeling a Million-Node Slim Fly Network Using Parallel Discrete-Event Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfe, Noah; Carothers, Christopher; Mubarak, Misbah

    As supercomputers close in on exascale performance, the increased number of processors and processing power translates to an increased demand on the underlying network interconnect. The Slim Fly network topology, a new low-diameter and low-latency interconnection network, is gaining interest as one possible solution for next-generation supercomputing interconnect systems. In this paper, we present a high-fidelity Slim Fly flit-level model leveraging the Rensselaer Optimistic Simulation System (ROSS) and Co-Design of Exascale Storage (CODES) frameworks. We validate our Slim Fly model against the Kathareios et al. Slim Fly model results provided at moderately sized network scales. We further scale the model size up to an unprecedented 1 million compute nodes; through visualization of network simulation metrics such as link bandwidth, packet latency, and port occupancy, we gain insight into the network behavior at the million-node scale. We also show linear strong scaling of the Slim Fly model on an Intel cluster, achieving a peak event rate of 36 million events per second using 128 MPI tasks to process 7 billion events. Detailed analysis of the underlying discrete-event simulation performance shows that a million-node Slim Fly model simulation can execute in 198 seconds on the Intel cluster.
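
    A quick back-of-the-envelope check of the quoted figures (our arithmetic, not taken from the paper): $7\times10^{9}\ \mathrm{events} \,/\, 3.6\times10^{7}\ \mathrm{events/s} \approx 194\ \mathrm{s}$, consistent with the reported 198-second execution time once non-event overheads are accounted for.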

  19. Communication interval selection in distributed heterogeneous simulation of large-scale dynamical systems

    NASA Astrophysics Data System (ADS)

    Lucas, Charles E.; Walters, Eric A.; Jatskevich, Juri; Wasynczuk, Oleg; Lamm, Peter T.

    2003-09-01

    In this paper, a new technique useful for the numerical simulation of large-scale systems is presented. This approach enables the overall system simulation to be formed by the dynamic interconnection of various interdependent simulations, each representing a specific component or subsystem such as control, electrical, mechanical, hydraulic, or thermal. Each simulation may be developed separately, using possibly different commercial-off-the-shelf simulation programs, thereby allowing the most suitable language or tool to be used based on the design/analysis needs. These subsystems communicate the required interface variables at specific time intervals. A discussion concerning the selection of appropriate communication intervals is presented herein. For the purpose of demonstration, this technique is applied to a detailed simulation of a representative aircraft power system, such as that found on the Joint Strike Fighter (JSF). This system is comprised of ten component models, each developed using MATLAB/Simulink, EASY5, or ACSL. When the ten component simulations were distributed across just four personal computers (PCs), a greater than 15-fold improvement in simulation speed (compared to the single-computer implementation) was achieved.
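
    The effect of the communication interval can be demonstrated with a toy sketch (ours, not the paper's aircraft model): two coupled subsystems each integrate with their own internal step, but the interface variable is exchanged, and then held frozen, only once per communication interval H. Shrinking H recovers the tightly coupled solution; enlarging it introduces coupling error.

    ```python
    # Toy interval-based co-simulation of two coupled oscillators.
    def simulate(H, t_end=10.0, dt=1e-3):
        x1, v1, x2, v2 = 1.0, 0.0, -1.0, 0.0
        t = 0.0
        while t < t_end:
            f12 = x2 - x1                        # interface variable, exchanged once per interval
            for _ in range(int(round(H / dt))):  # internal steps use the frozen coupling
                a1 = -x1 + f12
                a2 = -x2 - f12
                v1 += a1 * dt; x1 += v1 * dt     # semi-implicit Euler, subsystem 1
                v2 += a2 * dt; x2 += v2 * dt     # semi-implicit Euler, subsystem 2
            t += H
        return x1, x2

    for H in (1.0, 0.1, 0.01):
        print(f"H = {H:5.2f}: x1 = {simulate(H)[0]: .4f}")
    ```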

  20. Development of mpi_EPIC model for global agroecosystem modeling

    DOE PAGES

    Kang, Shujiang; Wang, Dali; Nichols, Jeff A.; ...

    2014-12-31

    Models that address policy-maker concerns about multi-scale effects of food and bioenergy production systems are computationally demanding. We integrated the message passing interface algorithm into the process-based EPIC model to accelerate computation of ecosystem effects. Simulation performance was further enhanced by applying the Vampir framework. When this enhanced mpi_EPIC model was tested, total execution time for a global 30-year simulation of a switchgrass cropping system was shortened to less than 0.5 hours on a supercomputer. The results illustrate that mpi_EPIC using parallel design can balance simulation workloads and facilitate large-scale, high-resolution analysis of agricultural production systems, management alternatives and environmental effects.
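
    The message-passing pattern behind such a parallelization can be sketched as follows (a minimal mpi4py example with a placeholder per-cell model, not EPIC itself; run with e.g. `mpirun -n 4 python script.py`):

    ```python
    # Each MPI rank simulates its share of grid cells independently;
    # results are gathered on rank 0. The per-cell "model" is a stand-in.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_cells = 10_000                            # global number of grid cells
    my_cells = np.arange(rank, n_cells, size)   # round-robin workload split

    def run_cell(cell_id, n_years=30):
        """Placeholder for a 30-year per-cell simulation (e.g., yield in t/ha)."""
        rng = np.random.default_rng(cell_id)
        return rng.normal(10.0, 2.0, n_years).mean()

    local = np.array([run_cell(c) for c in my_cells])
    all_results = comm.gather(local, root=0)

    if rank == 0:
        yields = np.concatenate(all_results)
        print(f"{n_cells} cells on {size} ranks, mean yield {yields.mean():.2f}")
    ```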

  1. Thickness Measurement of Surface Attachment on Plate with Lamb Wave

    NASA Astrophysics Data System (ADS)

    Ma, Xianglong; Zhang, Yinghong; Wen, Lichao; He, Yehu

    2017-12-01

    A nondestructive method for measuring the thickness of attachments on plate surfaces, based on Lamb waves, is presented. The method exploits the propagation characteristics of Lamb wave signals in a bi-layer medium to measure the thickness of material attached to the plate. Lamb wave propagation in a bi-layer elastic medium is modeled and analyzed, and a two-dimensional simulation model of the electromagnetic ultrasonic plate-scale system is established. Simulations are conducted in COMSOL to obtain waveform curves for different boiler scale thicknesses. Through this study, it is shown that the thickness of the attached material can be judged from the characteristics of the received signal when the plate surface is measured.

  2. Fractal analysis of the dark matter and gas distributions in the Mare-Nostrum universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaite, José, E-mail: jose.gaite@upm.es

    2010-03-01

    We develop a method of multifractal analysis of N-body cosmological simulations that improves on the customary counts-in-cells method by taking special care of the effects of discreteness and large-scale homogeneity. The analysis of the Mare-Nostrum simulation with our method provides strong evidence of self-similar multifractal distributions of dark matter and gas, with a halo mass function that is of Press-Schechter type but has a power-law exponent -2, as expected for a multifractal. Furthermore, our analysis shows that the dark matter and gas distributions are indistinguishable as multifractals. To determine if there is any gas biasing, we calculate the cross-correlation coefficient, with negative but inconclusive results. Hence, we develop an effective Bayesian analysis connected with information theory, which clearly demonstrates that the gas is biased over a long range of scales, up to the scale of homogeneity. However, entropic measures related to the Bayesian analysis show that this gas bias is small (in a precise sense) and is such that the fractal singularities of both distributions coincide and are identical. We conclude that this common multifractal cosmic web structure is determined by the dynamics and is independent of the initial conditions.

  3. Bayesian hierarchical model for large-scale covariance matrix estimation.

    PubMed

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. Traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework and introduce dependency between covariance parameters. We demonstrate the advantages of our approach over traditional approaches using simulations and OMICS data analysis.
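
    The abstract's Bayesian hierarchical model is not reproducible from the record, but the underlying idea, regularizing a noisy p >> n sample covariance toward a structured target, can be illustrated with a simple shrinkage estimator (a fixed shrinkage intensity here; a hierarchical model would infer it from the data):

    ```python
    # Shrinkage of a large, rank-deficient sample covariance toward its diagonal.
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 50, 500                              # few samples, many variables
    X = rng.standard_normal((n, p))

    S = np.cov(X, rowvar=False)                 # noisy sample covariance (rank <= n-1)
    target = np.diag(np.diag(S))                # structured shrinkage target
    alpha = 0.7                                 # shrinkage intensity (fixed for illustration)
    S_shrunk = (1 - alpha) * S + alpha * target

    # the shrunken estimate is full rank and well-conditioned, unlike the raw one
    print("rank(S) =", np.linalg.matrix_rank(S),
          " cond(S_shrunk) = %.1f" % np.linalg.cond(S_shrunk))
    ```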

  4. Analysis of Large Scale Spatial Variability of Soil Moisture Using a Geostatistical Method

    DTIC Science & Technology

    2010-01-25

    Abstract: Spatial and temporal soil moisture dynamics are critically needed to ... scale observed and simulated estimates of soil moisture under pre- and post-precipitation event conditions. This large-scale variability is a crucial ... dynamics is essential in hydrological and meteorological modeling and improves our understanding of land surface–atmosphere interactions.

  5. Comparing a discrete and continuum model of the intestinal crypt

    PubMed Central

    Murray, Philip J.; Walter, Alex; Fletcher, Alex G.; Edwards, Carina M.; Tindall, Marcus J.; Maini, Philip K.

    2011-01-01

    The integration of processes at different scales is a key problem in the modelling of cell populations. Owing to increased computational resources and the accumulation of data at the cellular and subcellular scales, the use of discrete, cell-level models, which are typically solved using numerical simulations, has become prominent. One of the merits of this approach is that important biological factors, such as cell heterogeneity and noise, can be easily incorporated. However, it can be difficult to efficiently draw generalisations from the simulation results, as, often, many simulation runs are required to investigate model behaviour in typically large parameter spaces. In some cases, discrete cell-level models can be coarse-grained, yielding continuum models whose analysis can lead to the development of insight into the underlying simulations. In this paper we apply such an approach to the case of a discrete model of cell dynamics in the intestinal crypt. An analysis of the resulting continuum model demonstrates that there is a limited region of parameter space within which steady-state (and hence biologically realistic) solutions exist. Continuum model predictions show good agreement with corresponding results from the underlying simulations and experimental data taken from murine intestinal crypts. PMID:21411869

  6. An Analysis Platform for Multiscale Hydrogeologic Modeling with Emphasis on Hybrid Multiscale Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheibe, Timothy D.; Murphy, Ellyn M.; Chen, Xingyuan

    2015-01-01

    One of the most significant challenges facing hydrogeologic modelers is the disparity between those spatial and temporal scales at which fundamental flow, transport and reaction processes can best be understood and quantified (e.g., microscopic to pore scales, seconds to days) and those at which practical model predictions are needed (e.g., plume to aquifer scales, years to centuries). While the multiscale nature of hydrogeologic problems is widely recognized, technological limitations in computation and characterization restrict most practical modeling efforts to fairly coarse representations of heterogeneous properties and processes. For some modern problems, the necessary level of simplification is such that model parameters may lose physical meaning and model predictive ability is questionable for any conditions other than those to which the model was calibrated. Recently, there has been broad interest across a wide range of scientific and engineering disciplines in simulation approaches that more rigorously account for the multiscale nature of systems of interest. In this paper, we review a number of such approaches and propose a classification scheme for defining different types of multiscale simulation methods and those classes of problems to which they are most applicable. Our classification scheme is presented in terms of a flow chart (Multiscale Analysis Platform or MAP), and defines several different motifs of multiscale simulation. Within each motif, the member methods are reviewed and example applications are discussed. We focus attention on hybrid multiscale methods, in which two or more models with different physics described at fundamentally different scales are directly coupled within a single simulation. Very recently these methods have begun to be applied to groundwater flow and transport simulations, and we discuss these applications in the context of our classification scheme. As computational and characterization capabilities continue to improve, we envision that hybrid multiscale modeling will become more common and may become a viable alternative to conventional single-scale models in the near future.

  7. An analysis platform for multiscale hydrogeologic modeling with emphasis on hybrid multiscale methods.

    PubMed

    Scheibe, Timothy D; Murphy, Ellyn M; Chen, Xingyuan; Rice, Amy K; Carroll, Kenneth C; Palmer, Bruce J; Tartakovsky, Alexandre M; Battiato, Ilenia; Wood, Brian D

    2015-01-01

    One of the most significant challenges faced by hydrogeologic modelers is the disparity between the spatial and temporal scales at which fundamental flow, transport, and reaction processes can best be understood and quantified (e.g., microscopic to pore scales and seconds to days) and at which practical model predictions are needed (e.g., plume to aquifer scales and years to centuries). While the multiscale nature of hydrogeologic problems is widely recognized, technological limitations in computation and characterization restrict most practical modeling efforts to fairly coarse representations of heterogeneous properties and processes. For some modern problems, the necessary level of simplification is such that model parameters may lose physical meaning and model predictive ability is questionable for any conditions other than those to which the model was calibrated. Recently, there has been broad interest across a wide range of scientific and engineering disciplines in simulation approaches that more rigorously account for the multiscale nature of systems of interest. In this article, we review a number of such approaches and propose a classification scheme for defining different types of multiscale simulation methods and those classes of problems to which they are most applicable. Our classification scheme is presented in terms of a flowchart (Multiscale Analysis Platform), and defines several different motifs of multiscale simulation. Within each motif, the member methods are reviewed and example applications are discussed. We focus attention on hybrid multiscale methods, in which two or more models with different physics described at fundamentally different scales are directly coupled within a single simulation. Very recently these methods have begun to be applied to groundwater flow and transport simulations, and we discuss these applications in the context of our classification scheme. As computational and characterization capabilities continue to improve, we envision that hybrid multiscale modeling will become more common and also a viable alternative to conventional single-scale models in the near future. © 2014, National Ground Water Association.

  8. The scale invariant generator technique for quantifying anisotropic scale invariance

    NASA Astrophysics Data System (ADS)

    Lewis, G. M.; Lovejoy, S.; Schertzer, D.; Pecknold, S.

    1999-11-01

    Scale invariance is rapidly becoming a new paradigm for geophysics. However, little attention has been paid to the anisotropy that is invariably present in geophysical fields in the form of differential stratification and rotation, texture and morphology. In order to account for scaling anisotropy, the formalism of generalized scale invariance (GSI) was developed. Until now there has existed only a single, fairly ad hoc GSI analysis technique valid for studying differential rotation. In this paper, we use a two-dimensional representation of the linear approximation to generalized scale invariance to obtain a much improved technique for quantifying anisotropic scale invariance, called the scale invariant generator technique (SIG). The accuracy of the technique is tested using anisotropic multifractal simulations, and error estimates are provided for the geophysically relevant range of parameters. It is found that the technique yields reasonable estimates for simulations with a diversity of anisotropic and statistical characteristics. The scale invariant generator technique can profitably be applied to the scale-invariant study of vertical/horizontal and space/time cross-sections of geophysical fields, as well as to the study of the texture/morphology of fields.

  9. Perturbed redshifts from N-body simulations

    NASA Astrophysics Data System (ADS)

    Adamek, Julian

    2018-01-01

    In order to keep pace with the increasing data quality of astronomical surveys, the observed source redshift has to be modeled beyond the well-known Doppler contribution. In this article I examine the gauge issue that is often glossed over when one assigns a perturbed redshift to simulated data generated with a Newtonian N-body code. A careful analysis reveals the presence of a correction term that has so far been neglected. It is roughly proportional to the observed length scale divided by the Hubble scale and is therefore suppressed inside the horizon. However, on gigaparsec scales it can be comparable to the gravitational redshift and hence amounts to an important relativistic effect.

  10. Implementation and evaluation of an interprofessional simulation-based education program for undergraduate nursing students in operating room nursing education: a randomized controlled trial.

    PubMed

    Wang, Rongmei; Shi, Nianke; Bai, Jinbing; Zheng, Yaguang; Zhao, Yue

    2015-07-09

    The present study was designed to implement an interprofessional simulation-based education program for nursing students and evaluate the influence of this program on nursing students' attitudes toward interprofessional education and knowledge about operating room nursing. Nursing students were randomly assigned to either the interprofessional simulation-based education or traditional course group. A before-and-after study of nursing students' attitudes toward the program was conducted using the Readiness for Interprofessional Learning Scale. Responses to an open-ended question were categorized using thematic content analysis. Nursing students' knowledge about operating room nursing was measured. Nursing students from the interprofessional simulation-based education group showed statistically different responses to four of the nineteen questions in the Readiness for Interprofessional Learning Scale, reflecting a more positive attitude toward interprofessional learning. This was also supported by thematic content analysis of the open-ended responses. Furthermore, nursing students in the simulation-based education group had a significant improvement in knowledge about operating room nursing. The integrated course with interprofessional education and simulation provided a positive impact on undergraduate nursing students' perceptions toward interprofessional learning and knowledge about operating room nursing. Our study demonstrated that this course may be a valuable elective option for undergraduate nursing students in operating room nursing education.

  11. Processor farming in two-level analysis of historical bridge

    NASA Astrophysics Data System (ADS)

    Krejčí, T.; Kruis, J.; Koudelka, T.; Šejnoha, M.

    2017-11-01

    This contribution presents a processor farming method in connection with a multi-scale analysis. In this method, each macroscopic integration point or each finite element is connected with a certain mesoscopic problem represented by an appropriate representative volume element (RVE). The solution of a meso-scale problem then provides the effective parameters needed on the macro-scale. Such an analysis is suitable for parallel computing because the meso-scale problems can be distributed among many processors. The application of the processor farming method to a real-world masonry structure is illustrated by an analysis of Charles Bridge in Prague. The three-dimensional numerical model simulates the coupled heat and moisture transfer in one half of arch No. 3, and it is part of a complex hygro-thermo-mechanical analysis developed to determine the influence of climatic loading on the current state of the bridge.

  12. Statistical Analysis of Large Scale Structure by the Discrete Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Pando, Jesus

    1997-10-01

    The discrete wavelet transform (DWT) is developed as a general statistical tool for the study of large scale structures (LSS) in astrophysics. The DWT is used in all aspects of structure identification, including cluster analysis, spectrum and two-point correlation studies, scale-scale correlation analysis, and measurements of deviations from Gaussian behavior. The techniques developed are demonstrated on 'academic' signals, on simulated models of the Lyman-α (Ly-α) forests, and on observational data of the Ly-α forests. This technique can detect clustering in the Ly-α clouds where traditional techniques such as the two-point correlation function have failed. The position and strength of these clusters in both real and simulated data are determined, and it is shown that clusters exist on scales of at least 20 h⁻¹ Mpc at significance levels of 2-4 σ. Furthermore, it is found that the strength distribution of the clusters can be used to distinguish between real data and simulated samples even where other traditional methods have failed to detect differences. Second, a method for measuring the power spectrum of a density field using the DWT is developed. All common features determined by the usual Fourier power spectrum can be calculated by the DWT. These features, such as the index of a power law or typical scales, can be detected even when the samples are geometrically complex, the samples are incomplete, or the mean density on larger scales is not known (the infrared uncertainty). Using this method, the spectra of Ly-α forests in both simulated and real samples are calculated. Third, a method for measuring hierarchical clustering is introduced. Because hierarchical evolution is characterized by a set of rules for how larger dark matter halos are formed by the merging of smaller halos, scale-scale correlations of the density field should be one of the most sensitive quantities for determining the merging history. We show that these correlations can be completely determined by the correlations between discrete wavelet coefficients on adjacent scales and at nearly the same spatial position, $C^{2,2}_{j,j+1}$. Scale-scale correlations for two samples of QSO Ly-α forest absorption spectra are computed. Lastly, higher-order statistics are developed to detect deviations from Gaussian behavior. These higher-order statistics are necessary to fully characterize the Ly-α forests because the usual second-order statistics, such as the two-point correlation function or power spectrum, give inconclusive results. It is shown how this technique takes advantage of the locality of the DWT to circumvent the central limit theorem. A non-Gaussian spectrum is defined, and this spectrum reveals not only the magnitude but also the scales of non-Gaussianity. When applied to simulated and observational samples of the Ly-α clouds, it is found that different popular models of structure formation have different spectra, while two independent observational data sets have the same spectra. Moreover, the non-Gaussian spectra of real data sets are significantly different from the spectra of various possible random samples. (Abstract shortened by UMI.)
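
    The two basic ingredients, a per-scale DWT "power spectrum" and a scale-scale correlation between adjacent levels, can be sketched in a few lines with PyWavelets (a toy 1D field and the db2 wavelet chosen purely for illustration; the thesis's normalizations are not reproduced here):

    ```python
    # DWT power per scale and an adjacent-scale correlation of squared coefficients.
    import numpy as np
    import pywt

    rng = np.random.default_rng(2)
    field = np.cumsum(rng.standard_normal(4096))    # toy correlated 1D "density" field

    coeffs = pywt.wavedec(field, "db2", level=8)
    detail = coeffs[1:]                             # detail coefficients, coarsest first

    for j, d in enumerate(detail):
        print(f"level {j} (coarse -> fine): DWT power {np.mean(d ** 2):.3f}")

    # scale-scale correlation: each coarse coefficient overlaps two finer ones
    # at (nearly) the same spatial position
    d0, d1 = detail[0], detail[1]
    m = min(2 * len(d0), len(d1))
    c = np.corrcoef(np.repeat(d0, 2)[:m] ** 2, d1[:m] ** 2)[0, 1]
    print("correlation of squared coefficients on adjacent scales:", round(c, 3))
    ```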

  13. LAI (in situ, simulated, Landsat-derived, and MODIS): A comparison within an Oak-Hickory Forest Complex in southwestern Virginia, USA.

    EPA Science Inventory

    The United States Environmental Protection Agency’s Environmental Sciences and Atmospheric Modeling Analysis Divisions are investigating the viability of simulated (i.e., ‘modeled’) leaf area index (LAI) inputs into various regional and local scale air quality models. Satellite L...

  14. Re'class'ification of 'quant'ified classical simulated annealing

    NASA Astrophysics Data System (ADS)

    Tanaka, Toshiyuki

    2009-12-01

    We discuss a classical reinterpretation, based on the quantum-classical correspondence, of the quantum-mechanics-based analysis of classical Markov chains with detailed balance. The classical reinterpretation is then used to demonstrate that it successfully reproduces a sufficient condition on the cooling schedule in classical simulated annealing, namely the inverse-logarithmic scaling.

  15. TEMPORAL FEATURES IN OBSERVED AND SIMULATED METEOROLOGY AND AIR QUALITY OVER THE EASTERN UNITED STATES

    EPA Science Inventory

    In this study, temporal scale analysis is applied as a technique to evaluate an annual simulation of meteorology, O3, and PM2.5 and its chemical components over the continental U.S. utilizing two modeling systems. It is illustrated that correlations were ins...

  16. The new ATLAS Fast Calorimeter Simulation

    NASA Astrophysics Data System (ADS)

    Schaarschmidt, J.; ATLAS Collaboration

    2017-10-01

    Current and future needs for large-scale simulated samples motivate the development of reliable fast simulation techniques. The new Fast Calorimeter Simulation is an improved parameterized response of single particles in the ATLAS calorimeter that aims to accurately emulate the key features of the detailed calorimeter response as simulated with Geant4, yet approximately ten times faster. Principal component analysis and machine learning techniques are used to improve the performance and decrease the memory footprint compared to the current version of the ATLAS Fast Calorimeter Simulation. A prototype of this new Fast Calorimeter Simulation is in development, and its integration into the ATLAS simulation infrastructure is ongoing.

  17. Open-source framework for power system transmission and distribution dynamics co-simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Renke; Fan, Rui; Daily, Jeff

    The promise of the smart grid entails more interactions between the transmission and distribution networks, and there is an immediate need for tools to provide the comprehensive modelling and simulation required to integrate operations at both transmission and distribution levels. Existing electromagnetic transient simulators can perform simulations with integration of transmission and distribution systems, but the computational burden is high for large-scale system analysis. For transient stability analysis, currently there are only separate tools for simulating transient dynamics of the transmission and distribution systems. In this paper, we introduce an open-source co-simulation framework, the Framework for Network Co-Simulation (FNCS), together with the decoupled simulation approach that links existing transmission and distribution dynamic simulators through FNCS. FNCS is a middleware interface and framework that manages the interaction and synchronization of the transmission and distribution simulators. Preliminary testing results show the validity and capability of the proposed open-source co-simulation framework and the decoupled co-simulation methodology.

  18. CFD analysis of a full-scale ceramic kiln module under actual operating conditions

    NASA Astrophysics Data System (ADS)

    Milani, Massimo; Montorsi, Luca; Stefani, Matteo; Venturelli, Matteo

    2017-11-01

    The paper focuses on the CFD analysis of a full-scale module of an industrial ceramic kiln under actual operating conditions. The multi-dimensional analysis includes the real geometry of a ceramic kiln module employed in the preheating and firing sections and investigates the heat transfer between the tiles and the burners' flame, as well as the many components that comprise the module. Particular attention is devoted to the simulation of the convective flow field in the upper and lower chambers and to the effects of radiation on the different materials. Assessing the radiation contribution to the tile temperature is paramount to improving the performance of the kiln in terms of energy efficiency and fuel consumption. The CFD analysis is combined with a lumped- and distributed-parameter model of the entire kiln in order to simulate the module behaviour at the boundaries under actual operating conditions. Finally, the CFD simulation is employed to address the effects of the module operating conditions on the tile temperature distribution, in order to improve temperature uniformity, enhance the energy efficiency of the system, and thus reduce fuel consumption.

  19. Influence of foam on the stability characteristics of immiscible flow in porous media

    NASA Astrophysics Data System (ADS)

    van der Meer, J. M.; Farajzadeh, R.; Rossen, W. R.; Jansen, J. D.

    2018-01-01

    Accurate field-scale simulations of foam enhanced oil recovery are challenging due to the sharp transition between gas and foam. Hence, unpredictable numerical and physical behavior is often observed, casting doubt on the validity of the simulation results. In this paper, a thorough stability analysis of the foam model is presented to validate the simulation results. We study the effect of a strongly non-monotonic total mobility function arising from foam models on the stability characteristics of the flow. To this end, we apply linear stability analysis to nearly discontinuous relative permeability functions and compare the results with those of highly accurate numerical simulations. In addition, we present a qualitative analysis of the effect of different reservoir and fluid properties on the foam fingering behavior. In particular, we consider the effect of heterogeneity of the reservoir, injection rates, and foam quality. Relative permeability functions play an important role in the onset of fingering behavior of the injected fluid; hence, stability properties are highly dependent on the non-linearity of the foam transition. The foam-water interface is governed by a very small total mobility ratio, implying a stable front. The transition between gas and foam, however, exhibits a very large total mobility ratio, leading to instabilities in the form of viscous fingering. This implies that there is an unstable pattern behind the front. We deduce that instabilities are able to grow behind the front but are later absorbed by the expanding wave. Moreover, the stability analysis, validated by numerical simulations, provides valuable insights into the important scales and wavelengths of the foam model. In this way, we remove the ambiguity regarding the effect of grid resolution on the convergence of the solutions. This insight forms an essential step toward the design of a suitable computational solver that captures all the appropriate scales while retaining computational efficiency.

  20. Performance assessment of retrospective meteorological inputs for use in air quality modeling during TexAQS 2006

    NASA Astrophysics Data System (ADS)

    Ngan, Fong; Byun, Daewon; Kim, Hyuncheol; Lee, Daegyun; Rappenglück, Bernhard; Pour-Biazar, Arastoo

    2012-07-01

    To obtain more accurate meteorological inputs than those used in the daily forecast for studying TexAQS 2006 air quality, retrospective simulations were conducted using objective analysis and 3D/surface analysis nudging with surface and upper-air observations. Modeled ozone based on the assimilated meteorological fields, with improved wind fields, shows better agreement with observations than the forecast results. In post-frontal conditions, the important factors for ozone modeling in terms of wind patterns are the weak easterlies in the morning, which bring industrial emissions into the city, and the subsequent clockwise turning of the wind direction induced by the Coriolis force superimposed on the sea breeze, which keeps pollutants in the urban area. Objective analysis and nudging employed in the retrospective simulation minimize the wind bias but cannot compensate for general flow pattern biases inherited from the large-scale inputs. By using alternative analysis data to initialize the meteorological simulation, the model can reproduce the flow pattern and place the simulated ozone peak closer to its observed location. Inaccurate simulation of precipitation and cloudiness occasionally causes over-prediction of ozone. Since the meteorological model has limited ability to simulate precipitation and cloudiness in the fine-scale domain (grids finer than 4 km), satellite-based cloud observations are an alternative way to provide the necessary inputs for retrospective air quality studies.

  1. Similarities between principal components of protein dynamics and random diffusion

    NASA Astrophysics Data System (ADS)

    Hess, Berk

    2000-12-01

    Principal component analysis, also called essential dynamics, is a powerful tool for finding global, correlated motions in atomic simulations of macromolecules. It has become an established technique for analyzing molecular dynamics simulations of proteins. The first few principal components of simulations of large proteins often resemble cosines. We derive the principal components for high-dimensional random diffusion, which are almost perfect cosines. This resemblance between protein simulations and noise implies that for many proteins the time scales of current simulations are too short to obtain convergence of collective motions.
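
    This result is easy to verify numerically. The sketch below (pure numpy, our construction) builds a high-dimensional random walk, diagonalizes its covariance, and checks that the projection on the first principal component is nearly a half-period cosine:

    ```python
    # PCA of high-dimensional random diffusion: the leading PC resembles a cosine.
    import numpy as np

    rng = np.random.default_rng(3)
    n_steps, n_dim = 2000, 100
    traj = np.cumsum(rng.standard_normal((n_steps, n_dim)), axis=0)  # random diffusion
    traj -= traj.mean(axis=0)

    cov = traj.T @ traj / n_steps               # covariance over the coordinates
    w, v = np.linalg.eigh(cov)
    pc1 = traj @ v[:, -1]                       # projection on the first PC

    t = np.arange(n_steps)
    cosine = np.cos(np.pi * t / n_steps)        # half a cosine period
    corr = np.corrcoef(pc1, cosine)[0, 1]
    print(f"|correlation| of PC1 with a half-period cosine: {abs(corr):.3f}")
    ```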

  2. Simulators for Maintenance Training: Some Issues, Problems and Areas for Future Research

    DTIC Science & Technology

    1978-07-01

    ... trainer into a full-scale, three-dimensional simulation of one cabinet of the NIKE HIPAR system. Test points for troubleshooting were located on simulated ... described was used to teach maintenance of the NIKE HIPAR system. It too was considered to be a general purpose trainer in that its basic features could be ... types of maintenance simulators based on a detailed task analysis of the NIKE HIPAR system as it existed one year before it was scheduled to become ...

  3. Wind-tunnel simulation of store jettison with the aid of magnetic artificial gravity

    NASA Technical Reports Server (NTRS)

    Stephens, T.; Adams, R.

    1972-01-01

    A method employed in the simulation of jettison of stores from aircraft involving small scale wind-tunnel drop tests from a model of the parent aircraft is described. Proper scaling of such experiments generally dictates that the gravitational acceleration should ideally be a test variable. A method of introducing a controllable artificial component of gravity by magnetic means has been proposed. The use of a magnetic artificial gravity facility based upon this idea, in conjunction with small scale wind-tunnel drop tests, would improve the accuracy of simulation. A review of the scaling laws as they apply to the design of such a facility is presented. The design constraints involved in the integration of such a facility with a wind tunnel are defined. A detailed performance analysis procedure applicable to such a facility is developed. A practical magnet configuration is defined which is capable of controlling the strength and orientation of the magnetic artificial gravity field in the vertical plane, thereby allowing simulation of store jettison from a diving or climbing aircraft. The factors involved in the choice between continuous or intermittent operation of the facility, and the use of normal or superconducting magnets, are defined.

  4. Discrete Event Modeling and Massively Parallel Execution of Epidemic Outbreak Phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Seal, Sudip K

    2011-01-01

    In complex phenomena such as epidemiological outbreaks, the intensity of inherent feedback effects and the significant role of transients in the dynamics make simulation the only effective method for proactive, reactive, or post-facto analysis. The spatial scale, runtime speed, and behavioral detail needed in detailed simulations of epidemic outbreaks make it necessary to use large-scale parallel processing. Here, an optimistic parallel execution of a new discrete event formulation of a reaction-diffusion simulation model of epidemic propagation is presented to dramatically increase the fidelity and speed with which epidemiological simulations can be performed. Rollback support needed during optimistic parallel execution is achieved by combining reverse computation with a small amount of incremental state saving. A parallel speedup of over 5,500 and other runtime performance metrics of the system are observed with weak-scaling execution on a small (8,192-core) Blue Gene/P system, while scalability with a weak-scaling speedup of over 10,000 is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes, exceeding several hundred million individuals in the largest cases, are successfully exercised to verify model scalability.

  5. Effect of filter type on the statistics of energy transfer between resolved and subfilter scales from a-priori analysis of direct numerical simulations of isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Buzzicotti, M.; Linkmann, M.; Aluie, H.; Biferale, L.; Brasseur, J.; Meneveau, C.

    2018-02-01

    The effects of different filtering strategies on the statistical properties of the resolved-to-subfilter-scale (SFS) energy transfer are analysed in forced homogeneous and isotropic turbulence. We carry out a-priori analyses of the statistical characteristics of SFS energy transfer by filtering data obtained from direct numerical simulations with up to 2048³ grid points, as a function of the filter cutoff scale. In order to quantify the dependence of extreme events and anomalous scaling on the filter, we compare a sharp Fourier Galerkin projector, a Gaussian filter, and a novel class of Galerkin projectors with non-sharp spectral filter profiles. Galilean invariance is of particular interest, and we confirm that local SFS energy transfer displays intermittency scaling in both skewness and flatness as a function of the cutoff scale. Furthermore, we quantify the robustness of the scaling as a function of the filtering type.
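
    The two canonical filter types being compared can be written in a few lines of spectral code. The sketch below (our 1D illustration, not the paper's 3D analysis) applies a sharp Galerkin cutoff and a Gaussian transfer function to the same periodic signal via the FFT; the choice of signal, cutoff, and Gaussian width are arbitrary:

    ```python
    # Sharp spectral (Galerkin) cutoff vs. Gaussian filter, applied via FFT.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 1024
    u = rng.standard_normal(n)
    u_hat = np.fft.rfft(u)
    k = np.fft.rfftfreq(n, d=1.0 / n)           # integer wavenumbers 0..n/2
    k_c = 64                                    # cutoff wavenumber

    sharp = u_hat * (k <= k_c)                  # sharp Fourier Galerkin projector
    gauss = u_hat * np.exp(-(k / k_c) ** 2 / 2) # Gaussian transfer function

    u_sharp = np.fft.irfft(sharp, n)
    u_gauss = np.fft.irfft(gauss, n)
    print("resolved variance, sharp vs Gaussian: %.3f  %.3f"
          % (u_sharp.var(), u_gauss.var()))
    ```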

  6. Scaling and stochastic cascade properties of NEMO oceanic simulations and their potential value for GCM evaluation and downscaling

    NASA Astrophysics Data System (ADS)

    Verrier, Sébastien; Crépon, Michel; Thiria, Sylvie

    2014-09-01

    Spectral scaling properties have already been evidenced in oceanic numerical simulations and have been subject to several interpretations. They can be used to evaluate classical turbulence theories, which predict scaling with specific exponents, and to evaluate the quality of GCM outputs from a statistical and multiscale point of view. However, a more complete framework based on multifractal cascades can generalize the classical but restrictive second-order spectral framework to other moment orders, providing an accurate description of the probability distributions of the fields at multiple scales. The predictions of this formalism still needed systematic verification in oceanic GCMs, while they have recently been confirmed for their atmospheric counterparts in several papers. The present paper is devoted to a systematic analysis of several oceanic fields produced by the NEMO oceanic GCM. Attention is focused on regional, idealized configurations that permit evaluation of the NEMO engine core from a scaling point of view, without the limitations introduced by land masks. Based on classical multifractal analysis tools, multifractal properties were evidenced for several oceanic state variables (sea surface temperature and salinity, velocity components, etc.). While first-order structure functions estimated a different nonconservativity parameter H in two scaling ranges, the multi-order statistics of turbulent fluxes were scaling over almost the whole available range. This multifractal scaling was then parameterized with the help of the universal multifractal framework, providing parameters that are coherent with the existing empirical literature. Finally, we argue that knowledge of these properties may be useful for oceanographers. The framework seems well suited to the statistical evaluation of OGCM outputs. Moreover, it also provides practical solutions for simulating subpixel variability stochastically for GCM downscaling purposes. As an independent perspective, the existence of multifractal properties in oceanic flows also seems relevant for investigating scale dependencies in remote sensing inversion algorithms.
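
    The basic multi-order analysis mentioned here can be sketched with structure functions: the q-th order moments of increments versus lag, whose log-log slopes give the scaling exponents ζ(q). The toy input below is a monofractal random walk (so ζ(q) ≈ q/2); a multifractal field would show a concave ζ(q). This is our generic illustration, not the paper's NEMO analysis:

    ```python
    # q-th order structure functions and their scaling exponents zeta(q).
    import numpy as np

    rng = np.random.default_rng(5)
    x = np.cumsum(rng.standard_normal(2 ** 16))     # toy H = 0.5 signal

    lags = 2 ** np.arange(1, 10)
    for q in (1, 2, 3, 4):
        m = [np.mean(np.abs(x[l:] - x[:-l]) ** q) for l in lags]
        zeta_q = np.polyfit(np.log(lags), np.log(m), 1)[0]
        print(f"zeta({q}) = {zeta_q:.2f}  (expected ~ {q / 2:.1f} for Brownian motion)")
    ```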

  7. Multi-scale Modeling of Radiation Damage: Large Scale Data Analysis

    NASA Astrophysics Data System (ADS)

    Warrier, M.; Bhardwaj, U.; Bukkuru, S.

    2016-10-01

    Modification of materials in nuclear reactors due to neutron irradiation is a multiscale problem. Neutrons passing through a material create energetic primary knock-on atoms (PKAs) that cause localized collision cascades, producing damage tracks, defects (interstitials and vacancies) and defect clusters, depending on the energy of the PKA. These defects diffuse and recombine throughout the whole duration of reactor operation, thereby changing the microstructure of the material and its properties. It is therefore desirable to develop predictive computational tools to simulate the microstructural changes of irradiated materials. In this paper we describe how statistical averages of the collision cascades from thousands of MD simulations are used to provide inputs to kinetic Monte Carlo (KMC) simulations, which can handle larger system sizes, more defects and longer time durations. The use of unsupervised learning and graph optimization in handling and analyzing large-scale MD data is highlighted.

  8. A History of Full-Scale Aircraft and Rotorcraft Crash Testing and Simulation at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Boitnott, Richard L.; Fasanella, Edwin L.; Jones, Lisa E.; Lyle, Karen H.

    2004-01-01

    This paper summarizes 2-1/2 decades of full-scale aircraft and rotorcraft crash testing performed at the Impact Dynamics Research Facility (IDRF) located at NASA Langley Research Center in Hampton, Virginia. The IDRF is a 240-ft.-high steel gantry that was built originally as a lunar landing simulator facility in the early 1960s. It was converted into a full-scale crash test facility for light aircraft and rotorcraft in the early 1970s. Since the first full-scale crash test was performed in February 1974, the IDRF has been used to conduct: 41 full-scale crash tests of General Aviation (GA) aircraft, including landmark studies to establish baseline crash performance data for metallic and composite GA aircraft; 11 full-scale crash tests of helicopters, including crash qualification tests of the Bell and Sikorsky Advanced Composite Airframe Program (ACAP) prototypes; 48 Wire Strike Protection System (WSPS) qualification tests of Army helicopters; 3 vertical drop tests of Boeing 707 transport aircraft fuselage sections; and 60+ crash tests of the F-111 crew escape module. For some of these tests, nonlinear transient dynamic codes were utilized to simulate the impact response of the airframe. These simulations were performed to evaluate the capabilities of the analytical tools, as well as to validate the models through test-analysis correlation. In September 2003, NASA Langley closed the IDRF facility, and plans are underway to demolish it in 2007. Consequently, it is important to document the contributions made to improving the crashworthiness of light aircraft and rotorcraft achieved through full-scale crash testing and simulation at the IDRF.

  9. Scale Dependence of Land Atmosphere Interactions in Wet and Dry Regions as Simulated with NU-WRF over the Southwestern and Southeast US

    NASA Technical Reports Server (NTRS)

    Zhou, Yaping; Wu, Di; Lau, K.- M.; Tao, Wei-Kuo

    2016-01-01

    The influence of large-scale forcing and land-atmosphere interactions on precipitation is investigated with NASA-Unified WRF (NU-WRF) simulations during the fast transitions of ENSO phases from spring to early summer of 2010 and 2011. The model is found to capture the major precipitation episodes in the 3-month simulations without resorting to nudging. However, the mean intensity of the simulated precipitation is underestimated by 46% and 57% compared with observations in the dry and wet regions of the southwestern and south-central United States, respectively. Sensitivity studies show that large-scale atmospheric forcing plays a major role in producing regional precipitation. A methodology to account for moisture contributions to individual precipitation events, as well as to total precipitation, is presented within the same moisture budget framework. The analysis shows that the relative contributions of local evaporation and large-scale moisture convergence depend on whether the region is dry or wet and are a function of temporal and spatial scales. While the ratio of local to large-scale moisture contributions varies with domain size and weather system, evaporation provides a major moisture source in the dry region and during light rain events, which leads to greater sensitivity to soil moisture in those cases. The feedback of land surface processes to large-scale forcing is well simulated, as indicated by changes in atmospheric circulation and moisture convergence. Overall, the results reveal an asymmetrical response of precipitation events to soil moisture, with higher sensitivity under dry than under wet conditions. Drier soil tends to further suppress existing below-normal precipitation via a positive soil moisture-land surface flux feedback that could worsen drought conditions in the southwestern United States.

  10. A Framework for Daylighting Optimization in Whole Buildings with OpenStudio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2016-08-12

    We present a toolkit and workflow for leveraging the OpenStudio (Guglielmetti et al. 2010) platform to perform daylighting analysis and optimization in a whole-building energy modeling (BEM) context. We have re-implemented OpenStudio's integrated Radiance and EnergyPlus functionality as an OpenStudio Measure. The OpenStudio Radiance Measure works within the OpenStudio Application and Parametric Analysis Tool, as well as the OpenStudio Server large-scale analysis framework, allowing a rigorous daylighting simulation to be performed on a single building model or potentially an entire population of programmatically generated models. The Radiance simulation results can automatically inform the broader building energy model and provide dynamic daylight metrics as a basis for decision-making. Through introduction and example, this paper illustrates the utility of the OpenStudio building energy modeling platform to leverage existing simulation tools for integrated building energy performance simulation, daylighting analysis, and reporting.

  11. Numerical models for fluid-grains interactions: opportunities and limitations

    NASA Astrophysics Data System (ADS)

    Esteghamatian, Amir; Rahmani, Mona; Wachs, Anthony

    2017-06-01

    In the framework of a multi-scale approach, we develop numerical models for suspension flows. At the micro scale, we perform particle-resolved numerical simulations using a Distributed Lagrange Multiplier/Fictitious Domain approach. At the meso scale, we use a two-way Euler/Lagrange approach with a Gaussian filtering kernel to model fluid-solid momentum transfer. At both the micro and meso scales, particles are individually tracked in a Lagrangian way, and all inter-particle collisions are computed by a Discrete Element/Soft-sphere method. These numerical models have been extended to handle particles of arbitrary shape (non-spherical, angular and even non-convex) as well as to treat heat and mass transfer. All simulation tools are fully MPI-parallel with standard domain decomposition and run on supercomputers with satisfactory scalability on up to a few thousand cores. The main asset of multi-scale analysis is the ability to extend our understanding of the dynamics of suspension flows based on the knowledge acquired from high-fidelity micro-scale simulations, and to use that knowledge to improve the meso-scale model. We illustrate how we can benefit from this strategy for a fluidized bed, where we introduce a stochastic drag force model derived from micro-scale simulations to recover the proper level of particle fluctuations. Conversely, we discuss the limitations of such modelling tools, such as their limited ability to capture lubrication forces and boundary layers in highly inertial flows. We suggest ways to overcome these limitations in order to further enhance the capabilities of the numerical models.

  12. Multi-Scale Impact and Compression-After-Impact Modeling of Reinforced Benzoxazine/Epoxy Composites using Micromechanics Approach

    NASA Astrophysics Data System (ADS)

    Montero, Marc Villa; Barjasteh, Ehsan; Baid, Harsh K.; Godines, Cody; Abdi, Frank; Nikbin, Kamran

    A multi-scale micromechanics approach, along with a finite element (FE) predictive tool, is developed to analyze the low-energy-impact damage footprint and compression-after-impact (CAI) behavior of composite laminates, and is tested and verified against experimental data. Effective fiber and matrix properties were reverse-engineered from lamina properties using an optimization algorithm and used to assess damage at the micro level during impact and post-impact FE simulations. Progressive failure dynamic analysis (PFDA) was performed for a two-step process simulation. Damage mechanisms at the micro level were continuously evaluated during the analyses. The contribution of each failure mode was tracked during the simulations, and the damage and delamination footprint size and shape were predicted to understand when, where, and why failure occurred during both impact and CAI events. The composite laminate was manufactured by vacuum infusion of an aero-grade toughened benzoxazine system into the fabric preform. The delamination footprint was measured using C-scan data from the impacted panels and compared with the predicted values obtained from the proposed multi-scale micromechanics approach coupled with FE analysis. Furthermore, the residual strength was predicted from the load-displacement curve and compared with the experimental values.

  13. Slow dynamics in protein fluctuations revealed by time-structure based independent component analysis: The case of domain motions

    NASA Astrophysics Data System (ADS)

    Naritomi, Yusuke; Fuchigami, Sotaro

    2011-02-01

    Protein dynamics on a long time scale was investigated using all-atom molecular dynamics (MD) simulation and time-structure based independent component analysis (tICA). We selected the lysine-, arginine-, ornithine-binding protein (LAO) as a target protein and focused on its domain motions in the open state. An MD simulation of the LAO in explicit water was performed for 600 ns, in which slow and large-amplitude domain motions of the LAO were observed. After extracting domain motions by rigid-body domain analysis, tICA was applied to the obtained rigid-body trajectory, yielding slow modes of the LAO's domain motions in order of decreasing time scale. The slowest mode detected by tICA was not the closure motion described by the largest-amplitude mode from principal component analysis, but a twist motion with a time scale of tens of nanoseconds. The slow dynamics of the LAO were well described by only the slowest mode and were characterized by transitions between two basins. The results show that tICA is promising for describing and analyzing slow dynamics of proteins.
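
    The core of tICA is a generalized eigenproblem on instantaneous and time-lagged covariance matrices. The following bare-bones sketch (ours, on a synthetic two-coordinate trajectory with one slow and one fast process, not the LAO data) shows how tICA ranks modes by implied timescale rather than by amplitude:

    ```python
    # Minimal tICA: solve Ct v = lam C0 v and convert eigenvalues to timescales.
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(6)
    n, tau = 100_000, 100
    eps = rng.standard_normal((n, 2))
    slow = np.zeros(n)
    fast = np.zeros(n)
    for i in range(1, n):                       # two AR(1) processes: long and short memory
        slow[i] = 0.999 * slow[i - 1] + eps[i, 0]
        fast[i] = 0.50 * fast[i - 1] + eps[i, 1]
    X = np.column_stack([slow, fast])
    X -= X.mean(axis=0)

    C0 = X.T @ X / n                            # instantaneous covariance
    Ct = X[:-tau].T @ X[tau:] / (n - tau)       # time-lagged covariance
    Ct = (Ct + Ct.T) / 2                        # symmetrize

    lams, vecs = eigh(Ct, C0)                   # generalized eigenproblem
    lams = np.clip(lams, 1e-12, 1 - 1e-12)
    print("implied timescales (steps):", np.sort(-tau / np.log(lams))[::-1])
    ```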

  14. Slow dynamics in protein fluctuations revealed by time-structure based independent component analysis: the case of domain motions.

    PubMed

    Naritomi, Yusuke; Fuchigami, Sotaro

    2011-02-14

    Protein dynamics on a long time scale was investigated using all-atom molecular dynamics (MD) simulation and time-structure based independent component analysis (tICA). We selected the lysine-, arginine-, ornithine-binding protein (LAO) as a target protein and focused on its domain motions in the open state. An MD simulation of the LAO in explicit water was performed for 600 ns, in which slow and large-amplitude domain motions of the LAO were observed. After extracting domain motions by rigid-body domain analysis, tICA was applied to the obtained rigid-body trajectory, yielding slow modes of the LAO's domain motions in order of decreasing time scale. The slowest mode detected by tICA was not the closure motion described by the largest-amplitude mode from principal component analysis, but a twist motion with a time scale of tens of nanoseconds. The slow dynamics of the LAO were well described by only the slowest mode and were characterized by transitions between two basins. The results show that tICA is promising for describing and analyzing slow dynamics of proteins.

  15. Lee-Yang zero analysis for the study of QCD phase structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ejiri, Shinji

    2006-03-01

    We comment on the Lee-Yang zero analysis for the study of the phase structure of QCD at high temperature and baryon number density by Monte Carlo simulations. We find that the sign problem for nonzero-density QCD induces a serious problem in the finite-volume scaling analysis of the Lee-Yang zeros used to investigate the order of the phase transition. If the sign problem occurs at large volume, the Lee-Yang zeros will always approach the real axis of the complex parameter plane in the thermodynamic limit. This implies that a scaling behavior which would suggest a crossover transition will not be obtained. To clarify this problem, we discuss the Lee-Yang zero analysis for SU(3) pure gauge theory as a simple example without the sign problem, and then consider the case of nonzero-density QCD. It is suggested that the distribution of the Lee-Yang zeros in the complex parameter space obtained from each simulation may be more important information for the investigation of the critical endpoint in the $(T, \mu_q)$ plane than the finite-volume scaling behavior.
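
    The general mechanics of a Lee-Yang zero analysis can be illustrated on a trivially solvable system (our toy example, far simpler than lattice QCD): the grand partition function of a V-site non-interacting lattice gas is Z(z) = Σ_N C(V, N) z^N = (1 + z)^V, whose zeros all sit at fugacity z = -1. Because they never approach the positive real (physical) axis as V grows, no phase transition occurs; a transition would require the zeros to pinch the positive real axis in the thermodynamic limit.

    ```python
    # Zeros of a toy grand partition function in the complex fugacity plane.
    # Note: np.roots scatters the degenerate root of (1+z)^V numerically,
    # but the zeros stay well away from the positive real axis.
    import numpy as np
    from math import comb

    for V in (8, 16, 32):
        coeffs = [comb(V, N) for N in range(V + 1)]   # z^0 ... z^V
        zeros = np.roots(coeffs[::-1])                # highest degree first
        d = np.min(np.abs(zeros - 1.0))               # distance to the physical point z = 1
        print(f"V = {V:2d}: min |z - 1| = {d:.2f}")
    ```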

  16. Towards wall functions for the prediction of solute segregation in plane front directional solidification

    NASA Astrophysics Data System (ADS)

    Chatelain, M.; Rhouzlane, S.; Botton, V.; Albaric, M.; Henry, D.; Millet, S.; Pelletier, D.; Garandet, J. P.

    2017-10-01

    The present paper focuses on solute segregation occurring in directional solidification processes with a sharp solid/liquid interface, such as silicon crystal growth. A major difficulty for the simulation of such processes is their inherently multi-scale nature: the impurity segregation problem is controlled at the solute boundary layer scale (micrometers), while the thermal problem is governed at the crucible scale (meters). The thickness of the solute boundary layer is controlled by the convection regime and requires a specific refinement of the mesh of numerical models. In order to improve numerical simulations, wall functions describing solute boundary layers for convecto-diffusive regimes are derived from a scaling analysis. The aim of these wall functions is to obtain segregation profiles from purely thermo-hydrodynamic simulations, which do not require solute boundary layer refinement at the solid/liquid interface. Regarding industrial applications, various stirring techniques can be used to enhance segregation, leading to fully turbulent flows in the melt. In this context, the scaling analysis is further improved by taking into account the turbulent solute transport. The solute boundary layers predicted by the analytical model are compared to those obtained by transient segregation simulations in a canonical 2D lid driven cavity configuration for validation purposes. Convective regimes ranging from laminar to fully turbulent are considered. The influences of growth rate and molecular diffusivity are also investigated. Then, a procedure to predict concentration fields in the solid phase from a hydrodynamic simulation of the solidification process is proposed. This procedure is based on the analytical wall functions and on solute mass conservation. It only uses wall shear-stress profiles at the solidification front as input data. The 2D analytical concentration fields are directly compared to the results of the complete simulation of segregation in the lid driven cavity configuration. Finally, an additional output from the analytical model is also presented. We highlight the correlation between the convecto-diffusive behaviour of different species, and use it to propose an estimation method for the segregation parameters of various chemical species given the segregation parameters of one specific species.

  17. Generation of dense granular deposits for porosity analysis: assessment and application of large-scale non-smooth granular dynamics

    NASA Astrophysics Data System (ADS)

    Schruff, T.; Liang, R.; Rüde, U.; Schüttrumpf, H.; Frings, R. M.

    2018-01-01

    The knowledge of structural properties of granular materials such as porosity is highly important in many application-oriented and scientific fields. In this paper we present new results of computer-based packing simulations where we use the non-smooth granular dynamics (NSGD) method to simulate gravitational random dense packing of spherical particles with various particle size distributions and two types of depositional conditions. A bin packing scenario was used to compare simulation results to laboratory porosity measurements and to quantify the sensitivity of the NSGD regarding critical simulation parameters such as time step size. The results of the bin packing simulations agree well with laboratory measurements across all particle size distributions with all absolute errors below 1%. A large-scale packing scenario with periodic side walls was used to simulate the packing of up to 855,600 spherical particles with various particle size distributions (PSD). Simulation outcomes are used to quantify the effect of particle-domain-size ratio on the packing compaction. A simple correction model, based on the coordination number, is employed to compensate for this effect on the porosity and to determine the relationship between PSD and porosity. Promising accuracy and stability results paired with excellent computational performance recommend the application of NSGD for large-scale packing simulations, e.g. to further enhance the generation of representative granular deposits.
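
    For context, the porosity reported above can be checked directly from the particle positions of a simulated packing. A minimal Monte Carlo sketch (ours; the function and argument names are assumptions, and a production version would use a spatial index rather than a loop over all spheres):

```python
import numpy as np

def porosity(centers, radii, box_lo, box_hi, n_samples=200_000, seed=0):
    """Estimate the void fraction of a sphere packing inside an axis-aligned box."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(box_lo, box_hi, size=(n_samples, 3))
    inside = np.zeros(n_samples, dtype=bool)
    for c, r in zip(centers, radii):                # mark points inside any sphere
        inside |= np.sum((pts - c) ** 2, axis=1) <= r * r
    return 1.0 - inside.mean()                      # porosity = fraction of void space
```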

  18. Tests of peak flow scaling in simulated self-similar river networks

    USGS Publications Warehouse

    Menabde, M.; Veitzer, S.; Gupta, V.; Sivapalan, M.

    2001-01-01

    The effect of linear flow routing incorporating attenuation and network topology on peak flow scaling exponent is investigated for an instantaneously applied uniform runoff on simulated deterministic and random self-similar channel networks. The flow routing is modelled by a linear mass conservation equation for a discrete set of channel links connected in parallel and series, and having the same topology as the channel network. A quasi-analytical solution for the unit hydrograph is obtained in terms of recursion relations. The analysis of this solution shows that the peak flow has an asymptotically scaling dependence on the drainage area for deterministic Mandelbrot-Vicsek (MV) and Peano networks, as well as for a subclass of random self-similar channel networks. However, the scaling exponent is shown to be different from that predicted by the scaling properties of the maxima of the width functions. © 2001 Elsevier Science Ltd. All rights reserved.
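
    The scaling dependence mentioned here is commonly quantified by a log-log fit of peak flow against drainage area; a short sketch of that post-processing step (ours, not the paper's recursion relations):

```python
import numpy as np

def scaling_exponent(areas, peak_flows):
    """Fit Q_peak ~ c * A**phi; returns (phi, c)."""
    phi, log_c = np.polyfit(np.log(areas), np.log(peak_flows), 1)
    return phi, np.exp(log_c)

# Usage: phi, c = scaling_exponent(A, Q_peak)
# Compare phi against the exponent predicted by the width-function maxima.
```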

  19. The role of plasma density scale length on the laser pulse propagation and scattering in relativistic regime

    NASA Astrophysics Data System (ADS)

    Pishdast, Masoud; Ghasemi, Seyed Abolfazl; Yazdanpanah, Jamal Aldin

    2017-10-01

    The role of the plasma density scale length in the propagation and scattering of short and long laser pulses in underdense plasma has been investigated in the relativistic regime using 1D PIC simulations. Different density scale lengths and two pulse lengths, with temporal pulse durations τL = 60 fs (short) and τL = 300 fs (long), have been used. It is found that the laser pulse length and the density scale length have considerable effects on energetic electron generation. The analysis of the total radiation spectrum reveals that, for short laser pulses and with decreasing density scale length, more unstable electromagnetic modes grow and a strong longitudinal electric field is generated, which leads to the generation of more energetic plasma particles. Meanwhile, the dominant scattering mechanism is Raman scattering, shifting toward Thomson scattering for the longer laser pulse.

  20. A pilot study on the Chinese Minnesota Multiphasic Personality Inventory-2 in detecting feigned mental disorders: Simulators classified by using the Structured Interview of Reported Symptoms.

    PubMed

    Chang, Yi-Ting; Tam, Wai-Cheong C; Shiah, Yung-Jong; Chiang, Shih-Kuang

    2017-09-01

    The Minnesota Multiphasic Personality Inventory-2 (MMPI-2) is often used in forensic psychological/psychiatric assessment. This was a pilot study on the utility of the Chinese MMPI-2 in detecting feigned mental disorders. The sample consisted of 194 university students who were either simulators (informed or uninformed) or controls. All the participants were administered the Chinese MMPI-2 and the Structured Interview of Reported Symptoms-2 (SIRS-2). The results of the SIRS-2 were utilized to classify the participants into the feigning or control groups. The effectiveness of eight detection indices was investigated by using item analysis, multivariate analysis of covariance (MANCOVA), and receiver operating characteristic (ROC) analysis. Results indicated that informed-simulating participants with prior knowledge of mental disorders did not perform better in avoiding feigning detection than uninformed-simulating participants. In addition, the eight detection indices of the Chinese MMPI-2 were effective in discriminating participants in the feigning and control groups, and the best cut-off scores of three of the indices were higher than those obtained from the studies using the English MMPI-2. Thus, in this sample of university students, the utility of the Chinese MMPI-2 in detecting feigned mental disorders was tentatively supported, and the Chinese Infrequency Scale (ICH), a scale developed specifically for the Chinese MMPI-2, was also supported as a valid scale for validity checking. © 2017 The Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  1. Model-based Bayesian inference for ROC data analysis

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Bae, K. Ty

    2013-03-01

    This paper presents a study of model-based Bayesian inference applied to Receiver Operating Characteristic (ROC) data. The model is a simple version of the general non-linear regression model. Unlike the Dorfman model, it uses a probit link function with a zero-one covariate variable to express the binormal distributions in a single formula. The model also includes a scale parameter. Bayesian inference is implemented by the Markov Chain Monte Carlo (MCMC) method, carried out with Bayesian inference Using Gibbs Sampling (BUGS). In contrast to classical statistical theory, the Bayesian approach considers model parameters as random variables characterized by prior distributions. With a substantial number of simulated samples generated by the sampling algorithm, the posterior distributions of the parameters, as well as the parameters themselves, can be accurately estimated. MCMC-based BUGS adopts the Adaptive Rejection Sampling (ARS) protocol, which requires that the probability density function (pdf) from which samples are drawn be log-concave with respect to the targeted parameters. Our study corrects a common misconception and proves that the pdf of this regression model is log-concave with respect to its scale parameter. Therefore, ARS's requirement is satisfied, and a Gaussian prior, which is conjugate and possesses many analytic and computational advantages, is assigned to the scale parameter. A cohort of 20 simulated data sets and 20 simulations from each data set are used in our study. Output analysis and convergence diagnostics for the MCMC method are assessed with the CODA package. Models and methods using a continuous Gaussian prior and a discrete categorical prior are compared. Intensive simulations and performance measures are given to illustrate our practice in the framework of model-based Bayesian inference using the MCMC method.
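
    The authors ran their MCMC in BUGS; as a self-contained stand-in, a random-walk Metropolis sampler for a probit model with a scale parameter looks roughly as follows (parameter names, priors, and step size are our assumptions, not the paper's specification):

```python
import numpy as np
from scipy.stats import norm

def log_post(theta, x, y):
    """x: zero-one covariate, y: binary responses; theta = (a, b, log_scale)."""
    a, b, log_s = theta
    p = norm.cdf((a + b * x) / np.exp(log_s))       # probit link with scale parameter
    p = np.clip(p, 1e-12, 1 - 1e-12)
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))
    return loglik + norm.logpdf(theta, 0.0, 10.0).sum()   # weak Gaussian priors

def metropolis(x, y, n_iter=20_000, step=0.05, seed=1):
    rng = np.random.default_rng(seed)
    theta = np.zeros(3)
    lp = log_post(theta, x, y)
    chain = np.empty((n_iter, 3))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(3)
        lp_prop = log_post(prop, x, y)
        if np.log(rng.uniform()) < lp_prop - lp:    # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain    # inspect with trace plots and CODA-style convergence diagnostics
```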

  2. Assessment of the relationship between chlorophyll fluorescence and photosynthesis across scales from measurements and simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Guanter, L.; Berry, J. A.; Tol, C. V. D.

    2016-12-01

    Solar-induced chlorophyll fluorescence (SIF) is a novel optical tool for the assessment of terrestrial photosynthesis (GPP). Recent work has shown a strong link between GPP and satellite retrievals of SIF at broad scales. However, critical gaps remain between short-term, small-scale mechanistic understanding and seasonal global observations. In this presentation, we provide a model-based analysis of the relationship between SIF and GPP across scales for diverse vegetation types and a range of meteorological conditions, with the ultimate focus on reproducing the environmental conditions during remote sensing measurements. The coupled fluorescence-photosynthesis model SCOPE is used to simulate GPP and SIF at both the leaf and canopy levels for 13 flux sites. Analyses were conducted to investigate the effects of temporal scaling, canopy structure, overpass time, and spectral domain on the relationship between SIF and GPP. The simulated SIF is highly non-linear with GPP at the leaf level and instantaneous time scale, and tends to linearize when scaling to the canopy level and daily to seasonal scales. These relationships are consistent across a wide range of vegetation types. The relationship between SIF and GPP is primarily driven by absorbed photosynthetically active radiation (APAR), especially at the seasonal scale, although the photosynthetic efficiency also contributes to strengthening the link between them. The linearization of their relationship from leaf to canopy, and when averaging over time, occurs because the overall conditions of the canopy fall within the range of the linear responses of GPP and SIF to light and the photosynthetic capacity. Our results further show that the top-of-canopy relationships between simulated SIF and GPP have similar linearity regardless of whether we use the morning or midday satellite overpass times. These findings are confirmed by field measurements. In addition, the simulated red SIF at 685 nm has a similar relationship with GPP at the canopy level as the far-red SIF at 740 nm.

  3. Thermal photons in heavy ion collisions at 158 A GeV

    NASA Astrophysics Data System (ADS)

    Dutt, Sunil

    2018-05-01

    The essence of experimental ultra-relativistic heavy ion collision physics is the production and study of strongly interacting matter at extreme energy densities and temperatures, and the consequent search for the equation of state of nuclear matter. The focus of the analysis has been to examine pseudo-rapidity distributions obtained for the γ-like particles in the pre-shower photon multiplicity detector. This allows the extension of scaled factorial moment analysis to bin sizes smaller than those accessible to other experimental techniques. Scaled factorial moments are calculated using both the corrected horizontal analysis and the vertical analysis. The results are compared with simulations using the VENUS event generator.

  4. Synchrotron radiation x-ray topography and defect selective etching analysis of threading dislocations in GaN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sintonen, Sakari, E-mail: sakari.sintonen@aalto.fi; Suihkonen, Sami; Jussila, Henri

    2014-08-28

    The crystal quality of bulk GaN crystals is continuously improving due to advances in GaN growth techniques. Defect characterization of GaN substrates by conventional methods is impeded by the very low dislocation density, and a large-scale defect analysis method is needed. White beam synchrotron radiation x-ray topography (SR-XRT) is a rapid and non-destructive technique for dislocation analysis on a large scale. In this study, the defect structure of an ammonothermal c-plane GaN substrate was recorded using SR-XRT, and the image contrast caused by the dislocation-induced microstrain was simulated. The simulations and experimental observations are in excellent agreement, and the SR-XRT image contrasts of mixed and screw dislocations were determined. Apart from a few exceptions, defect selective etching measurements were shown to correspond one to one with the SR-XRT results.

  5. Climate and weather across scales: singularities and stochastic Levy-Clifford algebra

    NASA Astrophysics Data System (ADS)

    Schertzer, Daniel; Tchiguirinskaia, Ioulia

    2016-04-01

    There have been several attempts to understand and simulate the fluctuations of weather and climate across scales. Beyond mono/uni-scaling approaches (e.g. using spectral analysis), this was done with the help of multifractal techniques that aim to track and simulate the scaling singularities of the underlying equations, instead of relying on numerical, scale-truncated simulations of these equations (Royer et al., 2008, Lovejoy and Schertzer, 2013). However, these techniques were limited to scalar fields and could not deal directly with a system of complex interactions and non-trivial symmetries. The latter capability is unfortunately indispensable for assessing the climatology of (exo-)planets based on first principles (Pierrehumbert, 2013), for fully addressing the question of the relevance of quasi-geostrophic turbulence, and for defining an effective, fractal dimension of atmospheric motions (Schertzer et al., 2012). In this talk, we present a plausible candidate based on the combination of Lévy stable processes and Clifford algebra. Together they combine stochastic and structural properties that are strongly universal. With the help of a few physically meaningful parameters, they therefore define a wide class of stochastic symmetries, as well as high-dimensional vector- or manifold-valued fields respecting these symmetries (Schertzer and Tchiguirinskaia, 2015). Lovejoy, S. & Schertzer, D., 2013. The Weather and Climate: Emergent Laws and Multifractal Cascades. Cambridge, U.K.: Cambridge University Press. Pierrehumbert, R.T., 2013. Strange news from other stars. Nature Geoscience, 6(2), pp.81-83. Royer, J.F. et al., 2008. Multifractal analysis of the evolution of simulated precipitation over France in a climate scenario. C.R. Geoscience, 340(431-440). Schertzer, D. et al., 2012. Quasi-geostrophic turbulence and generalized scale invariance, a theoretical reply. Atmos. Chem. Phys., 12, pp.327-336. Schertzer, D. & Tchiguirinskaia, I., 2015. Multifractal vector fields and stochastic Clifford algebra. Chaos: An Interdisciplinary Journal of Nonlinear Science, 25(12), p.123127.

  6. Scaling laws and dynamics of bubble coalescence

    NASA Astrophysics Data System (ADS)

    Anthony, Christopher R.; Kamat, Pritish M.; Thete, Sumeet S.; Munro, James P.; Lister, John R.; Harris, Michael T.; Basaran, Osman A.

    2017-08-01

    The coalescence of bubbles and drops plays a central role in nature and industry. During coalescence, two bubbles or drops touch and merge into one as the neck connecting them grows from microscopic to macroscopic scales. The hydrodynamic singularity that arises when two bubbles or drops have just touched and the flows that ensue have been studied thoroughly when two drops coalesce in a dynamically passive outer fluid. In this paper, the coalescence of two identical and initially spherical bubbles, which are idealized as voids that are surrounded by an incompressible Newtonian liquid, is analyzed by numerical simulation. This problem has recently been studied (a) experimentally using high-speed imaging and (b) by asymptotic analysis in which the dynamics is analyzed by determining the growth of a hole in the thin liquid sheet separating the two bubbles. In the latter, advantage is taken of the fact that the flow in the thin sheet of nonconstant thickness is governed by a set of one-dimensional, radial extensional flow equations. While these studies agree on the power law scaling of the variation of the minimum neck radius with time, they disagree with respect to the numerical value of the prefactors in the scaling laws. In order to reconcile these differences and also provide insights into the dynamics that are difficult to probe by either of the aforementioned approaches, simulations are used to access both earlier times than has been possible in the experiments and also later times when asymptotic analysis is no longer applicable. Early times and extremely small length scales are attained in the new simulations through the use of a truncated domain approach. Furthermore, it is shown by direct numerical simulations in which the flow within the bubbles is also determined along with the flow exterior to them that idealizing the bubbles as passive voids has virtually no effect on the scaling laws relating minimum neck radius and time.

  7. Lightweight computational steering of very large scale molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beazley, D.M.; Lomdahl, P.S.

    1996-09-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.

  8. Scaling Analysis of Ocean Surface Turbulent Heterogeneities from Satellite Remote Sensing: Use of 2D Structure Functions.

    PubMed

    Renosh, P R; Schmitt, Francois G; Loisel, Hubert

    2015-01-01

    Satellite remote sensing observations allow the ocean surface to be sampled synoptically over large spatio-temporal scales. The images provided by visible and thermal infrared satellite observations are widely used in physical, biological, and ecological oceanography. The present work proposes a method to understand the multi-scaling properties of satellite products such as Chlorophyll-a (Chl-a) and Sea Surface Temperature (SST), which are rarely studied in this respect. The specific objective of this study is to show how the small-scale heterogeneities of satellite images can be characterised using tools borrowed from the field of turbulence. For that purpose, we show how the structure function, which is classically used in the framework of scaling time-series analysis, can also be used in 2D. The main advantage of this method is that it can be applied to images that have missing data. Based on both simulated and real images, we demonstrate that coarse-graining (CG) of a gradient modulus transform of the original image does not provide correct scaling exponents. Using a 2D fractional Brownian simulation, we show that the structure function (SF) can be used with randomly sampled pairs of points, and verify that one million pairs of points provide enough statistics.
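
    The pair-sampling idea is simple to express in code. A sketch of a second-order 2D structure function estimated from random pixel pairs, skipping missing data (our implementation of the idea, with assumed bin settings):

```python
import numpy as np

def structure_function_2d(img, n_pairs=1_000_000, n_bins=30, seed=0):
    """S2(r) = <|f(x+r) - f(x)|^2> from random pixel pairs; NaNs are skipped."""
    rng = np.random.default_rng(seed)
    ny, nx = img.shape
    i1, i2 = rng.integers(0, ny, (2, n_pairs))
    j1, j2 = rng.integers(0, nx, (2, n_pairs))
    v1, v2 = img[i1, j1], img[i2, j2]
    ok = ~(np.isnan(v1) | np.isnan(v2))             # drop pairs touching missing data
    r = np.hypot(i1 - i2, j1 - j2)[ok]
    dv2 = (v1 - v2)[ok] ** 2
    bins = np.logspace(0, np.log10(r.max()), n_bins)
    idx = np.digitize(r, bins)
    S2 = np.array([dv2[idx == k].mean() if np.any(idx == k) else np.nan
                   for k in range(1, n_bins)])
    return bins[1:], S2     # slope of log S2 vs log r gives the scaling exponent
```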

  9. Multivariate analysis of scale-dependent associations between bats and landscape structure

    USGS Publications Warehouse

    Gorresen, P.M.; Willig, M.R.; Strauss, R.E.

    2005-01-01

    The assessment of biotic responses to habitat disturbance and fragmentation generally has been limited to analyses at a single spatial scale. Furthermore, methods to compare responses between scales have lacked the ability to discriminate among patterns related to the identity, strength, or direction of associations of biotic variables with landscape attributes. We present an examination of the relationship of population- and community-level characteristics of phyllostomid bats with habitat features that were measured at multiple spatial scales in Atlantic rain forest of eastern Paraguay. We used a matrix of partial correlations between each biotic response variable (i.e., species abundance, species richness, and evenness) and a suite of landscape characteristics to represent the multifaceted associations of bats with spatial structure. Correlation matrices can correspond based on either the strength (i.e., magnitude) or direction (i.e., sign) of association. Therefore, a simulation model independently evaluated correspondence in the magnitude and sign of correlations among scales, and results were combined via a meta-analysis to provide an overall test of significance. Our approach detected both species-specific differences in response to landscape structure and scale dependence in those responses. This matrix-simulation approach has broad applicability to ecological situations in which multiple intercorrelated factors contribute to patterns in space or time. © 2005 by the Ecological Society of America.

  10. Unveiling the Role of the Magnetic Field at the Smallest Scales of Star Formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hull, Charles L. H.; Mocz, Philip; Burkhart, Blakesley

    We report Atacama Large Millimeter/submillimeter Array (ALMA) observations of polarized dust emission from the protostellar source Ser-emb 8 at a linear resolution of 140 au. Assuming models of dust-grain alignment hold, the observed polarization pattern gives a projected view of the magnetic field structure in this source. Contrary to expectations based on models of strongly magnetized star formation, the magnetic field in Ser-emb 8 does not exhibit an hourglass morphology. Combining the new ALMA data with previous observational studies, we can connect magnetic field structure from protostellar core (∼80,000 au) to disk (∼100 au) scales. We compare our observations with four magnetohydrodynamic gravo-turbulence simulations made with the AREPO code that have initial conditions ranging from super-Alfvénic (weakly magnetized) to sub-Alfvénic (strongly magnetized). These simulations achieve the spatial dynamic range necessary to resolve the collapse of protostars from the parsec scale of star-forming clouds down to the ∼100 au scale probed by ALMA. Only in the very strongly magnetized simulation do we see both the preservation of the field direction from cloud to disk scales and an hourglass-shaped field at <1000 au scales. We conduct an analysis of the relative orientation of the magnetic field and the density structure in both the Ser-emb 8 ALMA observations and the synthetic observations of the four AREPO simulations. We conclude that the Ser-emb 8 data are most similar to the weakly magnetized simulations, which exhibit random alignment, in contrast to the strongly magnetized simulation, where the magnetic field plays a role in shaping the density structure in the source. In the weak-field case, it is turbulence—not the magnetic field—that shapes the material that forms the protostar, highlighting the dominant role that turbulence can play across many orders of magnitude in spatial scale.

  11. Non-linear clustering in the cold plus hot dark matter model

    NASA Astrophysics Data System (ADS)

    Bonometto, Silvio A.; Borgani, Stefano; Ghigna, Sebastiano; Klypin, Anatoly; Primack, Joel R.

    1995-03-01

    The main aim of this work is to find out if hierarchical scaling, observed in galaxy clustering, can be dynamically explained by studying N-body simulations. Previous analyses of dark matter (DM) particle distributions indicated heavy distortions with respect to the hierarchical pattern. Here, we shall describe how such distortions are to be interpreted and why they can be fully reconciled with the observed galaxy clustering. This aim is achieved by using high-resolution (512^3 grid-points) particle-mesh (PM) N-body simulations to follow the development of non-linear clustering in an Omega=1 universe, dominated either by cold dark matter (CDM) or by a mixture of cold+hot dark matter (CHDM) with Omega_cold=0.6, Omega_hot=0.3 and Omega_baryon=0.1. A simulation box of side 100 Mpc (h=0.5) is used. We analyse two CHDM realizations with biasing factor b=1.5 (COBE normalization), starting from different initial random numbers, and compare them with CDM simulations with b=1 (COBE-compatible) and b=1.5. We evaluate high-order correlation functions and the void probability function (VPF). Correlation functions are obtained from both counts in cells and counts of neighbours. The analysis is carried out for DM particles and for galaxies identified as massive haloes of the evolved density field. We confirm that clustering of DM particles systematically exhibits deviations from hierarchical scaling, although the deviation increases somewhat in redshift space. Deviations from the hierarchical scaling of DM particles are found to be related to the spectrum shape, in a way that indicates that such distortions arise from finite sampling effects. We identify galaxy positions in the simulations and show that, quite differently from the DM particle background, galaxies follow hierarchical scaling (S_q = ξ_q/ξ_2^{q-1} = constant) far more closely, with reduced skewness and kurtosis coefficients S_3 ~ 2.5 and S_4 ~ 7.5, in general agreement with observational results. Unlike DM, the scaling of galaxy clustering is only marginally affected by redshift distortions and is obtained for both CDM and CHDM models. Hierarchical scaling in simulations is confirmed by the VPF analysis. Also in this case, we find substantial agreement with observational findings.
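
    For reference, the hierarchical amplitudes quoted above follow from the connected moments of the smoothed density contrast; a brief sketch (ours) of how S_3 and S_4 are formed:

```python
import numpy as np

def hierarchical_amplitudes(delta):
    """delta: density contrast per cell; returns (S3, S4) = (xi3/xi2^2, xi4/xi2^3)."""
    xi2 = np.mean(delta**2)                     # variance
    xi3 = np.mean(delta**3)                     # connected third moment
    xi4 = np.mean(delta**4) - 3.0 * xi2**2      # connected fourth moment
    return xi3 / xi2**2, xi4 / xi2**3

# Hierarchical scaling holds when S3 and S4 stay roughly constant as the
# cell size (and hence xi2) is varied.
```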

  12. SCALE Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  13. SCALE Code System 6.2.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  14. Numerical simulation of tornadoes' meteorological conditions over Greece: A case study of tornadic activity over NW Peloponnese on March 25, 2009

    NASA Astrophysics Data System (ADS)

    Matsangouras, Ioannis T.; Nastos, Panagiotis T.; Pytharoulis, Ioannis

    2014-05-01

    Recent research has revealed that NW Peloponnese, Greece is an area that favours pre-frontal tornadic incidence. This study presents the results of the synoptic analysis of the meteorological conditions during a tornado event over NW Peloponnese on March 25, 2009. Further, the role of topography in tornado genesis is examined. The tornado formed at approximately 10:30 UTC south-west of Vardas village, crossed Nea Manolada, and faded away at Lappas village, causing considerable damage. The length of its track was approximately 9-10 km, and the tornado was characterized as F2 on the Fujita scale, or T4-T5 on the TORRO intensity scale. Synoptic analysis was based on ECMWF datasets, as well as on the daily composite mean and anomaly of the geopotential heights at the middle and lower troposphere from the NCEP/NCAR reanalysis. In addition, numerous datasets derived from weather observations and remote sensing were used in order to better interpret the examined extreme event. Finally, a numerical simulation was performed using the non-hydrostatic Weather Research and Forecasting model (WRF), initialized with ECMWF gridded analyses, with telescoping nested grids that allow the representation of atmospheric circulations ranging from the synoptic scale down to the meso-scale. In the numerical simulations the topography of the inner grid was modified by: a) 0% (actual topography) and b) -100% (without topography).

  15. Development of an objective assessment tool for total laparoscopic hysterectomy: A Delphi method among experts and evaluation on a virtual reality simulator.

    PubMed

    Knight, Sophie; Aggarwal, Rajesh; Agostini, Aubert; Loundou, Anderson; Berdah, Stéphane; Crochet, Patrice

    2018-01-01

    Total laparoscopic hysterectomy (LH) requires an advanced level of operative skills and training. The aim of this study was to develop an objective scale specific to the assessment of technical skills for LH (H-OSATS) and to demonstrate feasibility of use and validity in a virtual reality setting. The scale was developed using a hierarchical task analysis and a panel of international experts. A Delphi method obtained consensus among experts on the relevant steps that should be included in the H-OSATS scale for the assessment of operative performance. Feasibility of use and validity of the scale were evaluated by reviewing video recordings of LH performed on a virtual reality laparoscopic simulator. Three groups of operators with different levels of experience were assessed in a Marseille teaching hospital (10 novices, 8 intermediates and 8 experienced surgeons). Correlations with scores obtained using a recognised generic global rating tool (OSATS) were calculated. A total of 76 discrete steps were identified by the hierarchical task analysis. Fourteen experts completed the two rounds of the Delphi questionnaire. Sixty-four steps reached consensus and were integrated into the scale. During the validation process, the median time to rate each video recording was 25 minutes. There was a significant difference between the novice, intermediate and experienced groups for total H-OSATS scores (133, 155.9 and 178.25 respectively; p = 0.002). The H-OSATS scale demonstrated high inter-rater reliability (intraclass correlation coefficient [ICC] = 0.930; p<0.001) and test-retest reliability (ICC = 0.877; p<0.001). High correlations were found between total H-OSATS scores and OSATS scores (rho = 0.928; p<0.001). The H-OSATS scale displayed evidence of validity for the assessment of technical performance for LH performed on a virtual reality simulator. The implementation of this scale is expected to facilitate deliberate practice. Next steps should focus on evaluating the validity of the scale in the operating room.

  16. Quasi-real-time end-to-end simulations of ELT-scale adaptive optics systems on GPUs

    NASA Astrophysics Data System (ADS)

    Gratadour, Damien

    2011-09-01

    Our team has started the development of a code dedicated to GPUs for the simulation of AO systems at the E-ELT scale. It uses the CUDA toolkit and an original binding to Yorick (an open source interpreted language) to provide the user with a comprehensive interface. In this paper we present the first performance analysis of our simulation code, showing its ability to provide Shack-Hartmann (SH) images and measurements at the kHz scale for a VLT-sized AO system and in quasi-real-time (up to 70 Hz) for ELT-sized systems on a single top-end GPU. The simulation code includes multi-layer atmospheric turbulence generation, ray tracing through these layers, image formation at the focal plane of every sub-aperture of a SH sensor using either natural or laser guide stars, and centroiding on these images using various algorithms. Turbulence is generated on-the-fly, giving the ability to simulate hours of observations without the need to load extremely large phase screens into global memory. Because of its performance, this code additionally provides the unique ability to test real-time controllers for future AO systems under nominal conditions.
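
    The centroiding step named above is, in its simplest form, a per-subaperture centre of gravity. A plain NumPy sketch (ours; the real code runs this in CUDA on the GPU, so this is only an illustration):

```python
import numpy as np

def cog_slopes(subap_images):
    """subap_images: (n_subaps, ny, nx) stack of Shack-Hartmann spot images."""
    n, ny, nx = subap_images.shape
    y, x = np.mgrid[0:ny, 0:nx]
    flux = subap_images.sum(axis=(1, 2))
    cx = (subap_images * x).sum(axis=(1, 2)) / flux     # x centre of gravity
    cy = (subap_images * y).sum(axis=(1, 2)) / flux     # y centre of gravity
    # Slopes relative to the subaperture centre, in pixel units.
    return np.stack([cx - (nx - 1) / 2, cy - (ny - 1) / 2], axis=1)
```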

  17. Using Multi-scale Dynamic Rupture Models to Improve Ground Motion Estimates: ALCF-2 Early Science Program Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ely, Geoffrey P.

    2013-10-31

    This project uses dynamic rupture simulations to investigate high-frequency seismic energy generation. The relevant phenomena (frictional breakdown, shear heating, effective normal-stress fluctuations, material damage, etc.) controlling rupture are strongly interacting and span many orders of magnitude in spatial scale, requiring high-resolution simulations that couple disparate physical processes (e.g., elastodynamics, thermal weakening, pore-fluid transport, and heat conduction). Compounding the computational challenge, we know that natural faults are not planar, but instead have roughness that can be approximated by power laws, potentially leading to large, multiscale fluctuations in normal stress. The capacity to perform 3D rupture simulations that couple these processes will provide guidance for constructing appropriate source models for high-frequency ground motion simulations. The improved rupture models from our multi-scale dynamic rupture simulations will be used to conduct physics-based (3D waveform-modeling-based) probabilistic seismic hazard analysis (PSHA) for California. These calculations will provide numerous important seismic hazard results, including a state-wide extended earthquake rupture forecast with rupture variations for all significant events, a synthetic seismogram catalog for thousands of scenario events, and more than 5000 physics-based seismic hazard curves for California.

  18. The Postoperative Pain Assessment Skills pilot trial.

    PubMed

    McGillion, Michael; Dubrowski, Adam; Stremler, Robyn; Watt-Watson, Judy; Campbell, Fiona; McCartney, Colin; Victor, Charles; Wiseman, Jeffrey; Snell, Linda; Costello, Judy; Robb, Anja; Nelson, Sioban; Stinson, Jennifer; Hunter, Judith; Dao, Thuan; Promislow, Sara; McNaughton, Nancy; White, Scott; Shobbrook, Cindy; Jeffs, Lianne; Mauch, Kianda; Leegaard, Marit; Beattie, W Scott; Schreiber, Martin; Silver, Ivan

    2011-01-01

    BACKGROUND: Pain-related misbeliefs among health care professionals (HCPs) are common and contribute to ineffective postoperative pain assessment. While standardized patients (SPs) have been effectively used to improve HCPs' assessment skills, not all centres have SP programs. The present equivalence randomized controlled pilot trial examined the efficacy of an alternative simulation method - deteriorating patient-based simulation (DPS) - versus SPs for improving HCPs' pain knowledge and assessment skills. Seventy-two HCPs were randomly assigned to a 3 h SP or DPS simulation intervention. Measures were recorded at baseline, immediately postintervention, and two months postintervention. The primary outcome was HCPs' pain assessment performance as measured by the postoperative Pain Assessment Skills Tool (PAST). Secondary outcomes included HCPs' knowledge of pain-related misbeliefs, and perceived satisfaction and quality of the simulation. These outcomes were measured by the Pain Beliefs Scale (PBS), the Satisfaction with Simulated Learning Scale (SSLS) and the Simulation Design Scale (SDS), respectively. Student's t tests were used to test for overall group differences in postintervention PAST, SSLS and SDS scores. One-way analysis of covariance tested for overall group differences in PBS scores. The DPS and SP groups did not differ on post-test PAST, SSLS or SDS scores. Knowledge of pain-related misbeliefs was also similar between groups. These pilot data suggest that DPS is an effective simulation alternative for HCPs' education on postoperative pain assessment, with improvements in performance and knowledge comparable with SP-based simulation. An equivalence trial to examine the effectiveness of deteriorating patient-based simulation versus standardized patients is warranted.

  19. Scaling properties of Arctic sea ice deformation in high-resolution viscous-plastic sea ice models and satellite observations

    NASA Astrophysics Data System (ADS)

    Hutter, Nils; Losch, Martin; Menemenlis, Dimitris

    2017-04-01

    Sea ice models with the traditional viscous-plastic (VP) rheology and very high grid resolution can resolve leads and deformation rates that are localised along Linear Kinematic Features (LKFs). In a 1-km pan-Arctic sea ice-ocean simulation, the small-scale sea-ice deformation in the Central Arctic is evaluated with a scaling analysis in relation to satellite observations from the Envisat Geophysical Processor System (EGPS). A new coupled scaling analysis for data on Eulerian grids determines the spatial and the temporal scaling, as well as the coupling between temporal and spatial scales. The spatial scaling of the modelled sea ice deformation implies multi-fractality. The spatial scaling is also coupled to temporal scales and varies realistically by region and season. The agreement of the spatial scaling and its coupling to temporal scales with satellite observations, and with models using the modern elasto-brittle rheology, challenges previous results with VP models at coarse resolution, where no such scaling was found. The temporal scaling analysis, however, shows that the VP model does not fully resolve the intermittency of sea ice deformation that is observed in satellite data.

  20. Security Analysis of Selected AMI Failure Scenarios Using Agent Based Game Theoretic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T

    Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the Advanced Metering Infrastructure (AMI) functional domain, for which the National Electric Sector Cyber security Organization Resource (NESCOR) working group has currently documented 29 failure scenarios. The strategy for the game was developed by analyzing five electric sector representative failure scenarios contained in the AMI functional domain. We characterize these five selected scenarios into three specific threat categories affecting confidentiality, integrity and availability (CIA). The analysis using our ABGT simulation demonstrates how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.

  1. A performance analysis of ensemble averaging for high fidelity turbulence simulations at the strong scaling limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makarashvili, Vakhtang; Merzari, Elia; Obabko, Aleksandr

    We analyze the potential performance benefits of estimating expected quantities in large eddy simulations of turbulent flows using true ensembles rather than ergodic time averaging. Multiple realizations of the same flow are simulated in parallel, using slightly perturbed initial conditions to create unique instantaneous evolutions of the flow field. Each realization is then used to calculate statistical quantities. Provided each instance is sufficiently de-correlated, this approach potentially allows considerable reduction in the time to solution beyond the strong scaling limit for a given accuracy. This study focuses on the theory and implementation of the methodology in Nek5000, a massively parallel open-source spectral element code.

  2. A performance analysis of ensemble averaging for high fidelity turbulence simulations at the strong scaling limit

    DOE PAGES

    Makarashvili, Vakhtang; Merzari, Elia; Obabko, Aleksandr; ...

    2017-06-07

    We analyze the potential performance benefits of estimating expected quantities in large eddy simulations of turbulent flows using true ensembles rather than ergodic time averaging. Multiple realizations of the same flow are simulated in parallel, using slightly perturbed initial conditions to create unique instantaneous evolutions of the flow field. Each realization is then used to calculate statistical quantities. Provided each instance is sufficiently de-correlated, this approach potentially allows considerable reduction in the time to solution beyond the strong scaling limit for a given accuracy. This study focuses on the theory and implementation of the methodology in Nek5000, a massively parallel open-source spectral element code.
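
    A toy demonstration of the idea on a correlated surrogate signal (ours, unrelated to Nek5000): M independent realizations deliver an estimate of comparable quality from a time window M times shorter per run, which is the source of the claimed speedup beyond the strong scaling limit.

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1(n, rho=0.99):
    """AR(1) surrogate for a correlated turbulence signal."""
    x = np.empty(n)
    x[0] = 0.0
    for i in range(1, n):
        x[i] = rho * x[i - 1] + rng.standard_normal()
    return x

T, M = 100_000, 10
time_avg = ar1(T).mean()                                    # one long ergodic run
ens_avg = np.mean([ar1(T // M).mean() for _ in range(M)])   # M short runs in parallel
print(time_avg, ens_avg)    # comparable estimates; each short run spans 1/M the time
```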

  3. DIF Analysis with Multilevel Data: A Simulation Study Using the Latent Variable Approach

    ERIC Educational Resources Information Center

    Jin, Ying; Eason, Hershel

    2016-01-01

    The effects of mean ability difference (MAD) and short tests on the performance of various DIF methods have been studied extensively in previous simulation studies. Their effects, however, have not been studied under multilevel data structure. MAD was frequently observed in large-scale cross-country comparison studies where the primary sampling…

  4. Next-generation simulation and optimization platform for forest management and analysis

    Treesearch

    Antti Makinen; Jouni Kalliovirta; Jussi Rasinmaki

    2009-01-01

    Recent developments in the objectives and data collection methods of forestry create new challenges and possibilities in forest management planning. Tools in forest management and forest planning systems must be able to make good use of novel data sources, use new models, and solve complex forest planning tasks at different scales. The SIMulation and Optimization (...

  5. A Globalization Simulation to Teach Corporate Social Responsibility: Design Features and Analysis of Student Reasoning

    ERIC Educational Resources Information Center

    Bos, Nathan D.; Shami, N. Sadat; Naab, Sara

    2006-01-01

    There is an increasing need for business students to be taught the ability to think through ethical dilemmas faced by corporations conducting business on a global scale. This article describes a multiplayer online simulation game, ISLAND TELECOM, that exposes students to ethical dilemmas in international business. Through role playing and…

  6. Simulating Astrophysical Jets with Inertial Confinement Fusion Machines

    NASA Astrophysics Data System (ADS)

    Blue, Brent

    2005-10-01

    Large-scale directional outflows of supersonic plasma, also known as 'jets', are ubiquitous phenomena in astrophysics. The traditional approach to understanding such phenomena is through theoretical analysis and numerical simulations. However, theoretical analysis might not capture all the relevant physics, and numerical simulations have limited resolution and fail to scale correctly in Reynolds number and perhaps other key dimensionless parameters. Recent advances in high energy density physics using large inertial confinement fusion devices now allow controlled laboratory experiments on macroscopic volumes of plasma of direct relevance to astrophysics. This talk will present an overview of these facilities as well as results from current laboratory astrophysics experiments designed to study hydrodynamic jets and Rayleigh-Taylor mixing. This work is performed under the auspices of the U.S. DOE by Lawrence Livermore National Laboratory under Contract No. W-7405-ENG-48, Los Alamos National Laboratory under Contract No. W-7405-ENG-36, and the Laboratory for Laser Energetics under Contract No. DE-FC03-92SF19460.

  7. Event Detection and Sub-state Discovery from Bio-molecular Simulations Using Higher-Order Statistics: Application To Enzyme Adenylate Kinase

    PubMed Central

    Ramanathan, Arvind; Savol, Andrej J.; Agarwal, Pratul K.; Chennubhotla, Chakra S.

    2012-01-01

    Biomolecular simulations at millisecond and longer timescales can provide vital insights into functional mechanisms. Since post-simulation analyses of such large trajectory data-sets can be a limiting factor in obtaining biological insights, there is an emerging need to identify key dynamical events and relate these events to the biological function online, that is, as simulations are progressing. Recently, we have introduced a novel computational technique, quasi-anharmonic analysis (QAA) (PLoS One 6(1): e15827), for partitioning the conformational landscape into a hierarchy of functionally relevant sub-states. The unique capabilities of QAA are enabled by exploiting anharmonicity in the form of fourth-order statistics for characterizing atomic fluctuations. In this paper, we extend QAA for analyzing long time-scale simulations online. In particular, we present HOST4MD - a higher-order statistical toolbox for molecular dynamics simulations, which (1) identifies key dynamical events as simulations are in progress, (2) explores potential sub-states and (3) identifies conformational transitions that enable the protein to access those sub-states. We demonstrate HOST4MD on microsecond time-scale simulations of the enzyme adenylate kinase in its apo state. HOST4MD identifies several conformational events in these simulations, revealing how the intrinsic coupling between the three sub-domains (LID, CORE and NMP) changes during the simulations. Further, it also identifies an inherent asymmetry in the opening/closing of the two binding sites. We anticipate that HOST4MD will provide a powerful and extensible framework for detecting biophysically relevant conformational coordinates from long time-scale simulations. PMID:22733562
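
    The fourth-order statistics that power QAA can be illustrated with a simple excess-kurtosis screen over trajectory coordinates (our sketch, not the HOST4MD implementation):

```python
import numpy as np

def excess_kurtosis(X):
    """X: (n_frames, n_coords) trajectory; returns excess kurtosis per coordinate."""
    Xc = X - X.mean(axis=0)
    m2 = np.mean(Xc**2, axis=0)
    m4 = np.mean(Xc**4, axis=0)
    return m4 / m2**2 - 3.0     # zero for Gaussian (harmonic) fluctuations

# Coordinates with large |excess kurtosis| fluctuate anharmonically and are
# candidates for the functionally relevant sub-state transitions discussed above.
```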

  8. Static analysis techniques for semiautomatic synthesis of message passing software skeletons

    DOE PAGES

    Sottile, Matthew; Dagit, Jason; Zhang, Deli; ...

    2015-06-29

    The design of high-performance computing architectures demands performance analysis of large-scale parallel applications to derive various parameters concerning hardware design and software development. The process of performance analysis and benchmarking an application can be done in several ways with varying degrees of fidelity. One of the most cost-effective ways is to do a coarse-grained study of large-scale parallel applications through the use of program skeletons. The concept of a “program skeleton” that we discuss in this article is an abstracted program that is derived from a larger program where source code that is determined to be irrelevant is removed for the purposes of the skeleton. In this work, we develop a semiautomatic approach for extracting program skeletons based on compiler program analysis. Finally, we demonstrate correctness of our skeleton extraction process by comparing details from communication traces, as well as show the performance speedup of using skeletons by running simulations in the SST/macro simulator.

  9. CFD simulations of power coefficients for an innovative Darrieus style vertical axis wind turbine with auxiliary straight blades

    NASA Astrophysics Data System (ADS)

    Arpino, F.; Cortellessa, G.; Dell'Isola, M.; Scungio, M.; Focanti, V.; Profili, M.; Rotondi, M.

    2017-11-01

    The increasing price of fossil fuels, global warming, and energy market instabilities have led to an increasing interest in renewable energy sources such as wind energy. Amongst the different typologies of wind generators, small-scale Vertical Axis Wind Turbines (VAWT) present the greatest potential for off-grid power generation at low wind speeds. In the present work, Computational Fluid Dynamics (CFD) simulations were performed in order to investigate the performance of an innovative configuration of straight-blade Darrieus-style vertical axis micro wind turbine, specifically developed for small-scale energy conversion at low wind speeds. The micro turbine under investigation is composed of three pairs of airfoils, each consisting of a main and an auxiliary blade with different chord lengths. The simulations were made using the open-source finite-volume CFD toolbox OpenFOAM, considering different turbulence models and adopting a moving mesh approach for the turbine rotor. The simulated data are reported in terms of dimensionless power coefficients for dynamic performance analysis. The results from the simulations were compared to data obtained from experiments on a scaled model of the same VAWT configuration, conducted in a closed-circuit open-chamber wind tunnel facility available at the Laboratory of Industrial Measurements (LaMI) of the University of Cassino and Lazio Meridionale (UNICLAM). From the proposed analysis, it was observed that the most suitable model for simulating the performance of the micro turbine under investigation is the one-equation Spalart-Allmaras model, even if, under the conditions analysed in the present work and for TSR values higher than 1.1, some discrepancies between numerical and experimental data can be observed.
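
    For readers unfamiliar with the reported quantities, the dimensionless power coefficient and the tip-speed ratio (TSR) follow standard definitions (not specific to this paper); a brief sketch:

```python
def power_coefficient(torque, omega, rho, swept_area, v_wind):
    """Cp = P / (0.5 * rho * A * V^3), with extracted power P = torque * omega."""
    return torque * omega / (0.5 * rho * swept_area * v_wind**3)

def tip_speed_ratio(omega, rotor_radius, v_wind):
    """TSR = blade tip speed / free-stream wind speed."""
    return omega * rotor_radius / v_wind
```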

  10. Simulation of IR and Raman spectra of p-hydroxyanisole and p-nitroanisole based on scaled DFT force fields and their vibrational assignments.

    PubMed

    Krishnakumar, V; Prabavathi, N

    2009-09-15

    This work deals with the vibrational spectroscopy of p-hydroxyanisole (PHA) and p-nitroanisole (PNA) by means of quantum chemical calculations. The mid- and far-FT-IR and FT-Raman spectra were recorded in the condensed state. The fundamental vibrational frequencies and the intensities of the vibrational bands were evaluated using density functional theory (DFT) with the standard B3LYP functional and 6-31G* basis set, and were scaled using various scale factors, which yields good agreement between observed and calculated frequencies. The vibrational spectra were interpreted with the aid of normal coordinate analysis based on a scaled density functional force field. The results of the calculations were applied to simulate the infrared and Raman spectra of the title compounds, which showed excellent agreement with the observed spectra.
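
    The frequency-scaling step is a simple multiplicative correction; a sketch (ours) using a commonly quoted uniform B3LYP/6-31G* factor, which is an assumption here rather than the paper's fitted set of factors:

```python
import numpy as np

def scale_and_compare(calc_cm1, obs_cm1, scale=0.9614):
    """Scale harmonic DFT frequencies and report the RMS deviation in cm^-1."""
    scaled = scale * np.asarray(calc_cm1, dtype=float)
    rms = np.sqrt(np.mean((scaled - np.asarray(obs_cm1, dtype=float)) ** 2))
    return scaled, rms
```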

  11. Evaluation and error apportionment of an ensemble of ...

    EPA Pesticide Factsheets

    Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) helping to detect causes of model error, and iii) identifying the processes and scales most urgently requiring dedicated investigations. The analysis is conducted within the framework of the third phase of the Air Quality Model Evaluation International Initiative (AQMEII) and tackles model performance gauging through measurement-to-model comparison, error decomposition and time series analysis of the models' biases for several fields (ozone, CO, SO2, NO, NO2, PM10, PM2.5, wind speed, and temperature). The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while apportioning the error to its constituent parts (bias, variance and covariance) can help to assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the former phases of AQMEII. The application of the error apportionment method to the AQMEII Phase 3 simulations provides several key insights. In addition to reaffirming the strong impact

  12. One Hundred Ways to be Non-Fickian - A Rigorous Multi-Variate Statistical Analysis of Pore-Scale Transport

    NASA Astrophysics Data System (ADS)

    Most, Sebastian; Nowak, Wolfgang; Bijeljic, Branko

    2015-04-01

    Fickian transport in groundwater flow is the exception rather than the rule. Transport in porous media is frequently simulated via particle methods (i.e. particle tracking random walk (PTRW) or continuous time random walk (CTRW)). These methods formulate transport as a stochastic process of particle position increments. At the pore scale, geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Hence, it is important to get a better understanding of the processes at the pore scale. For our analysis we track the positions of 10,000 particles migrating through the pore space over time. The data we use come from micro-CT scans of a homogeneous sandstone and encompass about 10 grain sizes. Based on those images we discretize the pore structure and simulate flow at the pore scale using the Navier-Stokes equations. This flow field realistically describes flow inside the pore space, and we do not need to add artificial dispersion during the transport simulation. Next, we use particle tracking random walk to simulate pore-scale transport. Finally, we use the obtained particle trajectories to perform a multivariate statistical analysis of the particle motion at the pore scale. Our analysis is based on copulas. Every multivariate joint distribution is a combination of its univariate marginal distributions; the copula represents the dependence structure of those univariate marginals and is therefore useful for observing correlation and non-Gaussian interactions (i.e. non-Fickian transport). The first goal of this analysis is to better understand the validity regions of commonly made assumptions. We investigate three different transport distances: 1) the distance at which the statistical dependence between particle increments can be modelled as an order-one Markov process - the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks starts; 2) the distance at which bivariate statistical dependence simplifies to multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW/CTRW); and 3) the distance of complete statistical independence (validity of classical PTRW/CTRW). The second objective is to reveal the characteristic dependencies that influence transport the most. Those dependencies can be very complex. Copulas are highly capable of representing linear as well as non-linear dependence. With that tool we are able to detect persistent characteristics dominating transport even across different scales. The results derived from our experimental data set suggest that there are many more non-Fickian aspects of pore-scale transport than the univariate statistics of longitudinal displacements. Non-Fickianity can also be found in transverse displacements, and in the relations between increments at different time steps. Also, the observed dependence is non-linear (i.e. beyond simple correlation) and persists over long distances. Thus, our results strongly support the further refinement of techniques like correlated PTRW or correlated CTRW towards non-linear statistical relations.
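
    A minimal sketch of the copula step (our own illustration, not the authors' code): rank-transform successive increments to uniform marginals and inspect their joint density, which separates the dependence structure from the marginal distributions.

```python
import numpy as np
from scipy.stats import rankdata, spearmanr

def empirical_copula(dx1, dx2, n_bins=20):
    """dx1, dx2: particle increments at successive time steps (same length)."""
    u = rankdata(dx1) / (len(dx1) + 1)          # pseudo-observations in (0, 1)
    v = rankdata(dx2) / (len(dx2) + 1)
    H, _, _ = np.histogram2d(u, v, bins=n_bins, density=True)
    rho = spearmanr(dx1, dx2).correlation       # rank correlation of increments
    return H, rho

# A flat H with rho ~ 0 is consistent with classical (independent) PTRW/CTRW;
# structure beyond an elliptical shape indicates non-Gaussian dependence.
```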

  13. A numerical investigation of the scale-up effects on flow, heat transfer, and kinetics processes of FCC units.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, S. L.

    1998-08-25

    Fluid Catalytic Cracking (FCC) technology is the most important process used by the refinery industry to convert crude oil to valuable lighter products such as gasoline. Process development is generally very time consuming, especially when a small pilot unit is being scaled up to a large commercial unit, because of the lack of information to aid in the design of scaled-up units. Such information can now be obtained through analysis based on pilot-scale measurements and computer simulation that includes the controlling physics of the FCC system. A computational fluid dynamics (CFD) code, ICRKFLO, has been developed at Argonne National Laboratory (ANL) and has been successfully applied to the simulation of catalytic petroleum cracking risers. It employs hybrid hydrodynamic-chemical kinetic coupling techniques, enabling the analysis of an FCC unit with complex chemical reaction sets containing tens or hundreds of subspecies. The code has been continuously validated against pilot-scale experimental data and is now being used to investigate scale-up effects in FCC units. Among the FCC operating conditions, the feed injection conditions are found to have a strong impact on the product yields of scaled-up units: they affect the flow and heat transfer patterns, and the interaction of hydrodynamics and cracking kinetics causes the product yields to change accordingly.
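
    On the kinetics side alone, the hydrodynamic-kinetic coupling can be caricatured with the classic three-lump cracking scheme (gas oil to gasoline to light gas plus coke); this is a stand-in for ICRKFLO's far larger reaction sets, and the rate constants below are hypothetical.

```python
from scipy.integrate import solve_ivp

# Three-lump cracking scheme: gas oil -> gasoline (k1), gas oil -> gas+coke (k3),
# gasoline -> gas+coke (k2). Rate constants are illustrative only (1/s).
k1, k2, k3 = 0.30, 0.05, 0.10

def rhs(t, y):
    gasoil, gasoline, gascoke = y
    return [-(k1 + k3) * gasoil,
            k1 * gasoil - k2 * gasoline,
            k3 * gasoil + k2 * gasoline]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0], t_eval=[0, 2, 4, 6, 8, 10])
for ti, (a, b, c) in zip(sol.t, sol.y.T):
    print(f"t = {ti:4.1f} s   gas oil {a:.3f}   gasoline {b:.3f}   gas+coke {c:.3f}")
```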

  14. Analysis of the Effect of Interior Nudging on Temperature and Precipitation Distributions of Multi-year Regional Climate Simulations

    NASA Astrophysics Data System (ADS)

    Nolte, C. G.; Otte, T. L.; Bowden, J. H.; Otte, M. J.

    2010-12-01

    There is disagreement in the regional climate modeling community as to the appropriateness of interior nudging. Some investigators argue that the regional model should be minimally constrained and allowed to respond to regional-scale forcing, while others have noted that, in the absence of interior nudging, significant large-scale discrepancies develop between the regional model solution and the driving coarse-scale fields. These discrepancies reduce confidence in the ability of regional climate models to dynamically downscale global climate model simulations under climate-change scenarios, and they detract from the usability of the regional simulations for impact assessments. The advantages and limitations of interior nudging schemes for regional climate modeling are investigated in this study. Multi-year simulations using the WRF model driven by reanalysis data over the continental United States at 36-km resolution are conducted using spectral nudging, grid-point nudging, and a base case without interior nudging. The means, distributions, and inter-annual variability of temperature and precipitation will be evaluated in comparison to regional analyses.
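
    Both nudging variants relax the model state toward the driving fields with a Newtonian relaxation term; spectral nudging simply restricts the term to the largest scales. Below is a schematic of the grid-point version for a single scalar (the coefficient and the stand-in model tendency are illustrative, not WRF's implementation).

```python
import numpy as np

G = 3.0e-4        # nudging coefficient (1/s), a typical order of magnitude
dt = 600.0        # time step (s)
u_driving = 10.0  # value from the driving reanalysis (placeholder)
u = 4.0           # regional-model value that has drifted away

def model_tendency(u):
    return 1e-4 * np.sin(0.5 * u)   # stand-in for the model's own dynamics

for _ in range(144):                # one simulated day
    u += dt * (model_tendency(u) + G * (u_driving - u))
print(f"after one day: u = {u:.2f} (relaxed toward {u_driving})")
```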

  15. Large-eddy simulations of a forced homogeneous isotropic turbulence with polymer additives

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Cai, Wei-Hua; Li, Feng-Chen

    2014-03-01

    Large-eddy simulations (LES) based on the temporal approximate deconvolution model were performed for forced homogeneous isotropic turbulence (FHIT) with polymer additives at moderate Taylor Reynolds number. The finitely extensible nonlinear elastic model in the Peterlin approximation (FENE-P) was adopted as the constitutive equation for the filtered conformation tensor of the polymer molecules. The LES results were verified through comparisons with direct numerical simulation results. Using the LES database of the FHIT in Newtonian-fluid and polymer-solution flows, the effects of the polymers on important quantities such as strain, vorticity, and drag reduction were studied. By extracting the vortex structures and examining the flatness factor through a high-order correlation function of the velocity derivative and wavelet analysis, we find that both the small-scale vortex structures and the small-scale intermittency in the FHIT are inhibited by the presence of the polymers. The extended self-similarity scaling law in the polymer-solution flow shows no apparent difference from that in the Newtonian-fluid flow over the simulated ranges of Reynolds and Weissenberg numbers.
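
    For reference, a common (unfiltered) form of the FENE-P model is written below from the general literature; it is not quoted from the paper's filtered equations.

    \[
    \frac{\partial C_{ij}}{\partial t} + u_k \frac{\partial C_{ij}}{\partial x_k}
      = C_{kj}\frac{\partial u_i}{\partial x_k} + C_{ik}\frac{\partial u_j}{\partial x_k}
      - \frac{1}{\lambda}\left[f(C)\,C_{ij} - \delta_{ij}\right],
    \qquad
    f(C) = \frac{L^2 - 3}{L^2 - C_{kk}},
    \]

    with polymer stress \(\tau^{p}_{ij} = (\eta_p/\lambda)\left[f(C)\,C_{ij} - \delta_{ij}\right]\), where \(C\) is the conformation tensor, \(\lambda\) the relaxation time, \(L\) the maximum extensibility, and \(\eta_p\) the polymer viscosity.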

  16. Interacting scales and energy transfer in isotropic turbulence

    NASA Technical Reports Server (NTRS)

    Zhou, YE

    1993-01-01

    The dependence of the energy transfer process on the disparity of the interacting scales is investigated in the inertial and far-dissipation ranges of isotropic turbulence. The strategy for generating the simulated flow fields and the choice of a disparity parameter to characterize the scaling of the interactions are discussed. The inertial range is found to be dominated by relatively local interactions, in agreement with the Kolmogorov assumption. The far-dissipation range is found to be dominated by relatively non-local interactions, supporting the classical notion that the far-dissipation range is slaved to the Kolmogorov scales. The measured energy transfer is compared with the classical models of Heisenberg and Obukhov and with the more detailed analysis of Tennekes and Lumley. The energy transfer statistics measured in the numerically simulated flows are found to be nearly self-similar for wave numbers in the inertial range. Using the self-similar form measured within the limited scale range of the simulation, an 'ideal' energy transfer function and the corresponding energy flux rate for an inertial range of infinite extent are constructed. From this flux rate, the Kolmogorov constant is calculated to be 1.5, in excellent agreement with experiments.
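
    The flux construction named above fits in a few lines: given a transfer function T(k), the flux through wavenumber k is Π(k) = −Σ_{k'≤k} T(k'), and a plateau in Π(k) marks the inertial range from which the Kolmogorov constant follows via E(k) = C ε^(2/3) k^(−5/3). The T(k) below is a toy placeholder, not the measured transfer function.

```python
import numpy as np

k = np.arange(1, 129)
T = np.zeros(k.size)              # toy transfer function T(k):
T[:4] = -0.25                     # energy extracted from the largest scales...
T[40:] = 1.0 / (k.size - 40)      # ...and deposited at the smallest scales

Pi = -np.cumsum(T)                # energy flux through wavenumber k
print("flux at k = 10, 20, 30:", np.round(Pi[[9, 19, 29]], 3))  # flat plateau
# With Pi(k) = eps in the plateau, E(k) = C * eps**(2/3) * k**(-5/3), C ~ 1.5
```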

  17. Validity of flowmeter data in heterogeneous alluvial aquifers

    NASA Astrophysics Data System (ADS)

    Bianchi, Marco

    2017-04-01

    Numerical simulations are performed to evaluate the impact of medium-scale sedimentary architecture and small-scale heterogeneity on the validity of the borehole flowmeter test, a widely used method for measuring hydraulic conductivity (K) at the scale required for detailed groundwater flow and solute transport simulations. Reference data from synthetic K fields representing the range of structures and small-scale heterogeneity typically observed in alluvial systems are compared with estimated values from numerical simulations of flowmeter tests. Systematic errors inherent in the flowmeter K estimates are significant when the reference K field structure deviates from the hypothetical, perfectly stratified conceptual model underlying the interpretation method of flowmeter tests. Because of these errors, the true variability of the K field is underestimated, and the distributions of the reference K data and of the log-transformed spatial increments are also misconstrued. The numerical analysis presented shows that the validity of flowmeter-based K data depends on measurable parameters: the architecture of the hydrofacies, the conductivity contrasts between hydrofacies, and the sub-facies-scale K variability. A preliminary geological characterization is therefore essential for evaluating the optimal approach for accurate K field characterization.
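
    The stratified-interpretation step being tested takes, in its standard form, K_i/K_mean = (ΔQ_i/Δz_i)/(Q/B), where ΔQ_i is the flow gained over interval i, B the screen length, and K_mean a bulk value from a pumping test. A minimal sketch with made-up numbers (not the paper's synthetic fields):

```python
import numpy as np

dz = np.array([0.5, 0.5, 0.5, 0.5])          # interval thicknesses (m)
dQ = np.array([0.8, 2.4, 0.4, 1.4]) * 1e-4   # flow gained per interval (m^3/s)
K_mean = 5e-5                                # bulk K from a pumping test (m/s)

B, Q = dz.sum(), dQ.sum()
K = K_mean * (dQ / dz) / (Q / B)             # layer-by-layer K estimates
print("layer K (m/s):", K)
```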

  18. Study on model design and dynamic similitude relations of vibro-acoustic experiment for elastic cavity

    NASA Astrophysics Data System (ADS)

    Shi, Ao; Lu, Bo; Yang, Dangguo; Wang, Xiansheng; Wu, Junqiang; Zhou, Fangqi

    2018-05-01

    Coupling between aero-acoustic noise and structural vibration under high-speed open-cavity flow-induced oscillation can cause severe random vibration of the structure and even fatigue failure, threatening flight safety. Vibro-acoustic experiments on scaled-down models are an effective means of clarifying the effect of high-intensity cavity noise on structural vibration. Therefore, for vibro-acoustic experiments on cavities in wind tunnels, and taking a typical elastic cavity as the research object, dimensional analysis and the finite element method were adopted to establish similitude relations for the structural inherent characteristics and dynamics of a distorted model, and the proposed similitude relations were verified by means of experiments and numerical simulation. The research shows that the established similitude relations allow a scaled-down model to accurately reproduce the structural dynamic characteristics of the actual model, which provides theoretical guidance for the structural design and vibro-acoustic experiments of scaled-down elastic cavity models.
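
    For the undistorted, same-material case, dimensional analysis gives the familiar frequency similitude f ∝ (1/L)·√(E/ρ), so a 1:n geometric model has natural frequencies n times those of the full-scale article; distorted models require the paper's extended relations. A sketch with hypothetical numbers:

```python
# Frequency similitude for a geometrically scaled elastic model:
# f ~ (1/L) * sqrt(E / rho)
n = 8                    # scale factor (full scale : model = n : 1), hypothetical
f_full = 120.0           # a full-scale natural frequency (Hz), hypothetical

print(f"same-material 1:{n} model: {n * f_full:.0f} Hz")

# Different model material, e.g. an aluminium model of a steel article
E_ratio = 70e9 / 200e9           # E_model / E_full
rho_ratio = 2700.0 / 7850.0      # rho_model / rho_full
print(f"aluminium 1:{n} model: {n * f_full * (E_ratio / rho_ratio) ** 0.5:.0f} Hz")
```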

  19. Pressurized thermal shock: TEMPEST computer code simulation of thermal mixing in the cold leg and downcomer of a pressurized water reactor. [Creare 61 and 64]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eyler, L.L.; Trent, D.S.

    The TEMPEST computer program was used to simulate fluid and thermal mixing in the cold leg and downcomer of a pressurized water reactor under emergency core cooling high-pressure injection (HPI), which is of concern for the pressurized thermal shock (PTS) problem. The code was applied in an analysis simulation of the cold leg and downcomer of a full-scale Westinghouse three-loop plant design. Verification and assessment of the code were performed, and analysis procedures were developed, using data from Creare 1/5-scale experimental tests. Results of three simulations are presented. The first is a no-loop-flow case with high-velocity, low-negative-buoyancy HPI in a 1/5-scale model of a cold leg and downcomer. The second is a no-loop-flow case with low-velocity, high-negative-density (modeled with salt water) injection in a 1/5-scale model. Comparison of TEMPEST code predictions with experimental data for these two cases shows good agreement. The third simulation is a three-dimensional model of one loop of a full-scale Westinghouse three-loop plant design. Included in this latter simulation are loop components extending from the steam generator to the reactor vessel and a one-third sector of the vessel downcomer and lower plenum. No data were available for this case. For the Westinghouse plant simulation, thermally coupled conduction heat transfer in structural materials is included. The cold leg pipe and the fluid mixing volumes of the primary pump, the stillwell, and the riser to the steam generator are included in the model. In the reactor vessel, the thermal shield, pressure vessel cladding, and pressure vessel wall are thermally coupled to the fluid and thermal mixing in the downcomer. The inlet plenum mixing volume is included in the model. A 10-min (real-time) transient beginning at the initiation of HPI is computed to determine temperatures at the beltline of the pressure vessel wall.

  20. Numerical Simulation of Monitoring Corrosion in Reinforced Concrete Based on Ultrasonic Guided Waves

    PubMed Central

    Zheng, Zhupeng; Lei, Ying; Xue, Xin

    2014-01-01

    Numerical simulation based on the finite element method is conducted to predict the location of pitting corrosion in reinforced concrete. The simulation results show that corrosion monitoring based on ultrasonic guided waves in reinforced concrete is feasible, and that wavelet analysis can recover the extremely weak guided-wave signal that results from energy leaking into the concrete. The time-frequency localization property of the wavelet transform is exploited for corrosion monitoring of reinforced concrete: guided waves can successfully identify corrosion defects when a suitable wavelet basis function and scale are chosen. PMID:25013865
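
    A sketch of the wavelet step (using the PyWavelets package; the signal is synthetic, standing in for the measured guided-wave records): the continuous wavelet transform localizes a weak, delayed defect echo in both time and scale, which a plain Fourier spectrum would smear out.

```python
import numpy as np
import pywt  # PyWavelets

fs = 1.0e6                                   # sampling rate (Hz)
t = np.arange(0, 2e-3, 1 / fs)
pulse = np.exp(-((t - 2e-4) / 2e-5) ** 2) * np.sin(2 * np.pi * 60e3 * t)
echo = 0.05 * np.exp(-((t - 1.2e-3) / 3e-5) ** 2) * np.sin(2 * np.pi * 55e3 * t)
x = pulse + echo + 0.01 * np.random.default_rng(2).standard_normal(t.size)

scales = np.arange(8, 64)
coef, freqs = pywt.cwt(x, scales, "morl", sampling_period=1 / fs)
late = t > 8e-4                              # look after the excitation pulse
i, j = np.unravel_index(np.abs(coef[:, late]).argmax(), coef[:, late].shape)
print(f"echo near t = {t[late][j] * 1e3:.2f} ms, f ~ {freqs[i] / 1e3:.0f} kHz")
```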

  1. Damaris: Addressing performance variability in data management for post-petascale simulations

    DOE PAGES

    Dorier, Matthieu; Antoniu, Gabriel; Cappello, Franck; ...

    2016-10-01

    With exascale computing on the horizon, reducing performance variability in data management tasks (storage, visualization, analysis, etc.) is becoming a key challenge in sustaining high performance. This variability significantly impacts the overall application performance at scale and its predictability over time. In this article, we present Damaris, a system that leverages dedicated cores in multicore nodes to offload data management tasks, including I/O, data compression, scheduling of data movements, in situ analysis, and visualization. We evaluate Damaris with the CM1 atmospheric simulation and the Nek5000 computational fluid dynamics simulation on four platforms, including NICS's Kraken and NCSA's Blue Waters. Our results show that (1) Damaris fully hides the I/O variability as well as all I/O-related costs, thus making simulation performance predictable; (2) it increases the sustained write throughput by a factor of up to 15 compared with standard I/O approaches; (3) it allows almost perfect scalability of the simulation up to over 9,000 cores, as opposed to state-of-the-art approaches that fail to scale; and (4) it enables a seamless connection to the VisIt visualization software to perform in situ analysis and visualization in a way that impacts neither the performance of the simulation nor its variability. In addition, we extended our implementation of Damaris to also support the use of dedicated nodes and conducted a thorough comparison of the two approaches—dedicated cores and dedicated nodes—for I/O tasks with the aforementioned applications.
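
    The dedicated-core idea can be sketched with ordinary multiprocessing: compute ranks hand fields to a writer process through a bounded queue, so I/O jitter never stalls the solver. This is a schematic of the pattern only, not the Damaris API.

```python
import multiprocessing as mp
import numpy as np

def writer(queue):
    """Dedicated 'core': drains the queue and performs all I/O."""
    while True:
        item = queue.get()
        if item is None:
            break
        step, field = item
        np.save(f"field_{step:05d}.npy", field)   # stand-in for real I/O

def compute(queue, n_steps=5):
    """Solver loop: hands data off and keeps computing."""
    field = np.zeros((256, 256))
    for step in range(n_steps):
        field += np.random.default_rng(step).standard_normal(field.shape)
        queue.put((step, field.copy()))

if __name__ == "__main__":
    q = mp.Queue(maxsize=4)          # bounded: applies backpressure if I/O lags
    w = mp.Process(target=writer, args=(q,))
    w.start()
    compute(q)
    q.put(None)                      # sentinel: no more data
    w.join()
```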

  3. Modeling and Control of State-Affine Probabilistic Systems for Atomic-Scale Dynamics

    DTIC Science & Technology

    2007-06-01

  4. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics.

    PubMed

    Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results.

  6. The Large-scale Structure of the Universe: Probes of Cosmology and Structure Formation

    NASA Astrophysics Data System (ADS)

    Noh, Yookyung

    The usefulness of large-scale structure as a probe of cosmology and structure formation is increasing as large, deep surveys in multiple wavelength bands become possible. Observational analysis of large-scale structure, guided by large-volume numerical simulations, is beginning to offer complementary information and cross-checks on cosmological parameters estimated from the anisotropies in the Cosmic Microwave Background (CMB) radiation. Understanding structure formation and evolution, and even galaxy formation history, is also being aided by observations of different redshift snapshots of the Universe using various tracers of large-scale structure. This dissertation covers aspects of large-scale structure from the baryon acoustic oscillation scale to that of large-scale filaments and galaxy clusters. First, I discuss the use of large-scale structure for high-precision cosmology: I investigate the reconstruction of the Baryon Acoustic Oscillation (BAO) peak within the context of Lagrangian perturbation theory, testing its validity in a large suite of cosmological-volume N-body simulations. Then I consider galaxy clusters and the large-scale filaments surrounding them in a high-resolution N-body simulation, investigating the geometrical properties of galaxy cluster neighborhoods with a focus on the filaments connected to clusters. Using mock observations of galaxy clusters, I explore the correlations of scatter in galaxy cluster mass estimates from multi-wavelength observations and different measurement techniques, and examine the sources of the correlated scatter by considering the intrinsic and environmental properties of clusters.

  7. Advances and issues from the simulation of planetary magnetospheres with recent supercomputer systems

    NASA Astrophysics Data System (ADS)

    Fukazawa, K.; Walker, R. J.; Kimura, T.; Tsuchiya, F.; Murakami, G.; Kita, H.; Tao, C.; Murata, K. T.

    2016-12-01

    Planetary magnetospheres are very large, while phenomena within them occur on meso and micro scales, ranging from tens of planetary radii down to kilometers. To understand the dynamics of these multi-scale systems, numerical simulations have been performed on supercomputer systems. We have long studied the magnetospheres of Earth, Jupiter, and Saturn using 3-dimensional magnetohydrodynamic (MHD) simulations; however, we have not captured phenomena near the limits of the MHD approximation. In particular, we have not studied meso-scale phenomena that can be addressed by MHD. Recently we performed an MHD simulation of Earth's magnetosphere on the K computer, the first 10-PFlops supercomputer, and obtained multi-scale flow vorticity for both northward and southward IMF. Furthermore, we have access to supercomputer systems with Xeon, SPARC64, and vector-type CPUs, and can compare simulation results across these systems. Finally, we have compared the results of our parameter survey of the magnetosphere with observations from the HISAKI spacecraft. We have encountered a number of difficulties in using the latest supercomputer systems effectively. First, the size of the simulation output increases greatly: a simulation group now produces over 1 PB of output. Storing and analyzing this much data is difficult. The traditional way to analyze simulation results is to move them to the investigator's home computer, which takes over three months on an end-to-end 10-Gbps network; in practice, bottlenecks at some nodes, such as firewalls, can increase the transfer time to over one year. Another issue is post-processing: it is hard to handle even a few TB of simulation output given the memory limitations of a post-processing computer. To overcome these issues, we have developed and introduced parallel network storage, a highly efficient network protocol, and CUI-based visualization tools. In this study, we will show the latest simulation results using the petascale supercomputer and discuss problems arising from the use of these systems.

  8. Lab and Pore-Scale Study of Low Permeable Soils Diffusional Tortuosity

    NASA Astrophysics Data System (ADS)

    Lekhov, V.; Pozdniakov, S. P.; Denisova, L.

    2016-12-01

    Diffusion plays an important role in contaminant spreading in low-permeability units. The effective diffusion coefficient of a saturated porous medium depends on the diffusion coefficient in water, the porosity, and a structural parameter of the pore space: the tortuosity. Theoretical models of the relationship between porosity and diffusional tortuosity are usually derived for conceptual granular models of media filled with solid particles of simple geometry. These models usually do not represent soils with complex microstructure. Empirical models, such as Archie's law, based on experimental electrical-conductivity data, are more useful for practical applications. Such models contain empirical parameters that must be determined experimentally for a given soil type. In this work, we compared tortuosity values obtained in lab-scale diffusion experiments and pore-scale diffusion simulations for the studied soil microstructure, and examined the relationship between tortuosity and porosity. Samples for the study were taken from borehole cores of a low-permeability silt-clay formation. Using samples of 50 cm³ we performed lab-scale diffusion experiments and estimated the lab-scale tortuosity. Next, using these samples, we studied the microstructure with an X-ray microtomograph. Imaging was performed on undisturbed microsamples of size 1.53 mm at ×300 resolution (1024³ voxels). After binarization of each obtained 3-D structure, a spatial correlation analysis was performed, which showed that the spatial correlation scale of the indicator variogram is considerably smaller than the microsample length. The Laplace equation with binary coefficients was then solved numerically for each microsample; in total, 3500 simulations were run on a finite-difference grid of 175³ cells. As a result, effective diffusion coefficient, tortuosity, and porosity values were obtained for all studied microsamples. The results were analysed as a graph of tortuosity versus porosity. The 6 experimental tortuosity values agree well with the pore-scale simulations, falling within the general pattern of a nonlinear decrease of tortuosity with decreasing porosity. Fitting this graph with the Archie model, we found exponent values in the range between 1.8 and 2.4. This work was supported by RFBR via grant 14-05-00409.
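
    The Archie-type fit mentioned at the end can be reproduced schematically: regress log(D_eff/D_w) against log(φ) to get the exponent m, then convert to a tortuosity. The porosity-diffusivity pairs below are placeholders for the microsample results, and the tortuosity convention used (D_eff = D_w·φ·τ, so τ = φ^(m−1) decreases with porosity, as in the abstract) is one of several in use.

```python
import numpy as np

# Placeholder microsample results: porosity and normalized effective diffusivity
phi = np.array([0.08, 0.12, 0.18, 0.25, 0.31])
d_ratio = np.array([0.006, 0.015, 0.035, 0.075, 0.110])   # D_eff / D_w

m, logc = np.polyfit(np.log(phi), np.log(d_ratio), 1)     # Archie: d_ratio ~ phi**m
print(f"Archie exponent m = {m:.2f}")                      # cf. the paper's 1.8-2.4

tau = phi ** (m - 1)        # tortuosity in the D_eff = D_w * phi * tau convention
print("tortuosity:", np.round(tau, 3))
```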

  10. Large-scale dynamo action precedes turbulence in shearing box simulations of the magnetorotational instability

    DOE PAGES

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.

    2016-07-06

    Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth, amplify the large-scale fields exponentially before turbulence and high wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field and we find that a feedback from horizontal low wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.

  11. A new climate modeling framework for convection-resolving simulation at continental scale

    NASA Astrophysics Data System (ADS)

    Charpilloz, Christophe; di Girolamo, Salvatore; Arteaga, Andrea; Fuhrer, Oliver; Hoefler, Torsten; Schulthess, Thomas; Schär, Christoph

    2017-04-01

    Major uncertainties remain in our understanding of the processes that govern the water cycle in a changing climate and of their representation in weather and climate models. Of particular concern are heavy precipitation events of convective origin (thunderstorms and rain showers). The aim of the crCLIM project [1] is to propose a new climate modeling framework that alleviates the I/O bottleneck in large-scale, convection-resolving climate simulations and thus enables new analysis techniques for climate scientists. Because of the large computational costs, convection-resolving simulations are currently restricted to small computational domains or very short time scales, unless the largest available supercomputer systems, such as hybrid CPU-GPU architectures, are used [3]. Hence, the COSMO model has been adapted to run on these architectures for research and production purposes [2]. However, the amount of generated data increases accordingly, and storing it becomes infeasible, making the analysis of simulation results impractical. To circumvent this problem and enable high-resolution climate modeling, we propose a data-virtualization layer (DVL) that re-runs simulations on demand and transparently manages the data for analysis; that is, we trade computational effort (time) for storage (space). This approach also requires a bit-reproducible version of the COSMO model that produces identical results on different architectures (CPUs and GPUs) [4], coupled with a performance model to enable optimal re-runs given the requirements of the re-run and the available resources. In this contribution, we discuss the strategy for developing the DVL, a first performance model, the challenge of bit-reproducibility, and the first results of the crCLIM project. [1] http://www.c2sm.ethz.ch/research/crCLIM.html [2] O. Fuhrer, C. Osuna, X. Lapillonne, T. Gysi, M. Bianco, and T. Schulthess. "Towards GPU-accelerated operational weather forecasting." The GPU Technology Conference, GTC, 2013. [3] D. Leutwyler, O. Fuhrer, X. Lapillonne, D. Lüthi, and C. Schär. "Towards European-scale convection-resolving climate simulations with GPUs: a study with COSMO 4.19." Geoscientific Model Development 9, no. 9 (2016): 3393. [4] A. Arteaga, O. Fuhrer, and T. Hoefler. "Designing bit-reproducible portable high-performance applications." Parallel and Distributed Processing Symposium, 2014 IEEE 28th International, pp. 1235-1244. IEEE, 2014.
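
    The re-run-on-demand idea at the heart of the DVL can be sketched as a cache layer in front of a deterministic model: if a requested field is stored, serve it; otherwise re-run the simulation segment that produces it. The sketch below is schematic (not the crCLIM implementation), with a seeded random field standing in for a bit-reproducible model segment.

```python
import os
import numpy as np

def run_segment(start_step, n_steps, seed=42):
    """Bit-reproducible stand-in for a model segment: fixed seed, fixed order."""
    rng = np.random.default_rng(seed + start_step)
    return {start_step + i: rng.standard_normal((64, 64)) for i in range(n_steps)}

def get_field(step, cache_dir="cache", segment=10):
    path = os.path.join(cache_dir, f"step_{step:06d}.npy")
    if os.path.exists(path):                 # cheap path: data already stored
        return np.load(path)
    os.makedirs(cache_dir, exist_ok=True)
    start = (step // segment) * segment      # re-run the enclosing segment
    for s, field in run_segment(start, segment).items():
        np.save(os.path.join(cache_dir, f"step_{s:06d}.npy"), field)
    return np.load(path)

field = get_field(123)   # first call re-computes; later calls hit the cache
print(field.shape)
```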

  12. Application of lab derived kinetic biodegradation parameters at the field scale

    NASA Astrophysics Data System (ADS)

    Schirmer, M.; Barker, J. F.; Butler, B. J.; Frind, E. O.

    2003-04-01

    Estimating the intrinsic remediation potential of an aquifer typically requires accurate assessment of the biodegradation kinetics, the level of available electron acceptors, and the flow field. Zero- and first-order degradation rates derived at the laboratory scale generally overpredict the rate of biodegradation when applied at the field scale, because limited electron-acceptor availability and microbial growth are typically not considered. On the other hand, field-estimated zero- and first-order rates are often not suitable for forecasting plume development, because they may oversimplify the processes at the field scale and ignore several key processes, phenomena, and characteristics of the aquifer. This study uses the numerical model BIO3D to link the laboratory and field scales by applying laboratory-derived Monod kinetic degradation parameters to simulate a dissolved-gasoline field experiment at Canadian Forces Base (CFB) Borden. All additional input parameters were derived from laboratory and field measurements or taken from the literature. The simulated results match the experimental results reasonably well without any model calibration. An extensive sensitivity analysis was performed to estimate the influence of the most uncertain input parameters and to identify the key controlling factors at the field scale. The most uncertain input parameters are shown to have only a minor influence on the simulation results, whereas the flow field, the amount of available electron acceptor (oxygen), and the Monod kinetic parameters have a significant influence. Under the field conditions modelled and the assumptions made for the simulations, it can be concluded that laboratory-derived Monod kinetic parameters can adequately describe field-scale degradation processes, provided the field-scale model incorporates all controlling factors that are not necessarily observed at the lab scale: (a) advective and dispersive transport of one or more contaminants, (b) advective and dispersive transport and availability of electron acceptors, (c) mass-transfer limitations, and (d) spatial heterogeneities. In that case, no separate scale relationship linking the laboratory and field scales is needed; applying well-defined lab-scale parameters within a model that resolves these larger-scale processes, phenomena, and characteristics should accurately describe field-scale behaviour.
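
    The ingredients the abstract argues for (Monod kinetics, microbial growth, and electron-acceptor limitation) fit in a dual-Monod batch sketch; the parameter values below are illustrative, not the BIO3D/Borden calibration. Running it shows degradation stalling once oxygen is exhausted, the effect that lab-derived zero- or first-order rates miss.

```python
from scipy.integrate import solve_ivp

mu_max, Y = 2.0, 0.5     # max growth rate (1/d), yield (g biomass / g substrate)
Ks, Ko = 1.0, 0.2        # half-saturation constants (mg/L)
alpha = 3.0              # g O2 consumed per g substrate degraded

def rhs(t, y):
    S, O, X = y          # substrate, oxygen, biomass (mg/L)
    growth = mu_max * X * S / (Ks + S) * O / (Ko + O)   # dual-Monod growth
    return [-growth / Y, -alpha * growth / Y, growth]

sol = solve_ivp(rhs, (0.0, 30.0), [10.0, 8.0, 0.1], max_step=0.1)
S, O, X = sol.y[:, -1]
print(f"day 30: substrate {S:.2f}, oxygen {O:.2f}, biomass {X:.2f} mg/L")
```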

  13. Simulation and flavor compound analysis of dealcoholized beer via one-step vacuum distillation.

    PubMed

    Andrés-Iglesias, Cristina; García-Serna, Juan; Montero, Olimpio; Blanco, Carlos A

    2015-10-01

    The coupled operation of a laboratory-scale vacuum distillation process for producing alcohol-free beer and Aspen HYSYS simulation software was studied to characterize the chemical changes in the aroma profiles of two different lager beers during dealcoholization. At laboratory scale, two operating conditions were used to dealcoholize the beer samples: 102 mbar at 50 °C and 200 mbar at 67 °C. Samples taken at different steps of the process were analyzed by HS-SPME-GC-MS, focusing on the concentrations of seven flavor compounds: five alcohols and two esters. For the simulation, the EoS parameters of the Wilson-2 property package were adjusted to the experimental data, and one additional pressure (60 mbar) was tested. Simulation methods represent a viable alternative for predicting the volatile-compound composition of a final dealcoholized beer.

  14. Multiscale Analysis of Rapidly Rotating Dynamo Simulations

    NASA Astrophysics Data System (ADS)

    Orvedahl, R.; Calkins, M. A.; Featherstone, N. A.

    2017-12-01

    The magnetic fields of planets and stars are generated by dynamo action in their electrically conducting fluid interiors. Numerical models of this process solve the fundamental equations of magnetohydrodynamics driven by convection in a rotating spherical shell. Rotation plays an important role in modifying the resulting convective flows and the self-generated magnetic field. We present results from simulations of rapidly rotating systems that are unstable to dynamo action. We use the pseudo-spectral code Rayleigh to generate a suite of direct numerical simulations. Each simulation uses the Boussinesq approximation and is characterized by an Ekman number Ek = ν/(ΩL²) of 10⁻⁵. We vary the degree of convective forcing to obtain a range of convective Rossby numbers. The resulting flows and magnetic structures are analyzed using a Reynolds decomposition. We determine the relative importance of each term in the scale-separated governing equations and estimate the relevant spatial scales responsible for generating the mean magnetic field.

  15. Progress on the Development of the hPIC Particle-in-Cell Code

    NASA Astrophysics Data System (ADS)

    Dart, Cameron; Hayes, Alyssa; Khaziev, Rinat; Marcinko, Stephen; Curreli, Davide; Laboratory of Computational Plasma Physics Team

    2017-10-01

    Advancements were made in the development of the kinetic-kinetic electrostatic Particle-in-Cell code, hPIC, designed for large-scale simulation of the Plasma-Material Interface. hPIC achieved a weak scaling efficiency of 87% using the Algebraic Multigrid Solver BoomerAMG from the PETSc library on more than 64,000 cores of the Blue Waters supercomputer at the University of Illinois at Urbana-Champaign. The code successfully simulates two-stream instability and a volume of plasma over several square centimeters of surface extending out to the presheath in kinetic-kinetic mode. Results from a parametric study of the plasma sheath in strongly magnetized conditions will be presented, as well as a detailed analysis of the plasma sheath structure at grazing magnetic angles. The distribution function and its moments will be reported for plasma species in the simulation domain and at the material surface for plasma sheath simulations.

  16. Space-based Doppler lidar sampling strategies: Algorithm development and simulated observation experiments

    NASA Technical Reports Server (NTRS)

    Emmitt, G. D.; Wood, S. A.; Morris, M.

    1990-01-01

    Lidar Atmospheric Wind Sounder (LAWS) Simulation Models (LSM) were developed to evaluate the potential impact of global wind observations on the basic understanding of the Earth's atmosphere and on the predictive skills of current forecast models (GCM and regional scale). Fully integrated, top-to-bottom LAWS simulation models were developed for global- and regional-scale simulations. The algorithm development incorporated the effects of aerosols, water vapor, clouds, terrain, and atmospheric turbulence into the models. Other additions include a new satellite orbiter, a signal processor, a line-of-sight uncertainty model, a new Multi-Paired Algorithm, and wind error analysis code. An atmospheric wind field library containing control fields, meteorological fields, phenomena fields, and new European Centre for Medium-Range Weather Forecasts (ECMWF) data was also added. The LSM was used to address key LAWS issues and trades such as the accuracy and interpretation of LAWS information, data density, signal strength, cloud obscuration, and temporal data resolution.

  17. Uncertainty analysis of atmospheric deposition simulation of radiocesium and radioiodine from Fukushima Daiichi Nuclear Power Plant

    NASA Astrophysics Data System (ADS)

    Morino, Yu; Ohara, Toshimasa; Yumimoto, Keiya

    2014-05-01

    Chemical transport models (CTMs) played key roles in understanding the atmospheric behavior and deposition patterns of radioactive materials emitted from the Fukushima Daiichi nuclear power plant (FDNPP) after the nuclear accident that accompanied the Great Tohoku earthquake and tsunami on 11 March 2011. In this study, we assessed the uncertainties of atmospheric simulation by comparing observed and simulated deposition of radiocesium (137Cs) and radioiodine (131I). Airborne monitoring survey data were used to assess the model performance for 137Cs deposition patterns. We found that a simulation using emissions estimated with a regional-scale (~500 km) CTM reproduced the observed 137Cs deposition pattern in eastern Japan better than simulations using emissions estimated with local-scale (~50 km) or global-scale CTMs. In addition, we estimated the emission amount of 137Cs from the FDNPP by combining a CTM, an a priori source term, and observed deposition data. This is the first use of airborne-survey data of 137Cs deposition (more than 16,000 data points) as observational constraints in inverse modeling. The model simulation driven by the a posteriori source term achieved better agreement with 137Cs deposition measured by the aircraft survey and at in-situ stations over eastern Japan. The wet-deposition module was also evaluated: a simulation using a process-based wet-deposition module reproduced the observations well, whereas a simulation using scavenging coefficients showed large uncertainties associated with the empirical parameters. The best available simulation reproduced the observed 137Cs deposition rates in high-deposition areas (≥10 kBq m⁻²) within one order of magnitude. Recently, a 131I deposition map was released, which helped to evaluate the model performance for 131I deposition patterns. The observed 131I/137Cs deposition ratio is higher in areas southwest of the FDNPP than northwest of it, and this behavior was roughly reproduced by the CTM under the assumption that released 131I is present more in the gas phase than in particles. Analysis of 131I deposition provides a better constraint on the atmospheric simulation of 131I, which is important for assessing public radiation exposure.
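
    The inverse-modeling step is, schematically, a constrained linear inversion: deposition observations d are linear in the time-resolved source term x through CTM-simulated unit-emission footprints G (d = Gx). The sketch below uses non-negative least squares on synthetic placeholders; the actual estimation also weights an a priori source term, which is omitted here.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
n_obs, n_src = 200, 24                            # deposition points, emission windows
G = np.abs(rng.standard_normal((n_obs, n_src)))   # synthetic unit-emission footprints
x_true = np.zeros(n_src)
x_true[5], x_true[6] = 4.0, 2.0                   # two strong release windows
d = G @ x_true + 0.05 * rng.standard_normal(n_obs)

x_hat, _ = nnls(G, d)                             # non-negative least squares
print("recovered strongest windows:", sorted(np.argsort(x_hat)[-2:]))
```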

  18. Evaluation of a micro-scale wind model's performance over realistic building clusters using wind tunnel experiments

    NASA Astrophysics Data System (ADS)

    Zhang, Ning; Du, Yunsong; Miao, Shiguang; Fang, Xiaoyi

    2016-08-01

    The simulation performance over complex building clusters of a wind model (the Wind Information Field Fast Analysis model, WIFFA) within a micro-scale air-pollutant dispersion model system (the Urban Microscale Air Pollution dispersion Simulation model, UMAPS) is evaluated using wind-tunnel experimental data, including the CEDVAL (Compilation of Experimental Data for Validation of Micro-Scale Dispersion Models) data and the NJU-FZ data (Nanjing University-Fang Zhuang neighborhood wind-tunnel experiment). The results show that the wind model reproduces well the vortices triggered by urban buildings, and that the flow patterns in urban street canyons and building clusters are also represented. Owing to the complex shapes of the buildings and their distributions, deviations of the simulations from the measurements are usually caused by the simplification of building shapes and the determination of key zone sizes. The computational efficiencies of the different cases are also discussed in this paper. The model has a high computational efficiency compared with traditional numerical models that solve the Navier-Stokes equations, and can produce very high-resolution (1-5 m) wind fields for a complex neighborhood-scale urban building canopy (~1 km × 1 km) in less than 3 min on a personal computer.

  19. A High-Resolution WRF Tropical Channel Simulation Driven by a Global Reanalysis

    NASA Astrophysics Data System (ADS)

    Holland, G.; Leung, L.; Kuo, Y.; Hurrell, J.

    2006-12-01

    Since 2003, NCAR has invested in the development and application of the Nested Regional Climate Model (NRCM), based on the Weather Research and Forecasting (WRF) model and the Community Climate System Model, as a key component of the Prediction Across Scales Initiative. A prototype tropical channel model has been developed to investigate scale interactions and the influence of tropical convection on the large-scale circulation and tropical modes. The model was developed based on WRF, configured as a tropical channel between 30°S and 45°N, wide enough to allow teleconnection effects over the mid-latitudes. Compared to the limited-area domains over which WRF is typically applied, the channel mode alleviates issues with reflection of tropical modes that could result from imposing east/west boundaries. Using a large amount of available computing time on a supercomputer (Blue Vista) during its bedding-in period, a simulation has been completed with the tropical channel at 36-km horizontal resolution for 5 years (1996-2000), with the large-scale circulation provided by the NCEP/NCAR global reanalysis at the north/south boundaries. Shorter simulations of 2 years and 6 months have also been performed with two-way nests at 12-km and 4-km resolution, respectively, over the western Pacific warm pool, to explicitly resolve tropical convection over the Maritime Continent. The simulations realistically captured the large-scale circulation, including the trade winds over the tropical Pacific and Atlantic, the Australian and Asian monsoon circulations, and hurricane statistics. Preliminary analysis and evaluation of the simulations will be presented.

  20. Comparative empirical analysis of flow-weighted transit route networks in R-space and evolution modeling

    NASA Astrophysics Data System (ADS)

    Huang, Ailing; Zang, Guangzhi; He, Zhengbing; Guan, Wei

    2017-05-01

    Urban public transit systems are typical mixed complex networks with dynamic flow, and their evolution should be a process coupling topological structure with flow dynamics, which has received little attention. This paper uses the R-space representation to make a comparative empirical analysis of Beijing's flow-weighted transit route networks (TRNs), and we found that the Beijing TRNs of both 2011 and 2015 exhibit scale-free properties. Accordingly, we propose a flow-driven evolution model to simulate the development of TRNs, taking into account the passengers' dynamical behaviors triggered by topological change. The model treats the evolution of a TRN as an iterative process: at each time step, a certain number of new routes are generated, driven by travel demand, which leads to dynamical evolution of the new routes' flow and triggers perturbations in nearby routes that in turn impact the next round of route openings. We present a theoretical analysis based on mean-field theory, as well as numerical simulations of this model. The results obtained agree well with our empirical analysis, indicating that our model can simulate TRN evolution with scale-free properties in the distributions of node strength and degree. The purpose of this paper is to illustrate the global evolutionary mechanism of transit networks, which can be used to develop planning and design strategies for real TRNs.
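
    A minimal sketch of flow-driven, strength-preferential growth (the core mechanism only; the paper's model also perturbs flows on nearby routes): each new connection attaches to existing nodes with probability proportional to node strength, i.e. total passenger flow, which is what produces heavy-tailed strength and degree distributions.

```python
import numpy as np

rng = np.random.default_rng(4)
strength = [1.0, 1.0, 1.0]                 # seed nodes with unit strength

for step in range(2000):                   # add one route (edge) per step
    p = np.array(strength) / sum(strength)
    a = rng.choice(len(strength), p=p)     # attach preferentially by strength
    strength.append(1.0)                   # node introduced by the new route
    flow = rng.exponential(1.0)            # passenger flow on the new route
    strength[a] += flow
    strength[-1] += flow

s = np.array(strength)
print(f"mean strength {s.mean():.2f}, max strength {s.max():.2f}")  # heavy tail
```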

  1. Fully-Integrated Simulation of Conjunctive Use from Field to Basin Scales: Development of a Surface Water Operations Module for MODFLOW-OWHM

    NASA Astrophysics Data System (ADS)

    Ferguson, I. M.; Boyce, S. E.; Hanson, R. T.; Llewellyn, D.

    2014-12-01

    It is well established that groundwater pumping affects surface-water availability by intercepting groundwater that would otherwise discharge to streams and/or by increasing seepage from surface-water channels. Conversely, surface-water management operations affect groundwater availability by altering the timing, location, and quantity of groundwater recharge and demand. Successful conjunctive use may require analysis with an integrated approach that accounts for the many interactions and feedbacks between surface-water and groundwater availability and their joint management. In order to improve the simulation and analysis of conjunctive use, the Bureau of Reclamation and the USGS are collaborating to develop a surface-water operations module within the MODFLOW One-Water Hydrologic Flow Model (MF-OWHM), a new version of the USGS Modular Groundwater Flow Model (MODFLOW). Here we describe the development and application of the surface-water operations module. We provide an overview of the conceptual approach used to simulate surface-water operations—including surface-water storage, allocation, release, diversion, and delivery on monthly to seasonal time frames—in a fully integrated manner. We then present results from a recent case-study analysis of the Rio Grande Project, a large-scale irrigation project located in New Mexico and Texas, under varying surface-water operations criteria and climate conditions. Case-study results demonstrate the importance of integrated hydrologic simulation of surface-water and groundwater operations in the analysis and management of conjunctive-use systems.

  2. Experiments and Dynamic Finite Element Analysis of a Wire-Rope Rockfall Protective Fence

    NASA Astrophysics Data System (ADS)

    Tran, Phuc Van; Maegawa, Koji; Fukada, Saiji

    2013-09-01

    The imperative need to protect structures in mountainous areas against rockfall has led to the development of various protection methods. This study introduces a new type of rockfall protection fence made of posts, wire ropes, wire netting, and energy absorbers. The performance of this rock fence was verified in both experiments and dynamic finite-element analysis. In the collision tests, a reinforced-concrete block rolled down a natural slope and struck the rock fence at the end of the slope. A specialized system of measuring instruments was employed to accurately measure the acceleration of the block without a cable connection. In particular, the performance of two energy absorbers, which also help prevent the wire ropes from breaking, was investigated to determine the better design. In the numerical simulation, a commercial finite-element code with explicit dynamic capabilities was employed to model the two full-scale tests. To facilitate the simulation, certain simplifying assumptions were adopted for the mechanical data of each individual component of the rock fence and for the geometrical data of the model. Good agreement with the experimental data validated the numerical simulation, and the numerical results also helped highlight limitations of the testing method. The results of the numerical simulation thus provide a deeper understanding of the structural behavior of the individual components of the rock fence during rockfall impact. More importantly, numerical simulations can be used not only as supplements to or substitutes for full-scale tests but also in parametric studies and design.

  3. Rapid Harmonic Analysis of Piezoelectric MEMS Resonators.

    PubMed

    Puder, Jonathan M; Pulskamp, Jeffrey S; Rudy, Ryan Q; Cassella, Cristian; Rinaldi, Matteo; Chen, Guofeng; Bhave, Sunil A; Polcawich, Ronald G

    2018-06-01

    This paper reports on a novel simulation method combining the speed of analytical evaluation with the accuracy of finite-element analysis (FEA), known as the rapid analytical-FEA technique (RAFT). The ability of the RAFT to accurately predict frequency response orders of magnitude faster than conventional simulation methods, while providing deeper insights into device design not possible with other types of analysis, is detailed. Simulation results from the RAFT across wide bandwidths are compared, with good agreement, to measured results from resonators spanning various materials, frequencies, and topologies, including resonators targeting beam-extension, disk-flexure, and Lamé beam modes. An example scaling analysis is presented, and other enabled applications are discussed as well. The supplemental material includes example code for implementation in ANSYS, although any commonly employed FEA package may be used.

  4. Use of a PhET Interactive Simulation in General Chemistry Laboratory: Models of the Hydrogen Atom

    ERIC Educational Resources Information Center

    Clark, Ted M.; Chamberlain, Julia M.

    2014-01-01

    An activity supporting the PhET interactive simulation, Models of the Hydrogen Atom, has been designed and used in the laboratory portion of a general chemistry course. This article describes the framework used to successfully accomplish implementation on a large scale. The activity guides students through a comparison and analysis of the six…

  5. Acceleration and sensitivity analysis of lattice kinetic Monte Carlo simulations using parallel processing and rate constant rescaling

    NASA Astrophysics Data System (ADS)

    Núñez, M.; Robie, T.; Vlachos, D. G.

    2017-10-01

    Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
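
    The rescaling idea can be caricatured statically: any reversible pair whose two rate constants are both far above the slow timescale is quasi-equilibrated, and dividing both directions by a common factor preserves its equilibrium ratio while removing stiffness. Real implementations, including the authors', adjust the factors on the fly from simulation statistics; the thresholds and rates below are hypothetical.

```python
import numpy as np

def rescale_fast_pairs(k_fwd, k_rev, k_slow_max, delta=100.0):
    """Slow down reversible pairs whose slower direction still exceeds
    delta * k_slow_max; the ratio k_fwd/k_rev (equilibrium) is preserved."""
    k_fwd = np.asarray(k_fwd, dtype=float).copy()
    k_rev = np.asarray(k_rev, dtype=float).copy()
    fast = np.minimum(k_fwd, k_rev) > delta * k_slow_max
    k_fwd[fast] /= delta
    k_rev[fast] /= delta
    return k_fwd, k_rev

k_f = [1e8, 5e2, 2e7]    # hypothetical forward rate constants (1/s)
k_r = [8e7, 1e1, 3e7]    # hypothetical reverse rate constants (1/s)
print(rescale_fast_pairs(k_f, k_r, k_slow_max=1e3))
```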

  6. Multi-scale sensitivity analysis of pile installation using DEM

    NASA Astrophysics Data System (ADS)

    Esposito, Ricardo Gurevitz; Velloso, Raquel Quadros; Vargas, Eurípedes do Amaral, Jr.; Danziger, Bernadete Ragoni

    2017-12-01

    The disturbances experienced by the soil due to pile installation and dynamic soil-structure interaction still present major challenges to foundation engineers. These phenomena exhibit complex behaviors that are difficult to measure in physical tests and to reproduce in numerical models. Because of the simplified approach the discrete element method (DEM) uses to simulate large deformations and the nonlinear stress-dilatancy behavior of granular soils, the DEM is an excellent tool for investigating these processes. This study presents a sensitivity analysis of the effects of introducing a single pile using the PFC2D software developed by Itasca Co. The different scales investigated in these simulations include point and shaft resistance, alterations in porosity and stress fields, and particle displacement. Several simulations were conducted to investigate the effects of different numerical approaches, indicating that the method of installation and particle rotation could greatly influence the conditions around the numerical pile. Minor effects were also noted due to changes in penetration velocity and pile-soil friction. The difference in behavior between a moving and a stationary pile shows good qualitative agreement with previous experimental results, indicating the need to bring the system to force equilibrium before simulating any load test.

  7. Static Analysis of Large-Scale Multibody System Using Joint Coordinates and Spatial Algebra Operator

    PubMed Central

    Omar, Mohamed A.

    2014-01-01

    Initial transient oscillations exhibited in the dynamic simulation responses of multibody systems can lead to inaccurate results, unrealistic load predictions, or simulation failure. These transients can result from incompatible initial conditions, initial constraint violations, and inadequate kinematic assembly. Performing a static equilibrium analysis before the dynamic simulation can eliminate these transients and lead to a stable simulation. Most existing multibody formulations determine the static equilibrium position by minimizing the system potential energy. This paper presents a new general-purpose approach for solving the static equilibrium of large-scale articulated multibody systems. The proposed approach introduces an energy drainage mechanism based on the Baumgarte constraint stabilization approach to determine the static equilibrium position. The spatial algebra operator is used to express the kinematic and dynamic equations of the closed-loop multibody system. The proposed formulation uses the joint coordinates and modal elastic coordinates as the system generalized coordinates. The recursive nonlinear equations of motion are formulated using the Cartesian coordinates and the joint coordinates to form an augmented set of differential-algebraic equations. The system connectivity matrix is then derived from the system topological relations and used to project the Cartesian quantities into the joint subspace, leading to a minimum set of differential equations. PMID:25045732
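
    Baumgarte stabilization, the basis of the energy drainage mechanism mentioned above, replaces the raw constraint acceleration equation C'' = 0 with C'' + 2*alpha*C' + beta^2*C = 0, so that constraint drift decays instead of accumulating. A minimal Python sketch on a planar pendulum in Cartesian coordinates follows; it illustrates the stabilization idea only, not the paper's spatial-algebra formulation, and the gains alpha and beta are assumed values.

        import numpy as np
        from scipy.integrate import solve_ivp

        m, g, L = 1.0, 9.81, 1.0
        alpha, beta = 10.0, 10.0          # Baumgarte gains (assumed values)

        def rhs(t, s):
            """Planar pendulum in Cartesian coordinates; the rod length is
            enforced as a constraint C = (x^2 + y^2 - L^2)/2 = 0, stabilized
            via C'' + 2*alpha*C' + beta^2*C = 0 instead of the raw C'' = 0."""
            x, y, vx, vy = s
            C = 0.5 * (x * x + y * y - L * L)
            Cd = x * vx + y * vy
            # Solve x*ax + y*ay + vx^2 + vy^2 = -2*alpha*Cd - beta^2*C for the
            # constraint-force multiplier lam, with ax = lam*x/m, ay = -g + lam*y/m.
            lam = m * (-2 * alpha * Cd - beta**2 * C + g * y - (vx * vx + vy * vy)) / (x * x + y * y)
            return [vx, vy, lam * x / m, -g + lam * y / m]

        # Start slightly off the constraint surface to show the drift being damped out
        sol = solve_ivp(rhs, (0, 10), [L * 1.01, 0, 0, 0], max_step=1e-3)
        drift = np.abs(sol.y[0]**2 + sol.y[1]**2 - L**2)
        print("max constraint drift after t > 1 s:", drift[sol.t > 1].max())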

  8. Static analysis of large-scale multibody system using joint coordinates and spatial algebra operator.

    PubMed

    Omar, Mohamed A

    2014-01-01

    Initial transient oscillations exhibited in the dynamic simulation responses of multibody systems can lead to inaccurate results, unrealistic load predictions, or simulation failure. These transients can result from incompatible initial conditions, initial constraint violations, and inadequate kinematic assembly. Performing a static equilibrium analysis before the dynamic simulation can eliminate these transients and lead to a stable simulation. Most existing multibody formulations determine the static equilibrium position by minimizing the system potential energy. This paper presents a new general-purpose approach for solving the static equilibrium of large-scale articulated multibody systems. The proposed approach introduces an energy drainage mechanism based on the Baumgarte constraint stabilization approach to determine the static equilibrium position. The spatial algebra operator is used to express the kinematic and dynamic equations of the closed-loop multibody system. The proposed formulation uses the joint coordinates and modal elastic coordinates as the system generalized coordinates. The recursive nonlinear equations of motion are formulated using the Cartesian coordinates and the joint coordinates to form an augmented set of differential-algebraic equations. The system connectivity matrix is then derived from the system topological relations and used to project the Cartesian quantities into the joint subspace, leading to a minimum set of differential equations.

  9. Multi-scale sensitivity analysis of pile installation using DEM

    NASA Astrophysics Data System (ADS)

    Esposito, Ricardo Gurevitz; Velloso, Raquel Quadros; Vargas, Eurípedes do Amaral, Jr.; Danziger, Bernadete Ragoni

    2018-07-01

    The disturbances experienced by the soil due to pile installation and dynamic soil-structure interaction still present major challenges to foundation engineers. These phenomena exhibit complex behaviors that are difficult to measure in physical tests and to reproduce in numerical models. Because of the simplified approach the discrete element method (DEM) uses to simulate large deformations and the nonlinear stress-dilatancy behavior of granular soils, the DEM is an excellent tool for investigating these processes. This study presents a sensitivity analysis of the effects of introducing a single pile using the PFC2D software developed by Itasca Co. The different scales investigated in these simulations include point and shaft resistance, alterations in porosity and stress fields, and particle displacement. Several simulations were conducted to investigate the effects of different numerical approaches, indicating that the method of installation and particle rotation could greatly influence the conditions around the numerical pile. Minor effects were also noted due to changes in penetration velocity and pile-soil friction. The difference in behavior between a moving and a stationary pile shows good qualitative agreement with previous experimental results, indicating the need to bring the system to force equilibrium before simulating any load test.

  10. Forest gradient response in Sierran landscapes: the physical template

    USGS Publications Warehouse

    Urban, Dean L.; Miller, Carol; Halpin, Patrick N.; Stephenson, Nathan L.

    2000-01-01

    Vegetation pattern on landscapes is the manifestation of physical gradients, biotic response to these gradients, and disturbances. Here we focus on the physical template as it governs the distribution of mixed-conifer forests in California's Sierra Nevada. We extended a forest simulation model to examine montane environmental gradients, emphasizing factors affecting the water balance in these summer-dry landscapes. The model simulates the soil moisture regime in terms of the interaction of water supply and demand: supply depends on precipitation and water storage, while evapotranspirational demand varies with solar radiation and temperature. The forest cover itself can affect the water balance via canopy interception and evapotranspiration. We simulated Sierran forests as slope facets, defined as gridded stands of homogeneous topographic exposure, and verified simulated gradient response against sample quadrats distributed across Sequoia National Park. We then performed a modified sensitivity analysis of abiotic factors governing the physical gradient. Importantly, the model's sensitivity to temperature, precipitation, and soil depth varies considerably over the physical template, particularly relative to elevation. The physical drivers of the water balance have characteristic spatial scales that differ by orders of magnitude. Across large spatial extents, temperature and precipitation as defined by elevation primarily govern the location of the mixed conifer zone. If the analysis is constrained to elevations within the mixed-conifer zone, local topography comes into play as it influences drainage. Soil depth varies considerably at all measured scales, and is especially dominant at fine (within-stand) scales. Physical site variables can influence soil moisture deficit either by affecting water supply or water demand; these effects have qualitatively different implications for forest response. These results have clear implications for purely inferential approaches to gradient analysis, and bear strongly on our ability to use correlative approaches in assessing the potential responses of montane forests to anthropogenic climatic change.
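
    The supply-versus-demand water balance described above can be illustrated with a simple bucket model: storage capacity set by soil depth, demand set by potential evapotranspiration. The Python sketch below is a generic illustration with invented monthly forcing for a summer-dry climate, not the simulation model used in the study.

        import numpy as np

        def bucket_water_balance(precip, pet, soil_depth_mm, awc=0.15):
            """Minimal monthly soil-moisture bucket: supply is precipitation plus
            stored water; demand is potential evapotranspiration (PET). Returns
            the monthly moisture deficit (demand unmet by supply)."""
            capacity = soil_depth_mm * awc          # plant-available storage [mm]
            store, deficit = capacity, np.zeros(len(precip))
            for i, (p, d) in enumerate(zip(precip, pet)):
                supply = p + store
                aet = min(d, supply)                # actual ET limited by supply
                deficit[i] = d - aet
                store = min(capacity, supply - aet) # excess beyond capacity runs off
            return deficit

        # Hypothetical summer-dry climate: wet winters, hot dry summers (mm/month)
        precip = np.array([150, 130, 110, 60, 30, 10, 5, 5, 20, 60, 110, 140], float)
        pet = np.array([15, 20, 40, 70, 110, 150, 170, 160, 110, 60, 25, 15], float)
        print(bucket_water_balance(precip, pet, soil_depth_mm=1000))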

  11. Characterizing observed circulation patterns within a bay using HF radar and numerical model simulations

    NASA Astrophysics Data System (ADS)

    O'Donncha, Fearghal; Hartnett, Michael; Nash, Stephen; Ren, Lei; Ragnoli, Emanuele

    2015-02-01

    In this study, High Frequency Radar (HFR) observations, in conjunction with numerical model simulations, are used to investigate surface flow dynamics in a tidally active, wind-driven bay: Galway Bay, situated on the west coast of Ireland. Comparisons against ADCP sensor data permit an independent assessment of HFR and model performance. Results show root-mean-square (rms) differences between HFR and ADCP currents in the range of 10-12 cm/s, while model rms differences equalled 12-14 cm/s. The subsequent analysis focuses on a detailed comparison of HFR and model output. Harmonic analysis decomposes both sets of surface currents into distinct flow processes, enabling a correlation analysis between the resultant output and the dominant forcing parameters. Comparisons of barotropic model simulations and the HFR tidal signal demonstrate consistently high agreement, particularly for the dominant M2 tidal signal. Analysis of residual flows demonstrates considerably poorer agreement, with the model failing to replicate complex flows. A number of hypotheses explaining this discrepancy are discussed, namely: discrepancies between regional-scale, coastal-ocean models and globally influenced bay-scale dynamics; model uncertainties arising from highly variable wind-driven flows across a large body of water forced by point measurements of wind vectors; and the high dependence of model simulations on empirical wind-stress coefficients. The research demonstrates that an advanced, widely used hydro-environmental model does not accurately reproduce aspects of surface flow processes, particularly with regard to wind forcing. Considering the significance of surface boundary conditions in both coastal and open-ocean dynamics, the viability of using a systematic analysis of results to improve model predictions is discussed.

  12. Impact of Spatial Scales on the Intercomparison of Climate Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Wei; Steptoe, Michael; Chang, Zheng

    2017-01-01

    Scenario analysis has been widely applied in climate science to understand the impact of climate change on the future human environment, but intercomparison and similarity analysis of different climate scenarios based on multiple simulation runs remain challenging. Although spatial heterogeneity plays a key role in modeling climate and human systems, little research has been performed to understand the impact of spatial variations and scales on similarity analysis of climate scenarios. To address this issue, the authors developed a geovisual analytics framework that lets users perform similarity analysis of climate scenarios from the Global Change Assessment Model (GCAM) using a hierarchical clustering approach.

  13. Parallel computing in enterprise modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what distinguishes them from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic, and social simulations are members of this class, where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  14. Building test data from real outbreaks for evaluating detection algorithms.

    PubMed

    Texier, Gaetan; Jackson, Michael L; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve

    2017-01-01

    Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method-ITSM, Metropolis-Hastings Random Walk, Metropolis-Hastings Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals.
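
    One of the simpler resampling routes listed above, inverse transform sampling combined with a homothetic rescaling of the time axis, can be sketched in a few lines of Python. This is a generic illustration with an invented historical curve, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(1)

        def simulate_outbreak(historical_counts, target_days, target_cases):
            """Rescale a historical epidemic curve to a target duration (homothetic
            transformation of the time axis) and redraw case onset dates by
            inverse transform sampling from the curve's empirical CDF."""
            hist = np.asarray(historical_counts, float)
            cdf = np.cumsum(hist) / hist.sum()
            days = np.arange(1, len(hist) + 1) / len(hist)   # normalized time in (0, 1]
            # Inverse transform sampling: uniform draws -> normalized onset times
            u = rng.random(target_cases)
            onset = np.interp(u, np.concatenate([[0.0], cdf]),
                              np.concatenate([[0.0], days]))
            # Stretch onto the target duration and bin into daily counts
            sim_days = np.minimum((onset * target_days).astype(int), target_days - 1)
            return np.bincount(sim_days, minlength=target_days)

        historical = [1, 3, 8, 15, 22, 18, 10, 6, 3, 1]      # hypothetical outbreak
        print(simulate_outbreak(historical, target_days=20, target_cases=150))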

  15. Building test data from real outbreaks for evaluating detection algorithms

    PubMed Central

    Texier, Gaetan; Jackson, Michael L.; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve

    2017-01-01

    Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method—ITSM, Metropolis-Hastings Random Walk, Metropolis-Hastings Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals. PMID:28863159

  16. Scaling up to address data science challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendelberger, Joanne R.

    Statistics and Data Science provide a variety of perspectives and technical approaches for exploring and understanding Big Data. Partnerships between scientists from different fields such as statistics, machine learning, computer science, and applied mathematics can lead to innovative approaches for addressing problems involving increasingly large amounts of data in a rigorous and effective manner that takes advantage of advances in computing. Here, this article will explore various challenges in Data Science and will highlight statistical approaches that can facilitate analysis of large-scale data, including sampling and data reduction methods, techniques for effective analysis and visualization of large-scale simulations, and algorithms and procedures for efficient processing.

  17. Scaling up to address data science challenges

    DOE PAGES

    Wendelberger, Joanne R.

    2017-04-27

    Statistics and Data Science provide a variety of perspectives and technical approaches for exploring and understanding Big Data. Partnerships between scientists from different fields such as statistics, machine learning, computer science, and applied mathematics can lead to innovative approaches for addressing problems involving increasingly large amounts of data in a rigorous and effective manner that takes advantage of advances in computing. Here, this article will explore various challenges in Data Science and will highlight statistical approaches that can facilitate analysis of large-scale data, including sampling and data reduction methods, techniques for effective analysis and visualization of large-scale simulations, and algorithms and procedures for efficient processing.

  18. Multiscale modeling of fluid flow and mass transport

    NASA Astrophysics Data System (ADS)

    Masuoka, K.; Yamamoto, H.; Bijeljic, B.; Lin, Q.; Blunt, M. J.

    2017-12-01

    In recent years, several studies have reported simulations of fluid flow in the pore spaces of rocks using the Navier-Stokes equations. These studies mostly use X-ray CT to create 3-D numerical grids of the pores at the micro-scale. However, the results may be of low accuracy when the rock has a wide pore size distribution, because pores smaller than the resolution of the X-ray CT may be neglected. In recent laboratory tracer tests, in which fresh water was injected into a brine-saturated Ryukyu limestone, we found that the chloride concentration took a long time to decrease. This phenomenon can be explained by the weak connectivity of the porous networks. It is therefore important to simulate the entire pore space, including the very small pores in which diffusion is dominant. We have developed a new methodology for multi-level modeling of pore-scale fluid flow in porous media. The approach combines pore-scale analysis with Darcy-flow analysis using two types of X-ray CT images at different resolutions. Results of the numerical simulations showed a close match with the experimental results. The proposed methodology is an enhancement for analyzing mass transport and flow phenomena in rocks with complicated pore structure.

  19. Can preferred atmospheric circulation patterns over the North-Atlantic-Eurasian region be associated with arctic sea ice loss?

    NASA Astrophysics Data System (ADS)

    Crasemann, Berit; Handorf, Dörthe; Jaiser, Ralf; Dethloff, Klaus; Nakamura, Tetsu; Ukita, Jinro; Yamazaki, Koji

    2017-12-01

    In the framework of atmospheric circulation regimes, we study whether the recent Arctic sea ice loss and Arctic Amplification are associated with changes in the frequency of occurrence of preferred atmospheric circulation patterns during the extended winter season from December to March. To determine regimes, we applied a cluster analysis to sea-level pressure fields from reanalysis data and output from an atmospheric general circulation model. The specific setup of the two analyzed model simulations for low and high ice conditions allows differences between the simulations to be attributed to the prescribed sea ice changes only. The reanalysis data revealed two circulation patterns that occur more frequently under low Arctic sea ice conditions: a Scandinavian blocking in December and January and a negative North Atlantic Oscillation pattern in February and March. An analysis of the related patterns of synoptic-scale activity and 2 m temperatures provides a synoptic interpretation of the corresponding large-scale regimes. The regimes that occur more frequently under low sea ice conditions are reproduced reasonably well by the model simulations. Based on these results, we conclude that the detected changes in the frequency of occurrence of large-scale circulation patterns can be associated with changes in Arctic sea ice conditions.
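
    The regime methodology above, clustering daily sea-level pressure fields and comparing regime occurrence frequencies, can be sketched generically in Python. The sketch below uses random stand-in data and scikit-learn's k-means; the paper's exact cluster method and preprocessing may differ.

        import numpy as np
        from sklearn.cluster import KMeans

        # Hypothetical daily winter sea-level-pressure anomaly fields:
        # n_days maps on an (nlat x nlon) grid, flattened to feature vectors.
        rng = np.random.default_rng(0)
        n_days, nlat, nlon = 600, 20, 40
        slp_anom = rng.normal(size=(n_days, nlat * nlon))  # stand-in for reanalysis data

        # Partition days into k circulation regimes; cluster centroids are the
        # regime patterns, and labels give each day's regime membership.
        k = 5
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(slp_anom)
        regime_patterns = km.cluster_centers_.reshape(k, nlat, nlon)

        # Frequency of occurrence per regime: the quantity compared between
        # low- and high-ice conditions in the study.
        freq = np.bincount(km.labels_, minlength=k) / n_days
        print(freq)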

  20. Continuous Easy-Plane Deconfined Phase Transition on the Kagome Lattice

    NASA Astrophysics Data System (ADS)

    Zhang, Xue-Feng; He, Yin-Chen; Eggert, Sebastian; Moessner, Roderich; Pollmann, Frank

    2018-03-01

    We use large-scale quantum Monte Carlo simulations to study an extended Hubbard model of hard-core bosons on the kagome lattice. In the limit of strong nearest-neighbor interactions at 1/3 filling, the interplay between frustration and quantum fluctuations leads to a valence bond solid ground state. The system undergoes a quantum phase transition to a superfluid phase as the interaction strength is decreased. It is still under debate whether the transition is weakly first order or represents an unconventional continuous phase transition. We present a theory in terms of an easy-plane noncompact CP^1 gauge theory describing the phase transition at 1/3 filling. Utilizing large-scale quantum Monte Carlo simulations with parallel tempering in the canonical ensemble up to 15552 spins, we provide evidence that the phase transition is continuous at exactly 1/3 filling. A careful finite-size scaling analysis reveals an unconventional scaling behavior hinting at deconfined quantum criticality.

  1. Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks

    PubMed Central

    Kaltenbacher, Barbara; Hasenauer, Jan

    2017-01-01

    Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions have so far been missing. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large-scale biochemical reaction networks. We present the approach for time-discrete measurements and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
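
    The key property claimed above, gradient cost effectively independent of the number of parameters, follows because one backward (adjoint) solve yields the full gradient. A minimal Python sketch on a one-parameter toy model dy/dt = -theta*y with objective J = 0.5*(y(T) - data)^2, checked against the analytic gradient; this illustrates the adjoint recipe, not the paper's solver.

        import numpy as np
        from scipy.integrate import solve_ivp

        theta, y0, T, data = 0.8, 1.0, 2.0, 0.3   # toy decay model dy/dt = -theta*y

        # Forward solve, storing the state for use in the backward pass
        fwd = solve_ivp(lambda t, y: -theta * y, (0, T), [y0],
                        dense_output=True, rtol=1e-10)
        residual = fwd.sol(T)[0] - data           # dJ/dy(T) for J = 0.5*(y(T)-data)^2

        # Backward (adjoint) solve: lambda' = -(df/dy)^T lambda = theta*lambda,
        # from t=T to t=0 with lambda(T) = residual. The gradient is
        # dJ/dtheta = integral of lambda * df/dtheta dt, carried as an extra state.
        def adj(t, s):
            lam, _ = s
            y = fwd.sol(t)[0]
            return [theta * lam, lam * (-y)]      # df/dtheta = -y

        bwd = solve_ivp(adj, (T, 0), [residual, 0.0], rtol=1e-10)
        grad = -bwd.y[1, -1]                      # sign flip: integrated T -> 0

        exact = residual * (-T * np.exp(-theta * T))
        print("adjoint gradient:", grad, " analytic:", exact)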

  2. Analysis of applied forces and electromyography of back and shoulders muscles when performing a simulated hand scaling task.

    PubMed

    Porter, William; Gallagher, Sean; Torma-Krajewski, Janet

    2010-05-01

    Hand scaling is a physically demanding task responsible for numerous overexertion injuries in underground mining. Scaling requires the miner to use a long pry bar to remove loose rock, reducing the likelihood of rock fall injuries. The experiments described in this article simulated "rib" scaling (scaling a mine wall) from an elevated bucket to examine force generation and electromyographic responses using two types of scaling bars (steel and fiberglass-reinforced aluminum) at five target heights ranging from floor level to 176 cm. Ten male and six female subjects were tested in separate experiments. Peak and average force applied at the scaling bar tip and normalized electromyography (EMG) of the left and right pairs of the deltoid and erectores spinae muscles were obtained. Work height significantly affected peak prying force during scaling activities, with the highest force capacity at the lower levels. Bar type did not affect force generation; however, use of the lighter fiberglass bar required significantly more muscle activity to achieve the same force. The results of these studies suggest that miners should scale points on the rock face that are below their knees, and reposition the bucket as often as necessary to do so. Published by Elsevier Ltd.

  3. Swimming in Light: A Large-Scale Computational Analysis of the Metabolism of Dinoroseobacter shibae

    PubMed Central

    Rex, Rene; Bill, Nelli; Schmidt-Hohagen, Kerstin; Schomburg, Dietmar

    2013-01-01

    The Roseobacter clade is a ubiquitous group of marine α-proteobacteria. To gain insight into the versatile metabolism of this clade, we took a constraint-based approach and created a genome-scale metabolic model (iDsh827) of Dinoroseobacter shibae DFL12T. Our model is the first to account for the energy demand of motility, light-driven ATP generation, and an experimentally determined specific biomass composition. To cover a large variety of environmental conditions, as well as plasmid and single-gene knock-out mutants, we simulated 391,560 different physiological states using flux balance analysis. We analyzed our results with regard to energy metabolism, validated them experimentally, and revealed a pronounced metabolic response to the availability of light. Furthermore, we introduced the energy demand of motility as an important parameter in genome-scale metabolic models. The results of our simulations also gave insight into the changing usage of the two degradation routes for dimethylsulfoniopropionate, an abundant compound in the ocean. A side product of dimethylsulfoniopropionate degradation is dimethyl sulfide, which seeds cloud formation and thus enhances the reflection of sunlight. Through our exhaustive simulations, we were able to identify single-gene knock-out mutants which show an increased production of dimethyl sulfide. In addition to the single-gene knock-out simulations, we studied the effect of plasmid loss on the metabolism. Moreover, we explored the possible use of a functioning phosphofructokinase for D. shibae. PMID:24098096
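
    Flux balance analysis, the workhorse behind the 391,560 simulated states, is a linear program: maximize a biomass flux subject to steady-state mass balance S v = 0 and flux bounds. Below is a deliberately tiny Python sketch on an invented four-reaction network, not the iDsh827 model.

        import numpy as np
        from scipy.optimize import linprog

        # Toy FBA network. Reactions:
        # R1: -> A (uptake), R2: A -> B, R3: A -> C, R4: B + C -> biomass
        S = np.array([
            [1, -1, -1,  0],   # metabolite A
            [0,  1,  0, -1],   # metabolite B
            [0,  0,  1, -1],   # metabolite C
        ])
        bounds = [(0, 10), (0, None), (0, None), (0, None)]  # uptake capped at 10

        # linprog minimizes, so negate the biomass flux (v4) to maximize it
        res = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=np.zeros(3), bounds=bounds)
        print("optimal fluxes:", res.x)   # expected: [10, 5, 5, 5]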

  4. Spatial variability of the Black Sea surface temperature from high resolution modeling and satellite measurements

    NASA Astrophysics Data System (ADS)

    Mizyuk, Artem; Senderov, Maxim; Korotaev, Gennady

    2016-04-01

    A large number of numerical ocean models have been implemented for the Black Sea basin during the last two decades. They reproduce a rather similar structure of the synoptic variability of the circulation. Since the 2000s, numerical studies of the mesoscale structure have been carried out using high performance computing (HPC). With the growing capacity of computing resources, it is now possible to reconstruct the Black Sea currents with a spatial resolution of several hundred meters. However, how realistic are these results? In the proposed study, an attempt is made to understand which spatial scales are reproduced by an ocean model of the Black Sea. Simulations are made using a parallel version of NEMO (Nucleus for European Modelling of the Ocean). Two regional configurations with spatial resolutions of 5 km and 2.5 km are described. Comparison of the SST from the simulations at the two spatial resolutions shows a qualitative difference in the spatial structures. Results of the high-resolution simulation are also compared with satellite observations and observation-based products from Copernicus using spatial correlation and spectral analysis. The spatial scales of the correlation functions for simulated and observed SST are rather close, and differ considerably from those of the satellite SST reanalysis. The evolution of the spectral density for modelled SST and the reanalysis showed agreement in the time periods of small-scale intensification. Applying spectral analysis to the satellite measurements is complicated by data gaps. The research leading to these results has received funding from the Russian Science Foundation (project № 15-17-20020).

  5. The mass dependence of dark matter halo alignments with large-scale structure

    NASA Astrophysics Data System (ADS)

    Piras, Davide; Joachimi, Benjamin; Schäfer, Björn Malte; Bonamigo, Mario; Hilbert, Stefan; van Uitert, Edo

    2018-02-01

    Tidal gravitational forces can modify the shape of galaxies and clusters of galaxies, thus correlating their orientation with the surrounding matter density field. We study the dependence of this phenomenon, known as intrinsic alignment (IA), on the mass of the dark matter haloes that host these bright structures, analysing the Millennium and Millennium-XXL N-body simulations. We closely follow the observational approach, measuring the halo position-halo shape alignment and subsequently dividing out the dependence on halo bias. We derive a theoretical scaling of the IA amplitude with mass in a dark matter universe, and predict a power law with slope β_M in the range 1/3 to 1/2, depending on mass scale. We find that the simulation data agree with each other and with the theoretical prediction remarkably well over three orders of magnitude in mass, with the joint analysis yielding an estimate of β_M = 0.36 ± 0.01. This result does not depend on redshift or on the details of the halo shape measurement. The analysis is repeated on observational data, obtaining a significantly higher value, β_M = 0.56 ± 0.05. There are also small but significant deviations from our simple model in the simulation signals at both the high- and low-mass ends. We discuss possible reasons for these discrepancies, and argue that they can be attributed to physical processes not captured in the model or in the dark matter-only simulations.
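
    Once alignment amplitudes per mass bin are in hand, the mass scaling itself is a one-line fit: the slope β_M is obtained by regression in log-log space. A Python sketch with synthetic amplitudes generated around a true slope of 0.36 (invented numbers, for illustration only):

        import numpy as np

        # Hypothetical halo-mass bins and intrinsic-alignment amplitudes; the
        # study fits a power law, amplitude proportional to M^beta_M.
        rng = np.random.default_rng(2)
        mass = np.logspace(12, 15, 10)                  # assumed mass bins
        amp = 0.1 * (mass / 1e13)**0.36 * rng.lognormal(0, 0.05, mass.size)

        # Slope from ordinary least squares in log-log space
        beta_M, log_amp0 = np.polyfit(np.log10(mass), np.log10(amp), 1)
        print("fitted beta_M =", beta_M)                # should recover ~0.36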

  6. Application of conditional simulation of heterogeneous rock properties to seismic scattering and attenuation analysis in gas hydrate reservoirs

    NASA Astrophysics Data System (ADS)

    Huang, Jun-Wei; Bellefleur, Gilles; Milkereit, Bernd

    2012-02-01

    We present a conditional simulation algorithm to parameterize three-dimensional heterogeneities and construct heterogeneous petrophysical reservoir models. The models match the data at borehole locations, simulate heterogeneities at the same resolution as borehole logging data elsewhere in the model space, and simultaneously honor the correlations among multiple rock properties. The model provides a heterogeneous environment in which a variety of geophysical experiments can be simulated. This includes the estimation of petrophysical properties and the study of geophysical response to the heterogeneities. As an example, we model the elastic properties of a gas hydrate accumulation located at Mallik, Northwest Territories, Canada. The modeled properties include compressional and shear-wave velocities that primarily depend on the saturation of hydrate in the pore space of the subsurface lithologies. We introduce the conditional heterogeneous petrophysical models into a finite difference modeling program to study seismic scattering and attenuation due to multi-scale heterogeneity. Similarities between resonance scattering analysis of synthetic and field Vertical Seismic Profile data reveal heterogeneity with a horizontal-scale of approximately 50 m in the shallow part of the gas hydrate interval. A cross-borehole numerical experiment demonstrates that apparent seismic energy loss can occur in a pure elastic medium without any intrinsic attenuation of hydrate-bearing sediments. This apparent attenuation is largely attributed to attenuative leaky mode propagation of seismic waves through large-scale gas hydrate occurrence as well as scattering from patchy distribution of gas hydrate.

  7. Regional climate change predictions from the Goddard Institute for Space Studies high resolution GCM

    NASA Technical Reports Server (NTRS)

    Crane, Robert G.; Hewitson, Bruce

    1990-01-01

    Model simulations of global climate change are seen as an essential component of any program aimed at understanding human impact on the global environment. A major weakness of current general circulation models (GCMs), however, is their inability to predict reliably the regional consequences of a global-scale change, and it is these regional-scale predictions that are necessary for studies of human/environmental response. This research is directed toward the development of a methodology for the validation of the synoptic-scale climatology of GCMs. It is developed with regard to the Goddard Institute for Space Studies (GISS) GCM Model 2, with the specific objective of using the synoptic circulation from a doubled-CO2 simulation to estimate regional climate change over North America, south of Hudson Bay. This progress report is specifically concerned with validating the synoptic climatology of the GISS GCM and developing the transfer function to derive grid-point temperatures from the synoptic circulation. Principal Components Analysis is used to characterize the primary modes of spatial and temporal variability in the observed and simulated climate, and the model validation is based on correlations between component loadings, and on power spectral analysis of the component scores. The results show that the high-resolution GISS model does an excellent job of simulating the synoptic circulation over the U.S., and that grid-point temperatures can be predicted with reasonable accuracy from the circulation patterns.
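
    Principal Components Analysis of gridded fields, as used above, amounts to an SVD of the time-by-gridpoint anomaly matrix: the singular vectors give the spatial loadings, and the projected time series give the component scores. A generic Python sketch with random stand-in data:

        import numpy as np

        # Synthetic stand-in data: n_time maps on an (nlat x nlon) grid.
        rng = np.random.default_rng(0)
        n_time, nlat, nlon = 365, 24, 36
        fields = rng.normal(size=(n_time, nlat * nlon))

        anom = fields - fields.mean(axis=0)            # remove the time-mean map
        # SVD of the anomaly matrix: rows of vt are spatial patterns (loadings),
        # u*s gives the time series of each mode (component scores).
        u, s, vt = np.linalg.svd(anom, full_matrices=False)
        explained = s**2 / np.sum(s**2)
        loadings = vt[:3].reshape(3, nlat, nlon)       # leading three spatial modes
        scores = u[:, :3] * s[:3]                      # their temporal amplitudes
        print("variance explained by first 3 modes:", explained[:3])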

  8. Event detection and sub-state discovery from biomolecular simulations using higher-order statistics: application to enzyme adenylate kinase.

    PubMed

    Ramanathan, Arvind; Savol, Andrej J; Agarwal, Pratul K; Chennubhotla, Chakra S

    2012-11-01

    Biomolecular simulations at millisecond and longer time-scales can provide vital insights into functional mechanisms. Because post-simulation analyses of such large trajectory datasets can be a limiting factor in obtaining biological insights, there is an emerging need to identify key dynamical events and relate these events to the biological function online, that is, as simulations are progressing. Recently, we introduced a novel computational technique, quasi-anharmonic analysis (QAA) (Ramanathan et al., PLoS One 2011;6:e15827), for partitioning the conformational landscape into a hierarchy of functionally relevant sub-states. The unique capabilities of QAA are enabled by exploiting anharmonicity in the form of fourth-order statistics for characterizing atomic fluctuations. In this article, we extend QAA for analyzing long time-scale simulations online. In particular, we present HOST4MD, a higher-order statistical toolbox for molecular dynamics simulations, which (1) identifies key dynamical events as simulations are in progress, (2) explores potential sub-states, and (3) identifies conformational transitions that enable the protein to access those sub-states. We demonstrate HOST4MD on microsecond-timescale simulations of the enzyme adenylate kinase in its apo state. HOST4MD identifies several conformational events in these simulations, revealing how the intrinsic coupling between the three subdomains (LID, CORE, and NMP) changes during the simulations. Further, it also identifies an inherent asymmetry in the opening/closing of the two binding sites. We anticipate that HOST4MD will provide a powerful and extensible framework for detecting biophysically relevant conformational coordinates from long time-scale simulations. Copyright © 2012 Wiley Periodicals, Inc.
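
    The use of fourth-order statistics described above can be illustrated simply: a harmonic (Gaussian) mode has zero excess kurtosis, while an anharmonic mode, such as hopping between two conformational sub-states, deviates from zero. A Python sketch on synthetic one-dimensional "modes" (illustrative only, not the QAA or HOST4MD implementation):

        import numpy as np
        from scipy.stats import kurtosis

        rng = np.random.default_rng(0)
        n_frames = 5000
        harmonic = rng.normal(size=n_frames)                # Gaussian mode
        anharmonic = np.where(rng.random(n_frames) < 0.7,   # two-state hopping mode
                              rng.normal(-1, 0.3, n_frames),
                              rng.normal(2, 0.3, n_frames))

        # Excess kurtosis (Fisher definition): ~0 for the harmonic mode,
        # clearly nonzero for the two-state mode.
        for name, mode in [("harmonic", harmonic), ("anharmonic", anharmonic)]:
            print(name, "excess kurtosis =", kurtosis(mode))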

  9. Height bias and scale effect induced by antenna gravitational deformations in geodetic VLBI data analysis

    NASA Astrophysics Data System (ADS)

    Sarti, Pierguido; Abbondanza, Claudio; Petrov, Leonid; Negusini, Monia

    2011-01-01

    The impact of signal path variations (SPVs) caused by antenna gravitational deformations on geodetic very long baseline interferometry (VLBI) results is evaluated for the first time. Elevation-dependent models of SPV for Medicina and Noto (Italy) telescopes were derived from a combination of terrestrial surveying methods to account for gravitational deformations. After applying these models in geodetic VLBI data analysis, estimates of the antenna reference point positions are shifted upward by 8.9 and 6.7 mm, respectively. The impact on other parameters is negligible. To simulate the impact of antenna gravitational deformations on the entire VLBI network, lacking measurements for other telescopes, we rescaled the SPV models of Medicina and Noto for other antennas according to their size. The effects of the simulations are changes in VLBI heights in the range [-3, 73] mm and a net scale increase of 0.3-0.8 ppb. The height bias is larger than random errors of VLBI position estimates, implying the possibility of significant scale distortions related to antenna gravitational deformations. This demonstrates the need to precisely measure gravitational deformations of other VLBI telescopes, to derive their precise SPV models and to apply them in routine geodetic data analysis.

  10. The Parallel System for Integrating Impact Models and Sectors (pSIMS)

    NASA Technical Reports Server (NTRS)

    Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian

    2014-01-01

    We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impacts models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impacts studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.

  11. A detailed model for simulation of catchment scale subsurface hydrologic processes

    NASA Technical Reports Server (NTRS)

    Paniconi, Claudio; Wood, Eric F.

    1993-01-01

    A catchment scale numerical model is developed based on the three-dimensional transient Richards equation describing fluid flow in variably saturated porous media. The model is designed to take advantage of digital elevation data bases and of information extracted from these data bases by topographic analysis. The practical application of the model is demonstrated in simulations of a small subcatchment of the Konza Prairie reserve near Manhattan, Kansas. In a preliminary investigation of computational issues related to model resolution, we obtain satisfactory numerical results using large aspect ratios, suggesting that horizontal grid dimensions may not be unreasonably constrained by the typically much smaller vertical length scale of a catchment and by vertical discretization requirements. Additional tests are needed to examine the effects of numerical constraints and parameter heterogeneity in determining acceptable grid aspect ratios. In other simulations we attempt to match the observed streamflow response of the catchment, and we point out the small contribution of the streamflow component to the overall water balance of the catchment.

  12. Analysis of Helium Segregation on Surfaces of Plasma-Exposed Tungsten

    NASA Astrophysics Data System (ADS)

    Maroudas, Dimitrios; Hu, Lin; Hammond, Karl; Wirth, Brian

    2015-11-01

    We report a systematic theoretical and atomic-scale computational study of implanted helium segregation on surfaces of tungsten, which is considered as a plasma facing component in nuclear fusion reactors. We employ a hierarchy of atomic-scale simulations, including molecular statics to understand the origin of helium surface segregation, targeted molecular-dynamics (MD) simulations of near-surface cluster reactions, and large-scale MD simulations of implanted helium evolution in plasma-exposed tungsten. We find that small, mobile helium clusters (of 1-7 He atoms) in the near-surface region are attracted to the surface due to an elastic interaction force. This thermodynamic driving force induces drift fluxes of these mobile clusters toward the surface, facilitating helium segregation. Moreover, the clusters' drift toward the surface enables cluster reactions, most importantly trap mutation, at rates much higher than in the bulk material. This cluster dynamics has significant effects on the surface morphology, near-surface defect structures, and the amount of helium retained in the material upon plasma exposure.

  13. Using large eddy simulations to reveal the size, strength, and phase of updraft and downdraft cores of an Arctic mixed-phase stratocumulus cloud

    DOE PAGES

    Roesler, Erika L.; Posselt, Derek J.; Rood, Richard B.

    2017-04-06

    Three-dimensional large eddy simulations (LES) are used to analyze a springtime Arctic mixed-phase stratocumulus observed on 26 April 2008 during the Indirect and Semi-Direct Aerosol Campaign. Two subgrid-scale turbulence parameterizations are compared. The first scheme is a 1.5-order turbulent kinetic energy (1.5-TKE) parameterization that has been previously applied to boundary layer cloud simulations. The second scheme, Cloud Layers Unified By Binormals (CLUBB), provides higher-order turbulent closure with scale awareness. The simulations, in comparison with observations, show that both schemes produce the liquid profiles within measurement variability but underpredict ice water mass and overpredict ice number concentration. The simulation using CLUBB underpredicted liquid water path more than the simulation using the 1.5-TKE scheme, so the turbulent length scale and horizontal grid box size were increased to increase the liquid water path and reduce dissipative energy. The LES simulations show this stratocumulus cloud to maintain a closed cellular structure, similar to observations. The updraft and downdraft cores self-organize into a larger meso-γ-scale convective pattern with the 1.5-TKE scheme, but the cores remain more isotropic with the CLUBB scheme. Additionally, the cores are often composed of liquid and ice instead of exclusively containing one or the other. Furthermore, these results provide insight into traditionally unresolved and unmeasurable aspects of an Arctic mixed-phase cloud. From this analysis, the cloud's updraft and downdraft cores appear smaller than those of other closed-cell stratocumulus, such as midlatitude stratocumulus and Arctic autumnal mixed-phase stratocumulus, due to the weaker downdrafts and lower precipitation rates.

  14. Diagnostic evaluation of the Community Earth System Model in simulating mineral dust emission with insight into large-scale dust storm mobilization in the Middle East and North Africa (MENA)

    NASA Astrophysics Data System (ADS)

    Parajuli, Sagar Prasad; Yang, Zong-Liang; Lawrence, David M.

    2016-06-01

    Large amounts of mineral dust are injected into the atmosphere during dust storms, which are common in the Middle East and North Africa (MENA), where most of the global dust hotspots are located. In this work, we present simulations of dust emission using the Community Earth System Model Version 1.2.2 (CESM 1.2.2) and evaluate how well it captures the spatio-temporal characteristics of dust emission in the MENA region, with a focus on large-scale dust storm mobilization. We explicitly focus our analysis on the model's two major input parameters that affect the vertical mass flux of dust: surface winds and the soil erodibility factor. We analyze dust emissions in simulations with both prognostic CESM winds and with CESM winds nudged towards ERA-Interim reanalysis values. Simulations with three existing erodibility maps and a new observation-based erodibility map are also conducted. We compare the simulated results with MODIS satellite data, MACC reanalysis data, AERONET station data, and CALIPSO 3-D aerosol profile data. The dust emission simulated by CESM, when driven by nudged reanalysis winds, compares reasonably well with observations on daily to monthly time scales, despite CESM being a global General Circulation Model. However, considerable bias exists around known high dust source locations in northwest/northeast Africa and over the Arabian Peninsula, where recurring large-scale dust storms are common. The new observation-based erodibility map, which can represent anthropogenic dust sources not directly represented by existing erodibility maps, shows improved performance in terms of the simulated dust optical depth (DOD) and aerosol optical depth (AOD) compared to the existing maps, although the performance of different erodibility maps varies by region.

  15. Heart rate and performance during combat missions in a flight simulator.

    PubMed

    Lahtinen, Taija M M; Koskelo, Jukka P; Laitinen, Tomi; Leino, Tuomo K

    2007-04-01

    The psychological workload of flying has been shown to increase heart rate (HR) during flight simulator operation. The association between HR changes and flight performance remains unclear. Fifteen pilots performed a combat flight mission in a Weapons Tactics Trainer simulator of an F-18 Hornet. An electrocardiogram (ECG) was recorded, and individual incremental heart rates (deltaHR) relative to the resting HR were calculated for each flight phase and used in the statistical analyses. The combat flight period was divided into 13 phases, which were evaluated on a scale of 1 to 5 by the flight instructor. HR increased during interceptions (from a mean resting level of 79.0 bpm to a mean of 96.7 bpm in one of the interception flight phases), decreased during the return to base, and slightly increased during the ILS approach and landing. DeltaHR appeared to be similar among experienced and less experienced pilots. DeltaHR responses during the flight phases did not correlate with simulator flight performance scores. Overall simulator flight performance correlated statistically significantly (r = 0.50) with F-18 Hornet flight experience. HR reflected the amount of cognitive load during the simulated flight. Hence, HR analysis can be used in the evaluation of the psychological workload of military simulator flight phases. However, more detailed flight performance evaluation methods are needed for this kind of complex flight simulation to replace the traditional but rough interval scales. Use of a visual analog scale by the flight instructors is suggested for simulator flight performance evaluation.

  16. Impact analysis of air gap motion with respect to parameters of mooring system for floating platform

    NASA Astrophysics Data System (ADS)

    Shen, Zhong-xiang; Huo, Fa-li; Nie, Yan; Liu, Yin-dong

    2017-04-01

    In this paper, an impact analysis of air gap motion with respect to the mooring system parameters of a semi-submersible platform is conducted. It is challenging to simulate the wave, current, and wind loads on a platform simultaneously in a single model test, and achieving dynamic equivalence between a truncated and a full-depth mooring system is still a tough task. However, the wind and current loads can be measured accurately in a wind tunnel test, and the wave can be simulated accurately in a wave tank test. The full-scale mooring system and all environmental loads can then be simulated accurately and simultaneously by a numerical model based on these model tests. In this paper, the air gap response of a floating platform is calculated based on the results of wind tunnel and wave tank tests, with the full-scale mooring system and the wind, wave, and current loads considered simultaneously. In addition, a numerical model of the platform is tuned and validated in ANSYS AQWA according to the model test results. With the support of the tuned numerical model, seventeen simulation cases for the presented platform are considered to study the wave, wind, and current loads simultaneously. Impact analysis studies of the air gap motion with regard to the length, elasticity, and type of the mooring line are then performed in the time domain under beam wave, head wave, and oblique wave conditions.

  17. Effective Integration of Earth Observation Data and Flood Modeling for Rapid Disaster Response: The Texas 2015 Case

    NASA Astrophysics Data System (ADS)

    Schumann, G.

    2016-12-01

    Routinely obtaining real-time 2-D inundation patterns of a flood event at a meaningful spatial resolution and over large scales is at the moment only feasible with either operational aircraft flights or satellite imagery. Of course, having model simulations of floodplain inundation available to complement the remote sensing data is highly desirable, for both event re-analysis and forecasting event inundation. Using the Texas 2015 flood disaster, we demonstrate the value of multi-scale EO data for large-scale 2-D floodplain inundation modeling and forecasting. A dynamic re-analysis of the Texas 2015 flood disaster was run using a 2-D flood model developed for accurate large-scale simulations. We simulated the major rivers entering the Gulf of Mexico and used flood maps produced from both optical and SAR satellite imagery to examine regional model sensitivities and assess associated performance. It was demonstrated that satellite flood maps can complement model simulations and add value, although this is largely dependent on a number of important factors, such as image availability, regional landscape topology, and model uncertainty. In the preferred case where model uncertainty is high, landscape topology is complex (i.e. an urbanized coastal area), and satellite flood maps are available (from SAR, for instance), satellite data can significantly reduce model uncertainty by identifying the "best possible" model parameter set. However, more often the situation occurs where model uncertainty is low and spatially contiguous flooding can be mapped from satellites easily enough, such as in large rural inland river floodplains. Consequently, satellites add little value in such cases. Nevertheless, where a large number of flood maps are available, model credibility can be increased substantially. In the case presented here this was true for at least 60% of the many thousands of kilometers of simulated river flow length for which satellite flood maps existed. The next step of this project is to employ a technique termed the "targeted observation" approach, an assimilation-based procedure that allows quantifying the impact observations have on model predictions at the local scale and along the entire river system when assimilated with the model at specific "overpass" locations.

  18. Analysis of large-scale tablet coating: Modeling, simulation and experiments.

    PubMed

    Boehling, P; Toschkoff, G; Knop, K; Kleinebudde, P; Just, S; Funke, A; Rehbaum, H; Khinast, J G

    2016-07-30

    This work concerns a tablet coating process in an industrial-scale drum coater. We set up a full-scale Design of Simulation Experiment (DoSE) using the Discrete Element Method (DEM) to investigate the influence of various process parameters (the spray rate, the number of nozzles, the rotation rate, and the drum load) on the coefficient of inter-tablet coating variation (cv,inter). The coater was filled with up to 290 kg of material, equivalent to 1,028,369 tablets. To mimic the tablet shape, the glued-sphere approach was followed, with each modeled tablet consisting of eight spheres. We simulated the process via the eXtended Particle System (XPS), demonstrating that it is possible to accurately simulate the tablet coating process on the industrial scale. The process time required to reach a uniform tablet coating was extrapolated from the simulated data and was in good agreement with experimental results. The results are provided at various levels of detail, from a thorough investigation of the influence of the process parameters on cv,inter and the number of tablets that visit the spray zone during the simulated 90 s, to the tablet velocity in the spray zone and the spray and bed cycle times. It was found that increasing the number of nozzles and decreasing the spray rate had the largest influence on cv,inter. Although increasing the drum load and the rotation rate increased the tablet velocity, it did not have a relevant influence on cv,inter or the process time. Copyright © 2015 Elsevier B.V. All rights reserved.
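
    The response variable above, cv,inter, is simply the relative standard deviation of coating mass across tablets. A Python sketch with invented per-tablet coating masses, illustrating why a more uniform spray distribution lowers cv,inter:

        import numpy as np

        def cv_inter(coating_mass_per_tablet):
            """Coefficient of inter-tablet coating variation: the relative
            standard deviation of coating mass across tablets."""
            m = np.asarray(coating_mass_per_tablet, float)
            return m.std(ddof=1) / m.mean()

        # Hypothetical per-tablet coating masses [mg] from two process settings
        rng = np.random.default_rng(0)
        few_nozzles = rng.normal(5.0, 1.2, 100_000)    # broader spray distribution
        more_nozzles = rng.normal(5.0, 0.6, 100_000)   # more uniform spray coverage
        print("cv_inter, few nozzles: ", cv_inter(few_nozzles))
        print("cv_inter, more nozzles:", cv_inter(more_nozzles))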

  19. Abelian Higgs cosmic strings: Small-scale structure and loops

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hindmarsh, Mark; Stuckey, Stephanie; Bevis, Neil

    2009-06-15

    Classical lattice simulations of the Abelian Higgs model are used to investigate small-scale structure and loop distributions in cosmic string networks. Use of the field theory ensures that the small-scale physics is captured correctly. The results confirm the analytic predictions of Polchinski and Rocha [29] for the two-point correlation function of the string tangent vector, with a power law from length scales of order the string core width up to the horizon scale. An analysis of the size distribution of string loops gives a very low number density, of order 1 per horizon volume, in contrast with Nambu-Goto simulations. Further, our loop distribution function does not support the detailed analytic predictions for loop production derived by Dubath et al. [30]. Better agreement with our data is found with a model based on loop fragmentation [32], coupled with a constant rate of energy loss into massive radiation. Our results show a strong energy-loss mechanism, which allows the string network to scale without gravitational radiation, but which is not due to the production of string-width loops. From the evidence of small-scale structure we argue a partial explanation for the scale separation problem of how energy in the very low frequency modes of the string network is transformed into the very high frequency modes of gauge and Higgs radiation. We propose a picture of string network evolution which reconciles the apparent differences between Nambu-Goto and field theory simulations.

  20. SCALE Code System 6.2.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including 3 deterministic and 3 Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 represents one of the most comprehensive revisions in the history of SCALE, providing several new capabilities and significant improvements in many existing features.

  1. Field scale test of multi-dimensional flow and morphodynamic simulations used for restoration design analysis

    USGS Publications Warehouse

    McDonald, Richard R.; Nelson, Jonathan M.; Fosness, Ryan L.; Nelson, Peter O.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan

    2016-01-01

    Two- and three-dimensional morphodynamic simulations are becoming common in studies of channel form and process. The performance of these simulations is often validated against measurements from laboratory studies. Collecting channel-change information in natural settings for model validation is difficult because it can be expensive, and under most channel-forming flows the resulting channel change is generally small. Several channel restoration projects on the Kootenai River, ID, designed in part to armor large meanders with several large spurs constructed of wooden piles, have resulted in rapid bed elevation change following construction. Monitoring of these restoration projects includes post-restoration (as-built) Digital Elevation Models (DEMs) as well as additional channel surveys following high, channel-forming flows post-construction. The resulting sequence of measured bathymetry provides excellent validation data for morphodynamic simulations at the reach scale of a real river. In this paper we test the performance of a quasi-three-dimensional morphodynamic simulation against the measured elevation change. The resulting simulations predict the pattern of channel change reasonably well, but many of the details, such as the maximum scour, are under-predicted.
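
    For validation of this kind, the simulated and measured DEMs can be differenced on a common grid to quantify how well the pattern and magnitude of elevation change (e.g. maximum scour) are reproduced; a minimal sketch, assuming co-registered NumPy arrays with hypothetical names:

        import numpy as np

        def elevation_change_errors(dem_pre, dem_post, dem_sim):
            """Compare measured and simulated bed-elevation change on a
            common, co-registered grid (all inputs 2-D NumPy arrays)."""
            measured = dem_post - dem_pre      # observed change
            simulated = dem_sim - dem_pre      # model-predicted change
            residual = simulated - measured
            return {
                "bias": float(residual.mean()),
                "rmse": float(np.sqrt((residual ** 2).mean())),
                "max_scour_measured": float(measured.min()),
                "max_scour_simulated": float(simulated.min()),
            }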

  2. Simulation for Supporting Scale-Up of a Fluidized Bed Reactor for Advanced Water Oxidation

    PubMed Central

    Abdul Raman, Abdul Aziz; Daud, Wan Mohd Ashri Wan

    2014-01-01

    Simulation of a fluidized bed reactor (FBR) was accomplished for treating wastewater using the Fenton reaction, an advanced oxidation process (AOP). The simulation was performed to determine the characteristics of FBR performance, the concentration profile of the contaminants, and various prominent hydrodynamic properties (e.g., Reynolds number, velocity, and pressure) in the reactor. The simulation was implemented for a 2.8 L working volume using hydrodynamic correlations, the continuity equation, and simplified kinetic information for phenol degradation as a model. The simulation shows that, by using Fe3+ and Fe2+ mixtures as catalyst, TOC degradation of up to 45% was achieved for a contaminant range of 40–90 mg/L within 60 min. The concentration profiles and hydrodynamic characteristics were also generated. A subsequent scale-up study was conducted using the similitude method. The analysis shows that the models developed are applicable up to a 10 L working volume. The study proves that, using appropriate modeling and simulation, data can be predicted for designing and operating an FBR for wastewater treatment. PMID:25309949
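
    As a rough illustration of the simplified kinetics, a pseudo-first-order TOC decay reproduces the reported order of magnitude (about 45% removal in 60 min); the rate constant below is back-calculated from that figure and is purely illustrative, not the paper's fitted kinetics:

        import numpy as np

        def toc_profile(toc0, k, t_minutes):
            """Pseudo-first-order TOC decay: TOC(t) = TOC0 * exp(-k * t)."""
            return toc0 * np.exp(-k * np.asarray(t_minutes, dtype=float))

        # Rate constant giving ~45% TOC removal at 60 min:
        k = -np.log(1.0 - 0.45) / 60.0                # 1/min
        print(toc_profile(90.0, k, [0, 15, 30, 60]))  # mg/L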

  3. Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear-Layer. Part 2

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Singer, Bart A.; Lockard, David P.

    2002-01-01

    Unsteady computational simulations of a multi-element, high-lift configuration are performed. Emphasis is placed on accurate spatiotemporal resolution of the free shear layer in the slat-cove region. The excessive dissipative effects of the turbulence model, so prevalent in previous simulations, are circumvented by switching off the turbulence-production term in the slat-cove region. The justifications and physical arguments for taking such a step are explained in detail. The removal of this excess damping allows the shear layer to amplify large-scale structures, to achieve a proper non-linear saturation state, and to permit vortex merging. The large-scale disturbances are self-excited, and, unlike our prior fully turbulent simulations, no external forcing of the shear layer is required. To obtain the farfield acoustics, the Ffowcs Williams and Hawkings equation is evaluated numerically using the simulated time-accurate flow data. The present comparison between the computed and measured farfield acoustic spectra shows much better agreement for the amplitude and frequency content than past calculations. The effects of the angle of attack on the slat's flow features and radiated acoustic field are also simulated and presented.

  4. Effects of forcing time scale on the simulated turbulent flows and turbulent collision statistics of inertial particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosa, B., E-mail: bogdan.rosa@imgw.pl; Parishani, H.; Department of Earth System Science, University of California, Irvine, California 92697-3100

    2015-01-15

    In this paper, we study systematically the effects of the forcing time scale in the large-scale stochastic forcing scheme of Eswaran and Pope [“An examination of forcing in direct numerical simulations of turbulence,” Comput. Fluids 16, 257 (1988)] on the simulated flow structures and statistics of forced turbulence. Using direct numerical simulations, we find that the forcing time scale affects the flow dissipation rate and flow Reynolds number. Other flow statistics can be predicted using the altered flow dissipation rate and flow Reynolds number, except when the forcing time scale is made unrealistically large, yielding a Taylor-microscale flow Reynolds number of 30 or less. We then study the effects of the forcing time scale on the kinematic collision statistics of inertial particles. We show that the radial distribution function and the radial relative velocity may depend on the forcing time scale when it becomes comparable to the eddy turnover time. This dependence, however, can be largely explained in terms of the altered flow Reynolds number and the changing range of flow length scales present in the turbulent flow. We argue that removing this dependence is important when studying the Reynolds number dependence of turbulent collision statistics. The results are also compared to those based on a deterministic forcing scheme to better understand the role of large-scale forcing, relative to that of the small-scale turbulence, in turbulent collision of inertial particles. To further elucidate the correlation between the altered flow structures and the dynamics of inertial particles, a conditional analysis has been performed, showing that the regions of higher collision rate of inertial particles are well correlated with regions of lower vorticity. Regions of higher concentration of pairs at contact are found to be highly correlated with regions of high energy dissipation rate.
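
    The radial distribution function used above measures how much more likely particle pairs are to be found at a given separation than in a uniform suspension; a minimal O(N^2) sketch for a cubic, triply periodic box (adequate for small samples only; names and shapes are assumptions):

        import numpy as np

        def radial_distribution(positions, box, r_edges):
            """g(r) for N point particles in a cubic, triply periodic box.
            positions: (N, 3) array; box: side length; r_edges: bin edges."""
            r_edges = np.asarray(r_edges, dtype=float)
            n = len(positions)
            diff = positions[:, None, :] - positions[None, :, :]
            diff -= box * np.round(diff / box)          # minimum image
            r = np.sqrt((diff ** 2).sum(-1))[np.triu_indices(n, k=1)]
            counts, _ = np.histogram(r, bins=r_edges)
            shell_vol = 4.0 / 3.0 * np.pi * np.diff(r_edges ** 3)
            pair_density = n * (n - 1) / 2.0 / box ** 3  # uniform reference
            return counts / (shell_vol * pair_density)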

  5. Aeroacoustic and Performance Simulations of a Test Scale Open Rotor

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.

    2013-01-01

    This paper explores a comparison between experimental data and numerical simulations of the historical baseline F31/A31 open rotor geometry. The experimental data were obtained at the NASA Glenn Research Center's Aeroacoustic facility and include performance and noise information for a variety of flow speeds (matching take-off and cruise). The numerical simulations provide both performance and aeroacoustic results using NUMECA's Fine-Turbo analysis code. A non-linear harmonic method is used to capture the rotor/rotor interaction.

  6. Hybrid network modeling and the effect of image resolution on digitally-obtained petrophysical and two-phase flow properties

    NASA Astrophysics Data System (ADS)

    Aghaei, A.

    2017-12-01

    Digital imaging and modeling of rocks and the subsequent simulation of physical phenomena in digitally constructed rock models are becoming an integral part of core analysis workflows. One of the inherent limitations of image-based analysis, at any given scale, is image resolution. This limitation becomes more evident when the rock has multiple scales of porosity, such as in carbonates and tight sandstones. Multi-scale imaging and the construction of hybrid models that encompass images acquired at multiple scales and resolutions are proposed as a solution to this problem. In this study, we investigate the effect of image resolution and unresolved porosity on petrophysical and two-phase flow properties calculated from images. A helical X-ray micro-CT scanner with a high cone angle is used to acquire digital rock images that are free of geometric distortion. To remove subjectivity from the analyses, a semi-automated image processing technique is used to process and segment the acquired data into multiple phases. Direct and pore-network-based models are used to simulate physical phenomena and obtain absolute permeability, formation factor and two-phase flow properties such as relative permeability and capillary pressure. The effect of image resolution on each property is investigated. Finally, a hybrid network model incorporating images at multiple resolutions is built and used for simulations. The results from the hybrid model are compared against results from the model built at the highest resolution and those from laboratory tests.

  7. Main steam line break accident simulation of APR1400 using the model of ATLAS facility

    NASA Astrophysics Data System (ADS)

    Ekariansyah, A. S.; Deswandri; Sunaryo, Geni R.

    2018-02-01

    A main steam line break simulation for the APR1400, an advanced PWR design, has been performed using the RELAP5 code. The simulation was conducted with a model of the thermal-hydraulic test facility ATLAS, which represents a scaled-down facility of the APR1400 design. The main steam line break event is described in an open-access safety report document, whose initial conditions and assumptions for the analysis were utilized in performing the simulation and analysis of the selected parameters. The objective of this work was to conduct a benchmark activity by comparing the simulation results of the CESEC-III code, a conservative-approach code, with the results of RELAP5 as a best-estimate code. Based on the simulation results, a general similarity in the behavior of the selected parameters was observed between the two codes. However, the degree of accuracy still needs further research and analysis by comparison with another best-estimate code. Uncertainties arising from the ATLAS model should be minimized by taking into account more specific data in developing the APR1400 model.

  8. Scaling analysis and instantons for thermally assisted tunneling and quantum Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Jiang, Zhang; Smelyanskiy, Vadim N.; Isakov, Sergei V.; Boixo, Sergio; Mazzola, Guglielmo; Troyer, Matthias; Neven, Hartmut

    2017-01-01

    We develop an instantonic calculus to derive an analytical expression for the thermally assisted tunneling decay rate of a metastable state in a fully connected quantum spin model. The tunneling decay problem can be mapped onto the Kramers escape problem of a classical random dynamical field. This dynamical field is simulated efficiently by path-integral quantum Monte Carlo (QMC). We show analytically that the exponential scaling with the number of spins of the thermally assisted quantum tunneling rate and the escape rate of the QMC process are identical. We relate this effect to the existence of a dominant instantonic tunneling path. The instanton trajectory is described by nonlinear dynamical mean-field theory equations for a single-site magnetization vector, which we solve exactly. Finally, we derive scaling relations for the "spiky" barrier shape when the spin tunneling and QMC rates scale polynomially with the number of spins N, while a purely classical over-the-barrier activation rate scales exponentially with N.

  9. A holistic approach for large-scale derived flood frequency analysis

    NASA Astrophysics Data System (ADS)

    Dung Nguyen, Viet; Apel, Heiko; Hundecha, Yeshewatesfa; Guse, Björn; Sergiy, Vorogushyn; Merz, Bruno

    2017-04-01

    Spatial consistency, which has usually been disregarded because of methodological difficulties, is increasingly demanded in regional flood hazard (and risk) assessments. This study aims at developing a holistic approach for deriving flood frequency consistently at large scales. A large-scale two-component model has been established for simulating very long-term multisite synthetic meteorological fields and flood flows at many gauged and ungauged locations, hence reflecting the inherent spatial heterogeneity. The model has been applied to a region of nearly half a million km2 comprising Germany and parts of neighbouring countries. The model performance has been examined against multiple objectives with a focus on extremes. With this continuous simulation approach, flood quantiles for the studied region have been derived successfully and provide useful input for a comprehensive flood risk study.
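
    In a continuous-simulation approach of this kind, flood quantiles at a site are typically obtained by fitting an extreme-value distribution to the simulated annual maxima; a minimal sketch using a GEV fit (the study's actual estimation procedure is not specified here, and the names are hypothetical):

        import numpy as np
        from scipy.stats import genextreme

        def flood_quantile(annual_maxima, return_period_years):
            """Fit a GEV to simulated annual maximum flows and return the
            flood quantile for a given return period."""
            shape, loc, scale = genextreme.fit(np.asarray(annual_maxima))
            p_non_exceedance = 1.0 - 1.0 / return_period_years
            return genextreme.ppf(p_non_exceedance, shape, loc=loc, scale=scale)

        # e.g. the 100-year flood at one location:
        # q100 = flood_quantile(simulated_annual_maxima, 100.0)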

  10. PyMOOSE: Interoperable Scripting in Python for MOOSE

    PubMed Central

    Ray, Subhasis; Bhalla, Upinder S.

    2008-01-01

    Python is emerging as a common scripting language for simulators. This opens up many possibilities for interoperability in the form of analysis, interfaces, and communications between simulators. We report the integration of Python scripting with the Multi-scale Object Oriented Simulation Environment (MOOSE). MOOSE is a general-purpose simulation system for compartmental neuronal models and for models of signaling pathways based on chemical kinetics. We show how the Python-scripting version of MOOSE, PyMOOSE, combines the power of a compiled simulator with the versatility and ease of use of Python. We illustrate this by using Python numerical libraries to analyze MOOSE output online, and by developing a GUI in Python/Qt for a MOOSE simulation. Finally, we build and run a composite neuronal/signaling model that uses both the NEURON and MOOSE numerical engines, and Python as a bridge between the two. Thus PyMOOSE has a high degree of interoperability with analysis routines, with graphical toolkits, and with other simulators. PMID:19129924

  11. Efficient three-dimensional resist profile-driven source mask optimization optical proximity correction based on Abbe-principal component analysis and Sylvester equation

    NASA Astrophysics Data System (ADS)

    Lin, Pei-Chun; Yu, Chun-Chang; Chen, Charlie Chung-Ping

    2015-01-01

    As one of the critical stages of a very large scale integration fabrication process, postexposure bake (PEB) plays a crucial role in determining the final three-dimensional (3-D) profiles and lessening the standing wave effects. However, the full 3-D chemically amplified resist simulation is not widely adopted during the postlayout optimization due to the long run-time and huge memory usage. An efficient simulation method is proposed to simulate the PEB while considering standing wave effects and resolution enhancement techniques, such as source mask optimization and subresolution assist features based on the Sylvester equation and Abbe-principal component analysis method. Simulation results show that our algorithm is 20× faster than the conventional Gaussian convolution method.
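
    SciPy exposes a direct solver for Sylvester equations of the form A X + X B = Q, the algebraic kernel named above; a toy-sized sketch with random matrices standing in for the actual PEB diffusion operators (the real operators and sizes are assumptions beyond this abstract):

        import numpy as np
        from scipy.linalg import solve_sylvester

        rng = np.random.default_rng(0)
        a = rng.standard_normal((4, 4))   # stand-in for one lateral operator
        b = rng.standard_normal((3, 3))   # stand-in for the other
        q = rng.standard_normal((4, 3))   # right-hand side

        x = solve_sylvester(a, b, q)           # solves A X + X B = Q
        print(np.allclose(a @ x + x @ b, q))   # True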

  12. An Approach to Experimental Design for the Computer Analysis of Complex Phenomenon

    NASA Technical Reports Server (NTRS)

    Rutherford, Brian

    2000-01-01

    The ability to make credible system assessments, predictions and design decisions related to engineered systems and other complex phenomena is key to a successful program for many large-scale investigations in government and industry. Recently, many of these large-scale analyses have turned to computational simulation to provide much of the required information. Addressing specific goals in the computer analysis of these complex phenomena is often accomplished through the use of performance measures that are based on system response models. The response models are constructed using computer-generated responses together with physical test results where possible. They are often based on probabilistically defined inputs and generally require estimation of a set of response modeling parameters. As a consequence, the performance measures are themselves distributed quantities reflecting these variabilities and uncertainties. Uncertainty in the values of the performance measures leads to uncertainties in predicted performance and can cloud the decisions required of the analysis. A specific goal of this research has been to develop methodology that will reduce this uncertainty in an analysis environment where limited resources and system complexity together restrict the number of simulations that can be performed. An approach has been developed that is based on evaluation of the potential information provided by each "intelligently selected" candidate set of computer runs. Each candidate is evaluated by partitioning the performance measure uncertainty into two components: one component that could be explained through the additional computational simulation runs and a second that would remain uncertain. The portion explained is estimated using a probabilistic evaluation of likely results for the additional computational analyses based on what is currently known about the system. The set of runs indicating the largest potential reduction in uncertainty is then selected and the computational simulations are performed. Examples are provided to demonstrate this approach on small-scale problems. These examples give encouraging results. Directions for further research are indicated.

  13. New Distributed Multipole Methods for Accurate Electrostatics for Large-Scale Biomolecular Simulations

    NASA Astrophysics Data System (ADS)

    Sagui, Celeste

    2006-03-01

    An accurate and numerically efficient treatment of electrostatics is essential for biomolecular simulations, as it stabilizes much of the delicate 3-D structure associated with biomolecules. Currently, force fields such as AMBER and CHARMM assign "partial charges" to every atom in a simulation in order to model the interatomic electrostatic forces, so the calculation of the electrostatics rapidly becomes the computational bottleneck in large-scale simulations. There are two main issues associated with the current treatment of classical electrostatics: (i) how does one eliminate, in a physically meaningful way, the artifacts associated with the point charges used in the force fields (e.g., the underdetermined nature of the current RESP fitting procedure for large, flexible molecules)? (ii) how does one efficiently simulate the very costly long-range electrostatic interactions? Recently, we have dealt with both of these challenges as follows. To improve the description of the molecular electrostatic potentials (MEPs), a new distributed multipole analysis based on localized functions (Wannier, Boys, and Edmiston-Ruedenberg) was introduced, which allows a first-principles calculation of the partial charges and multipoles. Through a suitable generalization of the particle mesh Ewald (PME) and multigrid methods, one can treat electrostatic multipoles all the way to hexadecapoles without prohibitive extra cost. The importance of these methods for large-scale simulations will be discussed and exemplified by simulations of polarizable DNA models.

  14. Parameterization Interactions in Global Aquaplanet Simulations

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Ritthik; Bordoni, Simona; Suselj, Kay; Teixeira, João.

    2018-02-01

    Global climate simulations rely on parameterizations of physical processes that have scales smaller than the resolved ones. In the atmosphere, these parameterizations represent moist convection, boundary layer turbulence and convection, cloud microphysics, longwave and shortwave radiation, and the interaction with the land and ocean surface. These parameterizations can generate different climates involving a wide range of interactions among parameterizations and between the parameterizations and the resolved dynamics. To gain a simplified understanding of a subset of these interactions, we perform aquaplanet simulations with the global version of the Weather Research and Forecasting (WRF) model employing a range (in terms of properties) of moist convection and boundary layer (BL) parameterizations. Significant differences are noted in the simulated precipitation amounts, its partitioning between convective and large-scale precipitation, as well as in the radiative impacts. These differences arise from the way the subcloud physics interacts with convection, both directly and through various pathways involving the large-scale dynamics and the boundary layer, convection, and clouds. A detailed analysis of the profiles of the different tendencies (from the different physical processes) for both potential temperature and water vapor is performed. While different combinations of convection and boundary layer parameterizations can lead to different climates, a key conclusion of this study is that similar climates can be simulated with model versions that are different in terms of the partitioning of the tendencies: the vertically distributed energy and water balances in the tropics can be obtained with significantly different profiles of large-scale, convection, and cloud microphysics tendencies.

  15. The efficiency of parameter estimation of latent path analysis using summated rating scale (SRS) and method of successive interval (MSI) for transformation of score to scale

    NASA Astrophysics Data System (ADS)

    Solimun; Fernandes, Adji Achmad Rinaldo; Arisoesilaningsih, Endang

    2017-12-01

    Research in various fields generally investigates systems and involves latent variables. One method to analyze a model representing such a system is path analysis. Latent variables measured using questionnaires with an attitude-scale model yield data in the form of scores, which should be transformed into scale data before analysis. Path coefficients, the parameter estimators, are calculated from scale data obtained with the method of successive interval (MSI) or the summated rating scale (SRS). This research identifies which data transformation method is better. Path coefficients with smaller variances are said to be more efficient, so the transformation method that produces scale data yielding path coefficients with smaller variances is the better one. Analysis of real data shows that, for the influence of the Attitude variable on Entrepreneurship Intention, the relative efficiency is ER = 1, indicating that analyses using MSI- and SRS-transformed data are equally efficient. For simulated data with high correlation between items (0.7-0.9), on the other hand, the MSI method is about 1.3 times more efficient than the SRS method.
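
    Under the variance-based notion of efficiency used here, the relative efficiency of two transformation methods can be taken as the ratio of the sampling variances of the same path coefficient; the sketch below is one plausible reading (the paper's exact ER definition may differ, and all names are hypothetical):

        import numpy as np

        def relative_efficiency(coefs_msi, coefs_srs):
            """ER = Var(SRS-based path coefficients) / Var(MSI-based ones);
            ER > 1 favours MSI, ER = 1 means the methods are equally efficient."""
            return np.var(coefs_srs, ddof=1) / np.var(coefs_msi, ddof=1)

        # coefs_* would hold the same path coefficient estimated over many
        # simulated (or resampled) data sets under each transformation.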

  16. Inference from the small scales of cosmic shear with current and future Dark Energy Survey data

    DOE PAGES

    MacCrann, N.; Aleksić, J.; Amara, A.; ...

    2016-11-05

    Cosmic shear is sensitive to fluctuations in the cosmological matter density field, including on small physical scales, where matter clustering is affected by baryonic physics in galaxies and galaxy clusters, such as star formation, supernovae feedback and AGN feedback. While this muddies the cosmological information contained in small-scale cosmic shear measurements, it also means that cosmic shear has the potential to constrain baryonic physics and galaxy formation. We perform an analysis of the Dark Energy Survey (DES) Science Verification (SV) cosmic shear measurements, now extended to smaller scales, and using the Mead et al. 2015 halo model to account for baryonic feedback. While the SV data has limited statistical power, we demonstrate using a simulated likelihood analysis that the final DES data will have the statistical power to differentiate among baryonic feedback scenarios. We also explore some of the difficulties in interpreting the small scales in cosmic shear measurements, presenting estimates of the size of several other systematic effects that make inference from small scales difficult, including uncertainty in the modelling of intrinsic alignment on nonlinear scales, `lensing bias', and shape measurement selection effects. For the latter two, we make use of novel image simulations. While future cosmic shear datasets have the statistical power to constrain baryonic feedback scenarios, there are several systematic effects that require improved treatments, in order to make robust conclusions about baryonic feedback.

  17. Flow turbulence topology in regular porous media: From macroscopic to microscopic scale with direct numerical simulation

    NASA Astrophysics Data System (ADS)

    Chu, Xu; Weigand, Bernhard; Vaikuntanathan, Visakh

    2018-06-01

    A microscopic analysis of turbulence topology in a regular porous medium is presented based on a series of direct numerical simulations. The regular porous media consist of square cylinders in a staggered array. Triply periodic boundary conditions enable efficient investigations in a representative elementary volume. Three flow patterns are observed and studied quantitatively, in contrast to the qualitative experimental studies reported in the literature: a channel with sudden contraction, an impinging surface, and a wake. Among these, shear layers in the channel show the highest turbulence intensity due to a favorable pressure gradient, and shed due to an adverse pressure gradient downstream. The turbulent energy budget indicates a strong production rate after the flow contraction and strong dissipation on both the shear and impinging walls. Energy spectra and pre-multiplied spectra detect large-scale energetic structures in the shear layer and a breakup of scales in the impinging layer. However, these large-scale structures break into less energetic small structures at high Reynolds number conditions, which suggests an absence of coherent structures in densely packed porous media at high Reynolds numbers. Anisotropy analysis with a barycentric map shows that the turbulence in porous media is highly isotropic at the macro-scale, which is not the case at the micro-scale. Finally, proper orthogonal decomposition is employed to distinguish the energy-conserving structures. The results support the pore-scale prevalence hypothesis; however, energetic coherent structures are observed in the case of sparsely packed porous media.

  18. Comment on high resolution simulations of cosmic strings. 1: Network evolution

    NASA Technical Reports Server (NTRS)

    Turok, Neil; Albrecht, Andreas

    1990-01-01

    Comments are made on recent claims (Albrecht and Turok, 1989) regarding simulations of cosmic string evolution. Specifically, it was claimed that results were dominated by a numerical artifact which rounds out kinks on a scale of the order of the correlation length of the network. This claim was based on an approximate analysis of an interpolation equation which is solved herein. The typical rounding scale is actually less than one fifth of the correlation length, and comparable with other numerical cutoffs. Results confirm previous estimates of numerical uncertainties, and show that the approximations poorly represent the real solutions to the interpolation equation.

  19. Full-Scale Direct Numerical Simulation of Two- and Three-Dimensional Instabilities and Rivulet Formation in Heated Falling Films

    NASA Technical Reports Server (NTRS)

    Krishnamoorthy, S.; Ramaswamy, B.; Joo, S. W.

    1995-01-01

    A thin film draining on an inclined plate has been studied numerically using the finite element method. The three-dimensional governing equations of continuity, momentum and energy, with a moving boundary, are integrated in an arbitrary Lagrangian-Eulerian frame of reference. The kinematic equation is solved to precisely update the interface location. Rivulet formation based on an instability mechanism has been simulated using full-scale computation. Comparisons with long-wave theory are made to validate the numerical scheme. A detailed analysis of two- and three-dimensional nonlinear wave formation and spontaneous rupture forming rivulets, under the influence of combined thermocapillary and surface-wave instabilities, is performed.

  20. Multiscale Analysis of the Water Content Output of the NWP Model COSMO Over Switzerland and Comparison With Radar Data

    NASA Astrophysics Data System (ADS)

    Wolfensberger, D.; Gires, A.; Berne, A.; Tchiguirinskaia, I.; Schertzer, D. J. M.

    2015-12-01

    The resolution of operational numerical weather prediction models is typically of the order of a few kilometres, meaning that small-scale features of precipitation cannot be resolved explicitly. This creates the need for representative parametrizations of microphysical processes, whose properties should be carefully analysed. In this study we focus on the COSMO model, a non-hydrostatic limited-area model, initially developed as the Lokal Modell and used operationally in Switzerland and Germany. In its operational version, cloud microphysical processes are simulated with a one-moment bulk scheme in which five hydrometeor classes are considered: cloud droplets, rain, ice crystals, snow, and graupel. A more sophisticated two-moment scheme is also available. The study focuses on two cases: one in Payerne, in a relatively flat region of western Switzerland, and one in Davos, in the more complex terrain of the eastern Swiss Alps. The objective of this work is to characterize the ability of the COSMO NWP model to reproduce the microphysics of precipitation across temporal and spatial scales, as well as its scaling variability. The characterization of COSMO outputs relies on the Universal Multifractals framework, which allows one to analyse and simulate geophysical fields that are extremely variable over a wide range of scales with the help of a reduced number of parameters. First, COSMO outputs are analysed: spatial multifractal analyses of 2D maps at various altitudes are carried out for each time step for the simulated solid, liquid, vapour and total water content. In general the fields exhibit good scaling over the whole range of available scales (2 km - 250 km), but some loss of scaling quality, corresponding to the emergence of a scaling break, is sometimes visible. This behaviour is not found at the same time or at the same altitude for each water state and does not necessarily carry over to the total water content; it is interpreted with the help of the underlying physical processes at play during the events. Second, multifractal comparisons of model outputs are made with radar data provided by MeteoSwiss, both indirectly in terms of precipitation intensities and directly using a polarimetric forward radar operator able to simulate radar observations from model outputs.
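
    Within the Universal Multifractals framework, the scaling quality mentioned above is typically assessed with trace moments: the field is coarse-grained by box averaging over a range of scale ratios lambda, and the moment-scaling function K(q) is fitted from <eps_lambda**q> ~ lambda**K(q). A minimal sketch for a square 2-D field whose side is a power of two (a simplifying assumption, not the study's code):

        import numpy as np

        def trace_moments(field, qs=(0.5, 1.5, 2.0)):
            """Fit K(q) from <eps_lambda**q> ~ lambda**K(q) by box-averaging
            a square 2-D field (side a power of two)."""
            field = np.asarray(field, dtype=float)
            field = field / field.mean()               # normalised flux
            n = field.shape[0]
            K = {}
            for q in qs:
                log_lam, log_mom = [], []
                for k in range(int(np.log2(n)) + 1):
                    s = 2 ** k                         # box side in pixels
                    coarse = field.reshape(n // s, s, n // s, s).mean(axis=(1, 3))
                    log_lam.append(np.log(n / s))      # scale ratio lambda
                    log_mom.append(np.log((coarse ** q).mean()))
                K[q] = np.polyfit(log_lam, log_mom, 1)[0]
            return K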

  1. On the estimation and detection of the Rees-Sciama effect

    NASA Astrophysics Data System (ADS)

    Fullana, M. J.; Arnau, J. V.; Thacker, R. J.; Couchman, H. M. P.; Sáez, D.

    2017-02-01

    Maps of the Rees-Sciama (RS) effect are simulated using the parallel N-body code HYDRA and a run-time ray-tracing procedure. A method designed for the analysis of small, square cosmic microwave background (CMB) maps is applied to our RS maps. Each of these techniques has been tested and successfully applied in previous papers. Within a range of angular scales, our estimate of the RS angular power spectrum due to variations in the peculiar gravitational potential on scales smaller than 42/h megaparsecs is shown to be robust. An exhaustive study of the redshifts and spatial scales relevant for the production of RS anisotropy is developed for the first time. Results from this study demonstrate that (I) to estimate the full integrated RS effect, the initial redshift for the calculations (integration) must be greater than 25, (II) the effect produced by strongly non-linear structures is very small and peaks at angular scales close to 4.3 arcmin, and (III) the RS anisotropy cannot be detected either directly, in temperature CMB maps, or by looking for cross-correlations between these maps and tracers of the dark matter distribution. To estimate the RS effect produced by scales larger than 42/h megaparsecs, where the density contrast is not strongly non-linear, high-accuracy N-body simulations appear unnecessary; simulations based on approximations such as the Zel'dovich approximation and adhesion prescriptions, for example, may be adequate. These results can be used to guide the design of future RS simulations.

  2. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and a-Posteriori Error Estimation Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estep, Donald

    2015-11-30

    This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.

  3. Most suitable mother wavelet for the analysis of fractal properties of stride interval time series via the average wavelet coefficient

    PubMed Central

    Zhang, Zhenwei; VanSwearingen, Jessie; Brach, Jennifer S.; Perera, Subashan

    2016-01-01

    Human gait is a complex interaction of many nonlinear systems, and stride intervals exhibit self-similarity over long time scales that can be modeled as a fractal process. The scaling exponent represents the fractal degree and can be interpreted as a biomarker for related diseases. A previous study showed that the average wavelet method provides the most accurate results for estimating this scaling exponent when applied to stride interval time series. The purpose of this paper is to determine the most suitable mother wavelet for the average wavelet method. This paper presents a comparative numerical analysis of sixteen mother wavelets using simulated and real fractal signals. Simulated fractal signals were generated under varying signal lengths and scaling exponents covering a range of physiologically conceivable fractal signals. Five candidate wavelets were chosen due to their good performance on the mean square error test for both short and long signals. Next, we comparatively analyzed these five mother wavelets for physiologically relevant stride time series lengths. Our analysis showed that the symlet 2 mother wavelet provides a low mean square error and low variance for long time intervals and relatively low errors for short signal lengths. It can be considered the most suitable mother wavelet without the burden of considering the signal length. PMID:27960102
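
    A minimal discrete-wavelet version of the average wavelet coefficient idea: for a self-similar signal with Hurst exponent H, the mean absolute detail coefficient at level j grows like 2**(j*(H + 1/2)), so the scaling exponent follows from a log-log fit across levels. This is a sketch with PyWavelets, not the authors' exact estimator:

        import numpy as np
        import pywt

        def awc_scaling_exponent(signal, wavelet="sym2"):
            """Estimate H from the slope of log2(mean |detail coeff|)
            versus decomposition level (slope = H + 1/2)."""
            coeffs = pywt.wavedec(np.asarray(signal, dtype=float), wavelet)
            details = coeffs[1:][::-1]          # reorder: level 1 (finest) first
            levels = np.arange(1, len(details) + 1)
            log_mean = [np.log2(np.mean(np.abs(d))) for d in details]
            slope = np.polyfit(levels, log_mean, 1)[0]
            return slope - 0.5

        # H = awc_scaling_exponent(stride_intervals)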

  4. Multiscale molecular dynamics simulations of rotary motor proteins.

    PubMed

    Ekimoto, Toru; Ikeguchi, Mitsunori

    2018-04-01

    Protein functions require specific structures frequently coupled with conformational changes. The scale of the structural dynamics of proteins spans from the atomic to the molecular level. Theoretically, all-atom molecular dynamics (MD) simulation is a powerful tool to investigate protein dynamics because the MD simulation is capable of capturing conformational changes obeying the intrinsic structural features. However, to study long-timescale dynamics, efficient sampling techniques and coarse-grained (CG) approaches coupled with all-atom MD simulations, termed multiscale MD simulations, are required to overcome the timescale limitation of all-atom MD simulations. Here, we review two examples of rotary motor proteins examined using free energy landscape (FEL) analysis and CG-MD simulations. In the FEL analysis, the FEL is calculated as a function of reaction coordinates, and the long-timescale dynamics corresponding to conformational changes is described as transitions on the FEL surface. Another approach is the utilization of the CG model, in which the CG parameters are tuned using the fluctuation-matching methodology with all-atom MD simulations. The long-timescale dynamics are then elucidated straightforwardly using CG-MD simulations.

  5. Modelling and scale-up of chemical flooding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, G.A.; Lake, L.W.; Sepehrnoori, K.

    1990-03-01

    The objective of this research is to develop, validate, and apply a comprehensive chemical flooding simulator for chemical recovery processes involving surfactants, polymers, and alkaline chemicals in various combinations. This integrated program includes components of laboratory experiments, physical property modelling, scale-up theory, and numerical analysis as necessary and integral components of the simulation activity. We have continued to develop, test, and apply our chemical flooding simulator (UTCHEM) to a wide variety of laboratory and reservoir problems involving tracers, polymers, polymer gels, surfactants, and alkaline agents. Part I is an update on the Application of Higher-Order Methods in Chemical Flooding Simulation. This update focuses on the comparison of grid orientation effects for four different numerical methods implemented in UTCHEM. Part II is on Simulation Design Studies and is a continuation of Saad's Big Muddy surfactant pilot simulation study reported last year. Part III reports on the Simulation of Gravity Effects under conditions similar to those of some of the oil reservoirs in the North Sea. Part IV is on Determining Oil Saturation from Interwell Tracers. UTCHEM is used for large-scale interwell tracer tests. A systematic procedure for estimating oil saturation from interwell tracer data is developed and a specific example based on the actual field data provided by Sun E P Co. is given. Part V reports on the Application of Vectorization and Microtasking for Reservoir Simulation. Part VI reports on Alkaline Simulation. The alkaline/surfactant/polymer flood compositional simulator (UTCHEM) reported last year is further extended to include reactions involving chemical species containing magnesium, aluminium and silicon as constituent elements. Part VII reports on permeability and trapping of microemulsion.

  6. Macro scale models for freight railroad terminals.

    DOT National Transportation Integrated Search

    2016-03-02

    The project has developed a yard capacity model for macro-level analysis. The study considers the detailed sequencing and scheduling in classification yards and their impacts on yard capacity, simulates typical freight railroad terminals, and statistic...

  7. A proposal of monitoring and forecasting system for crustal activity in and around Japan using a large-scale high-fidelity finite element simulation codes

    NASA Astrophysics Data System (ADS)

    Hori, Takane; Ichimura, Tsuyoshi; Takahashi, Narumi

    2017-04-01

    Here we propose a system for monitoring and forecasting crustal activity, such as the spatio-temporal variation in slip velocity on the plate interface (including earthquakes), seismic wave propagation, and crustal deformation. Although continuous, dense surface-deformation data can be obtained on land and partly on the sea floor, these data are not fully utilized for monitoring and forecasting. It is necessary to develop a physics-based data analysis system including (1) a structural model with the 3D geometry of the plate interface and material properties such as elasticity and viscosity, (2) calculation codes for crustal deformation and seismic wave propagation using (1), and (3) inverse analysis or data assimilation codes for both structure and fault slip using (1) and (2). To accomplish this, it is at least necessary to develop highly reliable large-scale simulation codes to calculate crustal deformation and seismic wave propagation for 3D heterogeneous structure. Ichimura et al. (2015, SC15) developed an unstructured FE non-linear seismic wave simulation code, which achieved physics-based urban earthquake simulation with 1.08 T DOF and 6.6 K time steps. Ichimura et al. (2013, GJI) developed a high-fidelity FEM simulation code, with mesh generator, to calculate crustal deformation in and around Japan with complicated surface topography and subducting plate geometry at 1 km mesh resolution. Fujita et al. (2016, SC16) improved the code for crustal deformation and achieved 2.05 T DOF at 45 m resolution on the plate interface. This high-resolution analysis enables computation of the change in stress acting on the plate interface. Further, for inverse analyses, Errol et al. (2012, BSSA) developed a waveform inversion code for modeling 3D crustal structure, and Agata et al. (2015, AGU Fall Meeting) improved the high-fidelity FEM code to apply an adjoint method for estimating fault slip and asthenosphere viscosity. Hence, we have large-scale simulation and analysis tools for monitoring. Furthermore, we are developing methods for forecasting the slip velocity variation on the plate interface. The basic concept is given in Hori et al. (2014, Oceanography), which introduces an ensemble-based sequential data assimilation procedure. Although the prototype described there is for an elastic half-space model, we are applying it to 3D heterogeneous structure with the high-fidelity FE model.

  8. Parametric analyses of summative scores may lead to conflicting inferences when comparing groups: A simulation study.

    PubMed

    Khan, Asaduzzaman; Chien, Chi-Wen; Bagraith, Karl S

    2015-04-01

    To investigate whether using a parametric statistic in comparing groups leads to different conclusions when using summative scores from rating scales compared with using their corresponding Rasch-based measures. A Monte Carlo simulation study was designed to examine between-group differences in the change scores derived from summative scores from rating scales, and those derived from their corresponding Rasch-based measures, using 1-way analysis of variance. The degree of inconsistency between the 2 scoring approaches (i.e. summative and Rasch-based) was examined, using varying sample sizes, scale difficulties and person ability conditions. This simulation study revealed scaling artefacts that could arise from using summative scores rather than Rasch-based measures for determining the changes between groups. The group differences in the change scores were statistically significant for summative scores under all test conditions and sample size scenarios. However, none of the group differences in the change scores were significant when using the corresponding Rasch-based measures. This study raises questions about the validity of the inference on group differences of summative score changes in parametric analyses. Moreover, it provides a rationale for the use of Rasch-based measures, which can allow valid parametric analyses of rating scale data.
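
    The shape of such a Monte Carlo comparison is easy to reproduce: draw change scores for two groups under a small true difference and compare them with a one-way ANOVA. The sketch below is a generic illustration with made-up parameters, not the authors' design (which contrasts summative with Rasch-based scoring):

        import numpy as np
        from scipy.stats import f_oneway

        rng = np.random.default_rng(1)

        # Hypothetical change scores (post minus pre summative ratings)
        # for two groups differing by a small true effect.
        group_a = rng.normal(2.0, 4.0, size=120).round()
        group_b = rng.normal(3.0, 4.0, size=120).round()

        f_stat, p_value = f_oneway(group_a, group_b)
        print(f_stat, p_value)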

  9. A priori analysis of differential diffusion for model development for scale-resolving simulations

    NASA Astrophysics Data System (ADS)

    Hunger, Franziska; Dietzsch, Felix; Gauding, Michael; Hasse, Christian

    2018-01-01

    The present study analyzes differential diffusion and the mechanisms responsible for it with regard to the turbulent/nonturbulent interface (TNTI) with special focus on model development for scale-resolving simulations. In order to analyze differences between resolved and subfilter phenomena, direct numerical simulation (DNS) data are compared with explicitly filtered data. The DNS database stems from a temporally evolving turbulent plane jet transporting two passive scalars with Schmidt numbers of unity and 0.25 presented by Hunger et al. [F. Hunger et al., J. Fluid Mech. 802, R5 (2016), 10.1017/jfm.2016.471]. The objective of this research is twofold: (i) to compare the position of the turbulent-nonturbulent interface between the original DNS data and the filtered data and (ii) to analyze differential diffusion and the impact of the TNTI with regard to scale resolution in the filtered DNS data. For the latter, differential diffusion quantities are studied, clearly showing the decrease of differential diffusion at the resolved scales with increasing filter width. A transport equation for the scalar differences is evaluated. Finally, the existence of large scalar gradients, gradient alignment, and the diffusive fluxes being the physical mechanisms responsible for the separation of the two scalars are compared between the resolved and subfilter scales.

  10. Structural and electron diffraction scaling of twisted graphene bilayers

    NASA Astrophysics Data System (ADS)

    Zhang, Kuan; Tadmor, Ellad B.

    2018-03-01

    Multiscale simulations are used to study the structural relaxation in twisted graphene bilayers and the associated electron diffraction patterns. The initial twist forms an incommensurate moiré pattern that relaxes to a commensurate microstructure comprised of a repeating pattern of alternating low-energy AB and BA domains surrounding a high-energy AA domain. The simulations show that the relaxation mechanism involves a localized rotation and shrinking of the AA domains that scales in two regimes with the imposed twist. For small twisting angles, the localized rotation tends to a constant; for large twist, the rotation scales linearly with it. This behavior is tied to the inverse scaling of the moiré pattern size with twist angle and is explained theoretically using a linear elasticity model. The results are validated experimentally through a simulated electron diffraction analysis of the relaxed structures. A complex electron diffraction pattern involving the appearance of weak satellite peaks is predicted for the small twist regime. This new diffraction pattern is explained using an analytical model in which the relaxation kinematics are described as an exponentially-decaying (Gaussian) rotation field centered on the AA domains. Both the angle-dependent scaling and diffraction patterns are in quantitative agreement with experimental observations. A Matlab program for extracting the Gaussian model parameters accompanies this paper.
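
    The inverse scaling of the moiré pattern size with twist angle invoked above follows from the geometry of two rotated lattices; with a the graphene lattice constant (about 0.246 nm) and theta the twist angle,

        L(theta) = a / (2 sin(theta/2)) ≈ a / theta   (small theta),

    so the moiré period grows as the twist is reduced, which is the inverse scaling the relaxation analysis builds on.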

  11. Using Reconstructed POD Modes as Turbulent Inflow for LES Wind Turbine Simulations

    NASA Astrophysics Data System (ADS)

    Nielson, Jordan; Bhaganagar, Kiran; Juttijudata, Vejapong; Sirisup, Sirod

    2016-11-01

    Currently, in order to capture realistic atmospheric turbulence effects, wind turbine LES simulations require computationally expensive precursor simulations; at times, the precursor simulation is more computationally expensive than the wind turbine simulation itself. The precursor simulations are important because they capture atmospheric turbulence, which impacts the power production estimate. POD analysis, on the other hand, has been shown to be capable of capturing turbulent structures. The current study was performed to determine the plausibility of using lower-dimensional models from POD analysis of LES simulations as turbulent inflow to wind turbine LES simulations. The study will aid the wind energy community by lowering the computational cost of full-scale wind turbine LES simulations, while maintaining a high level of turbulent information and allowing the turbulent inflow to be applied quickly to multi-turbine wind farms. This is done by comparing a wind turbine simulation with a pure LES precursor against simulations that use reduced POD-mode inflow conditions. The study shows the feasibility of using lower-dimensional models as turbulent inflow for LES wind turbine simulations. Overall, the power production estimate and the velocity field of the wind turbine wake are well captured, with small errors.
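
    A minimal snapshot-POD sketch: stack flattened velocity fields as columns, subtract the mean, and take an SVD; the leading left singular vectors are the spatial modes, and a truncated reconstruction supplies the reduced-order inflow. Names and shapes are assumptions, not the study's code:

        import numpy as np

        def pod_modes(snapshots, n_modes):
            """Snapshot POD: `snapshots` has one flattened velocity field
            per column; returns the mean field, the leading spatial modes,
            their singular values and temporal coefficients."""
            mean = snapshots.mean(axis=1, keepdims=True)
            u, s, vt = np.linalg.svd(snapshots - mean, full_matrices=False)
            return mean, u[:, :n_modes], s[:n_modes], vt[:n_modes]

        def reconstruct(mean, modes, sv, coeffs):
            """Reduced-order inflow: mean flow plus the retained modes."""
            return mean + modes @ (sv[:, None] * coeffs)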

  12. Cluster Correspondence Analysis.

    PubMed

    van de Velden, M; D'Enza, A Iodice; Palumbo, F

    2017-03-01

    A method is proposed that combines dimension reduction and cluster analysis for categorical data by simultaneously assigning individuals to clusters and optimal scaling values to categories, in such a way that a single between-variance maximization objective is achieved. In a unified framework, a brief review of alternative methods is provided, and we show that the proposed method is equivalent to GROUPALS applied to categorical data. Performance of the methods is appraised by means of a simulation study. The results of the joint dimension reduction and clustering methods are compared with the so-called tandem approach, a sequential analysis of dimension reduction followed by cluster analysis. The tandem approach is conjectured to perform worse when variables are added that are unrelated to the cluster structure. Our simulation study confirms this conjecture. Moreover, the results of the simulation study indicate that the proposed method also consistently outperforms alternative joint dimension reduction and clustering methods.

  13. Statistical downscaling of GCM simulations to streamflow using relevance vector machine

    NASA Astrophysics Data System (ADS)

    Ghosh, Subimal; Mujumdar, P. P.

    2008-01-01

    General circulation models (GCMs), the climate models often used in assessing the impact of climate change, operate on a coarse scale, and thus the simulation results obtained from GCMs are not particularly useful for hydrology at the comparatively smaller river-basin scale. The article presents a methodology for statistical downscaling based on sparse Bayesian learning and the Relevance Vector Machine (RVM) to model streamflow at the river-basin scale for the monsoon period (June, July, August, September) using GCM-simulated climatic variables. NCEP/NCAR reanalysis data have been used to train the model and establish a statistical relationship between streamflow and climatic variables. The relationship thus obtained is used to project future streamflow from GCM simulations. The statistical methodology involves principal component analysis, fuzzy clustering and RVM. Different kernel functions are used for comparison purposes. The model is applied to the Mahanadi river basin in India. The results obtained using RVM are compared with those of the state-of-the-art Support Vector Machine (SVM) to present the advantages of RVMs over SVMs. A decreasing trend is observed for the monsoon streamflow of the Mahanadi, due to high surface warming in the future, under the CCSR/NIES GCM and B2 scenario.

  14. A new paradigm for atomically detailed simulations of kinetics in biophysical systems.

    PubMed

    Elber, Ron

    2017-01-01

    The kinetics of biochemical and biophysical events determine the course of life processes and have attracted considerable interest and research. For example, modeling of biological networks and cellular responses relies on the availability of information on rate coefficients. Atomically detailed simulations hold the promise of supplementing experimental data to obtain a more complete kinetic picture. However, simulations at biological time scales are challenging: typical computer resources are insufficient to provide the ensemble of trajectories, at the required length, needed for straightforward calculations of time scales. In recent years, new technologies have emerged that make atomically detailed simulations of rate coefficients possible. Instead of computing complete trajectories from reactants to products, these approaches launch a large number of short trajectories at different positions. Since the trajectories are short, they are computed trivially in parallel on modern computer architectures. The starting and termination positions of the short trajectories are chosen, following statistical mechanics theory, to enhance efficiency. Analysis of these trajectories produces accurate estimates of time scales as long as hours. The theory of Milestoning, which exploits the use of short trajectories, is discussed, and several applications are described.

  15. Environmental performance evaluation of large-scale municipal solid waste incinerators using data envelopment analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, H.-W.; Chang, N.-B., E-mail: nchang@mail.ucf.ed; Chen, J.-C.

    2010-07-15

    Given limited land resources, incinerators are considered in many countries, such as Japan and Germany, as the major technology for a waste management scheme capable of dealing with the increasing demand for municipal and industrial solid waste treatment in urban regions. The evaluation of these municipal incinerators in terms of secondary pollution potential, cost-effectiveness, and operational efficiency has become a new focus in the highly interdisciplinary area of production economics, systems analysis, and waste management. This paper demonstrates the application of data envelopment analysis (DEA), a production economics tool, to evaluate performance-based efficiencies of 19 large-scale municipal incinerators in Taiwan with different operational conditions. A 4-year operational data set from 2002 to 2005 was collected in support of DEA modeling, using Monte Carlo simulation to outline the probability distributions of the operational efficiency of these incinerators. Uncertainty analysis using Monte Carlo simulation provides a balance between simplification of the analysis and the soundness of capturing the essential random features that complicate solid waste management systems. To cope with future challenges, efforts in DEA modeling, systems analysis, and prediction of the performance of large-scale municipal solid waste incinerators under normal operation and special conditions were directed toward generating a compromise assessment procedure. Our research findings will eventually lead to the identification of optimal management strategies for promoting the quality of solid waste incineration, not only in Taiwan, but also elsewhere in the world.
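
    The core of a DEA evaluation is one small linear program per decision-making unit; below is an input-oriented CCR sketch with scipy.optimize.linprog, a generic formulation rather than the authors' exact model (which wraps Monte Carlo sampling around such a step):

        import numpy as np
        from scipy.optimize import linprog

        def dea_ccr_efficiency(X, Y, j0):
            """Input-oriented CCR efficiency of unit j0.
            X: (m inputs x n units), Y: (s outputs x n units).
            Solves: min theta  s.t.  X @ lam <= theta * X[:, j0],
                                     Y @ lam >= Y[:, j0],  lam >= 0."""
            m, n = X.shape
            s = Y.shape[0]
            c = np.zeros(1 + n)
            c[0] = 1.0                                  # minimise theta
            A_in = np.hstack([-X[:, [j0]], X])          # input constraints
            A_out = np.hstack([np.zeros((s, 1)), -Y])   # output constraints
            A_ub = np.vstack([A_in, A_out])
            b_ub = np.concatenate([np.zeros(m), -Y[:, j0]])
            bounds = [(None, None)] + [(0, None)] * n
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
            return res.x[0]                             # efficiency in (0, 1]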

  16. Predictive model for convective flows induced by surface reactivity contrast

    NASA Astrophysics Data System (ADS)

    Davidson, Scott M.; Lammertink, Rob G. H.; Mani, Ali

    2018-05-01

    Concentration gradients in a fluid adjacent to a reactive surface due to contrast in surface reactivity generate convective flows. These flows result from contributions by electro- and diffusio-osmotic phenomena. In this study, we have analyzed reactive patterns that release and consume protons, analogous to bimetallic catalytic conversion of peroxide. Similar systems have typically been studied using either scaling analysis to predict trends or costly numerical simulation. Here, we present a simple analytical model, bridging the gap in quantitative understanding between scaling relations and simulations, to predict the induced potentials and consequent velocities in such systems without the use of any fitting parameters. Our model is tested against direct numerical solutions to the coupled Poisson, Nernst-Planck, and Stokes equations. Predicted slip velocities from the model and simulations agree to within a factor of ≈2 over a multiple order-of-magnitude change in the input parameters. Our analysis can be used to predict enhancement of mass transport and the resulting impact on overall catalytic conversion, and is also applicable to predicting the speed of catalytic nanomotors.

  17. Study on ion energy distribution in low-frequency oscillation time scale of Hall thrusters

    NASA Astrophysics Data System (ADS)

    Wei, Liqiu; Li, Wenbo; Ding, Yongjie; Han, Liang; Yu, Daren; Cao, Yong

    2017-11-01

    This paper reports on the dynamic characteristics of the ion energy distribution during Hall thruster discharge on the low-frequency oscillation time scale, based on experimental studies and a statistical analysis of the time-varying peak and width of the ion energy distribution and the ratio of high-energy ions over the low-frequency oscillation. The results show that the ion energy distribution exhibits a periodic change during the low-frequency oscillation. Moreover, the variation in the ion energy peak is opposite to that of the discharge current, while the variations in the width of the ion energy distribution and the ratio of high-energy ions are consistent with that of the discharge current. The variation characteristics of the ion density and discharge potential were simulated by one-dimensional hybrid-direct kinetic simulations; the simulation results and analysis indicate that the periodic change in the ion energy distribution during the low-frequency oscillation depends on the relationship between the ionization source term and the discharge potential distribution during ionization in the discharge channel.

  18. Modeling and Simulation of the Second-Generation Orion Crew Module Air Bag Landing System

    NASA Technical Reports Server (NTRS)

    Timmers, Richard B.; Welch, Joseph V.; Hardy, Robin C.

    2009-01-01

    Air bags were evaluated as the landing attenuation system for earth landing of the Orion Crew Module (CM). An important element of the air bag system design process is proper modeling of the proposed configuration to determine whether the resulting performance meets requirements. Analysis conducted to date shows that airbags are capable of providing a graceful landing of the CM in nominal and off-nominal conditions such as parachute failure, high horizontal winds, and unfavorable vehicle/ground angle combinations. The efforts presented here surround a second generation of the airbag design developed by ILC Dover and are based on previous design, analysis, and testing efforts. In order to fully evaluate the second-generation air bag design and correlate the dynamic simulations, a series of drop tests were carried out at NASA Langley's Landing and Impact Research (LandIR) facility. The tests consisted of a full-scale set of air bags attached to a full-scale test article representing the Orion Crew Module. The techniques used to collect experimental data, construct the simulations, and make comparisons to experimental data are discussed.

  19. Modeling and Simulation of the Second-Generation Orion Crew Module Air Bag Landing System

    NASA Technical Reports Server (NTRS)

    Timmers, Richard B.; Hardy, Robin C.; Willey, Cliff E.; Welch, Joseph V.

    2009-01-01

    Air bags were evaluated as the landing attenuation system for earth landing of the Orion Crew Module (CM). Analysis conducted to date shows that airbags are capable of providing a graceful landing of the CM in nominal and off-nominal conditions such as parachute failure, high horizontal winds, and unfavorable vehicle/ground angle combinations, while meeting crew and vehicle safety requirements. The analyses and associated testing presented here surround a second generation of the airbag design developed by ILC Dover, building off of relevant first-generation design, analysis, and testing efforts. In order to fully evaluate the second-generation air bag design and correlate the dynamic simulations, a series of drop tests were carried out at NASA Langley's Landing and Impact Research (LandIR) facility in Hampton, Virginia. The tests consisted of a full-scale set of air bags attached to a full-scale test article representing the Orion Crew Module. The techniques used to collect experimental data, develop the simulations, and make comparisons to experimental data are discussed.

  20. Large-Scale NASA Science Applications on the Columbia Supercluster

    NASA Technical Reports Server (NTRS)

    Brooks, Walter

    2005-01-01

    Columbia, NASA's newest 61 teraflops supercomputer that became operational late last year, is a highly integrated Altix cluster of 10,240 processors, and was named to honor the crew of the Space Shuttle lost in early 2003. Constructed in just four months, Columbia increased NASA's computing capability ten-fold, and revitalized the Agency's high-end computing efforts. Significant cutting-edge science and engineering simulations in the areas of space and Earth sciences, as well as aeronautics and space operations, are already occurring on this largest operational Linux supercomputer, demonstrating its capacity and capability to accelerate NASA's space exploration vision. The presentation will describe how an integrated environment consisting not only of next-generation systems, but also modeling and simulation, high-speed networking, parallel performance optimization, and advanced data analysis and visualization, is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions. The talk will conclude by discussing how NAS partnered with various NASA centers, other government agencies, computer industry, and academia, to create a national resource in large-scale modeling and simulation.

  1. Synoptic scale forecast skill and systematic errors in the MASS 2.0 model. [Mesoscale Atmospheric Simulation System

    NASA Technical Reports Server (NTRS)

    Koch, S. E.; Skillman, W. C.; Kocin, P. J.; Wetzel, P. J.; Brill, K. F.

    1985-01-01

    The synoptic scale performance characteristics of MASS 2.0 are determined by comparing filtered 12-24 hr model forecasts to same-case forecasts made by the National Meteorological Center's synoptic-scale Limited-area Fine Mesh model. Characteristics of the two systems are contrasted, and the analysis methodology used to determine statistical skill scores and systematic errors is described. The overall relative performance of the two models in the sample is documented, and important systematic errors uncovered are presented.

  2. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g. neutronics with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors, achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast in terms of the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. The reduced subspace is then used to solve realistic, large scale forward (UQ) and inverse (DA and TAA) problems. Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single physics models is extended to large scale multi-physics coupled problems with feedback effects. Moreover, a non-linear surrogate based UQ approach is developed, used, and compared to the performance of the KL approach and a brute force Monte Carlo (MC) approach. In addition, an efficient Data Assimilation (DA) algorithm is developed to assimilate information about model parameters: nuclear data cross sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT - COBRA-TF - ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA feasible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes, and predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty, and experimental effort can subsequently be directed to further improve the uncertainty associated with these sources.
    In this dissertation, a subspace-based, gradient-free, nonlinear algorithm for inverse uncertainty quantification, namely Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models). Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly-level (CASL Progression Problem Number 6) and core-wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem Number 9), modeled using VERA-CS, which consists of several coupled multi-physics models. The analysis and algorithms developed in this dissertation were encoded and implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
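
    As a schematic illustration of the subspace idea (not the actual MPACT/COBRA-TF/VERA-CS workflow), the sketch below builds a reduced basis from random snapshots of a toy nonlinear model and then propagates uncertainty in the reduced coordinates; all names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an expensive coupled-physics model: maps a
# high-dimensional parameter vector to a response field.
def model(p):
    return np.tanh(W @ p)           # W is a fixed random coupling, for illustration

n_params, n_out, n_snap = 200, 500, 40
W = rng.normal(size=(n_out, n_params)) / np.sqrt(n_params)

# 1. Build a snapshot matrix from random parameter perturbations.
P = rng.normal(size=(n_params, n_snap))
S = np.column_stack([model(P[:, k]) for k in range(n_snap)])

# 2. Identify the influential degrees of freedom via an SVD of the snapshots.
U, s, _ = np.linalg.svd(S, full_matrices=False)
r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.99)) + 1
Ur = U[:, :r]                        # reduced subspace basis

# 3. Cheap UQ: propagate new samples and work only with reduced coordinates.
samples = [Ur.T @ model(rng.normal(size=n_params)) for _ in range(1000)]
print(f"subspace rank: {r}, first reduced-coordinate stds: {np.std(samples, axis=0)[:3]}")
```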

  3. Security Analysis of Smart Grid Cyber Physical Infrastructures Using Modeling and Game Theoretic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Sheldon, Frederick T.

    Cyber physical computing infrastructures typically consist of a number of interconnected sites whose operation depends critically on both cyber and physical components. Both types of components are subject to attacks of different kinds and frequencies, which must be accounted for in the initial provisioning and subsequent operation of the infrastructure via information security analysis. Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified against the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the electric sector failure scenarios and impact analyses developed by the NESCOR Working Group. From the Section 5 electric sector representative failure scenarios, we extracted four generic failure scenarios and grouped them into three specific threat categories (confidentiality, integrity, and availability) to the system. These failure scenarios serve as a demonstration of our simulation. The analysis using our ABGT simulation demonstrates how to model the electric sector functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the cyber physical infrastructure network with respect to CIA.
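
    As a toy illustration of the game-theoretic core (not the NESCOR scenarios themselves), the sketch below solves a zero-sum attacker/defender matrix game by fictitious play; the payoff matrix over the three CIA threat categories is invented for the example.

```python
import numpy as np

# Hypothetical payoff matrix for a zero-sum attacker/defender game:
# rows = defender postures, cols = attacker actions against
# confidentiality, integrity, availability; entries = defender loss.
L = np.array([[2.0, 4.0, 1.0],
              [3.0, 1.0, 2.0],
              [1.0, 3.0, 3.0]])

# Fictitious play: each side repeatedly best-responds to the opponent's
# empirical mixed strategy; time averages converge to equilibrium in
# zero-sum games.
n_def, n_att = L.shape
def_counts = np.zeros(n_def)
att_counts = np.zeros(n_att)
def_counts[0] = att_counts[0] = 1

for _ in range(20000):
    att_mix = att_counts / att_counts.sum()
    def_counts[np.argmin(L @ att_mix)] += 1       # defender minimizes loss
    def_mix = def_counts / def_counts.sum()
    att_counts[np.argmax(def_mix @ L)] += 1       # attacker maximizes loss

print("defender mix:", def_counts / def_counts.sum())
print("attacker mix:", att_counts / att_counts.sum())
print("game value ~", def_mix @ L @ att_mix)
```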

  4. Application of the Geophysical Scale Multi-Block Transport Modeling System to Hydrodynamic Forcing of Dredged Material Placement Sediment Transport within the James River Estuary

    NASA Astrophysics Data System (ADS)

    Kim, S. C.; Hayter, E. J.; Pruhs, R.; Luong, P.; Lackey, T. C.

    2016-12-01

    The geophysical scale circulation of the Mid Atlantic Bight and hydrologic inputs from adjacent Chesapeake Bay watersheds and tributaries influence the hydrodynamics and transport of the James River estuary. Both barotropic and baroclinic transport govern the hydrodynamics of this partially stratified estuary. Modeling the placement of dredged sediment requires accommodating this wide spectrum of atmospheric and hydrodynamic scales. The Geophysical Scale Multi-Block (GSMB) Transport Modeling System is a collection of multiple well-established and USACE-approved process models. Taking advantage of the parallel computing capability of multi-block modeling, we performed a one-year, three-dimensional simulation of hydrodynamics in support of modeling the transport and morphology changes of dredged sediment placements. Model forcing includes spatially and temporally varying meteorological conditions and hydrological inputs from the watershed. Surface heat flux estimates were derived from the National Solar Radiation Database (NSRDB). The open water boundary condition for water level was obtained from an ADCIRC model application for the U.S. East Coast. Temperature-salinity boundary conditions were obtained from the Environmental Protection Agency (EPA) Chesapeake Bay Program (CBP) long-term monitoring station database. Simulated water levels were calibrated and verified by comparison with National Oceanic and Atmospheric Administration (NOAA) tide gage locations. A harmonic analysis of the modeled tides was performed and compared with NOAA tide prediction data. In addition, project-specific circulation was verified using US Army Corps of Engineers (USACE) drogue data. Salinity and temperature transport was verified at seven CBP long-term monitoring stations along the navigation channel. Simulation and analysis of model results suggest that GSMB is capable of resolving the long-duration, multi-scale processes inherent to practical engineering problems such as dredged material placement stability.
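
    The harmonic analysis step can be illustrated with a least-squares fit of tidal constituents to a water-level series. The sketch below uses a synthetic hourly record and only the M2 and S2 constituents, a simplification of the full constituent set used in such studies.

```python
import numpy as np

# Hourly water levels eta(t); frequencies in cycles per hour for M2 and S2.
t = np.arange(0, 24 * 30.0)                      # 30 days, hourly
freqs = np.array([1 / 12.4206012, 1 / 12.0])     # M2, S2 periods in hours

# Synthetic "observed" series for the demonstration.
rng = np.random.default_rng(1)
eta = (0.8 * np.cos(2 * np.pi * freqs[0] * t - 0.6)
       + 0.3 * np.cos(2 * np.pi * freqs[1] * t - 1.2)
       + 0.05 * rng.normal(size=t.size))

# Least-squares design matrix: mean + [cos, sin] pair per constituent.
cols = [np.ones_like(t)]
for f in freqs:
    cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, eta, rcond=None)

for k, name in enumerate(["M2", "S2"]):
    a, b = coef[1 + 2 * k], coef[2 + 2 * k]
    print(f"{name}: amplitude {np.hypot(a, b):.3f} m, phase {np.arctan2(b, a):.3f} rad")
```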

  5. Simulating and mapping spatial complexity using multi-scale techniques

    USGS Publications Warehouse

    De Cola, L.

    1994-01-01

    A central problem in spatial analysis is the mapping of data for complex spatial fields using relatively simple data structures, such as those of a conventional GIS. This complexity can be measured using such indices as multi-scale variance, which reflects spatial autocorrelation, and multi-fractal dimension, which characterizes the values of fields. These indices are computed for three spatial processes: Gaussian noise, a simple mathematical function, and data for a random walk. Fractal analysis is then used to produce a vegetation map of the central region of California based on a satellite image. This analysis suggests that real world data lie on a continuum between the simple and the random, and that a major GIS challenge is the scientific representation and understanding of rapidly changing multi-scale fields. -Author
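
    A box-counting estimator gives the flavor of the fractal indices discussed above; the sketch below estimates the dimension of a binary occupancy field (random noise here, which should come out close to 2) and is a simplification of the multi-fractal analysis in the paper.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a binary 2D field."""
    counts = []
    for s in sizes:
        # number of s x s boxes containing at least one occupied cell
        h, w = mask.shape
        trimmed = mask[: h - h % s, : w - w % s]
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # slope of log N(s) versus log(1/s) gives the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
field = rng.random((256, 256)) > 0.5     # noise-like occupancy field
print(f"estimated dimension: {box_counting_dimension(field):.2f}")  # ~2 for noise
```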

  6. Computational Analyses in Support of Sub-scale Diffuser Testing for the A-3 Facility. Part 3; Aero-Acoustic Analyses and Experimental Validation

    NASA Technical Reports Server (NTRS)

    Allgood, Daniel C.; Graham, Jason S.; McVay, Greg P.; Langford, Lester L.

    2008-01-01

    A unique assessment of acoustic similarity scaling laws and acoustic analogy methodologies for predicting the far-field acoustic signature of a sub-scale altitude rocket test facility at the NASA Stennis Space Center was performed. A directional, point-source similarity analysis was implemented for predicting the acoustic far-field. In this approach, experimental acoustic data obtained from "similar" rocket engine tests were appropriately scaled using key geometric and dynamic parameters. The accuracy of this engineering-level method is discussed by comparing the predictions with acoustic far-field measurements. In addition, a CFD solver was coupled with a Lilley's acoustic analogy formulation to determine the improvement of using a physics-based methodology over an experimental correlation approach. In the current work, steady-state Reynolds-averaged Navier-Stokes calculations were used to model the internal flow of the rocket engine and altitude diffuser. These internal flow simulations provided the necessary realistic input conditions for external plume simulations. The CFD plume simulations were then used to provide the spatial turbulent noise source distributions in the acoustic analogy calculations. Preliminary findings of these studies will be discussed.

  7. Far Sidelobe Effects from Panel Gaps of the Atacama Cosmology Telescope

    NASA Technical Reports Server (NTRS)

    Fluxa, Pedro R.; Duenner, Rolando; Maurin, Loiec; Choi, Steve K.; Devlin, Mark J.; Gallardo, Patricio A.; Shuay-Pwu, P. Ho; Koopman, Brian J.; Louis, Thibaut; Wollack, Edward J.

    2016-01-01

    The Atacama Cosmology Telescope (ACT) is a 6 meter diameter CMB telescope located at 5200 meters in the Chilean desert. ACT has made arcminute-scale maps of the sky at 90 and 150 GHz which have led to precise measurements of the fine angular power spectrum of the CMB fluctuations in temperature and polarization. One of the goals of ACT is to search for the B-mode polarization signal from primordial gravity waves, which requires extending ACT's data analysis to larger angular scales. This goal introduces new challenges in the control of systematic effects, including a better understanding of far sidelobe effects that might enter the power spectrum at degree angular scales. Here we study the effects of the gaps between panels of the ACT primary and secondary reflectors in the worst-case scenario in which the gaps remain open. We produced numerical simulations of the optics using GRASP up to 8 degrees away from the main beam and simulated timestreams for observations with this beam using real pointing information from ACT data. Maps from these simulated timestreams showed leakage from the sidelobes, indicating that this effect must be taken into consideration at large angular scales.

  8. Formation and Reconnection of Three-dimensional Current Sheets with a Guide Field in the Solar Corona

    NASA Astrophysics Data System (ADS)

    Edmondson, J. K.; Lynch, B. J.

    2017-11-01

    We analyze a series of three-dimensional magnetohydrodynamic numerical simulations of magnetic reconnection in a model solar corona to study the effect of the guide-field component on quasi-steady-state interchange reconnection in a pseudostreamer arcade configuration. This work extends the analysis of Edmondson et al. by quantifying the mass density enhancement coherency scale in the current sheet associated with magnetic island formation during the nonlinear phase of plasmoid-unstable reconnection. We compare the results of four simulations with zero, weak, moderate, and strong guide fields, B_GF/B_0 = {0.0, 0.1, 0.5, 1.0}, to quantify the plasmoid density enhancement's longitudinal and transverse coherency scales as a function of the guide-field strength. We derive these coherency scales from autocorrelation and wavelet analyses, and demonstrate how these scales may be used to interpret the density enhancement fluctuation's Fourier power spectra in terms of a structure formation range, an energy continuation range, and an inertial range, each with a distinct spectral slope. We discuss the simulation results in the context of solar and heliospheric observations of pseudostreamer solar wind outflow and possible signatures of reconnection-generated structure.
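
    An autocorrelation e-folding length is one simple way to extract the kind of coherency scale described above; the sketch below applies it to a synthetic one-dimensional density-enhancement cut with a built-in correlation length, and is not the wavelet-based analysis of the paper.

```python
import numpy as np

def efolding_scale(x, dx=1.0):
    """Coherency scale of a 1D fluctuation series: the lag at which the
    normalized autocorrelation first drops below 1/e."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf /= acf[0]
    below = np.nonzero(acf < 1.0 / np.e)[0]
    return below[0] * dx if below.size else np.inf

# synthetic density cut with a tunable correlation length (in cells)
rng = np.random.default_rng(2)
n, ell = 4096, 25.0
k = np.fft.rfftfreq(n)
spectrum = np.exp(-(k * ell) ** 2) * (rng.normal(size=k.size)
                                      + 1j * rng.normal(size=k.size))
cut = np.fft.irfft(spectrum, n=n)
print(f"e-folding coherency scale: {efolding_scale(cut):.1f} cells")
```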

  9. Development of an objective assessment tool for total laparoscopic hysterectomy: A Delphi method among experts and evaluation on a virtual reality simulator

    PubMed Central

    Knight, Sophie; Aggarwal, Rajesh; Agostini, Aubert; Loundou, Anderson; Berdah, Stéphane

    2018-01-01

    Introduction Total laparoscopic hysterectomy (LH) requires an advanced level of operative skills and training. The aim of this study was to develop an objective scale specific to the assessment of technical skills for LH (H-OSATS) and to demonstrate feasibility of use and validity in a virtual reality setting. Material and methods The scale was developed using a hierarchical task analysis and a panel of international experts. A Delphi method obtained consensus among experts on the relevant steps that should be included in the H-OSATS scale for assessment of operative performance. Feasibility of use and validity of the scale were evaluated by reviewing video recordings of LH performed on a virtual reality laparoscopic simulator. Three groups of operators with different levels of experience were assessed in a Marseille teaching hospital (10 novices, 8 intermediates and 8 experienced surgeons). Correlations with scores obtained using a recognised generic global rating tool (OSATS) were calculated. Results A total of 76 discrete steps were identified by the hierarchical task analysis. 14 experts completed the two rounds of the Delphi questionnaire. 64 steps reached consensus and were integrated into the scale. During the validation process, the median time to rate each video recording was 25 minutes. There was a significant difference between the novice, intermediate and experienced groups for total H-OSATS scores (133, 155.9 and 178.25 respectively; p = 0.002). The H-OSATS scale demonstrated high inter-rater reliability (intraclass correlation coefficient [ICC] = 0.930; p<0.001) and test-retest reliability (ICC = 0.877; p<0.001). High correlations were found between total H-OSATS scores and OSATS scores (rho = 0.928; p<0.001). Conclusion The H-OSATS scale displayed evidence of validity for the assessment of technical performance for LH performed on a virtual reality simulator. The implementation of this scale is expected to facilitate deliberate practice. Next steps should focus on evaluating the validity of the scale in the operating room. PMID:29293635

  10. Analysis of sensor network observations during some simulated landslide experiments

    NASA Astrophysics Data System (ADS)

    Scaioni, M.; Lu, P.; Feng, T.; Chen, W.; Wu, H.; Qiao, G.; Liu, C.; Tong, X.; Li, R.

    2012-12-01

    A multi-sensor network was tested during experiments on a landslide simulation platform established at Tongji University (Shanghai, P.R. China), where landslides were triggered by means of artificial rainfall (see Figure 1). The sensor network currently incorporates contact sensors and two imaging systems. This represents a novel solution, because the spatial sensor network incorporates both contact sensors and remote sensors (video cameras). In the future, these sensors will be installed on two real slopes in Sichuan province (South-West China), where the Wenchuan earthquake occurred in 2008. This earthquake caused the immediate activation of several landslides, while other areas became unstable and remain a menace to people and property. The platform incorporates the reconstructed scale slope, sensor network, communication system, database, and visualization system. The landslide simulation experiments made it possible to ascertain which sensors would be most suitable for deployment in the Wenchuan area. The poster focuses on the analysis of results from down-scaled simulations, in which the different steps of the landslide evolution can be followed on the basis of sensor observations. These include underground sensors to detect the water table level and the pressure in the ground, a set of accelerometers, and two inclinometers. In the first part of the analysis, the full data series are investigated to look for correlations and common patterns, as well as to link them to the physical processes. In the second part, four subsets of sensors located in neighboring positions are analyzed. The analysis of low- and high-speed image sequences allowed a dense displacement field on the slope surface to be tracked; these outcomes were compared with those obtained from the accelerometers for cross-validation. Images were also used for the photogrammetric reconstruction of the slope topography during the experiment, so that volume computation and mass movements could be evaluated from the processed images. Figure 1 - The landslide simulation platform at Tongji University at the end of an experiment; the picture shows the body of the simulated landslide.
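
    The image-based displacement tracking can be illustrated with a brute-force normalized cross-correlation search; the frames below are synthetic and the known shift is recovered, whereas the actual experiment used dense matching on real camera sequences.

```python
import numpy as np

# Locate a surface patch from frame 1 inside a search window of frame 2 by
# normalized cross-correlation (synthetic frames stand in for the cameras).
rng = np.random.default_rng(11)
frame1 = rng.random((120, 120))
frame2 = np.roll(frame1, shift=(3, 5), axis=(0, 1))   # known motion: 3 down, 5 right

patch = frame1[40:60, 40:60]
best, best_score = (0, 0), -np.inf
for dy in range(-8, 9):
    for dx in range(-8, 9):
        cand = frame2[40 + dy : 60 + dy, 40 + dx : 60 + dx]
        score = np.corrcoef(patch.ravel(), cand.ravel())[0, 1]
        if score > best_score:
            best, best_score = (dy, dx), score
print(f"estimated displacement (dy, dx): {best}")     # expect (3, 5)
```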

  11. The RAVEN Toolbox and Its Use for Generating a Genome-scale Metabolic Model for Penicillium chrysogenum

    PubMed Central

    Agren, Rasmus; Liu, Liming; Shoaie, Saeed; Vongsangnak, Wanwipa; Nookaew, Intawat; Nielsen, Jens

    2013-01-01

    We present the RAVEN (Reconstruction, Analysis and Visualization of Metabolic Networks) Toolbox: a software suite that allows for semi-automated reconstruction of genome-scale models. It makes use of published models and/or the KEGG database, coupled with extensive gap-filling and quality control features. The software suite also contains methods for visualizing simulation results and omics data, as well as a range of methods for performing simulations and analyzing the results. The software is a useful tool for system-wide data analysis in a metabolic context and for streamlined reconstruction of metabolic networks based on protein homology. The RAVEN Toolbox workflow was applied in order to reconstruct a genome-scale metabolic model for the important microbial cell factory Penicillium chrysogenum Wisconsin54-1255. The model was validated in a bibliomic study of 440 references in total, and it comprises 1471 unique biochemical reactions and 1006 ORFs. It was then used to study the roles of ATP and NADPH in the biosynthesis of penicillin, and to identify potential metabolic engineering targets for maximization of penicillin production. PMID:23555215

  12. Free energy landscape of the Michaelis complex of lactate dehydrogenase: A network analysis of atomistic simulations

    NASA Astrophysics Data System (ADS)

    Pan, Xiaoliang; Schwartz, Steven

    2015-03-01

    It has long been recognized that the structure of a protein is a hierarchy of conformations interconverting on multiple time scales. However, this conformational heterogeneity is rarely considered in the context of enzymatic catalysis, in which the reactant is usually represented by a single conformation of the enzyme/substrate complex. Lactate dehydrogenase (LDH) catalyzes the interconversion of pyruvate and lactate with concomitant interconversion of two forms of the cofactor nicotinamide adenine dinucleotide (NADH and NAD+). Recent experimental results suggest that multiple substates exist within the Michaelis complex of LDH, and that they are catalytically competent at different reaction rates. In this study, millisecond-scale all-atom molecular dynamics simulations were performed on LDH to explore the free energy landscape of the Michaelis complex, and network analysis was used to characterize the distribution of the conformations. Our results provide a detailed view of the kinetic network of the Michaelis complex and the structures of the substates at the atomistic scale. They also shed light on the complete picture of the catalytic mechanism of LDH.

  13. Free energy surface of the Michaelis complex of lactate dehydrogenase: a network analysis of microsecond simulations.

    PubMed

    Pan, Xiaoliang; Schwartz, Steven D

    2015-04-30

    It has long been recognized that the structure of a protein creates a hierarchy of conformations interconverting on multiple time scales. The conformational heterogeneity of the Michaelis complex is of particular interest in the context of enzymatic catalysis in which the reactant is usually represented by a single conformation of the enzyme/substrate complex. Lactate dehydrogenase (LDH) catalyzes the interconversion of pyruvate and lactate with concomitant interconversion of two forms of the cofactor nicotinamide adenine dinucleotide (NADH and NAD(+)). Recent experimental results suggest that multiple substates exist within the Michaelis complex of LDH, and they show a strong variance in their propensity toward the on-enzyme chemical step. In this study, microsecond-scale all-atom molecular dynamics simulations were performed on LDH to explore the free energy landscape of the Michaelis complex, and network analysis was used to characterize the distribution of the conformations. Our results provide a detailed view of the kinetic network of the Michaelis complex and the structures of the substates at atomistic scales. They also shed light on the complete picture of the catalytic mechanism of LDH.
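
    The network-analysis step can be sketched as follows, assuming the MD frames have already been clustered into discrete conformational substates; the trajectory here is a random stand-in, and the construction (transition counting into a directed graph) is generic rather than the authors' specific pipeline.

```python
import numpy as np
import networkx as nx

# Stand-in for a clustered MD trajectory: one substate label per frame.
rng = np.random.default_rng(3)
states = rng.integers(0, 4, size=5000)

# Count transitions between successive frames to build a kinetic network.
G = nx.DiGraph()
for a, b in zip(states[:-1], states[1:]):
    if a != b:
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Substate populations and the most connected ("hub") conformation.
pops = np.bincount(states) / states.size
print("populations:", np.round(pops, 3))
print("hub substate:", max(G.degree(weight="weight"), key=lambda kv: kv[1])[0])
```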

  14. Combustion performance and scale effect from N2O/HTPB hybrid rocket motor simulations

    NASA Astrophysics Data System (ADS)

    Shan, Fanli; Hou, Lingyun; Piao, Ying

    2013-04-01

    An HRM code for the simulation of N2O/HTPB hybrid rocket motor operation and scale effect analysis has been developed. This code can be used to calculate motor thrust and the distributions of physical properties inside the combustion chamber and nozzle during the operational phase by solving the unsteady Navier-Stokes equations using a corrected compressible difference scheme and a two-step, five-species combustion model. A dynamic fuel surface regression technique and a two-step calculation method, together with gas-solid coupling, are applied in the calculation of fuel regression and the determination of the combustion chamber wall profile as the fuel regresses. Both the calculated motor thrust from start-up to shut-down and the combustion chamber wall profile after motor operation are in good agreement with experimental data. The fuel regression rate equation and the relation between fuel regression rate and axial distance have been derived. Analysis of the results suggests improvements in combustion performance for the current hybrid rocket motor design and explains scale effects in the variation of fuel regression rate with combustion chamber diameter.
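
    A regression-rate law of the common form r = a * G^n can be recovered from flux/rate pairs by a log-log least-squares fit, as sketched below with illustrative numbers; the actual HRM-derived equation and coefficients are not reproduced here.

```python
import numpy as np

# Hypothetical pairs of oxidizer mass flux G and measured regression rate r.
G = np.array([40.0, 80.0, 120.0, 160.0, 200.0])   # kg m^-2 s^-1
r = np.array([0.55, 0.85, 1.08, 1.27, 1.43])      # mm s^-1 (illustrative)

# Linear least squares in log space: log r = log a + n log G.
n, log_a = np.polyfit(np.log(G), np.log(r), 1)
print(f"fitted law: r = {np.exp(log_a):.3f} * G^{n:.2f}")
```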

  15. A multiscale approach to accelerate pore-scale simulation of porous electrodes

    NASA Astrophysics Data System (ADS)

    Zheng, Weibo; Kim, Seung Hyun

    2017-04-01

    A new method to accelerate pore-scale simulation of porous electrodes is presented. The method combines the macroscopic approach with pore-scale simulation by decomposing a physical quantity into macroscopic and local variations. The multiscale method is applied to the potential equation in pore-scale simulation of a Proton Exchange Membrane Fuel Cell (PEMFC) catalyst layer, and validated with the conventional approach for pore-scale simulation. Results show that the multiscale scheme substantially reduces the computational cost without sacrificing accuracy.

  16. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    DOE PAGES

    Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.; ...

    2016-10-20

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics.
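
    One standard ITQ ingredient is the Bandt-Pompe permutation entropy; the sketch below computes its normalized form for a noise series and a sine wave, illustrating how dynamics (rather than error magnitude) separates signals. The parameters are illustrative.

```python
import numpy as np
from itertools import permutations
from math import factorial

def permutation_entropy(x, order=4, delay=1):
    """Normalized Bandt-Pompe permutation entropy of a 1D series."""
    patterns = {p: 0 for p in permutations(range(order))}
    for i in range(len(x) - (order - 1) * delay):
        window = x[i : i + order * delay : delay]
        patterns[tuple(np.argsort(window))] += 1     # ordinal pattern count
    counts = np.array([c for c in patterns.values() if c > 0], dtype=float)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum() / np.log(factorial(order))

rng = np.random.default_rng(4)
noise = rng.normal(size=5000)                     # maximally irregular
trend = np.sin(np.linspace(0, 40 * np.pi, 5000))  # highly ordered
print(f"noise: {permutation_entropy(noise):.3f}, sine: {permutation_entropy(trend):.3f}")
```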

  17. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics.

  18. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    PubMed Central

    Sippel, Sebastian; Mahecha, Miguel D.; Hauhs, Michael; Bodesheim, Paul; Kaminski, Thomas; Gans, Fabian; Rosso, Osvaldo A.

    2016-01-01

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. We demonstrate here that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics. PMID:27764187

  19. The role of ecosystem-atmosphere interactions in simulated Amazonian precipitation decrease and forest dieback under global climate warming

    NASA Astrophysics Data System (ADS)

    Betts, R. A.; Cox, P. M.; Collins, M.; Harris, P. P.; Huntingford, C.; Jones, C. D.

    A suite of simulations with the HadCM3LC coupled climate-carbon cycle model is used to examine the various forcings and feedbacks involved in the simulated precipitation decrease and forest dieback. Rising atmospheric CO2 is found to contribute 20% to the precipitation reduction through the physiological forcing of stomatal closure, with 80% of the reduction being seen when stomatal closure was excluded and only radiative forcing by CO2 was included. The forest dieback exerts two positive feedbacks on the precipitation reduction; a biogeophysical feedback through reduced forest cover suppressing local evaporative water recycling, and a biogeochemical feedback through the release of CO2 contributing to an accelerated global warming. The precipitation reduction is enhanced by 20% by the biogeophysical feedback, and 5% by the carbon cycle feedback from the forest dieback. This analysis helps to explain why the Amazonian precipitation reduction simulated by HadCM3LC is more extreme than that simulated in other GCMs; in the fully-coupled, climate-carbon cycle simulation, approximately half of the precipitation reduction in Amazonia is attributable to a combination of physiological forcing and biogeophysical and global carbon cycle feedbacks, which are generally not included in other GCM simulations of future climate change. The analysis also demonstrates the potential contribution of regional-scale climate and ecosystem change to uncertainties in global CO2 and climate change projections. Moreover, the importance of feedbacks suggests that a human-induced increase in forest vulnerability to climate change may have implications for regional and global scale climate sensitivity.

  20. Multiscale combination of climate model simulations and proxy records over the last millennium

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Xing, Pei; Luo, Yong; Nie, Suping; Zhao, Zongci; Huang, Jianbin; Tian, Qinhua

    2018-05-01

    To highlight the compatibility of climate model simulation and proxy reconstruction at different timescales, a timescale separation merging method combining proxy records and climate model simulations is presented. Annual mean surface temperature anomalies for the last millennium (851-2005 AD) at various scales over the land of the Northern Hemisphere were reconstructed with 2° × 2° spatial resolution, using an optimal interpolation (OI) algorithm. All target series were decomposed using an ensemble empirical mode decomposition method followed by power spectral analysis. Four typical components were obtained at inter-annual, decadal, multidecadal, and centennial timescales. A total of 323 temperature-sensitive proxy chronologies were incorporated after screening for each component. By scaling the proxy components using variance matching and applying a localized OI algorithm to all four components point by point, we obtained merged surface temperatures. Independent validation indicates that the most significant improvement was for components at the inter-annual scale, but this became less evident with increasing timescales. In mid-latitude land areas, 10-30% of grids were significantly corrected at the inter-annual scale. By assimilating the proxy records, the merged results reduced the gap in response to volcanic forcing between a pure reconstruction and simulation. Difficulty remained in verifying the centennial information and quantifying corresponding uncertainties, so additional effort should be devoted to this aspect in future research.
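
    Two ingredients of the merging step can be sketched compactly: variance matching of a proxy component to a target, and a one-point optimal interpolation update. All numbers below are toy values; the actual method applies a localized OI to each of the four timescale components, grid point by grid point.

```python
import numpy as np

rng = np.random.default_rng(5)

# (1) Variance matching: rescale a proxy component to a target variance.
proxy = rng.normal(size=300) * 3.0 + 1.0          # proxy component (wrong scale)
target_std = 0.5                                   # instrumental-era target std
proxy_scaled = proxy / proxy.std() * target_std

# (2) One-point optimal interpolation (OI) update of a model background.
x_b = 0.2        # background (simulation) temperature anomaly, K
sigma_b2 = 0.3   # background error variance
y = proxy_scaled[0]
sigma_o2 = 0.2   # proxy ("observation") error variance

K = sigma_b2 / (sigma_b2 + sigma_o2)               # scalar OI gain
x_a = x_b + K * (y - x_b)                          # analysis value
print(f"background {x_b:.2f} -> analysis {x_a:.2f} (gain {K:.2f})")
```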

  1. Unraveling Hydrophobic Interactions at the Molecular Scale Using Force Spectroscopy and Molecular Dynamics Simulations.

    PubMed

    Stock, Philipp; Monroe, Jacob I; Utzig, Thomas; Smith, David J; Shell, M Scott; Valtiner, Markus

    2017-03-28

    Interactions between hydrophobic moieties steer ubiquitous processes in aqueous media, including the self-organization of biologic matter. Recent decades have seen tremendous progress in understanding these for macroscopic hydrophobic interfaces. Yet, it is still a challenge to experimentally measure hydrophobic interactions (HIs) at the single-molecule scale and thus to compare with theory. Here, we present a combined experimental-simulation approach to directly measure and quantify the sequence dependence and additivity of HIs in peptide systems at the single-molecule scale. We combine dynamic single-molecule force spectroscopy on model peptides with fully atomistic, both equilibrium and nonequilibrium, molecular dynamics (MD) simulations of the same systems. Specifically, we mutate a flexible (GS)5 peptide scaffold with increasing numbers of hydrophobic leucine monomers and measure the peptides' desorption from hydrophobic self-assembled monolayer surfaces. Based on the analysis of nonequilibrium work trajectories, we measure an interaction free energy that scales linearly with 3.0-3.4 kBT per leucine. In good agreement, simulations indicate a similar trend with 2.1 kBT per leucine, while also providing a detailed molecular view into HIs. This approach potentially provides a roadmap for directly extracting qualitative and quantitative single-molecule interactions at solid/liquid interfaces in a wide range of fields, including interactions at biointerfaces and adhesive interactions in industrial applications.
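
    Free energies are commonly extracted from nonequilibrium work trajectories via the Jarzynski equality, dF = -kT ln<exp(-W/kT)>; the sketch below applies a numerically stable version of this average to synthetic work values and is not the authors' exact estimator.

```python
import numpy as np
from scipy.special import logsumexp

kT = 1.0                                         # express work in units of kT
rng = np.random.default_rng(6)
W = rng.normal(loc=12.0, scale=2.0, size=500)    # synthetic work values with
                                                 # dissipative spread, in kT

# Numerically stable exponential average: ln<exp(-W/kT)> via logsumexp.
dF = -(logsumexp(-W / kT) - np.log(W.size)) * kT
print(f"Jarzynski estimate: dF = {dF:.2f} kT (mean work {W.mean():.2f} kT)")
```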

  2. [Research progress and development trend of quantitative assessment techniques for urban thermal environment].

    PubMed

    Sun, Tie Gang; Xiao, Rong Bo; Cai, Yun Nan; Wang, Yao Wu; Wu, Chang Guang

    2016-08-01

    Quantitative assessment of the urban thermal environment has become a focus of urban climate and environmental science since the concept of the urban heat island was proposed. With the continual development of spatial information and computer simulation technologies, substantial progress has been made in quantitative assessment techniques and methods for the urban thermal environment. These techniques have evolved from statistical analysis of the urban-scale thermal environment based on historical weather station data toward dynamic simulation and forecasting of the thermal environment at various scales. This study reviewed the development of ground meteorological observation, thermal infrared remote sensing, and numerical simulation, and summarized the potential advantages and disadvantages, applicability, and development trends of these techniques, aiming to add fundamental knowledge for understanding urban thermal environment assessment and optimization.

  3. Multidimensional Multiphysics Simulation of TRISO Particle Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. D. Hales; R. L. Williamson; S. R. Novascone

    2013-11-01

    Multidimensional multiphysics analysis of TRISO-coated particle fuel using the BISON finite-element based nuclear fuels code is described. The governing equations and material models applicable to particle fuel and implemented in BISON are outlined. Code verification based on a recent IAEA benchmarking exercise is described, and excellent comparisons are reported. Multiple TRISO-coated particles of increasing geometric complexity are considered. It is shown that the code's ability to perform large-scale parallel computations permits application to complex 3D phenomena, while very efficient solutions for either 1D spherically symmetric or 2D axisymmetric geometries are straightforward. Additionally, the flexibility to easily include new physical and material models and the uncomplicated ability to couple to lower length scale simulations make BISON a powerful tool for simulation of coated-particle fuel. Future code development activities and potential applications are identified.

  4. A system for automatic evaluation of simulation software

    NASA Technical Reports Server (NTRS)

    Ryan, J. P.; Hodges, B. C.

    1976-01-01

    Within the field of computer software, simulation and verification are complementary processes. Simulation methods can be used to verify software by performing variable range analysis. More general verification procedures, such as those described in this paper, can be implicitly viewed as attempts at modeling the end-product software. From the software requirement methodology, each component of the verification system has some element of simulation to it. Conversely, general verification procedures can be used to analyze simulation software. A dynamic analyzer is described which can be used to obtain properly scaled variables for an analog simulation, which is first digitally simulated. In a similar way, it is thought that the other system components, and indeed the whole system itself, have the potential of being effectively used in a simulation environment.

  5. Detrended fluctuation analysis as a regression framework: Estimating dependence at different scales

    NASA Astrophysics Data System (ADS)

    Kristoufek, Ladislav

    2015-02-01

    We propose a framework combining detrended fluctuation analysis with standard regression methodology. The method is built on detrended variances and covariances and is designed to estimate regression parameters at different scales and under potential nonstationarity and power-law correlations. The former feature allows for distinguishing between effects for a pair of variables from different temporal perspectives. The latter makes the method a significant improvement over standard least squares estimation. Theoretical claims are supported by Monte Carlo simulations. The method is then applied to selected examples from physics, finance, environmental science, and epidemiology. For most of the studied cases, the relationship between the variables of interest varies strongly across scales.
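
    The fluctuation-function backbone of DFA is easy to state in code; the sketch below implements first-order DFA for a single series (the regression extension adds detrended covariances between two series) and checks that white noise returns a Hurst exponent near 0.5.

```python
import numpy as np

def dfa_fluctuation(x, scales):
    """Detrended fluctuation function F(s); slope of log F vs log s ~ H."""
    profile = np.cumsum(x - np.mean(x))
    F = []
    for s in scales:
        n_seg = len(profile) // s
        msq = []
        for k in range(n_seg):
            seg = profile[k * s : (k + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrending
            msq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(msq)))
    return np.array(F)

rng = np.random.default_rng(7)
x = rng.normal(size=10000)                  # white noise: expect H ~ 0.5
scales = np.array([16, 32, 64, 128, 256])
H, _ = np.polyfit(np.log(scales), np.log(dfa_fluctuation(x, scales)), 1)
print(f"estimated Hurst exponent: {H:.2f}")
```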

  6. Large scale rigidity-based flexibility analysis of biomolecules

    PubMed Central

    Streinu, Ileana

    2016-01-01

    KINematics And RIgidity (KINARI) is an on-going project for in silico flexibility analysis of proteins. The new version of the software, Kinari-2, extends the functionality of our free web server KinariWeb, incorporates advanced web technologies, emphasizes the reproducibility of its experiments, and makes substantially improved tools available to the user. It is designed specifically for large scale experiments, in particular for (a) very large molecules, including bioassemblies with a high degree of symmetry such as viruses and crystals, (b) large collections of related biomolecules, such as those obtained through simulated dilutions, mutations, or conformational changes from various types of dynamics simulations, and (c) the large, idiosyncratic, publicly available repository of biomolecules, the Protein Data Bank, on which it is intended to work as seamlessly as possible. We describe the system design, along with the main data processing, computational, mathematical, and validation challenges underlying this phase of the KINARI project. PMID:26958583

  7. Sunyaev-Zel'dovich Effect and X-ray Scaling Relations from Weak-Lensing Mass Calibration of 32 SPT Selected Galaxy Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dietrich, J.P.; et al.

    Uncertainty in the mass-observable scaling relations is currently the limiting factor for galaxy cluster based cosmology. Weak gravitational lensing can provide a direct mass calibration and reduce the mass uncertainty. We present new ground-based weak lensing observations of 19 South Pole Telescope (SPT) selected clusters and combine them with previously reported space-based observations of 13 galaxy clusters to constrain the cluster mass scaling relations with the Sunyaev-Zel'dovich effect (SZE), the cluster gas mass $M_\mathrm{gas}$, and $Y_\mathrm{X}$, the product of $M_\mathrm{gas}$ and X-ray temperature. We extend a previously used framework for the analysis of scaling relations and cosmological constraints obtained from SPT-selected clusters to make use of weak lensing information. We introduce a new approach to estimate the effective average redshift distribution of background galaxies and quantify a number of systematic errors affecting the weak lensing modelling. These errors include a calibration of the bias incurred by fitting a Navarro-Frenk-White profile to the reduced shear using $N$-body simulations. We blind the analysis to avoid confirmation bias. We are able to limit the systematic uncertainties to 6.4% in cluster mass (68% confidence). Our constraints on the mass-X-ray observable scaling relation parameters are consistent with those obtained by earlier studies, and our constraints for the mass-SZE scaling relation are consistent with the simulation-based prior used in the most recent SPT-SZ cosmology analysis. We can now replace the external mass calibration priors used in previous SPT-SZ cosmology studies with a direct, internal calibration obtained on the same clusters.
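
    At its core, a mass-observable calibration is a power-law fit with scatter; the sketch below fits M = A (Y_X / Y_piv)^B in log space to synthetic clusters, ignoring selection effects and the full likelihood machinery of the actual analysis.

```python
import numpy as np

# Synthetic clusters standing in for weak-lensing masses and X-ray observables.
rng = np.random.default_rng(10)
Y_piv = 3e14
Y_X = 10 ** rng.uniform(13.5, 15.0, size=32)              # X-ray observable
M_true = 5e14 * (Y_X / Y_piv) ** 0.57                     # assumed true relation
M_wl = M_true * np.exp(rng.normal(scale=0.25, size=32))   # lensing scatter

# Power-law fit in log space: ln M = ln A + B ln(Y_X / Y_piv).
B, lnA = np.polyfit(np.log(Y_X / Y_piv), np.log(M_wl), 1)
print(f"A = {np.exp(lnA):.2e} Msun, slope B = {B:.2f}")
```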

  8. Process metallurgy simulation for metal drawing process optimization by using two-scale finite element method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakamachi, Eiji; Yoshida, Takashi; Yamaguchi, Toshihiko

    2014-10-06

    We developed a two-scale FE analysis procedure based on the crystallographic homogenization method by considering the hierarchical structure of poly-crystal aluminium alloy metal. It can be characterized as the combination of two scales of structure: the microscopic polycrystal structure and the macroscopic elastic-plastic continuum. The micro polycrystal structure is modeled as a three-dimensional representative volume element (RVE), discretized by 3×3×3 eight-node solid finite elements carrying 216 crystal orientations. This FE analysis code can predict the deformation, strain and stress evolutions in wire drawing processes at the macro-scale, and the crystal texture and hardening evolutions at the micro-scale. In this study, we analyzed the texture evolution in wire drawing processes with our two-scale FE analysis code under various drawing die angles. We evaluated the texture evolution in the surface and center regions of the wire cross section to clarify the effects of processing conditions on the texture evolution.

  9. Process metallurgy simulation for metal drawing process optimization by using two-scale finite element method

    NASA Astrophysics Data System (ADS)

    Nakamachi, Eiji; Yoshida, Takashi; Kuramae, Hiroyuki; Morimoto, Hideo; Yamaguchi, Toshihiko; Morita, Yusuke

    2014-10-01

    We developed a two-scale FE analysis procedure based on the crystallographic homogenization method by considering the hierarchical structure of poly-crystal aluminium alloy metal. It can be characterized as the combination of two scales of structure: the microscopic polycrystal structure and the macroscopic elastic-plastic continuum. The micro polycrystal structure is modeled as a three-dimensional representative volume element (RVE), discretized by 3×3×3 eight-node solid finite elements carrying 216 crystal orientations. This FE analysis code can predict the deformation, strain and stress evolutions in wire drawing processes at the macro-scale, and the crystal texture and hardening evolutions at the micro-scale. In this study, we analyzed the texture evolution in wire drawing processes with our two-scale FE analysis code under various drawing die angles. We evaluated the texture evolution in the surface and center regions of the wire cross section to clarify the effects of processing conditions on the texture evolution.

  10. A role for self-gravity at multiple length scales in the process of star formation.

    PubMed

    Goodman, Alyssa A; Rosolowsky, Erik W; Borkin, Michelle A; Foster, Jonathan B; Halle, Michael; Kauffmann, Jens; Pineda, Jaime E

    2009-01-01

    Self-gravity plays a decisive role in the final stages of star formation, where dense cores (size approximately 0.1 parsecs) inside molecular clouds collapse to form star-plus-disk systems. But self-gravity's role at earlier times (and on larger length scales, such as approximately 1 parsec) is unclear; some molecular cloud simulations that do not include self-gravity suggest that 'turbulent fragmentation' alone is sufficient to create a mass distribution of dense cores that resembles, and sets, the stellar initial mass function. Here we report a 'dendrogram' (hierarchical tree-diagram) analysis that reveals that self-gravity plays a significant role over the full range of possible scales traced by (13)CO observations in the L1448 molecular cloud, but not everywhere in the observed region. In particular, more than 90 per cent of the compact 'pre-stellar cores' traced by peaks of dust emission are projected on the sky within one of the dendrogram's self-gravitating 'leaves'. As these peaks mark the locations of already-forming stars, or of those probably about to form, a self-gravitating cocoon seems a critical condition for their existence. Turbulent fragmentation simulations without self-gravity, even of unmagnetized isothermal material, can yield mass and velocity power spectra very similar to what is observed in clouds like L1448. But a dendrogram of such a simulation shows that nearly all the gas in it (much more than in the observations) appears to be self-gravitating. A potentially significant role for gravity in 'non-self-gravitating' simulations suggests inconsistency between simulation assumptions and output, and that it is necessary to include self-gravity in any realistic simulation of the star-formation process on subparsec scales.
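
    Dendrogram segmentation of an intensity map or cube is available in the open-source astrodendro package; the sketch below, with invented data and illustrative thresholds, shows the basic call (the study's own parameter choices are not reproduced here).

```python
import numpy as np
from astrodendro import Dendrogram   # third-party package implementing dendrograms

# Stand-in for a 2D (or 3D) intensity array such as a 13CO map.
rng = np.random.default_rng(8)
data = rng.random((128, 128))
data[40:60, 40:60] += 2.0            # an embedded bright "core"

d = Dendrogram.compute(data,
                       min_value=1.0,   # ignore faint emission
                       min_delta=0.5,   # required significance of a new leaf
                       min_npix=10)     # smallest allowed structure
print(f"{len(d.leaves)} leaves; {len(d.trunk)} independent trunks")
```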

  11. Water Balance in the Amazon Basin from a Land Surface Model Ensemble

    NASA Technical Reports Server (NTRS)

    Getirana, Augusto C. V.; Dutra, Emanuel; Guimberteau, Matthieu; Kam, Jonghun; Li, Hong-Yi; Decharme, Bertrand; Zhang, Zhengqiu; Ducharne, Agnes; Boone, Aaron; Balsamo, Gianpaolo; hide

    2014-01-01

    Despite recent advances in land surface modeling and remote sensing, estimates of the global water budget are still fairly uncertain. This study aims to evaluate the water budget of the Amazon basin based on several state-of-the-art land surface model (LSM) outputs. Water budget variables (terrestrial water storage TWS, evapotranspiration ET, surface runoff R, and base flow B) are evaluated at the basin scale using both remote sensing and in situ data. Meteorological forcings at a 3-hourly time step and 1° spatial resolution were used to run 14 LSMs. Precipitation datasets that have been rescaled to match monthly Global Precipitation Climatology Project (GPCP) and Global Precipitation Climatology Centre (GPCC) datasets and the daily Hydrologie du Bassin de l'Amazone (HYBAM) dataset were used to perform three experiments. The Hydrological Modeling and Analysis Platform (HyMAP) river routing scheme was forced with R and B, and simulated discharges are compared against observations at 165 gauges. Simulated ET and TWS are compared against FLUXNET and MOD16A2 evapotranspiration datasets and Gravity Recovery and Climate Experiment (GRACE) TWS estimates in two subcatchments of main tributaries (Madeira and Negro Rivers). At the basin scale, simulated ET ranges from 2.39 to 3.26 mm day⁻¹, and a low spatial correlation between ET and precipitation indicates that evapotranspiration does not depend on water availability over most of the basin. Results also show that other simulated water budget components vary significantly as a function of both the LSM and the precipitation dataset, but simulated TWS generally agrees with GRACE estimates at the basin scale. The best water budget simulations resulted from experiments using HYBAM, mostly explained by a denser rainfall gauge network and the rescaling at a finer temporal scale.
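
    A minimal sketch of the basin-scale budget bookkeeping used in such evaluations: the implied storage change dTWS/dt = P - ET - (R + B), whose cumulative sum can be compared against GRACE anomalies. The monthly values are illustrative placeholders, not the Amazon datasets.

        import numpy as np

        P  = np.array([6.5, 7.0, 6.8, 5.9])   # precipitation, mm/day
        ET = np.array([2.8, 2.9, 3.0, 3.1])   # evapotranspiration, mm/day
        R  = np.array([1.5, 1.8, 2.0, 1.7])   # surface runoff, mm/day
        B  = np.array([1.0, 1.1, 1.2, 1.1])   # base flow, mm/day

        dTWS = P - ET - (R + B)               # implied storage change, mm/day
        print("implied dTWS/dt (mm/day):", dTWS)

        # Cumulative storage (mm, ~30-day months), de-meaned as GRACE anomalies are.
        tws = np.cumsum(dTWS * 30)
        print("implied TWS anomaly (mm):", tws - tws.mean())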

  12. Toward a better integration of roughness in rockfall simulations - a sensitivity study with the RockyFor3D model

    NASA Astrophysics Data System (ADS)

    Monnet, Jean-Matthieu; Bourrier, Franck; Milenkovic, Milutin

    2017-04-01

    Advances in numerical simulation and analysis of real-size field experiments have supported the development of process-based rockfall simulation models. Availability of high resolution remote sensing data and high-performance computing now makes it possible to implement them for operational applications, e.g. risk zoning and protection structure design. One key parameter regarding rock propagation is the surface roughness, sometimes defined as the variation in height perpendicular to the slope (Pfeiffer and Bowen, 1989). Roughness-related input parameters for rockfall models are usually determined by experts in the field. In the RockyFor3D model (Dorren, 2015), three values related to the distribution of obstacles (deposited rocks, stumps, fallen trees, etc., as seen from the incoming rock) relative to the average slope are estimated. The use of high resolution digital terrain models (DTMs) questions both the scale usually adopted by experts for roughness assessment and the relevance of modeling hypotheses regarding the rock/ground interaction. Indeed, experts interpret the surrounding terrain as obstacles or ground depending on the overall visibility and on the nature of objects. Digital models represent the terrain with a certain amount of smoothing, depending on the sensor capacities. Besides, the rock rebound on the ground is modeled by changes in the velocities of the gravity center of the block due to impact. Thus, the use of a DTM with a resolution smaller than the block size might have little relevance while increasing the computational burden. The objective of this work is to investigate the issue of scale relevance with simulations based on RockyFor3D in order to derive guidelines for roughness estimation by field experts. First, a sensitivity analysis is performed to identify the combinations of parameters (slope, soil roughness parameter, rock size) for which the roughness values have a critical effect on rock propagation on a regular hillside. Second, a more complex hillside is simulated by combining three components: a) a global trend (planar surface), b) local systematic components (sine waves), c) random roughness (Gaussian, zero-mean noise). The parameters for simulating these components are estimated for three typical scenarios of rockfall terrain: soft soil, fine scree and coarse scree, based on expert knowledge and available airborne and terrestrial laser scanning data. For each scenario, the reference terrain is created and used to compute input data for RockyFor3D simulations at different scales, i.e. DTMs with resolutions from 0.5 m to 20 m and associated roughness parameters. Subsequent analysis mainly focuses on the sensitivity of simulations both in terms of run-out envelope and kinetic energy distribution. Guidelines drawn from the results are expected to help experts handle the scale issue while integrating remote sensing data and field measurements of roughness in rockfall simulations.
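
    A short sketch of the three-component synthetic hillside described above (planar trend, sine waves, Gaussian noise), plus block-averaging to coarser DTM resolutions; amplitudes and wavelengths are illustrative, not the calibrated scenario values.

        import numpy as np

        nx, ny, dx = 200, 200, 0.5                   # grid size, 0.5 m resolution
        x = np.arange(nx) * dx
        y = np.arange(ny) * dx
        X, Y = np.meshgrid(x, y)

        slope_deg = 35.0
        trend = -np.tan(np.radians(slope_deg)) * X   # (a) planar trend
        waves = 0.3 * np.sin(2 * np.pi * X / 8.0) \
              + 0.2 * np.sin(2 * np.pi * Y / 5.0)    # (b) systematic sine components
        rng = np.random.default_rng(42)
        noise = rng.normal(0.0, 0.1, size=X.shape)   # (c) Gaussian, zero-mean roughness

        dtm = trend + waves + noise
        # Coarser DTMs for the scale study can be produced by block-averaging:
        coarse = dtm.reshape(ny // 4, 4, nx // 4, 4).mean(axis=(1, 3))  # 2 m resolution
        print(dtm.shape, coarse.shape)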

  13. A numerical model of a HIL scaled roller rig for simulation of wheel-rail degraded adhesion condition

    NASA Astrophysics Data System (ADS)

    Conti, Roberto; Meli, Enrico; Pugi, Luca; Malvezzi, Monica; Bartolini, Fabio; Allotta, Benedetto; Rindi, Andrea; Toni, Paolo

    2012-05-01

    Scaled roller rigs used for railway applications play a fundamental role in the development of new technologies and new devices, combining the hardware in the loop (HIL) benefits with a reduction of the economic investment. The main problem of the scaled roller rig with respect to full scale ones is the increased complexity due to the scaling factors. For this reason, before building the test rig, the development of a software model of the HIL system can be useful to analyse the system behaviour in different operating conditions. One has to consider the multi-body behaviour of the scaled roller rig, the controller, and the model of the virtual vehicle whose dynamics have to be reproduced on the rig. The main purpose of this work is the development of a complete model that satisfies the previous requirements, and in particular the performance analysis of the controller and of the dynamical behaviour of the scaled roller rig when disturbances are simulated together with low adhesion conditions. Since the scaled roller rig will be used to simulate degraded adhesion conditions, an accurate and realistic wheel-roller contact model also has to be included in the model. The contact model consists of two parts: the contact point detection and the adhesion model. The first part is based on a numerical method described in previous studies for the wheel-rail case and modified to simulate the three-dimensional contact between revolute surfaces (wheel-roller). The second part consists of the evaluation of the contact forces by means of the Hertz theory for the normal problem and the Kalker theory for the tangential problem. Some numerical tests were performed; in particular, low adhesion conditions were simulated, and bogie hunting and dynamic imbalance of the wheelsets were introduced. The tests were devoted to verifying the robustness of the control system with respect to some of the more frequent disturbances that may influence the roller rig dynamics. In particular, we verified that the wheelset imbalance could significantly influence system performance; to reduce the effect of this disturbance, a multistate filter was designed.
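
    A hedged sketch of the normal part of such a contact model: Hertz theory for two elastic bodies, here reduced to the sphere-on-sphere formula F = (4/3) E* √R* δ^(3/2); the radii and moduli are illustrative, and the tangential creep forces would follow separately from Kalker's theory.

        import numpy as np

        E1 = E2 = 210e9        # Pa, steel
        nu1 = nu2 = 0.3
        R1, R2 = 0.0457, 0.16  # m, wheel and roller radii (illustrative scaled values)

        E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)  # effective modulus
        R_star = 1.0 / (1.0 / R1 + 1.0 / R2)                    # effective radius

        def hertz_force(delta):
            # Normal force (N) for penetration delta (m).
            return (4.0 / 3.0) * E_star * np.sqrt(R_star) * delta**1.5

        print(hertz_force(np.array([1e-6, 5e-6, 1e-5])))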

  14. Finite-Time and -Size Scalings in the Evaluation of Large Deviation Functions. Numerical Analysis in Continuous Time

    NASA Astrophysics Data System (ADS)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study by means of simulating a large number of copies of the system, which are subjected to a selection rule that favors the rare trajectories of interest. However, such algorithms are plagued by finite simulation-time and finite population-size effects that can render their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of the rare trajectories. We use these scalings in order to propose a numerical approach which allows one to extract the infinite-time and infinite-size limit of these estimators.
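
    A compact sketch of the extrapolation idea, assuming leading 1/T and 1/N corrections: fit λ(T, N) ≈ λ∞ + a/T + b/N by least squares over a grid of simulation times and population sizes, then read off the intercept. The estimator values below are synthetic.

        import numpy as np

        # Synthetic estimates for several (time T, population N) pairs, built from a
        # known lambda_inf = -0.50 plus 1/T and 1/N corrections.
        T = np.array([10., 10., 20., 20., 50., 50., 100., 100.])
        N = np.array([100., 500., 100., 500., 100., 500., 100., 500.])
        lam = -0.50 + 2.0 / T + 30.0 / N

        A = np.column_stack([np.ones_like(T), 1.0 / T, 1.0 / N])
        coef, *_ = np.linalg.lstsq(A, lam, rcond=None)
        print("lambda_inf estimate:", coef[0])   # recovers ~ -0.50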

  15. Turbulence in simulated H II regions

    NASA Astrophysics Data System (ADS)

    Medina, S.-N. X.; Arthur, S. J.; Henney, W. J.; Mellema, G.; Gazol, A.

    2014-12-01

    We investigate the scale dependence of fluctuations inside a realistic model of an evolving turbulent H II region and to what extent these may be studied observationally. We find that the multiple scales of energy injection from champagne flows and the photoionization of clumps and filaments lead to a flatter spectrum of fluctuations than would be expected from top-down turbulence driven at the largest scales. The traditional structure function approach to the observational study of velocity fluctuations is shown to be incapable of reliably determining the velocity power spectrum of our simulation. We find that a more promising approach is the Velocity Channel Analysis technique of Lazarian & Pogosyan (2000), which, despite being intrinsically limited by thermal broadening, can successfully recover the logarithmic slope of the velocity power spectrum to a precision of ±0.1 from high-resolution optical emission-line spectroscopy.
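
    A minimal sketch of the underlying measurement: recovering the logarithmic slope of a power spectrum by a log-log fit, here on a 1D synthetic field built with a prescribed slope of -2.4 (illustrative; the paper works with emission-line channel maps).

        import numpy as np

        rng = np.random.default_rng(1)
        n = 4096
        k = np.fft.rfftfreq(n, d=1.0)[1:]             # positive wavenumbers
        slope_true = -2.4
        amp = k ** (slope_true / 2.0)                 # P(k) ~ k^slope
        phases = rng.uniform(0, 2 * np.pi, size=k.size)
        spec = np.concatenate([[0], amp * np.exp(1j * phases)])
        v = np.fft.irfft(spec, n=n)                   # synthetic velocity field

        P = np.abs(np.fft.rfft(v))[1:] ** 2
        fit = np.polyfit(np.log10(k), np.log10(P), 1)
        print(f"recovered slope: {fit[0]:.2f} (true {slope_true})")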

  16. Bridging scales through multiscale modeling: a case study on protein kinase A.

    PubMed

    Boras, Britton W; Hirakis, Sophia P; Votapka, Lane W; Malmstrom, Robert D; Amaro, Rommie E; McCulloch, Andrew D

    2015-01-01

    The goal of multiscale modeling in biology is to use structurally based physico-chemical models to integrate across temporal and spatial scales of biology and thereby improve mechanistic understanding of, for example, how a single mutation can alter organism-scale phenotypes. This approach may also inform therapeutic strategies or identify candidate drug targets that might otherwise have been overlooked. However, in many cases, it remains unclear how best to synthesize information obtained from various scales and analysis approaches, such as atomistic molecular models, Markov state models (MSM), subcellular network models, and whole cell models. In this paper, we use protein kinase A (PKA) activation as a case study to explore how computational methods that model different physical scales can complement each other and integrate into an improved multiscale representation of the biological mechanisms. Using measured crystal structures, we show how molecular dynamics (MD) simulations coupled with atomic-scale MSMs can provide conformations for Brownian dynamics (BD) simulations to feed transitional states and kinetic parameters into protein-scale MSMs. We discuss how milestoning can give reaction probabilities and forward-rate constants of cAMP association events by seamlessly integrating MD and BD simulation scales. These rate constants coupled with MSMs provide a robust representation of the free energy landscape, enabling access to kinetic and thermodynamic parameters unavailable from current experimental data. These approaches have helped to illuminate the cooperative nature of PKA activation in response to distinct cAMP binding events. Collectively, this approach exemplifies a general strategy for multiscale model development that is applicable to a wide range of biological problems.
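
    A toy sketch of the protein-scale MSM ingredient: a row-stochastic transition matrix over a few conformational states, its stationary distribution, and forward propagation. The 3-state matrix is illustrative, not fitted PKA data.

        import numpy as np

        T = np.array([[0.90, 0.08, 0.02],     # e.g. apo, one cAMP bound, two bound
                      [0.10, 0.80, 0.10],
                      [0.02, 0.18, 0.80]])

        # Stationary distribution: left eigenvector of T with eigenvalue 1.
        w, v = np.linalg.eig(T.T)
        pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
        pi /= pi.sum()
        print("stationary populations:", pi)

        # Propagate an initial population forward 50 lag times.
        p = np.array([1.0, 0.0, 0.0])
        for _ in range(50):
            p = p @ T
        print("after 50 lag times:", p)       # converges toward pi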

  17. Estimation of Handgrip Force from SEMG Based on Wavelet Scale Selection.

    PubMed

    Wang, Kai; Zhang, Xianmin; Ota, Jun; Huang, Yanjiang

    2018-02-24

    This paper proposes a nonlinear correlation-based wavelet scale selection technology to select the effective wavelet scales for the estimation of handgrip force from surface electromyograms (SEMG). The SEMG signal corresponding to gripping force was collected from extensor and flexor forearm muscles during a force-varying analysis task. We performed a computational sensitivity analysis on the initial nonlinear SEMG-handgrip force model. To explore the nonlinear correlation between ten wavelet scales and handgrip force, a large-scale iteration based on Monte Carlo simulation was conducted. To choose a suitable combination of scales, we proposed a rule to combine wavelet scales based on the sensitivity of each scale, and selected the appropriate combination of wavelet scales based on sequence combination analysis (SCA). The results of SCA indicated that scale combination VI is suitable for estimating force from the extensors and combination V is suitable for the flexors. The proposed method was compared to two former methods through prolonged static and force-varying contraction tasks. The experimental results showed that the root mean square errors derived by the proposed method for both static and force-varying contraction tasks were less than 20%. The accuracy and robustness of the handgrip force estimates derived by the proposed method are better than those obtained by the former methods.
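
    A hedged sketch of the scale-selection idea using PyWavelets: a continuous wavelet transform of a synthetic SEMG-like signal over ten scales, with scales ranked by correlation of their smoothed energy with force. The signals are simulated placeholders, not the study's recordings.

        import numpy as np
        import pywt

        rng = np.random.default_rng(7)
        t = np.linspace(0, 10, 2000)
        force = 0.5 * (1 + np.sin(0.5 * t))              # slowly varying grip force
        semg = force * rng.normal(0, 1, t.size)          # amplitude-modulated noise

        scales = np.arange(1, 11)                        # ten wavelet scales
        coef, _ = pywt.cwt(semg, scales, 'morl')
        energy = coef ** 2

        # Smooth per-scale energy and correlate it with force to rank scales.
        win = np.ones(100) / 100
        corrs = [np.corrcoef(np.convolve(e, win, mode='same'), force)[0, 1]
                 for e in energy]
        for s, c in zip(scales, corrs):
            print(f"scale {s:2d}: corr = {c:.2f}")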

  18. Scalable High-order Methods for Multi-Scale Problems: Analysis, Algorithms and Application

    DTIC Science & Technology

    2016-02-26

    The objective of this project was to develop a general CFD framework for multifidelity simulations to target multiscale problems but also resilience. Keywords: simulation, domain decomposition, CFD, gappy data, estimation theory, gap-tooth algorithm. Related publication: Karniadakis, "Resilient algorithms for reconstructing and simulating gappy flow fields in CFD", Fluid Dynamic Research, vol. 47, 051402, 2015.

  19. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Application of the stochastic parallel gradient descent algorithm for numerical simulation and analysis of the coherent summation of radiation from fibre amplifiers

    NASA Astrophysics Data System (ADS)

    Zhou, Pu; Wang, Xiaolin; Li, Xiao; Chen, Zilum; Xu, Xiaojun; Liu, Zejin

    2009-10-01

    Coherent summation of fibre laser beams, which can be scaled to a relatively large number of elements, is simulated by using the stochastic parallel gradient descent (SPGD) algorithm. The applicability of this algorithm for coherent summation is analysed and its optimisation parameters and bandwidth limitations are studied.
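
    A minimal SPGD iteration for the phase-locking problem analysed above: perturb all phases in parallel, measure the change in the far-field metric with a two-sided probe, and step along the perturbation. The gain and perturbation size are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 16
        phases = rng.uniform(0, 2 * np.pi, n)          # unknown initial phases

        def metric(ph):
            # Normalized on-axis far-field intensity of the coherent sum.
            return np.abs(np.exp(1j * ph).sum()) ** 2 / n**2

        gain, delta = 30.0, 0.1
        for _ in range(1000):
            du = delta * rng.choice([-1.0, 1.0], n)    # parallel random perturbation
            dJ = metric(phases + du) - metric(phases - du)
            phases += gain * dJ * du                   # SPGD update
        print("final metric (1.0 = perfectly phased):", round(metric(phases), 3))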

  20. Unsteady flow simulations around complex geometries using stationary or rotating unstructured grids

    NASA Astrophysics Data System (ADS)

    Sezer-Uzol, Nilay

    In this research, the computational analysis of three-dimensional, unsteady, separated, vortical flows around complex geometries is studied using stationary or moving unstructured grids. Two main engineering problems are investigated. The first is the unsteady simulation of a ship airwake, in which helicopter operations become even more challenging, using stationary unstructured grids. The second is the unsteady simulation of wind turbine rotor flow fields using moving unstructured grids that rotate with the whole three-dimensional rigid rotor geometry. The three-dimensional, unsteady, parallel, unstructured, finite volume flow solver PUMA2 is used for the computational fluid dynamics (CFD) simulations considered in this research. The code is modified to have a moving grid capability to perform three-dimensional, time-dependent rotor simulations. An instantaneous log-law wall model for large eddy simulations is also implemented in PUMA2 to investigate the very large Reynolds number flow fields of rotating blades. To verify the code modifications, several sample test cases are also considered. In addition, interdisciplinary studies aiming to provide new tools and insights to the aerospace and wind energy scientific communities were carried out during this research, focusing on the coupling of ship airwake CFD simulations with helicopter flight dynamics and control analysis, the coupling of wind turbine rotor CFD simulations with aeroacoustic analysis, and the analysis of these time-dependent, large-scale CFD simulations with the help of POSSE, a computational monitoring, steering and visualization tool.

  1. Grand Minima and Equatorward Propagation in a Cycling Stellar Convective Dynamo

    NASA Astrophysics Data System (ADS)

    Augustson, Kyle; Brun, Allan Sacha; Miesch, Mark; Toomre, Juri

    2015-08-01

    The 3D MHD Anelastic Spherical Harmonic code, using slope-limited diffusion, is employed to capture convective and dynamo processes achieved in a global-scale stellar convection simulation for a model solar-mass star rotating at three times the solar rate. The dynamo-generated magnetic fields possess many timescales, with a prominent polarity cycle occurring roughly every 6.2 years. The magnetic field forms large-scale toroidal wreaths, whose formation is tied to the low Rossby number of the convection in this simulation. The polarity reversals are linked to the weakened differential rotation and a resistive collapse of the large-scale magnetic field. An equatorial migration of the magnetic field is seen, which is due to the strong modulation of the differential rotation rather than a dynamo wave. A poleward migration of magnetic flux from the equator eventually leads to the reversal of the polarity of the high-latitude magnetic field. This simulation also enters an interval with reduced magnetic energy at low latitudes lasting roughly 16 years (about 2.5 polarity cycles), during which the polarity cycles are disrupted and after which the dynamo recovers its regular polarity cycles. An analysis of this grand minimum reveals that it likely arises through the interplay of symmetric and antisymmetric dynamo families. This intermittent dynamo state potentially results from the simulation's relatively low magnetic Prandtl number. A mean-field-based analysis of this dynamo simulation demonstrates that it is of the α-Ω type. The timescales that appear to be relevant to the magnetic polarity reversal are also identified.

  2. GRAND MINIMA AND EQUATORWARD PROPAGATION IN A CYCLING STELLAR CONVECTIVE DYNAMO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Augustson, Kyle; Miesch, Mark; Brun, Allan Sacha

    2015-08-20

    The 3D MHD Anelastic Spherical Harmonic code, using slope-limited diffusion, is employed to capture convective and dynamo processes achieved in a global-scale stellar convection simulation for a model solar-mass star rotating at three times the solar rate. The dynamo-generated magnetic fields possess many timescales, with a prominent polarity cycle occurring roughly every 6.2 years. The magnetic field forms large-scale toroidal wreaths, whose formation is tied to the low Rossby number of the convection in this simulation. The polarity reversals are linked to the weakened differential rotation and a resistive collapse of the large-scale magnetic field. An equatorial migration of the magnetic field is seen, which is due to the strong modulation of the differential rotation rather than a dynamo wave. A poleward migration of magnetic flux from the equator eventually leads to the reversal of the polarity of the high-latitude magnetic field. This simulation also enters an interval with reduced magnetic energy at low latitudes lasting roughly 16 years (about 2.5 polarity cycles), during which the polarity cycles are disrupted and after which the dynamo recovers its regular polarity cycles. An analysis of this grand minimum reveals that it likely arises through the interplay of symmetric and antisymmetric dynamo families. This intermittent dynamo state potentially results from the simulation's relatively low magnetic Prandtl number. A mean-field-based analysis of this dynamo simulation demonstrates that it is of the α-Ω type. The timescales that appear to be relevant to the magnetic polarity reversal are also identified.

  3. Sensitivity Analysis of an ENteric Immunity SImulator (ENISI)-Based Model of Immune Responses to Helicobacter pylori Infection

    PubMed Central

    Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav

    2015-01-01

    Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290

  4. Sensitivity Analysis of an ENteric Immunity SImulator (ENISI)-Based Model of Immune Responses to Helicobacter pylori Infection.

    PubMed

    Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav

    2015-01-01

    Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close "neighborhood" of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa.
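
    A compact sketch of a global (rather than one-at-a-time) sensitivity scan in the spirit described above: Latin hypercube sampling over the parameter space and ranking by rank correlation with the output. The cheap toy function stands in for an expensive ABM run; it is not ENISI.

        import numpy as np
        from scipy.stats import qmc, spearmanr

        def model(theta):
            # Stand-in for one ABM simulation; p0 and p2 matter, p1 barely does.
            p0, p1, p2 = theta
            return p0 ** 2 + 0.05 * p1 + np.sin(3 * p2)

        sampler = qmc.LatinHypercube(d=3, seed=0)
        X = qmc.scale(sampler.random(n=256), [0, 0, 0], [1, 1, 1])
        y = np.array([model(x) for x in X])

        for i in range(3):
            rho, _ = spearmanr(X[:, i], y)
            print(f"parameter {i}: |Spearman rho| = {abs(rho):.2f}")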

  5. Simulation model of a variable-speed pumped-storage power plant in unstable operating conditions in pumping mode

    NASA Astrophysics Data System (ADS)

    Martínez-Lucas, G.; Pérez-Díaz, J. I.; Sarasúa, J. I.; Cavazzini, G.; Pavesi, G.; Ardizzon, G.

    2017-04-01

    This paper presents a dynamic simulation model of a laboratory-scale pumped-storage power plant (PSPP) operating in pumping mode with variable speed. The model considers the dynamic behavior of the conduits by means of an elastic water column approach, and synthetically generates both pressure and torque pulsations that reproduce the operation of the hydraulic machine in its instability region. The pressure and torque pulsations are each generated from a different set of sinusoidal functions. These functions were calibrated from the results of a CFD model, which was in turn validated against experimental data. Simulation model results match the numerical results of the CFD model with reasonable accuracy. The pump-turbine model (including the functions used to generate the pressure and torque pulsations) was up-scaled by hydraulic similarity according to the design parameters of a real PSPP and included in a dynamic simulation model of that PSPP. Preliminary conclusions on the impact of unstable operating conditions on penstock fatigue were obtained by means of a Monte Carlo simulation-based fatigue analysis.
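
    A small sketch of the pulsation-generator idea: synthesize a pressure (head) signal as a sum of calibrated sinusoids superposed on the mean operating point. The frequencies, amplitudes, and phases below are illustrative placeholders for the CFD-calibrated values.

        import numpy as np

        t = np.linspace(0, 2.0, 20000)              # s
        f0 = 6.25                                   # Hz, e.g. a rotating-stall frequency
        mean_head = 80.0                            # m

        # Each entry: (amplitude in m, harmonic of f0, phase in rad).
        components = [(4.0, 1.0, 0.0), (1.5, 2.0, 0.8), (0.6, 3.0, 2.1)]
        pulsation = sum(a * np.sin(2 * np.pi * k * f0 * t + ph)
                        for a, k, ph in components)
        head = mean_head + pulsation
        print("head range (m):", head.min().round(2), head.max().round(2))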

  6. A Compact Synchronous Cellular Model of Nonlinear Calcium Dynamics: Simulation and FPGA Synthesis Results.

    PubMed

    Soleimani, Hamid; Drakakis, Emmanuel M

    2017-06-01

    Recent studies have demonstrated that calcium is a widespread intracellular ion that controls a wide range of temporal dynamics in the mammalian body. The simulation and validation of such studies using experimental data would benefit from a fast, large-scale simulation and modelling tool. This paper presents a compact and fully reconfigurable cellular calcium model capable of mimicking the Hopf bifurcation phenomenon and various nonlinear responses of biological calcium dynamics. The proposed cellular model is synthesized on a digital platform for a single unit and a network model. Hardware synthesis, physical implementation on FPGA, and theoretical analysis confirm that the proposed cellular model can mimic the biological calcium behaviors with considerably low hardware overhead. The approach has the potential to speed up large-scale simulations of slow intracellular dynamics by sharing more cellular units in real time. To this end, various networks constructed by pipelining 10 k to 40 k cellular calcium units are compared with an equivalent simulation run on a standard PC workstation. Results show that the cellular hardware model is, on average, 83 times faster than the CPU version.

  7. A hybrid parallel framework for the cellular Potts model simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Yi; He, Kejing; Dong, Shoubin

    2009-01-01

    The Cellular Potts Model (CPM) has been widely used for biological simulations. However, most current implementations are either sequential or approximated, and cannot be used for large scale, complex 3D simulation. In this paper we present a hybrid parallel framework for CPM simulations. The time-consuming PDE solving, cell division, and cell reaction operations are distributed to clusters using the Message Passing Interface (MPI). The Monte Carlo lattice update is parallelized on a shared-memory SMP system using OpenMP. Because the Monte Carlo lattice update is much faster than the PDE solving and SMP systems are more and more common, this hybrid approach achieves good performance and high accuracy at the same time. Based on the parallel Cellular Potts Model, we studied avascular tumor growth using a multiscale model. The application and performance analysis show that the hybrid parallel framework is quite efficient. The hybrid parallel CPM can be used for large scale simulations (~10^8 sites) of the complex collective behavior of numerous cells (~10^6).
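
    A minimal serial sketch of the Monte Carlo lattice update that the framework parallelizes with OpenMP: propose copying a neighbor's spin and accept with the Metropolis rule on the adhesion-energy change (2D, adhesion term only; a full CPM adds volume and other constraints).

        import numpy as np

        rng = np.random.default_rng(0)
        L, T = 64, 1.0
        lattice = rng.integers(0, 4, size=(L, L))     # four "cells" (spin ids)
        moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

        def adhesion(lat, i, j, spin):
            # Adhesion energy at (i, j) if it held 'spin' (J = 1 for unlike neighbors).
            return sum(lat[(i + di) % L, (j + dj) % L] != spin for di, dj in moves)

        for _ in range(50000):
            i, j = rng.integers(0, L, 2)
            di, dj = moves[rng.integers(4)]
            new = lattice[(i + di) % L, (j + dj) % L]
            if new == lattice[i, j]:
                continue
            dE = adhesion(lattice, i, j, new) - adhesion(lattice, i, j, lattice[i, j])
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                lattice[i, j] = new                   # copy the neighbor's spin
        print("unique spins remaining:", np.unique(lattice).size)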

  8. Ignition sensitivity study of an energetic train configuration using experiments and simulation

    NASA Astrophysics Data System (ADS)

    Kim, Bohoon; Yu, Hyeonju; Yoh, Jack J.

    2018-06-01

    A full scale hydrodynamic simulation intended for the accurate description of shock-induced detonation transition was conducted as part of an ignition sensitivity analysis of an energetic component system. The system is composed of an exploding foil initiator (EFI), a donor explosive unit, a stainless steel gap, and an acceptor explosive. A series of velocity interferometer system for any reflector (VISAR) measurements was used to validate the hydrodynamic simulations, based on the reactive flow model that describes the initiation of energetic materials arranged in a train configuration. A numerical methodology with ignition and growth mechanisms for tracking multi-material boundary interactions, as well as the severely transient fluid-structure coupling between high explosive charges and the metal gap, is described. The free surface velocity measurement is used to evaluate the sensitivity of energetic components that are subjected to strong pressure waves. Then, the full scale hydrodynamic simulation is performed on the flyer-impacted initiation of an EFI-driven pyrotechnical system.

  9. Scaled-down particle-in-cell simulation of cathode plasma expansion in magnetically insulated coaxial diode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Danni; Zhang, Jun, E-mail: zhangjun@nudt.edu.cn; Zhong, Huihuang

    2016-03-15

    The expansion of cathode plasma in a magnetically insulated coaxial diode (MICD) is investigated in theory and with particle-in-cell (PIC) simulation. The temperature and density of the cathode plasma are about several eV and 10¹³–10¹⁶ cm⁻³, respectively, and its expansion velocity is on the order of a few cm/μs. Through hydrodynamic theory analysis, expressions for the expansion velocities in the axial and radial directions are obtained. The characteristics of cathode plasma expansion have been simulated through scaled-down PIC models. Simulation results indicate that the expansion velocity is dominated by the plasma density ratio rather than the static electric field; the electric field counteracts the plasma expansion rather than driving it. The axial guiding magnetic field only reduces the radial transport coefficients by a correction factor, not the axial ones. Both the outward and inward radial expansions of a MICD are suppressed by a much stronger guiding magnetic field and can even cease.

  10. Multiscale Analysis of Rapidly Rotating Dynamo Simulations

    NASA Astrophysics Data System (ADS)

    Orvedahl, Ryan; Calkins, Michael; Featherstone, Nicholas

    2017-11-01

    The magnetic fields of planets and stars are generated by dynamo action in their electrically conducting fluid interiors. Numerical models of this process solve the fundamental equations of magnetohydrodynamics driven by convection in a rotating spherical shell. Rotation plays an important role in modifying the resulting convective flows and the self-generated magnetic field. We present results of simulating rapidly rotating systems that are unstable to dynamo action. We use the pseudo-spectral code Rayleigh to generate a suite of direct numerical simulations. Each simulation uses the Boussinesq approximation and is characterized by an Ekman number (Ek = ν/(ΩL²)) of 10⁻⁵. We vary the degree of convective forcing to obtain a range of convective Rossby numbers. The resulting flows and magnetic structures are analyzed using a Reynolds decomposition. We determine the relative importance of each term in the scale-separated governing equations and estimate the relevant spatial scales responsible for generating the mean magnetic field.

  11. Field-Scale Evaluation of Infiltration Parameters From Soil Texture for Hydrologic Analysis

    NASA Astrophysics Data System (ADS)

    Springer, Everett P.; Cundy, Terrance W.

    1987-02-01

    Recent interest in predicting soil hydraulic properties from simple physical properties such as texture has major implications for the parameterization of physically based models of surface runoff. This study was undertaken to (1) compare, on a field scale, soil hydraulic parameters predicted from texture to those derived from field measurements and (2) compare simulated overland flow response using these two parameter sets. The parameters for the Green-Ampt infiltration equation were obtained from field measurements and from texture-based predictors for two agricultural fields, which were mapped as single soil units. The results of the analyses were that (1) the mean and variance of the field-based parameters were not preserved by the texture-based estimates, (2) spatial and cross correlations between parameters were induced by the texture-based estimation procedures, (3) the overland flow simulations using texture-based parameters were significantly different from those using field-based parameters, and (4) simulations using field-measured hydraulic conductivities and texture-based storage parameters were very close to simulations using only field-based parameters.
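
    A brief sketch of the Green-Ampt infiltration equation referenced above, f = K(1 + ψΔθ/F), stepped explicitly through a storm; the parameter values are illustrative, not the study's field- or texture-based sets.

        # Illustrative silt-loam-like values (assumed, not from the study).
        K = 0.65          # cm/h, saturated hydraulic conductivity
        psi = 16.7        # cm, wetting front suction head
        dtheta = 0.3      # moisture deficit (porosity - initial content)

        dt, F = 0.01, 1e-6          # h, cm (tiny initial F avoids division by zero)
        rain = 2.0                  # cm/h rainfall rate
        runoff = 0.0
        for _ in range(int(2.0 / dt)):          # 2-hour storm
            f = K * (1 + psi * dtheta / F)      # infiltration capacity, cm/h
            infil = min(f, rain) * dt           # rainfall- or capacity-limited
            runoff += rain * dt - infil
            F += infil                          # cumulative infiltration
        print(f"cumulative infiltration {F:.2f} cm, runoff {runoff:.2f} cm")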

  12. Effects of biases in domain wall network evolution. II. Quantitative analysis

    NASA Astrophysics Data System (ADS)

    Correia, J. R. C. C. C.; Leite, I. S. C. R.; Martins, C. J. A. P.

    2018-04-01

    Domain walls form at phase transitions which break discrete symmetries. In a cosmological context, they often overclose the Universe (contrary to observational evidence), although one may prevent this by introducing biases or forcing anisotropic evolution of the walls. In a previous work [Correia et al., Phys. Rev. D 90, 023521 (2014), 10.1103/PhysRevD.90.023521], we numerically studied the evolution of various types of biased domain wall networks in the early Universe, confirming that anisotropic networks ultimately reach scaling while those with a biased potential or biased initial conditions decay. We also found that the analytic decay law obtained by Hindmarsh was in good agreement with simulations of biased potentials, but not of biased initial conditions, and suggested that the difference was related to the Gaussian approximation underlying the analytic law. Here, we extend our previous work in several ways. For the cases of biased potential and biased initial conditions, we study in detail the field distributions in the simulations, confirming that the validity (or not) of the Gaussian approximation is the key difference between the two cases. For anisotropic walls, we carry out a more extensive set of numerical simulations and compare them to the canonical velocity-dependent one-scale model for domain walls, finding that the model accurately predicts the linear scaling regime after isotropization. Overall, our analysis provides a quantitative description of the cosmological evolution of these networks.

  13. On the feasibility of using satellite gravity observations for detecting large-scale solid mass transfer events

    NASA Astrophysics Data System (ADS)

    Peidou, Athina C.; Fotopoulos, Georgia; Pagiatakis, Spiros

    2017-10-01

    The main focus of this paper is to assess the feasibility of utilizing dedicated satellite gravity missions in order to detect large-scale solid mass transfer events (e.g. landslides). Specifically, a sensitivity analysis of Gravity Recovery and Climate Experiment (GRACE) gravity field solutions in conjunction with simulated case studies is employed to predict gravity changes due to past subaerial and submarine mass transfer events, namely the Agulhas slump in southeastern Africa and the Heart Mountain Landslide in northwestern Wyoming. The detectability of these events is evaluated by taking into account the expected noise level in the GRACE gravity field solutions and simulating their impact on the gravity field through forward modelling of the mass transfer. The spectral content of the estimated gravity changes induced by a simulated large-scale landslide event is estimated for the known spatial resolution of the GRACE observations using wavelet multiresolution analysis. The results indicate that both the Agulhas slump and the Heart Mountain Landslide could have been detected by GRACE, resulting in changes of |0.4| and |0.18| mGal in the GRACE solutions, respectively. The suggested methodology is further extended to the case studies of the submarine landslide in Tohoku, Japan, and the Grand Banks landslide in Newfoundland, Canada. The detectability of these events using GRACE solutions is assessed through their impact on the gravity field.
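
    A one-formula sketch of the forward-modelling step: the gravity change of a displaced mass treated as a point anomaly observed at satellite altitude, Δg = GM/r². The volume, density, and altitude are illustrative, roughly at giant-submarine-slide scale.

        G = 6.674e-11            # m^3 kg^-1 s^-2
        rho = 2000.0             # kg/m^3, sediment density (assumed)
        volume = 2.0e13          # m^3, slide volume (assumed, order of a giant slide)
        M = rho * volume

        r = 450e3                # m, roughly GRACE altitude
        dg = G * M / r**2        # m/s^2
        print(f"point-mass gravity change at satellite: {dg * 1e5:.2f} mGal")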

  14. Advanced Models and Algorithms for Self-Similar IP Network Traffic Simulation and Performance Analysis

    NASA Astrophysics Data System (ADS)

    Radev, Dimitar; Lokshina, Izabella

    2010-11-01

    The paper examines self-similar (or fractal) properties of real communication network traffic data over a wide range of time scales. These self-similar properties are very different from the properties of traditional models based on Poisson and Markov-modulated Poisson processes. Advanced fractal models of sequential generators and fixed-length sequence generators, and efficient algorithms that are used to simulate the self-similar behavior of IP network traffic data, are developed and applied. Numerical examples are provided, and simulation results are obtained and analyzed.
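
    A short sketch of one standard self-similarity check, the aggregated-variance method: for a self-similar process Var(X^(m)) ~ m^(2H-2), so the log-log slope gives the Hurst parameter H. White noise (H = 0.5) is used as the stand-in input here.

        import numpy as np

        rng = np.random.default_rng(5)
        x = rng.normal(size=2 ** 16)                 # stand-in for packet counts

        ms = [2 ** k for k in range(1, 9)]
        variances = []
        for m in ms:
            agg = x[: x.size // m * m].reshape(-1, m).mean(axis=1)
            variances.append(agg.var())

        slope = np.polyfit(np.log10(ms), np.log10(variances), 1)[0]
        H = 1 + slope / 2
        print(f"estimated Hurst parameter: {H:.2f} (0.5 white noise, >0.5 self-similar)")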

  15. Simulation of Shock-Shock Interaction in Parsec-Scale Jets

    NASA Astrophysics Data System (ADS)

    Fromm, Christian M.; Perucho, Manel; Ros, Eduardo; Mimica, Petar; Savolainen, Tuomas; Lobanov, Andrei P.; Zensus, J. Anton

    The analysis of the radio light curves of the blazar CTA 102 during its 2006 flare revealed a possible interaction between a standing shock wave and a traveling one. In order to better understand this highly non-linear process, we used a relativistic hydrodynamic code to simulate the high-energy interaction and its related emission. The calculated synchrotron emission from these simulations showed an increase in the turnover flux density, Sm, and the turnover frequency, νm, during the interaction, and a decrease to their initial values after the passage of the traveling shock wave.

  16. Substructured multibody molecular dynamics.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grest, Gary Stephen; Stevens, Mark Jackson; Plimpton, Steven James

    2006-11-01

    We have enhanced our parallel molecular dynamics (MD) simulation software LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator, lammps.sandia.gov) to include many new features for accelerated simulation including articulated rigid body dynamics via coupling to the Rensselaer Polytechnic Institute code POEMS (Parallelizable Open-source Efficient Multibody Software). We use new features of the LAMMPS software package to investigate rhodopsin photoisomerization, and water model surface tension and capillary waves at the vapor-liquid interface. Finally, we motivate the recipes of MD for practitioners and researchers in numerical analysis and computational mechanics.

  17. Isentropic Analysis of a Simulated Hurricane

    NASA Technical Reports Server (NTRS)

    Mrowiec, Agnieszka A.; Pauluis, Olivier; Zhang, Fuqing

    2016-01-01

    Hurricanes, like many other atmospheric flows, are associated with turbulent motions over a wide range of scales. Here the authors adapt a new technique based on the isentropic analysis of convective motions to study the thermodynamic structure of the overturning circulation in hurricane simulations. This approach separates the vertical mass transport in terms of the equivalent potential temperature of air parcels. In doing so, one separates the rising air parcels at high entropy from the subsiding air at low entropy. This technique filters out oscillatory motions associated with gravity waves and separates convective overturning from the secondary circulation. This approach is applied here to study the flow of an idealized hurricane simulation with the Weather Research and Forecasting (WRF) Model. The isentropic circulation for a hurricane exhibits similar characteristics to that of moist convection, with a maximum mass transport near the surface associated with a shallow convection and entrainment. There are also important differences. For instance, ascent in the eyewall can be readily identified in the isentropic analysis as an upward mass flux of air with unusually high equivalent potential temperature. The isentropic circulation is further compared here to the Eulerian secondary circulation of the simulated hurricane to show that the mass transport in the isentropic circulation is much larger than the one in secondary circulation. This difference can be directly attributed to the mass transport by convection in the outer rainband and confirms that, even for a strongly organized flow like a hurricane, most of the atmospheric overturning is tied to the smaller scales.
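
    A minimal sketch of the isentropic-analysis step, assuming synthetic fields: bin the vertical mass flux ρw by equivalent potential temperature so that ascending high-θe air is separated from subsiding low-θe air.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 100000
        theta_e = rng.normal(345.0, 10.0, n)                   # K
        w = 0.05 * (theta_e - 345.0) + rng.normal(0, 0.3, n)   # m/s, warm air rises
        rho = 1.0                                              # kg/m^3, single level

        bins = np.arange(310.0, 381.0, 2.0)
        idx = np.digitize(theta_e, bins)
        flux = np.array([np.sum(rho * w[idx == k]) / n for k in range(1, bins.size)])

        k_max = np.argmax(flux)
        print(f"peak upward mass flux near theta_e = {bins[k_max]:.0f}-{bins[k_max+1]:.0f} K")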

  18. Effective particle size from molecular dynamics simulations in fluids

    NASA Astrophysics Data System (ADS)

    Ju, Jianwei; Welch, Paul M.; Rasmussen, Kim Ø.; Redondo, Antonio; Vorobieff, Peter; Kober, Edward M.

    2018-04-01

    We report molecular dynamics simulations designed to investigate the effective size of colloidal particles suspended in a fluid in the vicinity of a rigid wall, where all interactions are defined by smooth atomic potential functions. These simulations are used to assess how the behavior of this system at the atomistic length scale compares to continuum mechanics models. In order to determine the effective size of the particles, we calculate the solvent forces on spherical particles of different radii as a function of position near and overlapping with the atomistically defined wall and compare them to continuum models. This procedure also determines the effective position of the wall. Our analysis is based solely on forces that the particles sense, ensuring self-consistency of the method. The simulations were carried out using both Weeks-Chandler-Andersen and modified Lennard-Jones (LJ) potentials to identify the different contributions of simple repulsion and van der Waals attractive forces. Upon correction for behavior arising from the discreteness of the atomic system, the underlying continuum physics analysis appeared to be correct down to scales much less than the particle radius. For both particle types, the effective radius was found to be ~0.75σ, where σ defines the length scale of the force interaction (the LJ diameter). The effective "hydrodynamic" radii determined by this means are distinct from the commonly assumed values of 0.5σ and 1.0σ, but agree with a value developed from the atomistic analysis of the viscosity of such systems.
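
    A compact sketch of the two pair interactions compared above: the full Lennard-Jones potential and its purely repulsive WCA truncation (cut and shifted at the LJ minimum r = 2^(1/6)σ), with forces from the analytic derivative.

        import numpy as np

        sigma, eps = 1.0, 1.0

        def lj(r):
            # U = 4*eps*((sigma/r)^12 - (sigma/r)^6); F = -dU/dr.
            sr6 = (sigma / r) ** 6
            u = 4 * eps * (sr6**2 - sr6)
            f = 24 * eps * (2 * sr6**2 - sr6) / r
            return u, f

        def wca(r):
            # LJ cut and shifted at its minimum: repulsion only.
            rc = 2 ** (1 / 6) * sigma
            u, f = lj(r)
            return np.where(r < rc, u + eps, 0.0), np.where(r < rc, f, 0.0)

        r = np.linspace(0.95, 2.0, 5)
        print("r      :", r.round(2))
        print("LJ  F  :", lj(r)[1].round(3))
        print("WCA F  :", wca(r)[1].round(3))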

  19. Analysis of the laser ignition of methane/oxygen mixtures in a sub-scale rocket combustion chamber

    NASA Astrophysics Data System (ADS)

    Wohlhüter, Michael; Zhukov, Victor P.; Sender, Joachim; Schlechtriem, Stefan

    2017-06-01

    The laser ignition of methane/oxygen mixtures in a sub-scale rocket combustion chamber has been investigated numerically and experimentally. The ignition test case used in the present paper was generated during the In-Space Propulsion project (ISP-1), a project focused on the operation of propulsion systems in space, the handling of long idle periods between operations, and multiple reignitions under space conditions. Regarding the definition of the numerical simulation and the suitable domain for the current model, 2D and 3D simulations have been performed. Analysis shows that the usage of a 2D geometry is not suitable for this type of simulation, as the reduction of the geometry to a 2D domain significantly changes the conditions at the time of ignition and subsequently the flame development. The comparison of the numerical and experimental results shows a strong discrepancy in the pressure evolution and the combustion chamber pressure peak following the laser spark. The detailed analysis of the optical Schlieren and OH data leads to the conclusion that the pressure measurement system was not able to capture the strong pressure increase and the peak value in the combustion chamber during ignition. Although the timing in flame development following the laser spark is not captured appropriately, the 3D simulations reproduce the general ignition phenomena observed in the optical measurement systems, such as pressure evolution and injector flow characteristics.

  20. Effective particle size from molecular dynamics simulations in fluids

    NASA Astrophysics Data System (ADS)

    Ju, Jianwei; Welch, Paul M.; Rasmussen, Kim Ø.; Redondo, Antonio; Vorobieff, Peter; Kober, Edward M.

    2017-12-01

    We report molecular dynamics simulations designed to investigate the effective size of colloidal particles suspended in a fluid in the vicinity of a rigid wall, where all interactions are defined by smooth atomic potential functions. These simulations are used to assess how the behavior of this system at the atomistic length scale compares to continuum mechanics models. In order to determine the effective size of the particles, we calculate the solvent forces on spherical particles of different radii as a function of position near and overlapping with the atomistically defined wall and compare them to continuum models. This procedure also determines the effective position of the wall. Our analysis is based solely on forces that the particles sense, ensuring self-consistency of the method. The simulations were carried out using both Weeks-Chandler-Andersen and modified Lennard-Jones (LJ) potentials to identify the different contributions of simple repulsion and van der Waals attractive forces. Upon correction for behavior arising from the discreteness of the atomic system, the underlying continuum physics analysis appeared to be correct down to scales much less than the particle radius. For both particle types, the effective radius was found to be ~0.75σ, where σ defines the length scale of the force interaction (the LJ diameter). The effective "hydrodynamic" radii determined by this means are distinct from the commonly assumed values of 0.5σ and 1.0σ, but agree with a value developed from the atomistic analysis of the viscosity of such systems.

  1. Modified smoothed particle hydrodynamics (MSPH) for the analysis of centrifugally assisted TiC-Fe-Al2O3 combustion synthesis

    NASA Astrophysics Data System (ADS)

    Hassan, M. A.; Mahmoodian, Reza; Hamdi, M.

    2014-01-01

    A modified smoothed particle hydrodynamics (MSPH) computational technique was utilized to simulate molten particle motion and infiltration speed at multiple analysis scales. The radial velocity and velocity gradient of molten alumina, iron infiltration in the TiC product, and the solidification rate were predicted by MSPH during simulation of the centrifugally assisted self-propagating high-temperature synthesis (SHS) coating process. The effects of particle size and temperature on the infiltration and solidification of iron and alumina were mainly investigated. The obtained results were validated against experimental microstructure evidence. The simulation model successfully describes the magnitude of iron and alumina diffusion in a centrifugal thermite SHS and Ti + C hybrid reaction under centrifugal acceleration.
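
    A minimal sketch of the SPH building block underlying such methods: a 1D cubic spline smoothing kernel and a kernel-weighted density estimate on evenly spaced particles; this is generic SPH, not the paper's modified formulation.

        import numpy as np

        def cubic_spline_w(q, h):
            # Standard 1D cubic spline kernel with support 2h (normalization 2/(3h)).
            w = np.where(q < 1.0, 1 - 1.5 * q**2 + 0.75 * q**3,
                np.where(q < 2.0, 0.25 * (2 - q) ** 3, 0.0))
            return w * 2.0 / (3.0 * h)

        h, m = 0.1, 0.01
        x = np.linspace(0.0, 1.0, 101)        # evenly spaced particles, spacing 0.01
        # Density estimate rho_i = sum_j m * W(|x_i - x_j|/h, h); ~1.0 in the interior.
        rho = np.array([np.sum(m * cubic_spline_w(np.abs(x - xi) / h, h)) for xi in x])
        print("density away from boundaries:", rho[40:60].mean().round(4))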

  2. Modified smoothed particle hydrodynamics (MSPH) for the analysis of centrifugally assisted TiC-Fe-Al2O3 combustion synthesis

    PubMed Central

    Hassan, M. A.; Mahmoodian, Reza; Hamdi, M.

    2014-01-01

    A modified smoothed particle hydrodynamics (MSPH) computational technique was utilized to simulate molten particle motion and infiltration speed at multiple analysis scales. The radial velocity and velocity gradient of molten alumina, iron infiltration in the TiC product, and the solidification rate were predicted by MSPH during simulation of the centrifugally assisted self-propagating high-temperature synthesis (SHS) coating process. The effects of particle size and temperature on the infiltration and solidification of iron and alumina were mainly investigated. The obtained results were validated against experimental microstructure evidence. The simulation model successfully describes the magnitude of iron and alumina diffusion in a centrifugal thermite SHS and Ti + C hybrid reaction under centrifugal acceleration. PMID:24430621

  3. Modified smoothed particle hydrodynamics (MSPH) for the analysis of centrifugally assisted TiC-Fe-Al2O3 combustion synthesis.

    PubMed

    Hassan, M A; Mahmoodian, Reza; Hamdi, M

    2014-01-16

    A modified smoothed particle hydrodynamics (MSPH) computational technique was utilized to simulate molten particle motion and infiltration speed at multiple analysis scales. The radial velocity and velocity gradient of molten alumina, iron infiltration in the TiC product, and the solidification rate were predicted by MSPH during simulation of the centrifugally assisted self-propagating high-temperature synthesis (SHS) coating process. The effects of particle size and temperature on the infiltration and solidification of iron and alumina were mainly investigated. The obtained results were validated against experimental microstructure evidence. The simulation model successfully describes the magnitude of iron and alumina diffusion in a centrifugal thermite SHS and Ti + C hybrid reaction under centrifugal acceleration.

  4. Simulating large-scale crop yield by using perturbed-parameter ensemble method

    NASA Astrophysics Data System (ADS)

    Iizumi, T.; Yokozawa, M.; Sakurai, G.; Nishimori, M.

    2010-12-01

    One pressing issue for food security under a changing climate is predicting the inter-annual variation of crop production induced by climate extremes and climate variability. To secure the food supply for a growing world population, methodology that can accurately predict crop yield on a large scale is needed. However, in developing a process-based large-scale crop model at the scale of general circulation models (GCMs), 100 km in latitude and longitude, researchers encounter difficulties with the spatial heterogeneity of available information on crop production, such as cultivated cultivars and management. This study proposed an ensemble-based simulation method that uses a process-based crop model and a systematic parameter perturbation procedure, taking maize in the U.S., China, and Brazil as examples. The crop model was developed by modifying the fundamental structure of the Soil and Water Assessment Tool (SWAT) to incorporate the effect of heat stress on yield. We call the new model PRYSBI: the Process-based Regional-scale Yield Simulator with Bayesian Inference. The posterior probability density function (PDF) of 17 parameters, which represents the crop- and grid-specific features of the crop and its uncertainty under the given data, was estimated by Bayesian inversion analysis. We then took 1500 ensemble members of simulated yield values, based on parameter sets sampled from the posterior PDF, to describe yearly changes of the yield, i.e. the perturbed-parameter ensemble method. The ensemble median for 27 years (1980-2006) was compared with data aggregated from county yields. On a country scale, the ensemble median of the simulated yield showed a good correspondence with the reported yield: the Pearson correlation coefficient is over 0.6 for all countries. On a grid scale, the correspondence remains high in most grids regardless of country. However, the model showed comparatively low reproducibility in sloping areas, such as around the Rocky Mountains in South Dakota, around the Great Xing'anling Mountains in Heilongjiang, and around the Brazilian Plateau. As there is a wide range of local climate conditions in complex terrain, such as mountain slopes, the GCM grid-scale weather inputs are likely a major source of error. The results of this study highlight the benefits of the perturbed-parameter ensemble method in simulating crop yield on a GCM grid scale: (1) the posterior PDF of the parameters quantifies the uncertainty of the crop model parameter values associated with local crop production; (2) the method can explicitly account for parameter uncertainty in the crop model simulations; (3) the method achieves a Monte Carlo approximation of the probability of sub-grid-scale yield, accounting for the nonlinear response of crop yield to weather and management; and (4) the method is therefore appropriate for aggregating simulated sub-grid-scale yields to a grid-scale yield, which may explain the high performance of the model in capturing inter-annual variation of yield.
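
    A toy sketch of the perturbed-parameter ensemble mechanics: draw parameter sets from a (here Gaussian stand-in) posterior, run the model for each, and summarize yearly outputs by the ensemble median and percentiles; the linear response below stands in for PRYSBI.

        import numpy as np

        rng = np.random.default_rng(11)
        years = np.arange(1980, 2007)
        weather = rng.normal(0.0, 1.0, years.size)          # stand-in weather index

        def crop_model(theta, w):
            base, sens = theta
            return base + sens * w                          # t/ha, toy response

        # 1500 parameter sets sampled from the assumed "posterior".
        thetas = np.column_stack([rng.normal(8.0, 0.5, 1500),    # base yield
                                  rng.normal(1.0, 0.3, 1500)])   # weather sensitivity

        ensemble = np.array([crop_model(th, weather) for th in thetas])
        median_yield = np.median(ensemble, axis=0)
        q10, q90 = np.percentile(ensemble, [10, 90], axis=0)
        print("1980 median yield: %.2f t/ha (10-90%%: %.2f-%.2f)"
              % (median_yield[0], q10[0], q90[0]))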

  5. PRANAS: A New Platform for Retinal Analysis and Simulation.

    PubMed

    Cessac, Bruno; Kornprobst, Pierre; Kraria, Selim; Nasser, Hassan; Pamplona, Daniela; Portelli, Geoffrey; Viéville, Thierry

    2017-01-01

    The retina encodes visual scenes as trains of action potentials that are sent to the brain via the optic nerve. In this paper, we describe new free-access end-user software that allows one to better understand this coding. It is called PRANAS (https://pranas.inria.fr), standing for Platform for Retinal ANalysis And Simulation. PRANAS targets neuroscientists and modelers by providing a unique set of retina-related tools. PRANAS integrates a retina simulator allowing large-scale simulations while keeping strong biological plausibility, and a toolbox for the analysis of spike train population statistics. The statistical method (entropy maximization under constraints) takes into account both spatial and temporal correlations as constraints, allowing analysis of the effects of memory on statistics. PRANAS also integrates a tool for computing and representing receptive fields in 3D (time-space). All these tools are accessible through a friendly graphical user interface. The most CPU-costly of them have been implemented to run in parallel.

  6. Multidisciplinary tailoring of hot composite structures

    NASA Technical Reports Server (NTRS)

    Singhal, Surendra N.; Chamis, Christos C.

    1993-01-01

    A computational simulation procedure is described for multidisciplinary analysis and tailoring of layered multi-material hot composite engine structural components subjected to simultaneous multiple discipline-specific thermal, structural, vibration, and acoustic loads. The effect of aggressive environments is also simulated. The simulation is based on a three-dimensional finite element analysis technique in conjunction with structural mechanics codes, thermal/acoustic analysis methods, and tailoring procedures. The integrated multidisciplinary simulation procedure is general-purpose and includes the coupled effects of nonlinearities in structure geometry, material, loading, and environmental complexities. The composite material behavior is assessed at all composite scales, i.e., laminate/ply/constituents (fiber/matrix), via a nonlinear hygro-thermo-mechanical material characterization model. Sample tailoring cases exhibiting nonlinear material/loading/environmental behavior of aircraft engine fan blades are presented. The various multidisciplinary loads lead to different tailored designs, even ones competing with each other, as in the case of minimum material cost versus minimum structure weight and in the case of minimum vibration frequency versus minimum acoustic noise.

  7. Nationwide Buildings Energy Research enabled through an integrated Data Intensive Scientific Workflow and Advanced Analysis Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleese van Dam, Kerstin; Lansing, Carina S.; Elsethagen, Todd O.

    2014-01-28

    Modern workflow systems enable scientists to run ensemble simulations at unprecedented scales and levels of complexity, allowing them to study system sizes previously impossible to achieve, due to the inherent resource requirements needed for the modeling work. However, as a result of these new capabilities the science teams suddenly also face unprecedented data volumes that they are unable to analyze with their existing tools and methodologies in a timely fashion. In this paper we describe the ongoing development work to create an integrated data-intensive scientific workflow and analysis environment that offers researchers the ability to easily create and execute complex simulation studies and provides them with different scalable methods to analyze the resulting data volumes. The integration of simulation and analysis environments is hereby not only a question of ease of use, but supports fundamental functions in the correlated analysis of simulation input, execution details, and derived results for multi-variant, complex studies. To this end the team extended and integrated the existing capabilities of the Velo data management and analysis infrastructure, the MeDICi data-intensive workflow system, and RHIPE, the R-for-Hadoop version of the well-known statistics package, as well as developing a new visual analytics interface for result exploitation by multi-domain users. The capabilities of the new environment are demonstrated on a use case focused on the Pacific Northwest National Laboratory (PNNL) building energy team, showing how they were able to take their previously local-scale simulations to a nationwide level by utilizing data-intensive computing techniques not only for their modeling work but also for the subsequent analysis of their modeling results. As part of the PNNL research initiative PRIMA (Platform for Regional Integrated Modeling and Analysis), the team performed an initial 3-year study of building energy demands for the US Eastern Interconnect domain, which they are now planning to extend to predict the demand for the complete century. The initial study raised their data demands from a few GB to 400 GB for the 3-year study, with tens of TB expected for the full century.

  8. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    NASA Astrophysics Data System (ADS)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for larger and more complex problems. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  9. Thin film growth by 3D multi-particle diffusion limited aggregation model: Anomalous roughening and fractal analysis

    NASA Astrophysics Data System (ADS)

    Nasehnejad, Maryam; Nabiyouni, G.; Gholipour Shahraki, Mehran

    2018-03-01

    In this study a 3D multi-particle diffusion limited aggregation method is employed to simulate the growth of rough surfaces with fractal behavior in the electrodeposition process. A deposition model is used in which the radial motion of the particles, with probability P, competes with random motion, with probability 1 - P. Thin film growth is simulated for different values of the probability P (related to the electric field) and of the thickness of the layer (related to the number of deposited particles). The influence of these parameters on the morphology, the kinetics of roughening, and the fractal dimension of the simulated surfaces has been investigated. The results show that the surface roughness increases with increasing deposition time and that the scaling exponents exhibit a complex behavior known as anomalous scaling. It seems that in the electrodeposition process, radial motion of the particles toward the growing seeds may be an important mechanism leading to anomalous scaling. The results also indicate that larger values of the probability P result in smoother topography with a more densely packed structure. We suggest a dynamic scaling ansatz for the interface width as a function of deposition time, scan length, and probability. Two different methods are employed to evaluate the fractal dimension of the simulated surfaces: the "cube counting" and "roughness" methods. The results of both methods show that by increasing the probability P or decreasing the deposition time, the fractal dimension of the simulated surfaces is increased. All values obtained for the fractal dimension are close to 2.5, characteristic of the diffusion limited aggregation model.
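    A minimal 2D Python analogue of the deposition model (the paper's model is 3D and multi-particle): each walker takes a radial, field-driven step toward the substrate with probability P and a random step otherwise, sticking on contact with the aggregate. Lattice size and particle count are arbitrary choices for illustration.

        import numpy as np

        rng = np.random.default_rng(2)
        L, n_particles, P = 200, 3000, 0.3     # lattice width, particles, radial-bias probability
        occ = np.zeros((600, L), bool)         # occupancy grid; row 0 is the substrate
        occ[0, :] = True
        height = np.zeros(L, int)              # highest occupied row per column
        moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

        def touches_cluster(h, x):
            for dh, dx in moves:
                hh, xx = h + dh, (x + dx) % L
                if 0 <= hh < occ.shape[0] and occ[hh, xx]:
                    return True
            return False

        for _ in range(n_particles):
            h = min(height.max() + 5, occ.shape[0] - 2)   # release above the interface
            x = int(rng.integers(L))
            while True:
                if rng.random() < P:
                    dh, dx = -1, 0                        # radial (field-driven) step to substrate
                else:
                    dh, dx = moves[rng.integers(4)]       # ordinary random-walk step
                h = int(np.clip(h + dh, 1, occ.shape[0] - 2))
                x = (x + dx) % L
                if touches_cluster(h, x):                 # aggregate on contact
                    occ[h, x] = True
                    height[x] = max(height[x], h)
                    break

        print(f"mean interface height {height.mean():.1f}, roughness w = {height.std():.2f}")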

  10. Rail passenger equipment collision tests : analysis of occupant protection measurements

    DOT National Transportation Integrated Search

    2000-01-01

    The Federal Railroad Administration has been conducting research on occupant protection in train collisions. As part of this research, computer simulations have been performed, passenger seats have been sled tested, and two full-scale collision t...

  11. Enhanced sampling by multiple molecular dynamics trajectories: carbonmonoxy myoglobin 10 micros A0-->A(1-3) transition from ten 400 picosecond simulations.

    PubMed

    Loccisano, Anne E; Acevedo, Orlando; DeChancie, Jason; Schulze, Brita G; Evanseck, Jeffrey D

    2004-05-01

    The utility of multiple trajectories to extend the time scale of molecular dynamics simulations is reported for the spectroscopic A-states of carbonmonoxy myoglobin (MbCO). Experimentally, the A0-->A(1-3) transition has been observed to take 10 μs at 300 K, which is beyond the time scale of standard molecular dynamics simulations. To simulate this transition, 10 short (400 ps) and two longer (1.2 ns) molecular dynamics trajectories, starting from five different crystallographic and solution-phase structures with random initial velocities, centered in a 37 Å radius sphere of water, have been used to sample the native fold of MbCO. Analysis of the ensemble of structures gathered over the cumulative 5.6 ns reveals two biomolecular motions involving the side chains of His64 and Arg45 that explain the spectroscopic states of MbCO. The 10 μs A0-->A(1-3) transition involves the motion of His64, where the distance between His64 and CO is found to vary by up to 8.8 +/- 1.0 Å during the transition of His64 from the ligand (A(1-3)) to bulk solvent (A0). The His64 motion occurs only once within a single trajectory; however, the multiple trajectories populate the spectroscopic A-states fully. Consequently, multiple independent molecular dynamics simulations have been found to extend biomolecular motion from 5 ns of total simulation to experimental phenomena on the microsecond time scale.

  12. Entry, Descent and Landing Systems Analysis: Exploration Class Simulation Overview and Results

    NASA Technical Reports Server (NTRS)

    DwyerCianciolo, Alicia M.; Davis, Jody L.; Shidner, Jeremy D.; Powell, Richard W.

    2010-01-01

    NASA senior management commissioned the Entry, Descent and Landing Systems Analysis (EDL-SA) Study in 2008 to identify and roadmap the Entry, Descent and Landing (EDL) technology investments that the agency needed to make in order to successfully land large payloads at Mars for both robotic and exploration or human-scale missions. The year one exploration class mission activity considered technologies capable of delivering a 40-mt payload. This paper provides an overview of the exploration class mission study, including technologies considered, models developed and initial simulation results from the EDL-SA year one effort.

  13. Modeling Nitrogen Dynamics in a Waste Stabilization Pond System Using Flexible Modeling Environment with MCMC.

    PubMed

    Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V; Petway, Joy R

    2017-07-12

    This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model of the WSP using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and the concentrations of ON-N, NH₃-N and NO₃-N. Results indicate that the integrated FME-GLUE-based model, with good Nash-Sutcliffe coefficients (0.53-0.69) and correlation coefficients (0.76-0.83), successfully simulates the concentrations of ON-N, NH₃-N and NO₃-N. Moreover, the Arrhenius constant was the only parameter sensitive for the model performance of the ON-N and NH₃-N simulations. However, the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C were sensitive for the ON-N and NO₃-N simulations, as measured using global sensitivity analysis.
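    A compact sketch of the GLUE step, assuming a hypothetical first-order nitrogen-decay stand-in for the WSP model: parameters are sampled from broad priors, the Nash-Sutcliffe efficiency serves as the informal likelihood, and sets exceeding a behavioral threshold define the parameter and predictive uncertainty ranges.

        import numpy as np

        rng = np.random.default_rng(3)
        days = np.arange(30)

        def wsp_model(k, theta, temp):
            """Hypothetical stand-in for the WSP nitrogen model: first-order
            NH3-N decay with an Arrhenius-type temperature correction."""
            return 25.0 * np.exp(-k * theta ** (temp - 20.0) * days)

        observed = wsp_model(0.15, 1.05, 24.0) + rng.normal(0, 0.8, days.size)

        def nse(sim, obs):
            """Nash-Sutcliffe efficiency, used here as the informal GLUE likelihood."""
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        # Sample parameters from broad priors; keep "behavioral" sets above a threshold.
        ks = rng.uniform(0.01, 0.5, 20000)
        ths = rng.uniform(1.0, 1.1, 20000)
        scores = np.array([nse(wsp_model(k, th, 24.0), observed) for k, th in zip(ks, ths)])
        keep = scores > 0.6

        print(f"{keep.sum()} behavioral sets of 20000")
        print("k 5-95% range:", np.percentile(ks[keep], [5, 95]).round(3))

        # Predictive uncertainty band from the behavioral ensemble.
        sims = np.array([wsp_model(k, th, 24.0) for k, th in zip(ks[keep], ths[keep])])
        band = np.percentile(sims, [5, 95], axis=0)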

  14. An Analysis of Full Scale Measurements on M/V Stewart J. Cort during the 1979 and 1980 Trial Programs. Parts I and II.

    DTIC Science & Technology

    1982-02-01

    [Recoverable fragments only] Table of contents: APPENDIX D: BASIC PROCESSING; APPENDIX E: SIMULATION OF DATA. "... equipment previously developed, and an on-board data processing system. These full scale ship trials were the first in history with the objective of directly ..."

  15. Development of Residential Prototype Building Models and Analysis System for Large-Scale Energy Efficiency Studies Using EnergyPlus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendon, Vrushali V.; Taylor, Zachary T.

    Recent advances in residential building energy efficiency and codes have resulted in increased interest in detailed residential building energy models using the latest energy simulation software. One of the challenges of developing residential building models to characterize new residential building stock is to allow for flexibility to address variability in house features like geometry, configuration, HVAC systems, etc. Researchers solved this problem in a novel way by creating a simulation structure capable of creating fully-functional EnergyPlus batch runs using a completely scalable residential EnergyPlus template system. This system was used to create a set of thirty-two residential prototype building models covering single- and multifamily buildings, four common foundation types, and four common heating system types found in the United States (US). A weighting scheme with detailed state-wise and national weighting factors was designed to supplement the residential prototype models. The complete set is designed to represent a majority of new residential construction stock. The entire structure consists of a system of utility programs developed around the core EnergyPlus simulation engine to automate the creation and management of large-scale simulation studies with minimal human effort. The simulation structure and the residential prototype building models have been used for numerous large-scale studies, one of which is briefly discussed in this paper.

  16. On the feeding zone of planetesimal formation by the streaming instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Chao-Chin; Johansen, Anders, E-mail: ccyang@astro.lu.se, E-mail: anders@astro.lu.se

    2014-09-10

    The streaming instability is a promising mechanism to overcome the barriers in direct dust growth and lead to the formation of planetesimals. Most previous studies of the streaming instability, however, were focused on a local region of a protoplanetary disk with a limited simulation domain such that only one filamentary concentration of solids has been observed. The characteristic separation between filaments is therefore not known. To address this, we conduct the largest-scale simulations of the streaming instability to date, with computational domains up to 1.6 gas scale heights both horizontally and vertically. The large dynamical range allows the effect of vertical gas stratification to become prominent. We observe more frequent merging and splitting of filaments in simulation boxes of high vertical extent. We find multiple filamentary concentrations of solids with an average separation of about 0.2 local gas scale heights, much higher than the most unstable wavelength from linear stability analysis. This measures the characteristic separation of planetesimal forming events driven by the streaming instability and thus the initial feeding zone of planetesimals.

  17. Achieving bioinspired flapping wing hovering flight solutions on Mars via wing scaling.

    PubMed

    Bluman, James E; Pohly, Jeremy; Sridhar, Madhu; Kang, Chang-Kwon; Landrum, David Brian; Fahimi, Farbod; Aono, Hikaru

    2018-05-29

    Achieving atmospheric flight on Mars is challenging due to the low density of the Martian atmosphere. Aerodynamic forces are proportional to the atmospheric density, which limits the use of conventional aircraft designs on Mars. Here, we show using numerical simulations that a flapping wing robot can fly on Mars via bioinspired dynamic scaling. Trimmed, hovering flight is possible in a simulated Martian environment when dynamic similarity with insects on Earth is achieved by preserving the relevant dimensionless parameters while scaling up the wings to three to four times their normal size. The analysis is performed using a well-validated two-dimensional Navier-Stokes equation solver, coupled to a three-dimensional flight dynamics model to simulate free flight. The majority of the power required is due to the inertia of the wing because of the ultra-low density. The inertial flap power can be substantially reduced through the use of a torsional spring. The minimum total power consumption is 188 W/kg when the torsional spring is driven at its natural frequency. © 2018 IOP Publishing Ltd.
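    A back-of-envelope check of the dynamic-similarity argument, assuming the wing Reynolds number scales as Re proportional to rho*f*R^2/mu (tip speed ~ f*R, chord ~ R) and approximate property values; a scale-up in the abstract's three-to-four-times range emerges once the flapping frequency is also rescaled. All numbers are illustrative assumptions, not taken from the paper.

        # Approximate property values; treat all numbers as illustrative.
        rho_earth, rho_mars = 1.225, 0.017   # atmospheric density, kg/m^3
        mu_earth, mu_mars = 1.8e-5, 1.1e-5   # dynamic viscosity, Pa*s

        # Re is proportional to rho * f * R^2 / mu, so the geometric scale-up
        # preserving Re at an unchanged flapping frequency is:
        s_fixed_f = ((rho_earth / rho_mars) * (mu_mars / mu_earth)) ** 0.5
        print(f"scale-up at fixed flapping frequency: {s_fixed_f:.1f}x")

        # Allowing the flapping frequency to rise by f_ratio shrinks the size factor:
        for f_ratio in (1.0, 2.0, 4.0):
            print(f"frequency x{f_ratio:.0f}: wing scale-up {s_fixed_f / f_ratio ** 0.5:.1f}x")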

  18. Internal Fluid Dynamics and Frequency Scaling of Sweeping Jet Fluidic Oscillators

    NASA Astrophysics Data System (ADS)

    Seo, Jung Hee; Salazar, Erik; Mittal, Rajat

    2017-11-01

    Sweeping jet fluidic oscillators (SJFOs) are devices that produce a spatially oscillating jet solely based on intrinsic flow instability mechanisms without any moving parts. Recently, SJFOs have emerged as effective actuators for flow control, but the internal fluid dynamics of the device that drives the oscillatory flow mechanism is not yet fully understood. In the current study, the internal fluid dynamics of the fluidic oscillator with feedback channels has been investigated by employing incompressible flow simulations. The study is focused on the oscillation mechanisms and scaling laws that underpin the jet oscillation. Based on the simulation results, simple phenomenological models that connect the jet deflection to the feedback flow are developed. Several geometric modifications are considered in order to explore the characteristic length scales and phase relationships associated with the jet oscillation and to assess the proposed phenomenological model. A scaling law for the jet oscillation frequency is proposed based on the detailed analysis. This research is supported by AFOSR Grant FA9550-14-1-0289 monitored by Dr. Douglas Smith.

  19. A Priori Subgrid Scale Modeling for a Droplet Laden Temporal Mixing Layer

    NASA Technical Reports Server (NTRS)

    Okongo, Nora; Bellan, Josette

    2000-01-01

    Subgrid analysis of a transitional temporal mixing layer with evaporating droplets has been performed using a direct numerical simulation (DNS) database. The DNS is for a Reynolds number (based on initial vorticity thickness) of 600, with droplet mass loading of 0.2. The gas phase is computed using a Eulerian formulation, with Lagrangian droplet tracking. Since Large Eddy Simulation (LES) of this flow requires the computation of unfiltered gas-phase variables at droplet locations from filtered gas-phase variables at the grid points, it is proposed to model these by assuming the gas-phase variables to be given by the filtered variables plus a correction based on the filtered standard deviation, which can be computed from the sub-grid scale (SGS) standard deviation. This model predicts unfiltered variables at droplet locations better than simply interpolating the filtered variables. Three methods are investigated for modeling the SGS standard deviation: Smagorinsky, gradient and scale-similarity. When properly calibrated, the gradient and scale-similarity methods give results in excellent agreement with the DNS.
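    The proposed reconstruction can be illustrated in one dimension: recover an "unfiltered" value at a droplet location as the filtered value plus a random correction scaled by the SGS standard deviation. In this a priori sketch the SGS standard deviation is computed exactly from a synthetic resolved field rather than modeled with the Smagorinsky, gradient, or scale-similarity closures the paper tests.

        import numpy as np

        rng = np.random.default_rng(4)

        # 1D toy field standing in for a DNS gas-phase variable; box filter = "LES filter".
        n, width = 512, 8
        phi = np.cumsum(rng.normal(size=n))
        phi -= phi.mean()
        kernel = np.ones(width) / width
        phi_bar = np.convolve(phi, kernel, mode='same')              # filtered field
        phi2_bar = np.convolve(phi * phi, kernel, mode='same')
        sgs_std = np.sqrt(np.maximum(phi2_bar - phi_bar ** 2, 0.0))  # exact SGS std (a priori)

        # Model: unfiltered value at a droplet = filtered value + correction scaled
        # by the SGS standard deviation (here a random draw, for illustration).
        idx = rng.integers(0, n, 200)
        phi_model = phi_bar[idx] + sgs_std[idx] * rng.normal(size=idx.size)

        # Filtered-only sampling underestimates the variability seen by droplets;
        # the corrected values recover it statistically.
        print("std of values seen at droplet locations:")
        print(f"  true {phi[idx].std():.3f}  filtered-only {phi_bar[idx].std():.3f}  "
              f"model {phi_model.std():.3f}")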

  20. The Use of Computer Simulation Methods to Reach Data for Economic Analysis of Automated Logistic Systems

    NASA Astrophysics Data System (ADS)

    Neradilová, Hana; Fedorko, Gabriel

    2016-12-01

    Automated logistic systems are becoming more widely used within enterprise logistics processes. Their main advantage is that they allow increasing the efficiency and reliability of logistics processes. In terms of evaluating their effectiveness, it is necessary to take into account the economic aspect of the entire process. However, many users ignore and underestimate this area, which is a mistake. One of the reasons the economic aspect is overlooked is that obtaining information for such an analysis is not easy. The aim of this paper is to present the possibilities of computer simulation methods for obtaining the data needed for a full-scale economic analysis.

  1. Comparison of sub-scaled to full-scaled aircrafts in simulation environment for air traffic management

    NASA Astrophysics Data System (ADS)

    Elbakary, Mohamed I.; Iftekharuddin, Khan M.; Papelis, Yiannis; Newman, Brett

    2017-05-01

    Air Traffic Management (ATM) concepts are commonly tested in simulation to obtain preliminary results and validate the concepts before adoption. Recently, researchers have found that simulation alone is not enough because of the complexity associated with ATM concepts; full-scale tests must eventually take place to provide compelling performance evidence before full implementation is adopted. Testing with full-scale aircraft is a high-cost approach that yields high-confidence results, whereas simulation provides a low-risk, low-cost approach with reduced confidence in the results. One possible approach to increase the confidence of the results while simultaneously reducing risk and cost is to use unmanned sub-scale aircraft to test new ATM concepts. This paper presents simulation results of using unmanned sub-scale aircraft to implement ATM concepts, compared with full-scale aircraft. The simulation results show that the performance of the sub-scale aircraft is quite comparable to that of the full-scale aircraft, which supports the use of sub-scale vehicles in testing new ATM concepts.

  2. Flow-induced vibration analysis of a helical coil steam generator experiment using large eddy simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Haomin; Solberg, Jerome; Merzari, Elia

    This paper describes a numerical study of flow-induced vibration in a helical coil steam generator experiment conducted at Argonne National Laboratory in the 1980s. In the experiment, a half-scale sector model of a steam generator helical coil tube bank was subjected to still and flowing air and water, and the vibrational characteristics were recorded. The research detailed in this document utilizes the multi-physics simulation toolkit SHARP, developed at Argonne National Laboratory in cooperation with Lawrence Livermore National Laboratory, to simulate the experiment. SHARP uses the spectral element code Nek5000 for fluid dynamics analysis and the finite element code DIABLO for structural analysis. The flow around the coil tubes is modeled in Nek5000 by using a large eddy simulation turbulence model. Transient pressure data on the tube surfaces is sampled and transferred to DIABLO for the structural simulation. The structural response is simulated in DIABLO via an implicit time-marching algorithm and a combination of continuum elements and structural shells. Tube vibration data (acceleration and frequency) are sampled and compared with the experimental data. Currently, only one-way coupling is used, which means that pressure loads from the fluid simulation are transferred to the structural simulation but the resulting structural displacements are not fed back to the fluid simulation.
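    The one-way coupling loop can be caricatured as follows; fluid_step and structure_step are hypothetical stand-ins for the Nek5000 and DIABLO solves, with the structural side reduced to a single implicitly integrated modal oscillator. Note that the displacement is never passed back to the fluid step, which is what "one-way" means here.

        import numpy as np

        rng = np.random.default_rng(5)

        def fluid_step(t):
            """Stand-in for one LES step: pressure samples on the tube surface at t."""
            return 1.0e3 + 50.0 * np.sin(2 * np.pi * 8.0 * t) + rng.normal(0, 5.0, 64)

        def structure_step(disp, vel, load, dt, m=1.0, c=0.5, k=4.0e3):
            """Implicit (backward Euler) step of a 1-DOF modal tube model
            m*a + c*v + k*x = F, standing in for the structural solve."""
            vel = (vel + dt * (load - k * disp) / m) / (1 + dt * c / m + dt ** 2 * k / m)
            return disp + dt * vel, vel

        dt, disp, vel, hist = 1.0e-3, 0.0, 0.0, []
        for i in range(5000):
            p = fluid_step(i * dt)          # fluid solve: surface pressures
            load = p.mean() - 1.0e3         # reduce/sample loads for the structure
            disp, vel = structure_step(disp, vel, load, dt)
            hist.append(disp)               # displacement is NOT fed back: one-way coupling

        freq = np.fft.rfftfreq(len(hist), dt)
        peak = freq[1:][np.argmax(np.abs(np.fft.rfft(hist))[1:])]
        print(f"dominant tube response frequency ~ {peak:.1f} Hz")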

  3. Flow-induced vibration analysis of a helical coil steam generator experiment using large eddy simulation

    DOE PAGES

    Yuan, Haomin; Solberg, Jerome; Merzari, Elia; ...

    2017-08-01

    This study describes a numerical study of flow-induced vibration in a helical coil steam generator experiment conducted at Argonne National Laboratory in the 1980s. In the experiment, a half-scale sector model of a steam generator helical coil tube bank was subjected to still and flowing air and water, and the vibrational characteristics were recorded. The research detailed in this document utilizes the multi-physics simulation toolkit SHARP, developed at Argonne National Laboratory in cooperation with Lawrence Livermore National Laboratory, to simulate the experiment. SHARP uses the spectral element code Nek5000 for fluid dynamics analysis and the finite element code DIABLO for structural analysis. The flow around the coil tubes is modeled in Nek5000 by using a large eddy simulation turbulence model. Transient pressure data on the tube surfaces is sampled and transferred to DIABLO for the structural simulation. The structural response is simulated in DIABLO via an implicit time-marching algorithm and a combination of continuum elements and structural shells. Tube vibration data (acceleration and frequency) are sampled and compared with the experimental data. Currently, only one-way coupling is used, which means that pressure loads from the fluid simulation are transferred to the structural simulation but the resulting structural displacements are not fed back to the fluid simulation.

  4. Field-scale simulation of chemical flooding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saad, N.

    1989-01-01

    A three-dimensional compositional chemical flooding simulator (UTCHEM) has been improved. The new mathematical formulation, boundary conditions, and a description of the physicochemical models of the simulator are presented. This improved simulator has been used for the study of the low-tension pilot project at the Big Muddy field near Casper, Wyoming. Both the tracer injection conducted prior to the injection of the chemical slug and the chemical flooding stages of the pilot project have been analyzed. Not only the oil recovery but also the tracer, polymer, alcohol, and chloride histories have been successfully matched with field results. Simulation results indicate that, for this fresh-water reservoir, the salinity gradient during the preflush and the resulting calcium pickup by the surfactant slug played a major role in the success of the project. In addition, analysis of the effects of crossflow on the performance of the pilot project indicates that, for the well spacing of the pilot, crossflow does not play as important a role as it might for a large-scale project. To improve the numerical efficiency of the simulator, a third-order convective differencing scheme has been added. This method can be used with non-uniform meshes and is therefore suited to simulation studies of large-scale multiwell heterogeneous reservoirs. Comparison of the results with one- and two-dimensional analytical solutions shows that this method is effective in eliminating numerical dispersion while using relatively large grid blocks. Results of one-, two-, and three-dimensional miscible water/tracer flow, water flooding, polymer flooding, and micellar-polymer flooding test problems, and results of grid orientation studies, are presented.
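    For the third-order convective differencing idea, a minimal sketch on a uniform periodic grid (the paper's scheme also handles non-uniform meshes): a QUICK-type upstream-weighted face interpolation combined with SSP-RK3 time stepping advects a tracer pulse with little numerical dispersion. This illustrates the class of scheme, not UTCHEM's actual discretization.

        import numpy as np

        def rhs(phi, c):
            """QUICK (third-order, upstream-weighted) convective increment for u > 0
            on a uniform periodic grid; c is the Courant number u*dt/dx."""
            face = (3 * np.roll(phi, -1) + 6 * phi - np.roll(phi, 1)) / 8.0  # phi at i+1/2
            return -c * (face - np.roll(face, 1))

        n, c, steps = 200, 0.2, 500
        x = np.linspace(0.0, 1.0, n, endpoint=False)
        phi = np.exp(-200.0 * (x - 0.3) ** 2)   # tracer pulse

        for _ in range(steps):                  # SSP-RK3 keeps the scheme stable
            p1 = phi + rhs(phi, c)
            p2 = 0.75 * phi + 0.25 * (p1 + rhs(p1, c))
            phi = phi / 3.0 + 2.0 / 3.0 * (p2 + rhs(p2, c))

        # Exact answer: the pulse advected by c*steps/n = 0.5 of the domain (periodic).
        d = (x - 0.3 - c * steps / n + 0.5) % 1.0 - 0.5
        print(f"max error vs exact: {np.abs(phi - np.exp(-200.0 * d ** 2)).max():.4f}")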

  5. Numerical Investigation of Dual-Mode Scramjet Combustor with Large Upstream Interaction

    NASA Technical Reports Server (NTRS)

    Mohieldin, T. O.; Tiwari, S. N.; Reubush, David E. (Technical Monitor)

    2004-01-01

    A dual-mode scramjet combustor configuration with significant upstream interaction is investigated numerically. The possibility of scaling the domain to accelerate convergence and reduce the computational time is explored. The supersonic combustor configuration was selected to provide an understanding of key features of upstream interaction and to identify physical and numerical issues relating to the modeling of dual-mode configurations. The numerical analysis was performed with vitiated air at a freestream Mach number of 2.5 using hydrogen as the sonic injectant. Results are presented for two-dimensional models and a three-dimensional jet-to-jet symmetric geometry. Comparisons are made with experimental results. Two-dimensional and three-dimensional results show a substantial oblique shock train reaching upstream of the fuel injectors. Flow characteristics slow numerical convergence, while the upstream interaction slowly increases with further iterations. As the flow field develops, the symmetry assumption breaks down. A large separation zone develops and extends further upstream of the step. This asymmetric flow structure is not seen in the experimental data. Results obtained using a sub-scale domain (both two-dimensional and three-dimensional) qualitatively recover the flow physics obtained from full-scale simulations. All results show that numerical modeling using a scaled geometry provides good agreement with full-scale numerical results and experimental results for this configuration. This study supports the argument that numerical scaling is useful in simulating dual-mode scramjet combustor flowfields and could provide an excellent convergence acceleration technique for dual-mode simulations.

  6. What FIREs Up Star Formation: the Emergence of the Kennicutt-Schmidt Law from Feedback

    NASA Astrophysics Data System (ADS)

    Orr, Matthew E.; Hayward, Christopher C.; Hopkins, Philip F.; Chan, T. K.; Faucher-Giguère, Claude-André; Feldmann, Robert; Kereš, Dušan; Murray, Norman; Quataert, Eliot

    2018-05-01

    We present an analysis of the global and spatially-resolved Kennicutt-Schmidt (KS) star formation relation in the FIRE (Feedback In Realistic Environments) suite of cosmological simulations, including halos with z = 0 masses ranging from 10^10 to 10^13 M⊙. We show that the KS relation emerges and is robustly maintained due to the effects of feedback on local scales regulating star-forming gas, independent of the particular small-scale star formation prescriptions employed. We demonstrate that the time-averaged KS relation is relatively independent of redshift and spatial averaging scale, and that the star formation rate surface density is weakly dependent on metallicity and inversely dependent on orbital dynamical time. At constant star formation rate surface density, the 'Cold & Dense' gas surface density (gas with T < 300 K and n > 10 cm^-3, used as a proxy for the molecular gas surface density) of the simulated galaxies is ~0.5 dex less than observed at ~kpc scales. This discrepancy may arise from underestimates of the local column density at the particle scale for the purposes of shielding in the simulations. Finally, we show that on scales larger than individual giant molecular clouds, the primary condition that determines whether star formation occurs is whether a patch of the galactic disk is thermally Toomre-unstable (not whether it is self-shielding): once a patch can no longer be thermally stabilized against fragmentation, it collapses, becomes self-shielding, cools, and forms stars, regardless of epoch or environment.
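    The Toomre criterion invoked above is a one-line computation, Q = kappa * c_s / (pi * G * Sigma); the sketch below evaluates it for illustrative (not FIRE-derived) disk-patch values.

        import numpy as np

        G = 6.674e-11                      # m^3 kg^-1 s^-2
        M_sun, pc, kpc = 1.989e30, 3.086e16, 3.086e19

        kappa = 36.0e3 / kpc               # epicyclic frequency ~36 km/s/kpc, in s^-1
        c_s = 8.0e3                        # warm-gas sound speed, m/s

        for sigma_msun_pc2 in (5.0, 20.0, 80.0):
            Sigma = sigma_msun_pc2 * M_sun / pc ** 2
            Q = kappa * c_s / (np.pi * G * Sigma)
            state = "stable" if Q > 1 else "unstable -> collapses, self-shields, forms stars"
            print(f"Sigma = {sigma_msun_pc2:5.1f} Msun/pc^2: Q = {Q:5.2f} ({state})")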

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM-simulated precipitation and clouds. Gridded large-scale forcing data from the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site are used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. As a result, other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.

  8. New Challenges in Computational Thermal Hydraulics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yadigaroglu, George; Lakehal, Djamel

    New needs and opportunities drive the development of novel computational methods for the design and safety analysis of light water reactors (LWRs). Some new methods are likely to be three dimensional. Coupling is expected between system codes, computational fluid dynamics (CFD) modules, and cascades of computations at scales ranging from the macro- or system scale to the micro- or turbulence scales, with the various levels continuously exchanging information back and forth. The ISP-42/PANDA and the international SETH project provide opportunities for testing applications of single-phase CFD methods to LWR safety problems. Although industrial single-phase CFD applications are commonplace, computational multifluid dynamics is still under development. However, first applications are appearing; the state of the art and its potential uses are discussed. The case study of condensation of steam/air mixtures injected from a downward-facing vent into a pool of water is a perfect illustration of a simulation cascade: At the top of the hierarchy of scales, system behavior can be modeled with a system code; at the central level, the volume-of-fluid method can be applied to predict large-scale bubbling behavior; at the bottom of the cascade, direct-contact condensation can be treated with direct numerical simulation, in which turbulent flow (in both the gas and the liquid), interfacial dynamics, and heat/mass transfer are directly simulated without resorting to models.

  9. An evaluation of noise reduction algorithms for particle-based fluid simulations in multi-scale applications

    NASA Astrophysics Data System (ADS)

    Zimoń, M. J.; Prosser, R.; Emerson, D. R.; Borg, M. K.; Bray, D. J.; Grinberg, L.; Reese, J. M.

    2016-11-01

    Filtering of particle-based simulation data can lead to reduced computational costs and enable more efficient information transfer in multi-scale modelling. This paper compares the effectiveness of various signal processing methods to reduce numerical noise and capture the structures of nano-flow systems. In addition, a novel combination of these algorithms is introduced, showing the potential of hybrid strategies to improve further the de-noising performance for time-dependent measurements. The methods were tested on velocity and density fields, obtained from simulations performed with molecular dynamics and dissipative particle dynamics. Comparisons between the algorithms are given in terms of performance, quality of the results and sensitivity to the choice of input parameters. The results provide useful insights on strategies for the analysis of particle-based data and the reduction of computational costs in obtaining ensemble solutions.
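    In the same spirit, a small Python comparison of two standard de-noising filters and their sequential ("hybrid") combination on a synthetic bin-averaged profile; the paper evaluates a broader set of algorithms on actual MD and DPD fields, so this is only a schematic stand-in.

        import numpy as np
        from scipy.signal import savgol_filter

        rng = np.random.default_rng(6)

        # Synthetic "particle-based" measurement: smooth profile + thermal noise.
        x = np.linspace(0, 1, 400)
        clean = np.tanh((x - 0.5) / 0.1)          # e.g. a shear-layer velocity profile
        noisy = clean + rng.normal(0, 0.25, x.size)

        window = np.ones(15) / 15
        moving_avg = np.convolve(noisy, window, mode='same')
        savgol = savgol_filter(noisy, window_length=31, polyorder=3)
        # Hybrid strategy: apply the two filters in sequence.
        hybrid = savgol_filter(np.convolve(noisy, window, mode='same'),
                               window_length=31, polyorder=3)

        for name, est in [("moving average", moving_avg),
                          ("Savitzky-Golay", savgol),
                          ("hybrid", hybrid)]:
            rmse = np.sqrt(np.mean((est[20:-20] - clean[20:-20]) ** 2))  # ignore edges
            print(f"{name:>15}: RMSE = {rmse:.3f}")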

  10. Temporal scaling and spatial statistical analyses of groundwater level fluctuations

    NASA Astrophysics Data System (ADS)

    Sun, H.; Yuan, L., Sr.; Zhang, Y.

    2017-12-01

    Natural dynamics such as groundwater level fluctuations can exhibit multifractionality and/or multifractality, due likely to multi-scale aquifer heterogeneity and controlling factors, whose statistics require efficient quantification methods. This study explores multifractionality and non-Gaussian properties in groundwater dynamics, expressed by time series of daily level fluctuations at three wells located in the lower Mississippi valley, after removing the seasonal cycle, in both the temporal scaling and the spatial statistical analysis. First, using time-scale multifractional analysis, a systematic statistical method is developed to analyze groundwater level fluctuations quantified by the time-scale local Hurst exponent (TS-LHE). Results show that the TS-LHE does not remain constant, implying fractal-scaling behavior that changes with time and location. Hence, we can distinguish a potentially location-dependent scaling feature, which may characterize the dynamics of the hydrologic system. Second, spatial statistical analysis shows that the increments of groundwater level fluctuations exhibit a heavy-tailed, non-Gaussian distribution, which can be better quantified by a Lévy stable distribution. Monte Carlo simulations of the fluctuation process also show that the linear fractional stable motion model can well depict the transient dynamics (i.e., the fractal, non-Gaussian property) of groundwater levels, while fractional Brownian motion is inadequate to describe natural processes with anomalous dynamics. Analysis of temporal scaling and spatial statistics may therefore provide useful information and quantification for further understanding the nature of complex dynamics in hydrology.
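    A global version of the scaling estimate can be sketched with detrended fluctuation analysis; applying the same function over sliding windows would give a time-varying local exponent in the spirit of the TS-LHE. The series here is synthetic white noise, for which the expected exponent is about 0.5; groundwater records would replace it.

        import numpy as np

        rng = np.random.default_rng(7)

        def dfa_exponent(series, scales):
            """Detrended fluctuation analysis: the slope of log F(s) vs log s
            estimates the (mono)fractal scaling exponent."""
            profile = np.cumsum(series - series.mean())
            F = []
            for s in scales:
                n_seg = len(profile) // s
                segs = profile[:n_seg * s].reshape(n_seg, s)
                t = np.arange(s)
                rms = []
                for seg in segs:
                    coef = np.polyfit(t, seg, 1)              # linear detrend per window
                    rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
                F.append(np.sqrt(np.mean(rms)))
            slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
            return slope

        noise = rng.normal(size=20000)
        scales = np.array([16, 32, 64, 128, 256, 512])
        print(f"estimated exponent for white noise: {dfa_exponent(noise, scales):.2f}")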

  11. Environmental performance evaluation of large-scale municipal solid waste incinerators using data envelopment analysis.

    PubMed

    Chen, Ho-Wen; Chang, Ni-Bin; Chen, Jeng-Chung; Tsai, Shu-Ju

    2010-07-01

    Because land resources are insufficient for landfilling, incinerators are considered in many countries, such as Japan and Germany, to be the major technology in a waste management scheme capable of dealing with the increasing demand for municipal and industrial solid waste treatment in urban regions. The evaluation of these municipal incinerators in terms of secondary pollution potential, cost-effectiveness, and operational efficiency has become a new focus in the highly interdisciplinary area of production economics, systems analysis, and waste management. This paper demonstrates the application of data envelopment analysis (DEA)--a production economics tool--to evaluate performance-based efficiencies of 19 large-scale municipal incinerators in Taiwan with different operational conditions. A 4-year operational data set from 2002 to 2005 was collected in support of DEA modeling using Monte Carlo simulation to outline the possibility distributions of operational efficiency of these incinerators. Uncertainty analysis using the Monte Carlo simulation provides a balance between simplification of our analysis and the soundness of capturing the essential random features that complicate solid waste management systems. To cope with future challenges, efforts in DEA modeling, systems analysis, and prediction of the performance of large-scale municipal solid waste incinerators under normal operation and special conditions were directed toward generating a compromise assessment procedure. Our research findings will eventually lead to the identification of optimal management strategies for promoting the quality of solid waste incineration, not only in Taiwan but also elsewhere in the world. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
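    The DEA building block is a small linear program per decision-making unit (DMU). The sketch below solves the input-oriented CCR envelopment form with scipy for toy incinerator data; the paper's Monte Carlo layer would re-solve such programs many times over sampled data. All numbers are invented for illustration.

        import numpy as np
        from scipy.optimize import linprog

        # Toy data: 6 incinerators, inputs = (cost, labor), output = (waste treated).
        X = np.array([[8, 5], [10, 4], [6, 7], [12, 8], [7, 6], [9, 9]], float).T
        Y = np.array([[900], [950], [700], [1100], [850], [800]], float).T
        m, n = X.shape
        s = Y.shape[0]

        def ccr_efficiency(j):
            """Input-oriented CCR efficiency of DMU j via the envelopment LP:
            min theta  s.t.  X@lam <= theta*x_j,  Y@lam >= y_j,  lam >= 0."""
            c = np.r_[1.0, np.zeros(n)]                     # variables: [theta, lam]
            A_ub = np.vstack([
                np.hstack([-X[:, [j]], X]),                 # X@lam - theta*x_j <= 0
                np.hstack([np.zeros((s, 1)), -Y]),          # -Y@lam <= -y_j
            ])
            b_ub = np.r_[np.zeros(m), -Y[:, j]]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
            return res.fun

        for j in range(n):
            print(f"DMU {j}: efficiency = {ccr_efficiency(j):.3f}")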

  12. Optimal spinneret layout in Von Koch curves of fractal theory based needleless electrospinning process

    NASA Astrophysics Data System (ADS)

    Yang, Wenxiu; Liu, Yanbo; Zhang, Ligai; Cao, Hong; Wang, Yang; Yao, Jinbo

    2016-06-01

    Needleless electrospinning technology is considered a better avenue for producing nanofibrous materials at large scale, and electric field intensity and its distribution play an important role in controlling nanofiber diameter and the quality of the nanofibrous web during electrospinning. In the current study, a novel needleless electrospinning method was proposed based on Von Koch curves from fractal theory; simulation and analysis of the electric field intensity and distribution in the new electrospinning process were performed with the finite element analysis software Comsol Multiphysics 4.4, based on linear and nonlinear Von Koch fractal curves (hereafter called fractal models). The simulation and analysis indicated that the second-level fractal structure is the optimal linear electrospinning spinneret in terms of field intensity and uniformity. Further simulation and analysis showed that the circular type of fractal spinneret has better field intensity and distribution than the spiral type in the nonlinear fractal electrospinning technology. An electrospinning apparatus with the optimal Von Koch fractal spinneret was set up to verify the theoretical results from the Comsol simulations, achieving more uniform electric field distribution and lower energy cost compared to current needle and needleless electrospinning technologies.

  13. Capturing readiness to learn and collaboration as explored with an interprofessional simulation scenario: A mixed-methods research study.

    PubMed

    Rossler, Kelly L; Kimble, Laura P

    2016-01-01

    Didactic lecture does not lend itself to teaching interprofessional collaboration. High-fidelity human patient simulation, with a focus on clinical situations and scenarios, is highly conducive to interprofessional education. Consequently, a need exists for research supporting the incorporation of interprofessional education with high-fidelity patient simulation-based technology. The purpose of this study was to explore readiness for interprofessional learning and collaboration among pre-licensure health professions students participating in an interprofessional education human patient simulation experience. Using a mixed-methods convergent parallel design, a sample of 53 pre-licensure health professions students enrolled in nursing, respiratory therapy, health administration, and physical therapy programs within a college of health professions participated in high-fidelity human patient simulation experiences. Perceptions of interprofessional learning and collaboration were measured with the revised Readiness for Interprofessional Learning Scale (RIPLS) and the Health Professional Collaboration Scale (HPCS). Focus groups were conducted during the simulation post-briefing to obtain qualitative data. Statistical analysis included non-parametric inferential statistics. Qualitative data were analyzed using a phenomenological approach. Pre- and post-simulation RIPLS results demonstrated that pre-licensure health professions students reported significantly more positive attitudes about readiness for interprofessional learning post-simulation in the areas of teamwork and collaboration, negative professional identity, and positive professional identity. Post-simulation HPCS results revealed that pre-licensure nursing and health administration groups reported greater health collaboration during simulation than physical therapy students. Qualitative analysis yielded three themes: "exposure to experiential learning," "acquisition of interactional relationships," and "presence of chronology in role preparation." Quantitative and qualitative data converged around the finding that physical therapy students had less positive perceptions of the experience because they viewed physical therapy practice as occurring one-on-one rather than in groups. Findings support that pre-licensure students are ready to engage in interprofessional education through exposure to an experiential format such as high-fidelity human patient simulation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Oligopolistic competition in wholesale electricity markets: Large-scale simulation and policy analysis using complementarity models

    NASA Astrophysics Data System (ADS)

    Helman, E. Udi

    This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using a DC load flow approximation). Chapter 9 shows the price results. In contrast to prior market power simulations of these markets, much greater variability in price-cost margins is found when using a realistic model of hourly conditions on such a large network. Chapter 10 shows that the conventional concentration indices (HHIs) are poorly correlated with PCMs. Finally, Chapter 11 proposes that the simulation models are applied to merger analysis and provides two large-scale merger examples. (Abstract shortened by UMI.)
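    Stripped of the network, transmission, and arbitrage structure, the Cournot equilibrium at a single node reduces to a fixed point of best responses; the sketch below recovers it by iteration for a linear inverse demand curve and three hypothetical generators. The dissertation's complementarity (LCP) models generalize exactly this condition to a full network.

        # Minimal Cournot equilibrium by best-response iteration, assuming a single
        # node with inverse demand P(Q) = a - b*Q and constant marginal costs.
        a, b = 100.0, 0.1           # demand intercept ($/MWh) and slope
        costs = [20.0, 25.0, 30.0]  # marginal cost of each Cournot generator
        q = [0.0] * len(costs)

        for _ in range(200):        # iterate best responses to a fixed point
            for i, ci in enumerate(costs):
                q_others = sum(q) - q[i]
                q[i] = max(0.0, (a - ci - b * q_others) / (2 * b))

        price = a - b * sum(q)
        print([round(x, 1) for x in q], f"P = {price:.2f}",
              "price-cost margins:", [round((price - c) / price, 2) for c in costs])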

  15. Time-and-Spatially Adapting Simulations for Efficient Dynamic Stall Predictions

    DTIC Science & Technology

    2015-09-01

    [Recoverable fragments only] Cites "Experimental Investigation and Fundamental Understanding of a Full-Scale Slowed Rotor at High Advance Ratios," Journal of the American Helicopter Society. "... remains a major roadblock in the design and analysis of conventional rotors as well as new concepts for future vertical lift. Several approaches to reduce the cost of these dynamic stall simulations for ..."

  16. Advances and trends in computational structural mechanics

    NASA Technical Reports Server (NTRS)

    Noor, A. K.

    1986-01-01

    Recent developments in computational structural mechanics are reviewed with reference to computational needs for future structures technology, advances in computational models for material behavior, discrete element technology, assessment and control of numerical simulations of structural response, hybrid analysis, and techniques for large-scale optimization. Research areas in computational structural mechanics which have high potential for meeting future technological needs are identified. These include prediction and analysis of the failure of structural components made of new materials, development of computational strategies and solution methodologies for large-scale structural calculations, and assessment of reliability and adaptive improvement of response predictions.

  17. A two-scale model for dynamic damage evolution

    NASA Astrophysics Data System (ADS)

    Keita, Oumar; Dascalu, Cristian; François, Bertrand

    2014-03-01

    This paper presents a new micro-mechanical damage model accounting for inertial effects. The two-scale damage model is fully deduced from small-scale descriptions of dynamic micro-crack propagation under tensile loading (mode I). An appropriate micro-mechanical energy analysis is combined with homogenization based on asymptotic developments in order to obtain the macroscopic evolution law for damage. Numerical simulations are presented in order to illustrate the ability of the model to describe known behaviors such as size effects in the structural response, strain-rate sensitivity, the brittle-ductile transition, and wave dispersion.

  18. Energy dispersive X-ray analysis on an absolute scale in scanning transmission electron microscopy.

    PubMed

    Chen, Z; D'Alfonso, A J; Weyland, M; Taplin, D J; Allen, L J; Findlay, S D

    2015-10-01

    We demonstrate absolute scale agreement between the number of X-ray counts in energy dispersive X-ray spectroscopy using an atomic-scale coherent electron probe and first-principles simulations. Scan-averaged spectra were collected across a range of thicknesses with precisely determined and controlled microscope parameters. Ionization cross-sections were calculated using the quantum excitation of phonons model, incorporating dynamical (multiple) electron scattering, which is seen to be important even for very thin specimens. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Assessing self-care and social function using a computer adaptive testing version of the pediatric evaluation of disability inventory.

    PubMed

    Coster, Wendy J; Haley, Stephen M; Ni, Pengsheng; Dumas, Helene M; Fragala-Pinkham, Maria A

    2008-04-01

    To examine score agreement, validity, precision, and response burden of a prototype computer adaptive testing (CAT) version of the self-care and social function scales of the Pediatric Evaluation of Disability Inventory compared with the full-length version of these scales. Computer simulation analysis of cross-sectional and longitudinal retrospective data; cross-sectional prospective study. Pediatric rehabilitation hospital, including inpatient acute rehabilitation, day school program, outpatient clinics; community-based day care, preschool, and children's homes. Children with disabilities (n=469) and 412 children with no disabilities (analytic sample); 38 children with disabilities and 35 children without disabilities (cross-validation sample). Not applicable. Summary scores from prototype CAT applications of each scale using 15-, 10-, and 5-item stopping rules; scores from the full-length self-care and social function scales; time (in seconds) to complete assessments and respondent ratings of burden. Scores from both computer simulations and field administration of the prototype CATs were highly consistent with scores from full-length administration (r range, .94-.99). Using computer simulation of retrospective data, discriminant validity, and sensitivity to change of the CATs closely approximated that of the full-length scales, especially when the 15- and 10-item stopping rules were applied. In the cross-validation study the time to administer both CATs was 4 minutes, compared with over 16 minutes to complete the full-length scales. Self-care and social function score estimates from CAT administration are highly comparable with those obtained from full-length scale administration, with small losses in validity and precision and substantial decreases in administration time.
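    The CAT logic with item-count stopping rules can be sketched as follows, assuming a Rasch (one-parameter logistic) item bank, maximum-information item selection, and a grid-search maximum-likelihood ability estimate. The actual PEDI CAT uses a calibrated item bank rather than these synthetic difficulties, so everything below is a stand-in.

        import numpy as np

        rng = np.random.default_rng(8)
        bank_b = np.linspace(-3, 3, 60)   # synthetic Rasch item difficulties
        true_theta = 1.2                  # simulee's true functional level

        def prob(theta, b):
            return 1.0 / (1.0 + np.exp(-(theta - b)))

        def mle_theta(resp, bs, grid=np.linspace(-4, 4, 401)):
            """Grid-search maximum-likelihood ability estimate."""
            p = prob(grid[:, None], np.asarray(bs)[None, :])
            ll = np.sum(np.where(np.asarray(resp), np.log(p), np.log1p(-p)), axis=1)
            return grid[np.argmax(ll)]

        theta, used, resp, bs = 0.0, set(), [], []
        for step in range(15):                       # 15-item stopping rule
            info = prob(theta, bank_b) * (1 - prob(theta, bank_b))
            info[list(used)] = -1.0                  # never reuse an item
            j = int(np.argmax(info))                 # most informative item at current theta
            used.add(j)
            resp.append(bool(rng.random() < prob(true_theta, bank_b[j])))  # simulated answer
            bs.append(bank_b[j])
            theta = mle_theta(resp, bs)
            if step + 1 in (5, 10, 15):              # the paper's three stopping rules
                print(f"after {step + 1:2d} items: theta = {theta:+.2f}")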

  1. Information Security Analysis Using Game Theory and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlicher, Bob G; Abercrombie, Robert K

    Information security analysis can be performed using game theory implemented in dynamic simulations of agent-based models (ABMs). Such simulations can be verified against the results of game-theoretic analysis and then used to explore larger scale, real-world scenarios involving multiple attackers, defenders, and information assets. Our approach addresses imperfect information and scalability, which allows us to overcome limitations of current stochastic game models. Such models consider only perfect information, assuming that the defender is always able to detect attacks; they assume that the state transition probabilities are fixed before the game and that the players' actions are always synchronous; and most are not scalable to the size and complexity of the systems under consideration. Our use of ABMs yields results of selected experiments that demonstrate our proposed approach and provides a quantitative measure for realistic information systems and their related security scenarios.
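    At the core of such an analysis sits an attacker-defender game; a minimal zero-sum instance can be solved as a linear program for the defender's worst-case-optimal mixed strategy, as sketched below with a hypothetical 2x2 loss matrix. The ABM layer described above then relaxes the perfect-information and synchronous-action assumptions that this closed-form treatment requires.

        import numpy as np
        from scipy.optimize import linprog

        # Toy zero-sum game: rows = defender actions (patch, monitor),
        # cols = attacker actions (exploit, phish); entries = defender's loss.
        L = np.array([[2.0, 6.0],
                      [5.0, 1.0]])

        # Defender minimizes worst-case expected loss. Variables: [p0, p1, v];
        # minimize v subject to (L^T p)_j <= v for every attacker action j.
        c = np.array([0.0, 0.0, 1.0])
        A_ub = np.hstack([L.T, -np.ones((2, 1))])   # L^T p - v <= 0
        b_ub = np.zeros(2)
        A_eq = np.array([[1.0, 1.0, 0.0]])          # p sums to 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, 1), (0, 1), (None, None)])
        p, v = res.x[:2], res.x[2]
        print(f"defender mix = {p.round(3)}, game value (expected loss) = {v:.2f}")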

  2. Mutated form (G52E) of inactive diphtheria toxin CRM197: molecular simulations clearly display effect of the mutation to NAD binding.

    PubMed

    Salmas, Ramin Ekhteiari; Mestanoglu, Mert; Unlu, Ayhan; Yurtsever, Mine; Durdagi, Serdar

    2016-11-01

    The mutated form (G52E) of diphtheria toxin (DT), CRM197, is an inactive and nontoxic enzyme. Here, we provide molecular insight, using comparative molecular dynamics (MD) simulations, into the influence of this single point mutation on the overall protein and on the active-site loop. Post-processing MD analyses (i.e. stability, principal component analysis, hydrogen-bond occupancy, etc.) were carried out on both the wild-type and mutated targets to investigate, and better understand, the mechanistic differences in structural and dynamical properties on an atomic scale, especially at the nicotinamide adenine dinucleotide (NAD) binding site, when the single mutation (G52E) occurs in DT. In addition, docking simulations were performed for the wild-type and mutated forms. The docking scores and docking poses revealed that the mutant form is not able to properly accommodate the NAD molecule.

  3. Use of simulated evaporation to assess the potential for scale formation during reverse osmosis desalination

    USGS Publications Warehouse

    Huff, G.F.

    2004-01-01

    The tendency of solutes in input water to precipitate efficiency-lowering scale deposits on the membranes of reverse osmosis (RO) desalination systems is an important factor in determining the suitability of input water for desalination. Simulated evaporation of the input water can be used as a technique to quantitatively assess the potential for scale formation in RO desalination systems. The technique was demonstrated by simulating the increase in solute concentrations required to form calcite, gypsum, and amorphous silica scales at 25 °C and 40 °C from 23 desalination input waters taken from the literature. Simulation results could be used to quantitatively assess the potential of a given input water to form scale or to compare the potential of a number of input waters to form scale during RO desalination. Simulated evaporation of input waters cannot accurately predict the conditions under which scale will form, owing to the effects of potentially stable supersaturated solutions, solution velocity, and residence time inside RO systems. However, the simulated scale-forming potential of proposed input waters could be compared with the simulated scale-forming potentials and actual scale-forming properties of input waters having documented operational histories in RO systems. This may provide a technique to estimate the actual performance and suitability of proposed input waters during RO.
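    The underlying calculation tracks a saturation index under progressive concentration of the solutes: SI = log10(IAP/Ksp), with SI > 0 indicating a scale-forming tendency. The sketch below does this for gypsum with illustrative concentrations, ignoring the activity coefficients and coupled reactions that a geochemical code (e.g. PHREEQC) would handle.

        import numpy as np

        K_sp = 10 ** -4.58        # gypsum solubility product near 25 °C
        ca0, so40 = 2e-3, 5e-3    # illustrative starting molalities, mol/kg

        for cf in (1, 2, 5, 10, 20):   # concentration factor from removing water
            si = np.log10((ca0 * cf) * (so40 * cf) / K_sp)   # SI = log10(IAP/Ksp)
            flag = "scale-forming" if si > 0 else "undersaturated"
            print(f"CF {cf:2d}: SI_gypsum = {si:+.2f} ({flag})")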

  5. Core analysis of heterogeneous rocks using experimental observations and digital whole core simulation

    NASA Astrophysics Data System (ADS)

    Jackson, S. J.; Krevor, S. C.; Agada, S.

    2017-12-01

    A number of studies have demonstrated the prevalent impact that small-scale rock heterogeneity can have on larger-scale flow in multiphase flow systems, including petroleum production and CO2 sequestration. Larger-scale modeling has shown that this has a significant impact on fluid flow and is possibly a significant source of inaccuracy in reservoir simulation. Yet no core analysis protocol has been developed that faithfully represents the impact of these heterogeneities on the flow functions used in modeling. Relative permeability is derived from core floods performed at conditions with high flow potential, in which the impact of capillary heterogeneity is nullified. A more accurate representation would be obtained if measurements were made at flow conditions where the impact of capillary heterogeneity on flow is scaled to be representative of the reservoir system. This, however, is generally impractical due to laboratory constraints and the role of the orientation of the rock heterogeneity. We demonstrate a workflow of combined observations and simulations, in which the impact of capillary heterogeneity may be faithfully represented in the derivation of upscaled flow properties. Laboratory measurements that are a variation of conventional protocols are used for the parameterization of an accurate digital rock model for simulation. The relative permeability at the range of capillary numbers relevant to flow in the reservoir is derived primarily from numerical simulations of core floods that include capillary pressure heterogeneity. This allows flexibility in the orientation of the heterogeneity and in the range of flow rates considered. We demonstrate the approach in which digital rock models have been developed alongside core flood observations for three applications: (1) a Bentheimer sandstone with a simple axial heterogeneity to demonstrate the validity and limitations of the approach, (2) a set of reservoir rocks from the Captain sandstone in the UK North Sea targeted for CO2 storage, and for which the use of capillary pressure hysteresis is necessary, and (3) a secondary CO2-EOR production of residual oil from a Berea sandstone with layered heterogeneities. In all cases the incorporation of heterogeneity is shown to be key to the ultimate derivation of flow properties representative of the reservoir system.

  6. The MICE grand challenge lightcone simulation - I. Dark matter clustering

    NASA Astrophysics Data System (ADS)

    Fosalba, P.; Crocce, M.; Gaztañaga, E.; Castander, F. J.

    2015-04-01

    We present a new N-body simulation from the Marenostrum Institut de Ciències de l'Espai (MICE) collaboration, the MICE Grand Challenge (MICE-GC), containing about 70 billion dark matter particles in a (3 Gpc h⁻¹)³ comoving volume. Given its large volume and fine spatial resolution, spanning over five orders of magnitude in dynamic range, it allows an accurate modelling of the growth of structure in the universe from the linear through the highly non-linear regime of gravitational clustering. We validate the dark matter simulation outputs using 3D and 2D clustering statistics, and discuss mass-resolution effects in the non-linear regime by comparing to previous simulations and the latest numerical fits. We show that the MICE-GC run allows for a measurement of the BAO feature with per cent level accuracy and compare it to state-of-the-art theoretical models. We also use sub-arcmin resolution pixelized 2D maps of the dark matter counts in the lightcone to make tomographic analyses in real and redshift space. Our analysis shows the simulation reproduces the Kaiser effect on large scales, whereas we find a significant suppression of power on non-linear scales relative to the real space clustering. We complete our validation by presenting an analysis of the three-point correlation function in this and previous MICE simulations, finding further evidence for mass-resolution effects. This is the first of a series of three papers in which we present the MICE-GC simulation, along with a wide and deep mock galaxy catalogue built from it. This mock is made publicly available through a dedicated web portal, http://cosmohub.pic.es.
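
    As a pointer to how clustering statistics like those used for validation are computed, the following sketch estimates the 3D two-point correlation function with the Landy-Szalay estimator on a toy point set. The real analysis involves ~7x10^10 particles and specialized codes; the sizes and binning below are illustrative only.

```python
import numpy as np
from scipy.spatial import cKDTree

# Landy-Szalay estimator xi(r) = (DD - 2DR + RR) / RR on toy data.
rng = np.random.default_rng(42)
box = 100.0
data = rng.uniform(0, box, size=(2000, 3))    # stand-in "particles"
rand = rng.uniform(0, box, size=(4000, 3))    # unclustered comparison set

edges = np.linspace(1.0, 20.0, 11)            # separation bins
t_d, t_r = cKDTree(data), cKDTree(rand)

def pair_counts(tree_a, tree_b, edges):
    cum = tree_a.count_neighbors(tree_b, edges)   # cumulative pair counts
    return np.diff(cum).astype(float)

nd, nr = len(data), len(rand)
dd = pair_counts(t_d, t_d, edges) / (nd * (nd - 1))
rr = pair_counts(t_r, t_r, edges) / (nr * (nr - 1))
dr = pair_counts(t_d, t_r, edges) / (nd * nr)

xi = (dd - 2 * dr + rr) / rr
print(np.round(xi, 3))   # ~0 in every bin for this unclustered toy input
```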

  7. Analysis of the Flicker Level Produced by a Fixed-Speed Wind Turbine

    NASA Astrophysics Data System (ADS)

    Suppioni, Vinicius; P. Grilo, Ahda

    2013-10-01

    In this article, the analysis of the flicker emission during continuous operation of a mid-scale fixed-speed wind turbine connected to a distribution system is presented. Flicker emission is investigated based on simulation results, and the dependence of flicker emission on short-circuit capacity, grid impedance angle, mean wind speed, and wind turbulence is analyzed. The simulations were conducted in different programs in order to provide a more realistic wind emulation and a detailed model of the mechanical and electrical components of the wind turbine. This aim is accomplished by using FAST (Fatigue, Aerodynamics, Structures, and Turbulence) to simulate the mechanical parts of the wind turbine, Simulink/MATLAB to simulate the electrical system, and TurbSim to obtain the wind model. The results show that, even for a small wind generator, the flicker level can limit the wind power capacity installed in a distribution system.

  8. Detecting vortices in superconductors: Extracting one-dimensional topological singularities from a discretized complex scalar field

    DOE PAGES

    Phillips, Carolyn L.; Peterka, Tom; Karpeyev, Dmitry; ...

    2015-02-20

    In type II superconductors, the dynamics of superconducting vortices determine their transport properties. In the Ginzburg-Landau theory, vortices correspond to topological defects in the complex order parameter. Extracting their precise positions and motion from discretized numerical simulation data is an important, but challenging, task. In the past, vortices have mostly been detected by analyzing the magnitude of the complex scalar field representing the order parameter and visualized by corresponding contour plots and isosurfaces. However, these methods, primarily used for small-scale simulations, blur the fine details of the vortices, scale poorly to large-scale simulations, and do not easily enable isolating and tracking individual vortices. In this paper, we present a method for exactly finding the vortex core lines from a complex order parameter field. With this method, vortices can be easily described at a resolution even finer than the mesh itself. The precise determination of the vortex cores allows the interplay of the vortices inside a model superconductor to be visualized in higher resolution than has previously been possible. Finally, by representing the field as the set of vortices, this method also massively reduces the data footprint of the simulations and provides the data structures for further analysis and feature tracking.
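
    The essential idea, finding where the phase of the order parameter winds by ±2π, can be shown in a few lines. The sketch below flags grid plaquettes pierced by a vortex line in a 2D slice; the paper's method goes further (sub-grid interpolation of core positions and core-line tracking in 3D), so this is only the starting point.

```python
import numpy as np

def plaquette_winding(psi):
    """psi: 2D complex order parameter; returns integer winding per plaquette."""
    theta = np.angle(psi)

    def dphase(a, b):
        return np.angle(np.exp(1j * (b - a)))   # wrap difference to (-pi, pi]

    d1 = dphase(theta[:-1, :-1], theta[:-1, 1:])   # bottom edge
    d2 = dphase(theta[:-1, 1:],  theta[1:, 1:])    # right edge
    d3 = dphase(theta[1:, 1:],   theta[1:, :-1])   # top edge
    d4 = dphase(theta[1:, :-1],  theta[:-1, :-1])  # left edge
    return np.rint((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)

# Analytic test field: a single vortex at the domain center.
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
psi = (x + 1j * y) / np.abs(x + 1j * y + 1e-12)
print("total winding:", plaquette_winding(psi).sum())   # -> 1 vortex
```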

  9. From crater functions to partial differential equations: a new approach to ion bombardment induced nonequilibrium pattern formation.

    PubMed

    Norris, Scott A; Brenner, Michael P; Aziz, Michael J

    2009-06-03

    We develop a methodology for deriving continuum partial differential equations for the evolution of large-scale surface morphology directly from molecular dynamics simulations of the craters formed from individual ion impacts. Our formalism relies on the separation between the length scale of ion impact and the characteristic scale of pattern formation, and expresses the surface evolution in terms of the moments of the crater function. We demonstrate that the formalism reproduces the classical Bradley-Harper results, as well as ballistic atomic drift, under the appropriate simplifying assumptions. Given an actual set of converged molecular dynamics moments and their derivatives with respect to the incidence angle, our approach can be applied directly to predict the presence and absence of surface morphological instabilities. This analysis represents the first work systematically connecting molecular dynamics simulations of ion bombardment to partial differential equations that govern topographic pattern-forming instabilities.
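
    To illustrate the moment construction, the sketch below numerically computes the zeroth and first moments of a synthetic crater function and an angle-derivative of the first moment, the ingredient that enters the curvature coefficients of the resulting PDE. The Gaussian crater shape and all parameters are invented for illustration and are not from converged MD data.

```python
import numpy as np

# Moments of a toy crater function F(x, y; theta): M0 is the net volume
# change per impact; the angle-derivative of the downrange first moment
# M1x feeds the Bradley-Harper-like instability coefficient.
x = np.linspace(-10, 10, 401)
X, Y = np.meshgrid(x, x)
dA = (x[1] - x[0]) ** 2

def crater(theta):
    dig = -np.exp(-(X**2 + Y**2) / 4.0)                              # central dig
    rim = 0.5 * np.exp(-((X - 3 - 2 * np.sin(theta))**2 + Y**2) / 6.0)  # rim
    return dig + rim

def moments(theta):
    F = crater(theta)
    return F.sum() * dA, (X * F).sum() * dA    # M0, M1x

th, h = np.deg2rad(30.0), 1e-3
m0, m1x = moments(th)
dm1x = (moments(th + h)[1] - moments(th - h)[1]) / (2 * h)   # dM1x/dtheta
print(f"M0 = {m0:.3f}, M1x = {m1x:.3f}, dM1x/dtheta = {dm1x:.3f}")
```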

  10. Wake characteristics of wind turbines in utility-scale wind farms

    NASA Astrophysics Data System (ADS)

    Yang, Xiaolei; Foti, Daniel; Sotiropoulos, Fotis

    2017-11-01

    The dynamics of turbine wakes is affected by turbine operating conditions, ambient atmospheric turbulent flows, and wakes from upwind turbines. Investigations of the wake from a single turbine have been extensively carried out in the literature. Studies on the wake dynamics in utility-scale wind farms are relatively limited. In this work, we employ large-eddy simulation with an actuator surface or actuator line model for turbine blades to investigate the wake dynamics in utility-scale wind farms. Simulations of three wind farms, i.e., the Horns Rev wind farm in Denmark, the Pleasant Valley wind farm in Minnesota, and the Vantage wind farm in Washington, are carried out. The computed power shows good agreement with measurements. Analysis of the wake dynamics in the three wind farms is underway and will be presented at the conference. This work was supported by Xcel Energy (RD4-13). The computational resources were provided by the National Renewable Energy Laboratory.

  11. Characterization of a subwavelength-scale 3D void structure using the FDTD-based confocal laser scanning microscopic image mapping technique.

    PubMed

    Choi, Kyongsik; Chon, James W; Gu, Min; Lee, Byoungho

    2007-08-20

    In this paper, a simple confocal laser scanning microscopic (CLSM) image mapping technique based on finite-difference time-domain (FDTD) calculation is proposed and evaluated for characterization of a subwavelength-scale three-dimensional (3D) void structure fabricated inside a polymer matrix. The FDTD simulation method adopts a focused Gaussian-beam incident wave, Berenger's perfectly matched layer absorbing boundary condition, and the angular spectrum analysis method. Through the well-matched simulation and experimental results for the xz-scanned 3D void structure, we first characterize the exact position and the topological shape factor of the subwavelength-scale void structure, which was fabricated by a tightly focused ultrashort pulse laser. The proposed FDTD-based CLSM image mapping technique can be widely applied, from 3D near-field microscopic imaging, optical trapping, and evanescent-wave phenomena to state-of-the-art bio- and nanophotonics.
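
    For orientation, the skeleton of any FDTD calculation is a leapfrog update of E and H. The 1D sketch below uses a soft Gaussian source and a first-order Mur absorbing boundary as a stand-in for the Berenger PML of the actual (3D, focused-beam) simulation; all sizes are illustrative.

```python
import numpy as np

# Minimal 1D FDTD (Yee leapfrog) in normalized units, Courant number S.
nz, nt, S = 400, 800, 1.0
ez, hy = np.zeros(nz), np.zeros(nz - 1)

for n in range(nt):
    hy += S * np.diff(ez)                     # H update from curl E
    ez_l, ez_r = ez[1], ez[-2]                # save pre-update neighbors
    ez[1:-1] += S * np.diff(hy)               # E update from curl H
    ez[nz // 4] += np.exp(-((n - 60) / 15.0) ** 2)   # soft Gaussian source
    c = (S - 1) / (S + 1)                     # first-order Mur boundaries
    ez[0] = ez_l + c * (ez[1] - ez[0])
    ez[-1] = ez_r + c * (ez[-2] - ez[-1])

print(f"residual field energy after absorption: {np.sum(ez**2):.2e}")
```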

  12. Sensing, Measuring and Modelling the Mechanical Properties of Sandstone

    NASA Astrophysics Data System (ADS)

    Antony, S. J.; Olugbenga, A.; Ozerkan, N. G.

    2018-02-01

    We present a hybrid framework for simulating the strength and dilation characteristics of sandstone. Where possible, the grain-scale properties of sandstone are evaluated experimentally in detail. Also, using photo-stress analysis, we sense the deviator stress (/strain) distribution at the micro-scale and its components along the orthogonal directions on the surface of a V-notch sandstone sample under mechanical loading. Based on this measurement and applying a grain-scale model, the optical anisotropy index K₀ is inferred at the grain scale. This correlated well with the grain contact stiffness ratio K evaluated independently using ultrasound sensors. Thereafter, in addition to other experimentally characterised structural and grain-scale properties of sandstone, K is fed as an input into the discrete element modelling of fracture strength and dilation of the sandstone samples. Physical bulk-scale experiments are also conducted to evaluate the load-displacement relation, dilation and bulk fracture strength characteristics of sandstone samples under compression and shear. A good level of agreement is obtained between the results of the simulations and experiments. The current generic framework could be applied to understand the internal and bulk mechanical properties of such complex opaque and heterogeneous materials more realistically in future.

  13. A novel method of multi-scale simulation of macro-scale deformation and microstructure evolution on metal forming

    NASA Astrophysics Data System (ADS)

    Huang, Shiquan; Yi, Youping; Li, Pengchuan

    2011-05-01

    In recent years, the multi-scale simulation of metal forming has gained significant attention for predicting the whole deformation process and the microstructure evolution of the product. Advances in macro-scale numerical simulation of metal forming are remarkable, and commercial FEM software such as Deform2D/3D has found wide application in the field. However, multi-scale simulation methods have found little application, owing to the non-linearity of microstructure evolution during forming and the difficulty of modeling at the micro-scale level. This work deals with the modeling of microstructure evolution and a new method of multi-scale simulation of the forging process. The aviation material 7050 aluminum alloy is used as an example for modeling microstructure evolution. The corresponding thermal simulation experiments were performed on a Gleeble 1500 machine. The tested specimens were analyzed to model dislocation density and the nucleation and growth of dynamic recrystallization (DRX). A source program using the cellular automaton (CA) method was developed to simulate grain nucleation and growth, in which the change of grain topology caused by the metal deformation was considered. The physical fields at the macro-scale level, such as the temperature, stress, and strain fields, which can be obtained with the commercial software Deform 3D, are coupled with the deformation-stored energy at the micro-scale level through a dislocation model to realize the multi-scale simulation. The method is illustrated by the forging process simulation of an aircraft wheel hub. By coupling the Deform 3D results with the CA results, the forging deformation process and the microstructure evolution at any point of the forging could be simulated. To verify the efficiency of the simulation, aircraft wheel hub forging experiments were carried out in the laboratory, and the comparison of simulation and experimental results is discussed in detail.
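
    The CA step described above can be sketched compactly: cells nucleate new grains where stored energy is high, and recrystallized grains consume deformed neighbors. In the sketch below the stored-energy field is random; in the actual method it would come from the dislocation model driven by the Deform 3D temperature/stress/strain fields. All thresholds and rates are placeholders.

```python
import numpy as np

# Toy cellular automaton for recrystallization nucleation and growth.
rng = np.random.default_rng(3)
N = 100
stored = rng.uniform(0.5, 1.5, size=(N, N))   # stand-in stored energy field
grain = np.zeros((N, N), dtype=int)           # 0 = deformed, >0 = grain id
next_id = 1

for step in range(30):
    # Nucleation: high-stored-energy deformed cells may seed new grains.
    seeds = (grain == 0) & (stored > 1.4) & (rng.random((N, N)) < 0.002)
    for i, j in zip(*np.nonzero(seeds)):
        grain[i, j] = next_id
        next_id += 1
    # Growth: recrystallized neighbors consume deformed cells.
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = np.roll(grain, (di, dj), axis=(0, 1))
        grow = (grain == 0) & (nb > 0) & (rng.random((N, N)) < 0.3)
        grain[grow] = nb[grow]
    stored[grain > 0] = 0.0                   # consumed energy is reset

print(f"recrystallized fraction: {(grain > 0).mean():.2f} "
      f"({next_id - 1} grains nucleated)")
```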

  14. In-Flight Stability Analysis of the X-48B Aircraft

    NASA Technical Reports Server (NTRS)

    Regan, Christopher D.

    2008-01-01

    This report presents the system description, methods, and sample results of the in-flight stability analysis for the X-48B Blended Wing Body Low-Speed Vehicle. The X-48B vehicle is a dynamically scaled, remotely piloted vehicle developed to investigate the low-speed control characteristics of a full-scale blended wing body. Initial envelope clearance was conducted by estimating stability margins from the rigid-aircraft response during flight and comparing them to simulation data. Short-duration multisine signals were commanded onboard to simultaneously excite the primary rigid-body axes. In-flight stability analysis has proven to be a critical component of the initial envelope expansion.
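
    A multisine excitation of the kind mentioned above is simply a sum of sinusoids at distinct frequencies whose phases are chosen to keep the peak input small. The sketch below uses a crude random phase search; the frequencies, duration, and amplitudes are illustrative, not the flight-test values.

```python
import numpy as np

# Generate a low-crest-factor multisine for simultaneous axis excitation.
rng = np.random.default_rng(7)
fs, T = 200.0, 10.0                           # sample rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
freqs = np.array([0.4, 0.9, 1.7, 2.6, 3.8])   # Hz, non-harmonic spacing
amps = np.full(freqs.size, 1.0 / freqs.size)

best_crest, best_u = np.inf, None
for _ in range(200):                          # crude phase optimization
    ph = rng.uniform(0, 2 * np.pi, freqs.size)
    u = (amps[:, None]
         * np.cos(2 * np.pi * freqs[:, None] * t + ph[:, None])).sum(axis=0)
    crest = np.abs(u).max() / np.sqrt((u ** 2).mean())
    if crest < best_crest:
        best_crest, best_u = crest, u

print(f"multisine crest factor after phase search: {best_crest:.2f}")
```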

  15. An Image-based Micro-continuum Pore-scale Model for Gas Transport in Organic-rich Shale

    NASA Astrophysics Data System (ADS)

    Guo, B.; Tchelepi, H.

    2017-12-01

    Gas production from unconventional source rocks, such as ultra-tight shales, has increased significantly over the past decade. However, due to the extremely small pores (~1-100 nm) and the strong material heterogeneity, gas flow in shale is still not well understood and poses challenges for predictive field-scale simulations. In recent years, digital rock analysis has been applied to understand shale gas transport at the pore-scale. An issue with rock images (e.g. FIB-SEM, nano-/micro-CT images) is the so-called "cutoff length", i.e., pores and heterogeneities below the resolution cannot be resolved, which leads to two length scales (resolved features and unresolved sub-resolution features) that are challenging for flow simulations. Here we develop a micro-continuum model, modified from the classic Darcy-Brinkman-Stokes framework, that can naturally couple the resolved pores and the unresolved nano-porous regions. In the resolved pores, gas flow is modeled with the Stokes equation. In the unresolved regions where the pore sizes are below the image resolution, we develop an apparent permeability model considering non-Darcy flow at the nanoscale, including slip flow, Knudsen diffusion, adsorption/desorption, surface diffusion, and the real gas effect. The end result is a micro-continuum pore-scale model that can simulate gas transport in 3D reconstructed shale images. The model has been implemented in the open-source simulation platform OpenFOAM. In this paper, we present case studies to demonstrate the applicability of the model, where we use 3D segmented FIB-SEM and nano-CT shale images that include four material constituents: organic matter, clay, granular mineral, and pore. In addition to the pore structure and the distribution of the material constituents, we populate the model with experimental measurements (e.g. size distribution of the sub-resolution pores from nitrogen adsorption) and parameters from the literature and identify the relative importance of different physics on gas production. Overall, the micro-continuum model provides a novel tool for digital rock analysis of organic-rich shale.
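
    One ingredient of such an apparent-permeability model can be sketched directly: a Knudsen-number-dependent correction to the intrinsic permeability, here in the spirit of the Beskok-Karniadakis form. The paper combines several mechanisms (slip, Knudsen diffusion, sorption, surface diffusion, real-gas effects), so this single correction, with illustrative parameter values, is only one piece of the full model.

```python
import numpy as np

KB = 1.380649e-23   # Boltzmann constant, J/K

def apparent_permeability(k_int, r_pore, p, T=350.0,
                          d_mol=0.38e-9, alpha0=1.358):
    """Slip/transition-flow-corrected permeability (Beskok-Karniadakis-like).
    k_int: intrinsic permeability (m^2); r_pore: pore radius (m); p: Pa."""
    mfp = KB * T / (np.sqrt(2) * np.pi * d_mol**2 * p)   # gas mean free path
    kn = mfp / (2 * r_pore)                              # Knudsen number
    alpha = alpha0 * (2 / np.pi) * np.arctan(4.0 * kn**0.4)
    return k_int * (1 + alpha * kn) * (1 + 4 * kn / (1 + kn))

for p_mpa in (5, 20, 50):
    k = apparent_permeability(k_int=1e-21, r_pore=5e-9, p=p_mpa * 1e6)
    print(f"p = {p_mpa:3d} MPa -> k_app/k_int = {k / 1e-21:5.2f}")
```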

  16. Scaling Characteristics of Mesoscale Wind Fields in the Lower Atmospheric Boundary Layer: Implications for Wind Energy

    NASA Astrophysics Data System (ADS)

    Kiliyanpilakkil, Velayudhan Praju

    Atmospheric motions take place on spatial scales of sub-millimeters to a few thousand kilometers, with temporal changes in atmospheric variables occurring over fractions of a second to several years. Consequently, the variations in atmospheric kinetic energy associated with these motions span a broad spectrum of space and time. The mesoscale region acts as an energy-transferring regime between the energy-generating synoptic scale and the energy-dissipating microscale. Therefore, scaling characterizations of mesoscale wind fields are significant for accurate estimation of the atmospheric energy budget. Moreover, precise knowledge of the scaling characteristics of atmospheric mesoscale wind fields is important for the validation of numerical models that focus on wind forecasting, dispersion, diffusion, horizontal transport, and optical turbulence. For these reasons, extensive studies have been conducted in the past to characterize mesoscale wind fields. Nevertheless, the majority of these studies focused on near-surface and upper-atmosphere mesoscale regimes. The present study attempts to identify the existence and to quantify the scaling of mesoscale wind fields in the lower atmospheric boundary layer (ABL; in the wind-turbine layer) using wind observations from various research-grade instruments (e.g., sodars, anemometers). The scaling characteristics of mesoscale wind speeds over diverse homogeneous flat terrains, analyzed using structure functions, revealed an altitudinal dependence of the scaling exponents, which may be attributed to buoyancy forcing. Subsequently, we use the framework of extended self-similarity (ESS) to characterize the observed scaling behavior. In the ESS framework, the relative scaling exponents of the mesoscale atmospheric boundary layer wind speed exhibit quasi-universal behavior, even far beyond the inertial range of turbulence (Delta t within the 10 minute to 6 hour range). The ESS-based study is extended further to examine its validity over complex terrain. This study, based on multiyear wind observations, demonstrates that ESS holds for the lower-ABL wind speed over complex terrain as well. Another important inference is that the ESS relative scaling exponents corresponding to the mesoscale wind speed closely match the scaling characteristics of inertial-range turbulence, albeit not exactly. The current study proposes the ESS-based quasi-universal wind speed scaling characteristics in the ABL as a benchmark for the mesoscale modeling community. Using a state-of-the-art atmospheric mesoscale model in conjunction with different planetary boundary layer (PBL) parameterization schemes, multiple wind speed simulations have been conducted. This analysis reveals that the ESS scaling characteristics of the model-simulated wind speed time series in the lower ABL vary significantly from their observational counterparts; in particular, the simulated time series do not capture the ESS-based scaling characteristics for time intervals Delta t < 2 hours. The detailed analysis of model simulations using different PBL schemes leads to the conclusion that significant improvements are needed in the turbulence closure parameterizations adopted in new-generation atmospheric models. This study is unique in that the ESS framework has never before been reported or examined for the validation of PBL parameterizations.
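
    The structure-function/ESS machinery referred to above is compact enough to sketch. For a wind-speed series u(t), the q-th order structure function is S_q(dt) = <|u(t+dt) - u(t)|^q>, and ESS extracts relative exponents from the slope of log S_q against log S_3 rather than against log dt. The synthetic series below is a Gaussian random walk used purely to exercise the code, not real sodar/anemometer data.

```python
import numpy as np

rng = np.random.default_rng(11)
u = rng.normal(size=200_000).cumsum() * 0.01   # toy correlated "wind" series

lags = np.unique(np.logspace(0, 3, 20).astype(int))
orders = (1, 2, 3, 4)
S = {q: np.array([np.mean(np.abs(u[l:] - u[:-l]) ** q) for l in lags])
     for q in orders}

# ESS: slope of log S_q against log S_3 gives the relative exponent.
for q in orders:
    zeta_rel = np.polyfit(np.log(S[3]), np.log(S[q]), 1)[0]
    print(f"q={q}: zeta_q/zeta_3 = {zeta_rel:.2f} "
          f"(a Gaussian random walk gives {q/3:.2f})")
```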

  17. Role of monsoon intraseasonal oscillation and its interannual variability in simulation of seasonal mean in CFSv2

    NASA Astrophysics Data System (ADS)

    Pillai, Prasanth A.; Aher, Vaishali R.

    2018-01-01

    Intraseasonal oscillation (ISO), which appears as "active" and "break" spells of rainfall, is an important component of the Indian summer monsoon (ISM). The present study investigates the potential of the new National Centers for Environmental Prediction (NCEP) Climate Forecast System version 2 (CFSv2) in simulating the ISO, with emphasis on its interannual variability (IAV) and its possible role in the seasonal mean rainfall. The present analysis shows that the spatial distribution of CFSv2 rainfall has noticeable differences from observations on both ISO and IAV time scales. The active-break cycle of CFSv2 has a similar evolution during both strong and weak years. Despite a reasonable El Niño Southern Oscillation (ENSO)-monsoon teleconnection in the model, the overestimated Arabian Sea (AS) sea surface temperature (SST)-convection relationship hinders the large-scale influence of ENSO over the ISM region and adjacent oceans. The ISO-scale convections over the AS and the Bay of Bengal (BoB) contribute noticeably to the seasonal mean rainfall, opposing the influence of boundary forcing in these areas. At the same time, the overwhelming contribution of the ISO component over the AS to the seasonal mean modifies the effect of slowly varying boundary forcing on the large-scale summer monsoon. The results underline that, along with the correct simulation of the monsoon ISO, its IAV and relationship with the boundary forcing also need to be well captured in coupled models for the accurate simulation of seasonal mean anomalies of the monsoon and its teleconnections.
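
    Although the abstract does not state the filtering details, ISO ("active/break") signals are commonly isolated with a 20-100-day Lanczos band-pass filter applied to daily rainfall anomalies; the sketch below implements that standard recipe on a synthetic series, with window length and cutoffs as typical, assumed values.

```python
import numpy as np

def lanczos_lowpass_weights(window, cutoff):
    """cutoff in cycles per time step; returns 2*window + 1 filter weights."""
    k = np.arange(-window, window + 1)
    w = 2 * cutoff * np.sinc(2 * cutoff * k) * np.sinc(k / window)
    return w / w.sum()

def bandpass_20_100(x, window=120):
    # Band-pass as the difference of two low-pass filters.
    w = (lanczos_lowpass_weights(window, 1 / 20.0)
         - lanczos_lowpass_weights(window, 1 / 100.0))
    return np.convolve(x, w, mode="same")

rng = np.random.default_rng(5)
days = np.arange(4 * 365)
rain = (5 + 3 * np.sin(2 * np.pi * days / 45)   # 45-day ISO-like signal
        + rng.normal(0, 2, days.size))          # synoptic-scale noise
iso = bandpass_20_100(rain - rain.mean())
print(f"std in ISO band: {iso.std():.2f} vs raw series {rain.std():.2f}")
```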

  18. Contextual Compression of Large-Scale Wind Turbine Array Simulations: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gruchalla, Kenny M; Brunhart-Lupo, Nicholas J; Potter, Kristin C

    Data sizes are becoming a critical issue particularly for HPC applications. We have developed a user-driven lossy wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94 percent, while retaining lossless information in the turbine-wake regions most critical to analysis and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interactive visual analysis while providing the user control over where data loss, and thus reduction in accuracy, in the analysis occurs. We argue this reduced but contextualized representation is a valid approach and encourages contextual data management.
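
    The flavor of the storage model can be conveyed with a toy block-wise wavelet thresholding in Python (using PyWavelets). The saliency rule here (block variance) and all sizes are stand-ins for the user/feature-driven saliency of the actual system.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(9)
field = rng.normal(size=(256, 256))
# Embed a smooth "wake-like" feature that one salient block will capture.
field[96:160, 96:160] += 8 * np.outer(np.hanning(64), np.hanning(64))

B = 64                      # block size
kept = total = 0
recon = np.zeros_like(field)
for i in range(0, field.shape[0], B):
    for j in range(0, field.shape[1], B):
        block = field[i:i+B, j:j+B]
        coeffs = pywt.wavedec2(block, "haar", level=3)
        arr, slices = pywt.coeffs_to_array(coeffs)
        if block.var() <= 1.5:                     # non-salient block:
            thresh = np.quantile(np.abs(arr), 0.95)
            arr = np.where(np.abs(arr) >= thresh, arr, 0.0)  # keep top 5%
        kept += np.count_nonzero(arr)
        total += arr.size
        recon[i:i+B, j:j+B] = pywt.waverec2(
            pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
            "haar")

print(f"coefficients kept: {kept/total:.1%}, "
      f"max reconstruction error: {np.abs(recon - field).max():.3f}")
```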

  20. Improved Understanding of the Modeled QBO Using MLS Observations and MERRA Reanalysis

    NASA Technical Reports Server (NTRS)

    Oman, Luke David; Douglass, Anne Ritger; Hurwitz, Maggie M.; Garfinkel, Chaim I.

    2013-01-01

    The Quasi-Biennial Oscillation (QBO) dominates the variability of the tropical stratosphere on interannual time scales. The QBO has been shown to extend its influence into the chemical composition of this region through dynamical mechanisms. We have started our analysis using the realistic QBO internally generated by the Goddard Earth Observing System Version 5 (GEOS-5) general circulation model coupled to a comprehensive stratospheric and tropospheric chemical mechanism, forced with observed sea surface temperatures over the past 33 years. We will show targeted comparisons with observations from NASA's Aura satellite Microwave Limb Sounder (MLS) and the Modern-Era Retrospective Analysis for Research and Applications (MERRA) reanalysis to provide insight into the simulation of the primary and secondary circulations associated with the QBO. Using frequency-spectrum analysis and multiple linear regression, we can illuminate the resulting circulations and deduce the strengths and weaknesses of their modeled representation. Inclusion of the QBO in our simulation improves the representation of the subtropical barriers and overall tropical variability. The QBO impact on tropical upwelling is important to quantify when calculating trends in sub-decadal-scale datasets.
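
    The regression step can be illustrated in a few lines: regress a monthly stratospheric series onto a constant, a linear trend, and a QBO index to isolate the QBO-congruent signal. All series below are synthetic stand-ins for the MLS/MERRA diagnostics, and the idealized sinusoidal "QBO" is an assumption for demonstration.

```python
import numpy as np

rng = np.random.default_rng(13)
months = np.arange(33 * 12, dtype=float)       # ~33 years of monthly data
qbo = np.sin(2 * np.pi * months / 28)          # idealized ~28-month QBO index
series = 0.8 * qbo + 0.002 * months + rng.normal(0, 0.5, months.size)

X = np.column_stack([np.ones_like(months), months, qbo])   # design matrix
beta, *_ = np.linalg.lstsq(X, series, rcond=None)
print(f"fitted QBO amplitude: {beta[2]:.2f} (true 0.80); "
      f"trend: {beta[1] * 12:.4f} per year")
```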
