Sample records for multi-professional full-scale simulation

  1. SMR Re-Scaling and Modeling for Load Following Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoover, K.; Wu, Q.; Bragg-Sitton, S.

    2016-11-01

    This study investigates the creation of a new set of scaling parameters for the Oregon State University Multi-Application Small Light Water Reactor (MASLWR) scaled thermal hydraulic test facility. As part of a study being undertaken by Idaho National Laboratory involving nuclear reactor load following characteristics, full power operations need to be simulated, and therefore properly scaled. Presented here are the scaling analysis and plans for RELAP5-3D simulation.

  2. Multi-mode evaluation of power-maximizing cross-flow turbine controllers

    DOE PAGES

    Forbush, Dominic; Cavagnaro, Robert J.; Donegan, James; ...

    2017-09-21

    A general method for predicting and evaluating the performance of three candidate cross-flow turbine power-maximizing controllers is presented in this paper using low-order dynamic simulation, scaled laboratory experiments, and full-scale field testing. For each testing mode and candidate controller, performance metrics quantifying energy capture (ability of a controller to maximize power), variation in torque and rotation rate (related to drive train fatigue), and variation in thrust loads (related to structural fatigue) are quantified for two purposes. First, for metrics that could be evaluated across all testing modes, we considered the accuracy with which simulation or laboratory experiments could predict performance at full scale. Second, we explored the utility of these metrics to contrast candidate controller performance. For these turbines and set of candidate controllers, energy capture was found to only differentiate controller performance in simulation, while the other explored metrics were able to predict performance of the full-scale turbine in the field with various degrees of success. Finally, effects of scale between laboratory and full-scale testing are considered, along with recommendations for future improvements to dynamic simulations and controller evaluation.
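    The metrics named in the abstract can be made concrete with a small sketch. This is not the authors' code: the definitions below (rectangle-rule energy capture, coefficient of variation as a fatigue proxy) are illustrative assumptions, not the paper's exact metrics.

```python
import statistics

def controller_metrics(torque, speed, dt):
    """Illustrative stand-ins for the abstract's performance metrics:
    energy capture (time-integrated mechanical power) and coefficients of
    variation of torque and rotation rate as fatigue proxies.
    The exact definitions here are assumptions, not the paper's."""
    power = [q * w for q, w in zip(torque, speed)]   # P = Q * omega
    energy = sum(p * dt for p in power)              # simple rectangle rule
    cv = lambda xs: statistics.pstdev(xs) / statistics.mean(xs)
    return {"energy": energy,
            "cv_torque": cv(torque),
            "cv_speed": cv(speed)}

# A perfectly steady controller has zero variation:
m = controller_metrics([2.0, 2.0, 2.0], [3.0, 3.0, 3.0], dt=1.0)
print(m)  # energy 18.0, both coefficients of variation 0.0
```

    Comparing such dictionaries across simulation, laboratory, and field records is one plausible way to operationalize the cross-mode comparison the paper describes.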

  3. Multi-mode evaluation of power-maximizing cross-flow turbine controllers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forbush, Dominic; Cavagnaro, Robert J.; Donegan, James

    A general method for predicting and evaluating the performance of three candidate cross-flow turbine power-maximizing controllers is presented in this paper using low-order dynamic simulation, scaled laboratory experiments, and full-scale field testing. For each testing mode and candidate controller, performance metrics quantifying energy capture (ability of a controller to maximize power), variation in torque and rotation rate (related to drive train fatigue), and variation in thrust loads (related to structural fatigue) are quantified for two purposes. First, for metrics that could be evaluated across all testing modes, we considered the accuracy with which simulation or laboratory experiments could predict performance at full scale. Second, we explored the utility of these metrics to contrast candidate controller performance. For these turbines and set of candidate controllers, energy capture was found to only differentiate controller performance in simulation, while the other explored metrics were able to predict performance of the full-scale turbine in the field with various degrees of success. Finally, effects of scale between laboratory and full-scale testing are considered, along with recommendations for future improvements to dynamic simulations and controller evaluation.

  4. Digital Full-Scope Simulation of a Conventional Nuclear Power Plant Control Room, Phase 2: Installation of a Reconfigurable Simulator to Support Nuclear Plant Sustainability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; Vivek Agarwal; Kirk Fitzgerald

    2013-03-01

    The U.S. Department of Energy’s Light Water Reactor Sustainability program has developed a control room simulator in support of control room modernization at nuclear power plants in the U.S. This report highlights the recent completion of this reconfigurable, full-scale, full-scope control room simulator buildout at the Idaho National Laboratory. The simulator is fully reconfigurable, meaning it supports multiple plant models developed by different simulator vendors. The simulator is full-scale, using glasstop virtual panels to display the analog control boards found at current plants. The present installation features 15 glasstop panels, uniquely achieving a complete control room representation. The simulator is also full-scope, meaning it uses the same plant models used for training simulators at actual plants. Unlike in the plant training simulators, the deployment on glasstop panels allows a high degree of customization of the panels, allowing the simulator to be used for research on the design of new digital control systems for control room modernization. This report includes separate sections discussing the glasstop panels, their layout to mimic control rooms at actual plants, technical details on creating a multi-plant and multi-vendor reconfigurable simulator, and current efforts to support control room modernization at U.S. utilities. The glasstop simulator provides an ideal testbed for prototyping and validating new control room concepts. Equally importantly, it is helping create a standardized and vetted human factors engineering process that can be used across the nuclear industry to ensure control room upgrades maintain and even improve current reliability and safety.

  5. GPU Multi-Scale Particle Tracking and Multi-Fluid Simulations of the Radiation Belts

    NASA Astrophysics Data System (ADS)

    Ziemba, T.; Carscadden, J.; O'Donnell, D.; Winglee, R.; Harnett, E.; Cash, M.

    2007-12-01

    The properties of the radiation belts can vary dramatically under the influence of magnetic storms and storm-time substorms. The task of understanding and predicting radiation belt properties is made difficult because their properties are determined by global processes as well as small-scale wave-particle interactions. A full solution to the problem will require major innovations in technique and computer hardware. The proposed work demonstrates linked particle tracking codes with new multi-scale/multi-fluid global simulations that provide the first means to include small-scale processes within the global magnetospheric context. A large hurdle to the problem is having sufficient computer hardware that is able to handle the disparate temporal and spatial scale sizes. A major innovation of the work is that the codes are designed to run on graphics processing units (GPUs). GPUs are intrinsically highly parallelized systems that provide more than an order of magnitude computing speed over CPU-based systems, for little more cost than a high-end workstation. Recent advancements in GPU technologies allow for full IEEE float specifications with performance up to several hundred GFLOPs per GPU, and new software architectures have recently become available to ease the transition from graphics-based to scientific applications. This allows for a cheap alternative to standard supercomputing methods and should decrease the time to discovery. A demonstration of the code pushing more than 500,000 particles faster than real time is presented, and used to provide new insight into radiation belt dynamics.
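    The per-particle update at the heart of such particle tracking codes is easy to sketch on a CPU; a GPU implementation would assign one thread per particle and run this update in parallel. The Boris rotation below is a standard pusher for a charged particle in a magnetic field (here with E = 0 and a uniform B), not the authors' code.

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def boris_push(x, v, qm, B, dt):
    """One Boris-rotation step (E = 0): rotate the velocity about B,
    then advance the position. This per-particle kernel is the kind of
    work a GPU tracker parallelizes, one thread per particle."""
    t = tuple(qm * dt / 2.0 * b for b in B)
    t2 = sum(c * c for c in t)
    s = tuple(2.0 * c / (1.0 + t2) for c in t)
    vxt = cross(v, t)
    v_prime = tuple(v[i] + vxt[i] for i in range(3))
    vpxs = cross(v_prime, s)
    v_new = tuple(v[i] + vpxs[i] for i in range(3))
    x_new = tuple(x[i] + v_new[i] * dt for i in range(3))
    return x_new, v_new

# Gyration check: uniform B along z, particle launched along x.
x, v = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
for _ in range(1000):
    x, v = boris_push(x, v, qm=1.0, B=(0.0, 0.0, 1.0), dt=0.01)
print(math.hypot(v[0], v[1]))  # speed is conserved by the rotation
```

    The Boris scheme is popular for exactly the property the final print checks: the magnetic rotation conserves particle speed, so long integrations stay stable.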

  6. Adaptation of non-technical skills behavioural markers for delivery room simulation.

    PubMed

    Bracco, Fabrizio; Masini, Michele; De Tonetti, Gabriele; Brogioni, Francesca; Amidani, Arianna; Monichino, Sara; Maltoni, Alessandra; Dato, Andrea; Grattarola, Claudia; Cordone, Massimo; Torre, Giancarlo; Launo, Claudio; Chiorri, Carlo; Celleno, Danilo

    2017-03-17

    Simulation in healthcare has proved to be a useful method in improving skills and increasing the safety of clinical operations. The debriefing session, after the simulated scenario, is the core of the simulation, since it allows participants to integrate the experience with the theoretical frameworks and the procedural guidelines. There is consistent evidence for the relevance of non-technical skills (NTS) for the safe and efficient accomplishment of operations. However, the observation, assessment and feedback on these skills is particularly complex, because the process needs expert observers and the feedback is often provided in judgmental and ineffective ways. The aim of this study was therefore to develop and test a set of observation and rating forms for the NTS behavioural markers of multi-professional teams involved in delivery room emergency simulations (MINTS-DR, Multi-professional Inventory for Non-Technical Skills in the Delivery Room). The MINTS-DR was developed by adapting existing tools and, when needed, by designing new tools according to the literature. We followed a bottom-up process accompanied by interviews and co-design between practitioners and psychology experts. The forms were specific for anaesthetists, gynaecologists, nurses/midwives, and assistants, plus a global team assessment tool. We administered the tools in five editions of a simulation training course that involved 48 practitioners. Ratings on usability and usefulness were collected. The mean ratings of the usability and usefulness of the tools were either not statistically different from, or were higher than, 4 on a 5-point rating scale. In either case no significant differences were found across professional categories. The MINTS-DR is quick and easy to administer. It is judged to be a useful asset in maximising the learning experience that is provided by the simulation.

  7. Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has proved to be a promising approach to achieve good accuracy in reasonable time. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.
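    The paper's three network models are not reproduced in the abstract, but the basic shape of a contention-aware latency model can be illustrated with a generic open-queue (M/M/1-style) saturation curve. The formula and the numbers below are illustrative assumptions, not the XMT model itself.

```python
def contended_latency(base_latency, utilization):
    """Generic M/M/1-style estimate of how effective latency grows as
    link utilization approaches saturation. A contention-unaware model
    would return base_latency regardless of load; this curve shows why
    that underestimates latency on a congested network."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return base_latency / (1.0 - utilization)

# Latency (in cycles, illustrative) at increasing network load:
for rho in (0.0, 0.5, 0.9):
    print(rho, contended_latency(100.0, rho))
```

    The divergence near full utilization is what makes the speed-accuracy trade-off hard: cheap models miss the hot-spot regime, while detailed models that capture it slow the simulator down.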

  8. Computational efficiency and Amdahl’s law for the adaptive resolution simulation technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Junghans, Christoph; Agarwal, Animesh; Delle Site, Luigi

    Here, we discuss the computational performance of the adaptive resolution technique in molecular simulation compared with equivalent fully coarse-grained and fully atomistic simulations. We show that an estimate of its efficiency, within 10%–15% accuracy, is given by Amdahl’s law adapted to the specific quantities involved in the problem. The derivation of the predictive formula is general enough that it may be applied to the general case of molecular dynamics approaches where a reduction of degrees of freedom in a multi-scale fashion occurs.

  9. Computational efficiency and Amdahl’s law for the adaptive resolution simulation technique

    DOE PAGES

    Junghans, Christoph; Agarwal, Animesh; Delle Site, Luigi

    2017-06-01

    Here, we discuss the computational performance of the adaptive resolution technique in molecular simulation compared with equivalent fully coarse-grained and fully atomistic simulations. We show that an estimate of its efficiency, within 10%–15% accuracy, is given by Amdahl’s law adapted to the specific quantities involved in the problem. The derivation of the predictive formula is general enough that it may be applied to the general case of molecular dynamics approaches where a reduction of degrees of freedom in a multi-scale fashion occurs.
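    The Amdahl's-law adaptation referenced above can be sketched directly. The function is the classic formula; the work fraction and local speedup in the example are assumed numbers for illustration, not values from the paper.

```python
def amdahl_speedup(accelerated_fraction, local_speedup):
    """Classic Amdahl's-law speedup: a fraction of the work is
    accelerated by local_speedup, the rest runs at original cost."""
    serial_fraction = 1.0 - accelerated_fraction
    return 1.0 / (serial_fraction + accelerated_fraction / local_speedup)

# Illustrative adaptive-resolution estimate (assumed numbers): suppose
# 70% of the interaction work lies in the coarse-grained region, where
# force evaluation is 10x cheaper than fully atomistic.
print(round(amdahl_speedup(0.7, 10.0), 2))  # 2.7
```

    The atomistic region plays the role of Amdahl's serial fraction: however cheap the coarse-grained region becomes, overall efficiency is bounded by the share of work that stays at full resolution.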

  10. Multi-scale gyrokinetic simulations of an Alcator C-Mod, ELM-y H-mode plasma

    NASA Astrophysics Data System (ADS)

    Howard, N. T.; Holland, C.; White, A. E.; Greenwald, M.; Rodriguez-Fernandez, P.; Candy, J.; Creely, A. J.

    2018-01-01

    High fidelity, multi-scale gyrokinetic simulations capable of capturing both ion-scale (k_θ ρ_s ~ O(1.0)) and electron-scale (k_θ ρ_e ~ O(1.0)) turbulence were performed in the core of an Alcator C-Mod ELM-y H-mode discharge which exhibits reactor-relevant characteristics. These simulations, performed with all experimental inputs and realistic ion to electron mass ratio ((m_i/m_e)^(1/2) = 60.0), provide insight into the physics fidelity that may be needed for accurate simulation of the core of fusion reactor discharges. Three multi-scale simulations and a series of separate ion- and electron-scale simulations performed using the GYRO code (Candy and Waltz 2003 J. Comput. Phys. 186 545) are presented. As with earlier multi-scale results in L-mode conditions (Howard et al 2016 Nucl. Fusion 56 014004), both ion-scale and multi-scale simulation results are compared with experimentally inferred ion and electron heat fluxes, as well as the measured values of electron incremental thermal diffusivities, indicative of the experimental electron temperature profile stiffness. Consistent with the L-mode results, cross-scale coupling is found to play an important role in the simulation of these H-mode conditions. Extremely stiff ion-scale transport is observed in these high-performance conditions, which is shown to likely play an important role in the reproduction of measurements of perturbative transport. These results provide important insight into the role of multi-scale plasma turbulence in the core of reactor-relevant plasmas and establish important constraints on the fidelity of models needed for predictive simulations.

  11. A Multi-Stage Method for Connecting Participatory Sensing and Noise Simulations

    PubMed Central

    Hu, Mingyuan; Che, Weitao; Zhang, Qiuju; Luo, Qingli; Lin, Hui

    2015-01-01

    Most simulation-based noise maps are important for official noise assessment but lack local noise characteristics. The main reasons for this lack of information are that official noise simulations only provide information about expected noise levels, are limited by the use of large-scale monitoring of noise sources, and are updated infrequently. With the emergence of smart cities and ubiquitous sensing, sensing technologies offer a way to resolve this problem. This study proposed an integrated methodology to propel participatory sensing from its current random and distributed sampling origins to professional noise simulation. The aims of this study were to effectively organize the participatory noise data, to dynamically refine the granularity of the noise features on road segments (e.g., different portions of a road segment), and then to provide a reasonable spatio-temporal data foundation to support noise simulations, which can help researchers understand how participatory sensing can play a role in smart cities. This study first discusses the potential limitations of current participatory sensing and simulation-based official noise maps. Next, we explain how participatory noise data can contribute to a simulation-based noise map by providing (1) spatial matching of the participatory noise data to virtual partitions at a more microscopic level of road networks; (2) multi-temporal scale noise estimations at the spatial level of virtual partitions; and (3) dynamic aggregation of virtual partitions by comparing the noise values at the relevant temporal scale to form a dynamic segmentation of each road segment to support multiple spatio-temporal noise simulations. In a case study, we demonstrate how this method could play a significant role in a simulation-based noise map. Together, these results demonstrate the potential benefits of participatory noise data as dynamic input sources for noise simulations on multiple spatio-temporal scales. PMID:25621604

  12. A multi-stage method for connecting participatory sensing and noise simulations.

    PubMed

    Hu, Mingyuan; Che, Weitao; Zhang, Qiuju; Luo, Qingli; Lin, Hui

    2015-01-22

    Most simulation-based noise maps are important for official noise assessment but lack local noise characteristics. The main reasons for this lack of information are that official noise simulations only provide information about expected noise levels, are limited by the use of large-scale monitoring of noise sources, and are updated infrequently. With the emergence of smart cities and ubiquitous sensing, sensing technologies offer a way to resolve this problem. This study proposed an integrated methodology to propel participatory sensing from its current random and distributed sampling origins to professional noise simulation. The aims of this study were to effectively organize the participatory noise data, to dynamically refine the granularity of the noise features on road segments (e.g., different portions of a road segment), and then to provide a reasonable spatio-temporal data foundation to support noise simulations, which can help researchers understand how participatory sensing can play a role in smart cities. This study first discusses the potential limitations of current participatory sensing and simulation-based official noise maps. Next, we explain how participatory noise data can contribute to a simulation-based noise map by providing (1) spatial matching of the participatory noise data to virtual partitions at a more microscopic level of road networks; (2) multi-temporal scale noise estimations at the spatial level of virtual partitions; and (3) dynamic aggregation of virtual partitions by comparing the noise values at the relevant temporal scale to form a dynamic segmentation of each road segment to support multiple spatio-temporal noise simulations. In a case study, we demonstrate how this method could play a significant role in a simulation-based noise map. Together, these results demonstrate the potential benefits of participatory noise data as dynamic input sources for noise simulations on multiple spatio-temporal scales.
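    Two of the building blocks described above, energetic averaging of sound levels and dynamic aggregation of adjacent partitions, can be sketched as follows. The 1 dB merge threshold is an assumption for illustration, not a value from the paper.

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level of several dB samples: the
    standard energetic mean (arithmetic averaging of dB is wrong)."""
    mean_energy = sum(10 ** (l / 10.0) for l in levels_db) / len(levels_db)
    return 10.0 * math.log10(mean_energy)

def merge_partitions(partition_levels, tol_db=1.0):
    """Sketch of dynamic aggregation: walk the virtual partitions along
    a road segment and collapse an adjacent partition into the current
    segment when its level is within tol_db of the segment's Leq."""
    segments = [[partition_levels[0]]]
    for level in partition_levels[1:]:
        if abs(level - leq(segments[-1])) < tol_db:
            segments[-1].append(level)   # same acoustic regime: merge
        else:
            segments.append([level])     # regime change: new segment
    return [round(leq(seg), 1) for seg in segments]

# Five partitions along one road segment (dB): two quiet-ish, then louder.
print(merge_partitions([55.2, 55.6, 55.1, 62.3, 62.0]))
```

    The result is a coarser, dynamically segmented representation of the road that still preserves the distinct noise regimes, which is the data foundation the simulation step consumes.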

  13. Coupled numerical approach combining finite volume and lattice Boltzmann methods for multi-scale multi-physicochemical processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Li; He, Ya-Ling; Kang, Qinjun

    2013-12-15

    A coupled (hybrid) simulation strategy spatially combining the finite volume method (FVM) and the lattice Boltzmann method (LBM), called CFVLBM, is developed to simulate coupled multi-scale multi-physicochemical processes. In the CFVLBM, the computational domain of multi-scale problems is divided into two sub-domains, i.e., an open, free fluid region and a region filled with porous materials. The FVM and LBM are used for these two regions, respectively, with information exchanged at the interface between the two sub-domains. A general reconstruction operator (RO) is proposed to derive the distribution functions in the LBM from the corresponding macro scalar, the governing equation of which obeys the convection–diffusion equation. The CFVLBM and the RO are validated on several typical physicochemical problems and then applied to simulate complex multi-scale coupled fluid flow, heat transfer, mass transport, and chemical reaction in a wall-coated micro reactor. The maximum ratio of the grid size between the FVM and LBM regions is explored and discussed. Highlights: • A coupled simulation strategy for simulating multi-scale phenomena is developed. • The finite volume method and lattice Boltzmann method are coupled. • A reconstruction operator is derived to transfer information at the sub-domain interface. • Coupled multi-scale physicochemical processes in a micro reactor are simulated. • Techniques to save computational resources and improve efficiency are discussed.

  14. Validation of nonlinear gyrokinetic simulations of L- and I-mode plasmas on Alcator C-Mod

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Creely, A. J.; Howard, N. T.; Rodriguez-Fernandez, P.

    New validation of global, nonlinear, ion-scale gyrokinetic simulations (GYRO) is carried out for L- and I-mode plasmas on Alcator C-Mod, utilizing heat fluxes, profile stiffness, and temperature fluctuations. Previous work at C-Mod found that ITG/TEM-scale GYRO simulations can match both electron and ion heat fluxes within error bars in I-mode [White PoP 2015], suggesting that multi-scale (cross-scale coupling) effects [Howard PoP 2016] may be less important in I-mode than in L-mode. New results presented here, however, show that global, nonlinear, ion-scale GYRO simulations are able to match the experimental ion heat flux, but underpredict electron heat flux (at most radii), electron temperature fluctuations, and perturbative thermal diffusivity in both L- and I-mode. Linear addition of electron heat flux from electron-scale runs does not resolve this discrepancy. These results indicate that single-scale simulations do not sufficiently describe the I-mode core transport, and that multi-scale (coupled electron- and ion-scale) transport models are needed. A preliminary investigation with multi-scale TGLF, however, was unable to resolve the discrepancy between ion-scale GYRO and experimental electron heat fluxes and perturbative diffusivity, motivating further work with multi-scale GYRO simulations and a more comprehensive study with multi-scale TGLF.

  15. Validation of nonlinear gyrokinetic simulations of L- and I-mode plasmas on Alcator C-Mod

    DOE PAGES

    Creely, A. J.; Howard, N. T.; Rodriguez-Fernandez, P.; ...

    2017-03-02

    New validation of global, nonlinear, ion-scale gyrokinetic simulations (GYRO) is carried out for L- and I-mode plasmas on Alcator C-Mod, utilizing heat fluxes, profile stiffness, and temperature fluctuations. Previous work at C-Mod found that ITG/TEM-scale GYRO simulations can match both electron and ion heat fluxes within error bars in I-mode [White PoP 2015], suggesting that multi-scale (cross-scale coupling) effects [Howard PoP 2016] may be less important in I-mode than in L-mode. New results presented here, however, show that global, nonlinear, ion-scale GYRO simulations are able to match the experimental ion heat flux, but underpredict electron heat flux (at most radii), electron temperature fluctuations, and perturbative thermal diffusivity in both L- and I-mode. Linear addition of electron heat flux from electron-scale runs does not resolve this discrepancy. These results indicate that single-scale simulations do not sufficiently describe the I-mode core transport, and that multi-scale (coupled electron- and ion-scale) transport models are needed. A preliminary investigation with multi-scale TGLF, however, was unable to resolve the discrepancy between ion-scale GYRO and experimental electron heat fluxes and perturbative diffusivity, motivating further work with multi-scale GYRO simulations and a more comprehensive study with multi-scale TGLF.

  16. Simulation of the optical coating deposition

    NASA Astrophysics Data System (ADS)

    Grigoriev, Fedor; Sulimov, Vladimir; Tikhonravov, Alexander

    2018-04-01

    A brief review of the mathematical methods of thin-film growth simulation and results of their applications is presented. Both full-atomistic and multi-scale approaches that have been used in studies of thin-film deposition are considered. The results of simulating structural parameters, including density profiles, roughness, porosity, point defect concentration, and others, are discussed. The application of quantum-level methods to the simulation of thin-film electronic and optical properties is considered. Special attention is paid to the simulation of silicon dioxide thin films.

  17. A novel method of multi-scale simulation of macro-scale deformation and microstructure evolution on metal forming

    NASA Astrophysics Data System (ADS)

    Huang, Shiquan; Yi, Youping; Li, Pengchuan

    2011-05-01

    In recent years, the multi-scale simulation technique for metal forming has been gaining significant attention for prediction of the whole deformation process and the microstructure evolution of the product. Advances in numerical simulation at the macro-scale level of metal forming are remarkable, and commercial FEM software such as Deform2D/3D has found wide application in the field of metal forming. However, multi-scale simulation methods have seen little application due to the non-linearity of microstructure evolution during forming and the difficulty of modeling at the micro-scale level. This work deals with the modeling of microstructure evolution and a new method of multi-scale simulation of the forging process. The aviation material 7050 aluminum alloy has been used as an example for modeling of microstructure evolution. The corresponding thermal simulation experiment has been performed on a Gleeble 1500 machine. The tested specimens have been analyzed for modeling of dislocation density and the nucleation and growth of dynamic recrystallization (DRX). A source program using the cellular automaton (CA) method has been developed to simulate grain nucleation and growth, in which the change of grain topology caused by the metal deformation is considered. The physical fields at the macro-scale level, such as the temperature field and the stress and strain fields, which can be obtained with the commercial software Deform 3D, are coupled with the deformed storage energy at the micro-scale level by a dislocation model to realize the multi-scale simulation. The method is illustrated with the forging process simulation of an aircraft wheel hub. By coupling the Deform 3D results with the CA results, the forging deformation progress and the microstructure evolution at any point of the forging could be simulated. To verify the efficiency of the simulation, experiments on aircraft wheel hub forging have been performed in the laboratory, and the comparison of simulation and experimental results is discussed in detail.
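    The nucleation-and-growth step of a CA microstructure model can be illustrated with a toy version. The real model couples growth to dislocation density and deformed storage energy from the FEM fields; this sketch omits all of that and only shows the grid mechanics of seeding nuclei and growing grains.

```python
import random

def ca_grain_growth(size=20, seeds=4, steps=10, rng_seed=1):
    """Toy cellular-automaton recrystallization sketch: seed a few
    nuclei (grain ids 1..seeds) on a grid of un-recrystallized cells
    (0), then each step let every empty cell adopt the grain id of a
    random recrystallized von Neumann neighbour. A real model would
    make the growth probability depend on stored deformation energy."""
    random.seed(rng_seed)
    grid = [[0] * size for _ in range(size)]
    for gid in range(1, seeds + 1):
        grid[random.randrange(size)][random.randrange(size)] = gid
    for _ in range(steps):
        new = [row[:] for row in grid]
        for i in range(size):
            for j in range(size):
                if grid[i][j] == 0:
                    nbrs = [grid[x][y]
                            for x, y in ((i - 1, j), (i + 1, j),
                                         (i, j - 1), (i, j + 1))
                            if 0 <= x < size and 0 <= y < size and grid[x][y]]
                    if nbrs:
                        new[i][j] = random.choice(nbrs)
        grid = new
    return grid

g = ca_grain_growth()
recrystallized = sum(cell > 0 for row in g for cell in row)
print(recrystallized, "of", 20 * 20, "cells recrystallized")
```

    Running the sketch shows the characteristic CA behaviour: grains expand front by front from their nuclei until they impinge on neighbouring grains, which is what fixes the final grain topology.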

  18. Initial conditions and modeling for simulations of shock driven turbulent material mixing

    DOE PAGES

    Grinstein, Fernando F.

    2016-11-17

    Here, we focus on the simulation of shock-driven material mixing driven by flow instabilities and initial conditions (IC). Beyond complex multi-scale resolution issues of shocks and variable density turbulence, we must address the equally difficult problem of predicting flow transition promoted by energy deposited at the material interfacial layer during the shock interface interactions. Transition involves unsteady large-scale coherent-structure dynamics capturable by a large eddy simulation (LES) strategy, but not by an unsteady Reynolds-Averaged Navier–Stokes (URANS) approach based on developed equilibrium turbulence assumptions and single-point-closure modeling. On the engineering end of computations, such URANS approaches, with reduced 1D/2D dimensionality and coarser grids, tend to be preferred for faster turnaround in full-scale configurations.

  19. Validation of a multi-phase plant-wide model for the description of the aeration process in a WWTP.

    PubMed

    Lizarralde, I; Fernández-Arévalo, T; Beltrán, S; Ayesa, E; Grau, P

    2018-02-01

    This paper introduces a new mathematical model built under the PC-PWM methodology to describe the aeration process in a full-scale WWTP. This methodology enables a systematic and rigorous incorporation of chemical and physico-chemical transformations into biochemical process models, particularly for the description of liquid-gas transfer to describe the aeration process. The mathematical model constructed is able to reproduce biological COD and nitrogen removal, liquid-gas transfer and chemical reactions. The capability of the model to describe the liquid-gas mass transfer has been tested by comparing simulated and experimental results in a full-scale WWTP. Finally, an exploration by simulation has been undertaken to show the potential of the mathematical model.
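    The liquid-gas transfer term at the core of the aeration description is conventionally the two-film rate equation dC/dt = kLa (C_sat - C). A minimal integration sketch, with assumed illustrative parameter values rather than values from the paper, is:

```python
def simulate_aeration(c0, c_sat, kla, dt, steps):
    """Forward-Euler integration of the standard two-film oxygen
    transfer model dC/dt = kLa * (C_sat - C). Units here: mg/L for
    concentrations, 1/h for kLa, hours for dt."""
    c = c0
    for _ in range(steps):
        c += kla * (c_sat - c) * dt
    return c

# Dissolved oxygen relaxing toward saturation over one hour
# (illustrative values: c0 = 2.0 mg/L, C_sat = 9.1 mg/L, kLa = 4/h):
print(round(simulate_aeration(c0=2.0, c_sat=9.1, kla=4.0, dt=0.001, steps=1000), 2))
```

    The exponential approach to C_sat that this reproduces is the behaviour a plant-wide model must match against off-gas or dissolved-oxygen measurements when the kLa description is validated.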

  20. Ignition sensitivity study of an energetic train configuration using experiments and simulation

    NASA Astrophysics Data System (ADS)

    Kim, Bohoon; Yu, Hyeonju; Yoh, Jack J.

    2018-06-01

    A full-scale hydrodynamic simulation intended for the accurate description of shock-induced detonation transition was conducted as part of an ignition sensitivity analysis of an energetic component system. The system is composed of an exploding foil initiator (EFI), a donor explosive unit, a stainless steel gap, and an acceptor explosive. A series of velocity interferometer system for any reflector (VISAR) measurements was used to validate the hydrodynamic simulations based on the reactive flow model that describes the initiation of energetic materials arranged in a train configuration. A numerical methodology with ignition and growth mechanisms for tracking multi-material boundary interactions, as well as severely transient fluid-structure coupling between high explosive charges and the metal gap, is described. The free surface velocity measurement is used to evaluate the sensitivity of energetic components that are subjected to strong pressure waves. Then, the full-scale hydrodynamic simulation is performed on the flyer-impacted initiation of an EFI-driven pyrotechnical system.

  21. Multi-scale modeling in cell biology

    PubMed Central

    Meier-Schellersheim, Martin; Fraser, Iain D. C.; Klauschen, Frederick

    2009-01-01

    Biomedical research frequently involves performing experiments and developing hypotheses that link different scales of biological systems such as, for instance, the scales of intracellular molecular interactions to the scale of cellular behavior and beyond to the behavior of cell populations. Computational modeling efforts that aim at exploring such multi-scale systems quantitatively with the help of simulations have to incorporate several different simulation techniques due to the different time and space scales involved. Here, we provide a non-technical overview of how different scales of experimental research can be combined with the appropriate computational modeling techniques. We also show that current modeling software permits building and simulating multi-scale models without having to become involved with the underlying technical details of computational modeling. PMID:20448808

  2. Formalizing Knowledge in Multi-Scale Agent-Based Simulations

    PubMed Central

    Somogyi, Endre; Sluka, James P.; Glazier, James A.

    2017-01-01

    Multi-scale, agent-based simulations of cellular and tissue biology are increasingly common. These simulations combine and integrate a range of components from different domains. Simulations continuously create, destroy and reorganize constituent elements causing their interactions to dynamically change. For example, the multi-cellular tissue development process coordinates molecular, cellular and tissue scale objects with biochemical, biomechanical, spatial and behavioral processes to form a dynamic network. Different domain specific languages can describe these components in isolation, but cannot describe their interactions. No current programming language is designed to represent in human readable and reusable form the domain specific knowledge contained in these components and interactions. We present a new hybrid programming language paradigm that naturally expresses the complex multi-scale objects and dynamic interactions in a unified way and allows domain knowledge to be captured, searched, formalized, extracted and reused. PMID:29338063

  3. Formalizing Knowledge in Multi-Scale Agent-Based Simulations.

    PubMed

    Somogyi, Endre; Sluka, James P; Glazier, James A

    2016-10-01

    Multi-scale, agent-based simulations of cellular and tissue biology are increasingly common. These simulations combine and integrate a range of components from different domains. Simulations continuously create, destroy and reorganize constituent elements causing their interactions to dynamically change. For example, the multi-cellular tissue development process coordinates molecular, cellular and tissue scale objects with biochemical, biomechanical, spatial and behavioral processes to form a dynamic network. Different domain specific languages can describe these components in isolation, but cannot describe their interactions. No current programming language is designed to represent in human readable and reusable form the domain specific knowledge contained in these components and interactions. We present a new hybrid programming language paradigm that naturally expresses the complex multi-scale objects and dynamic interactions in a unified way and allows domain knowledge to be captured, searched, formalized, extracted and reused.

  4. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhakal, Tilak Raj

    This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. This method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where we calculate stress at each material point using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress at each material point is performed on a GPU using CUDA to accelerate the computation. The numerical properties of the multi-scale method are investigated, and the results from this multi-scale calculation are compared with direct MD simulation results to demonstrate the feasibility of the method. Also, the multi-scale method is applied to a two-dimensional problem of jet formation around a copper notch under a strong impact.
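    The closure strategy described above, in which each material point queries its own MD run for stress, can be caricatured in a few lines. Everything here is a toy stand-in (the function names, the viscous-law surrogate for MD, and the 1-D gradient proxies are all hypothetical, not from the dissertation); it is intended only to show why the per-point closures parallelize trivially.

```python
import numpy as np

def md_stress_surrogate(strain_rate, temperature):
    """Toy stand-in for the per-material-point MD simulation: returns a
    scalar stress from a linear viscous law. A real coupling would launch
    an MD run (e.g. on a GPU) here instead."""
    viscosity = 1.0e-3 * (1.0 + temperature / 300.0)
    return viscosity * strain_rate

def ddmp_step(positions, velocities, temperatures, dt):
    """One explicit step: form a strain-rate proxy at each material point,
    query the MD closure independently per point (hence embarrassingly
    parallel), then update velocities from a 1-D stress-divergence proxy."""
    strain_rates = np.gradient(velocities, positions)
    stresses = np.array([md_stress_surrogate(sr, T)
                         for sr, T in zip(strain_rates, temperatures)])
    accel = -np.gradient(stresses, positions)
    return velocities + dt * accel

positions = np.linspace(0.0, 1.0, 11)
velocities = np.where(positions < 0.5, 1.0, 0.0)  # shock-like jump
temperatures = np.full_like(positions, 300.0)
new_v = ddmp_step(positions, velocities, temperatures, 1.0e-3)
```

Because each `md_stress_surrogate` call depends only on local state, the real method can farm each point's MD run out to a separate GPU worker.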

  5. Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model

    PubMed Central

    van Albada, Sacha J.; Rowley, Andrew G.; Senk, Johanna; Hopkins, Michael; Schmidt, Maximilian; Stokes, Alan B.; Lester, David R.; Diesmann, Markus; Furber, Steve B.

    2018-01-01

    The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power consumption. Real-time performance is achieved with 1 ms integration time steps, and thus applies to neural networks for which faster time scales of the dynamics can be neglected. By slowing down the simulation, shorter integration time steps and hence faster time scales, which are often biologically relevant, can be incorporated. We here describe the first full-scale simulations of a cortical microcircuit with biological time scales on SpiNNaker. Since about half the synapses onto the neurons arise within the microcircuit, larger cortical circuits have only moderately more synapses per neuron. Therefore, the full-scale microcircuit paves the way for simulating cortical circuits of arbitrary size. With approximately 80,000 neurons and 0.3 billion synapses, this model is the largest simulated on SpiNNaker to date. The scale-up is enabled by recent developments in the SpiNNaker software stack that allow simulations to be spread across multiple boards. Comparison with simulations using the NEST software on a high-performance cluster shows that both simulators can reach a similar accuracy, despite the fixed-point arithmetic of SpiNNaker, demonstrating the usability of SpiNNaker for computational neuroscience applications with biological time scales and large network size. The runtime and power consumption are also assessed for both simulators on the example of the cortical microcircuit model. To obtain an accuracy similar to that of NEST with 0.1 ms time steps, SpiNNaker requires a slowdown factor of around 20 compared to real time. The runtime for NEST saturates around 3 times real time using hybrid parallelization with MPI and multi-threading. However, achieving this runtime comes at the cost of increased power and energy consumption. 
The lowest total energy consumption for NEST is reached at around 144 parallel threads and 4.6 times slowdown. At this setting, NEST and SpiNNaker have a comparable energy consumption per synaptic event. Our results widen the application domain of SpiNNaker and help guide its development, showing that further optimizations such as synapse-centric network representation are necessary to enable real-time simulation of large biological neural networks. PMID:29875620
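    The energy-per-synaptic-event metric used in the comparison above reduces to simple arithmetic: total energy divided by the number of synaptic events processed. The sketch below shows the computation with illustrative numbers; the power, firing rate, and timing values are assumptions for demonstration, not the paper's measurements, and only the synapse count is taken from the abstract.

```python
def energy_per_synaptic_event(power_w, wall_time_s, n_synapses,
                              mean_rate_hz, biological_time_s):
    """Energy per synaptic event = total energy / number of events.
    Events = synapses * mean presynaptic rate * simulated biological
    time. All inputs to this sketch are illustrative."""
    total_energy_j = power_w * wall_time_s
    n_events = n_synapses * mean_rate_hz * biological_time_s
    return total_energy_j / n_events

# Illustrative figures: 0.3e9 synapses (from the abstract), an assumed
# ~4 Hz mean rate, 10 s of biological time run at a 20x slowdown on an
# assumed 100 W system (hence 200 s of wall time).
e = energy_per_synaptic_event(power_w=100.0, wall_time_s=200.0,
                              n_synapses=0.3e9, mean_rate_hz=4.0,
                              biological_time_s=10.0)
```

With these assumed numbers the result is on the order of a microjoule per synaptic event; the actual published figures depend on the measured power and parallelization settings.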

  6. Microphysics in Multi-scale Modeling System with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2012-01-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (a NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance within the multi-scale modeling system will be presented.

  7. Teaching Cockpit Automation in the Classroom

    NASA Technical Reports Server (NTRS)

    Casner, Stephen M.

    2003-01-01

    This study explores the idea of teaching fundamental cockpit automation concepts and skills to aspiring professional pilots in a classroom setting, without the use of sophisticated aircraft or equipment simulators. Pilot participants from a local professional pilot academy completed eighteen hours of classroom instruction that placed a strong emphasis on understanding the underlying principles of cockpit automation systems and their use in a multi-crew cockpit. The instructional materials consisted solely of a single textbook. Pilots received no hands-on instruction or practice during their training. At the conclusion of the classroom instruction, pilots completed a written examination testing their mastery of what had been taught during the classroom meetings. Following the written exam, each pilot was given a check flight in a full-mission Level D simulator of a Boeing 747-400 aircraft. Pilots were given the opportunity to fly one practice leg, and were then tested on all concepts and skills covered in the class during a second leg. The results of the written exam and simulator checks strongly suggest that instruction delivered in a traditional classroom setting can lead to high levels of preparation without the need for expensive airplane or equipment simulators.

  8. Evaluating the Performance of the Goddard Multi-Scale Modeling Framework against GPM, TRMM and CloudSat/CALIPSO Products

    NASA Astrophysics Data System (ADS)

    Chern, J. D.; Tao, W. K.; Lang, S. E.; Matsui, T.; Mohr, K. I.

    2014-12-01

    Four six-month (March-August 2014) experiments with the Goddard Multi-scale Modeling Framework (MMF) were performed to study the impacts of different Goddard one-moment bulk microphysical schemes and large-scale forcings on the performance of the MMF. Recently a new Goddard one-moment bulk microphysics scheme with four ice classes (cloud ice, snow, graupel, and frozen drops/hail) has been developed based on cloud-resolving model simulations with large-scale forcings from field campaign observations. The new scheme has been successfully implemented in the MMF, and two MMF experiments were carried out with this new scheme and the old three-ice-class (cloud ice, snow, graupel) scheme. The MMF has global coverage and can rigorously evaluate microphysics performance for different cloud regimes. The results show that the MMF with the new scheme outperformed the old one. The MMF simulations are also strongly affected by the interaction between large-scale and cloud-scale processes. Two MMF sensitivity experiments with and without nudging large-scale forcings to those of ERA-Interim reanalysis were carried out to study the impacts of large-scale forcings. The model-simulated mean and variability of surface precipitation, cloud types, and cloud properties such as cloud amount, hydrometeor vertical profiles, and cloud water contents in different geographic locations and climate regimes are evaluated against GPM, TRMM, and CloudSat/CALIPSO satellite observations. The Goddard MMF has also been coupled with the Goddard Satellite Data Simulation Unit (G-SDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators. The statistics of MMF-simulated radiances and backscattering can be directly compared with satellite observations to assess the strengths and/or deficiencies of MMF simulations and provide guidance on how to improve the MMF and microphysics.

  9. Small-scale multi-axial hybrid simulation of a shear-critical reinforced concrete frame

    NASA Astrophysics Data System (ADS)

    Sadeghian, Vahid; Kwon, Oh-Sung; Vecchio, Frank

    2017-10-01

    This study presents a numerical multi-scale simulation framework which is extended to accommodate hybrid simulation (numerical-experimental integration). The framework is enhanced with a standardized data exchange format and connected to a generalized controller interface program which facilitates communication with various types of laboratory equipment and testing configurations. A small-scale experimental program was conducted using six degree-of-freedom hydraulic testing equipment to verify the proposed framework and provide additional data for small-scale testing of shear-critical reinforced concrete structures. The specimens were tested in a multi-axial hybrid simulation manner under a reversed cyclic loading condition simulating earthquake forces. The physical models were 1/3.23-scale representations of a beam and two columns. A mixed-type modelling technique was employed to analyze the remainder of the structures. The hybrid simulation results were compared against those obtained from a large-scale test and finite element analyses. The study found that if precautions are taken in preparing model materials and if the shear-related mechanisms are accurately considered in the numerical model, small-scale hybrid simulations can adequately simulate the behaviour of shear-critical structures. Although the findings of the study are promising, additional test data are required to draw general conclusions.
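    The hybrid-simulation loop described above, in which a numerical model is advanced in time while a physical specimen supplies part of the restoring force, can be sketched as follows. This is a minimal sketch assuming linear elastic substructures and a semi-implicit Euler integrator; the function names and stiffness values are hypothetical, and the laboratory specimen is replaced by a software stand-in.

```python
import numpy as np

def numerical_substructure(disp):
    """Toy linear model of the numerically simulated portion of the frame.
    Stiffness value is an assumption for illustration."""
    k_num = 50.0
    return k_num * disp

def physical_substructure(disp):
    """Stand-in for the laboratory specimen: in a real hybrid test this
    displacement command would go to the multi-axial controller and the
    measured restoring force would be read back."""
    k_phys = 30.0
    return k_phys * disp

def hybrid_step(disp, vel, p_ext, dt, mass=1.0):
    """One semi-implicit Euler step of the hybrid equation of motion:
    the restoring force is the sum of numerical and measured parts."""
    restoring = numerical_substructure(disp) + physical_substructure(disp)
    acc = (p_ext - restoring) / mass
    vel_new = vel + dt * acc
    disp_new = disp + dt * vel_new
    return disp_new, vel_new

disp, vel = 0.0, 0.0
history = []
for step in range(200):
    p = 10.0 * np.sin(0.1 * step)  # pseudo-earthquake load history
    disp, vel = hybrid_step(disp, vel, p, dt=0.01)
    history.append(disp)
```

The key structural point is that `physical_substructure` is a black box to the integrator, which is what lets a real specimen be substituted for it without changing the time-stepping code.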

  10. Prototype part task trainer: A remote manipulator system simulator

    NASA Technical Reports Server (NTRS)

    Shores, David

    1989-01-01

    The Part Task Trainer program (PTT) is a kinematic simulation of the Remote Manipulator System (RMS) for the orbiter. The purpose of the PTT is to supply a low-cost man-in-the-loop simulator, allowing the student to learn operational procedures which can then be used in the more expensive full-scale simulators. PTT will allow the crew members to work on their arm operation skills without the need for other people running the simulation. The controlling algorithms for the arm were coded from the Functional Subsystem Requirements Document to ensure realistic operation of the simulation. Relying on the hardware of the workstation to provide fast refresh rates for fully shaded images allows the simulation to be run on small, low-cost stand-alone workstations, removing the need to be tied into a multi-million dollar computer for the simulation. PTT will allow the student to make errors which in full-scale mock-up simulators might cause failures or damage hardware. On the screen the user is shown a graphical representation of the RMS control panel in the aft cockpit of the orbiter, along with a main view window and up to six trunnion and guide windows. The dials drawn on the panel may be turned to select the desired mode of operation. The inputs controlling the arm are read from a chair with a Translational Hand Controller (THC) and a Rotational Hand Controller (RHC) attached to it.

  11. A concept for major incident triage: full-scaled simulation feasibility study.

    PubMed

    Rehn, Marius; Andersen, Jan E; Vigerust, Trond; Krüger, Andreas J; Lossius, Hans M

    2010-08-11

    Efficient management of major incidents involves triage, treatment and transport. In the absence of a standardised interdisciplinary major incident management approach, the Norwegian Air Ambulance Foundation developed Interdisciplinary Emergency Service Cooperation Course (TAS). The TAS-program was established in 1998 and by 2009, approximately 15,500 emergency service professionals have participated in one of more than 500 no-cost courses. The TAS-triage concept is based on the established triage Sieve and Paediatric Triage Tape models but modified with slap-wrap reflective triage tags and paediatric triage stretchers. We evaluated the feasibility and accuracy of the TAS-triage concept in full-scale simulated major incidents. The learners participated in two standardised bus crash simulations: without and with competence of TAS-triage and access to TAS-triage equipment. The instructors calculated triage accuracy and measured time consumption while the learners participated in a self-reported before-after study. Each question was scored on a 7-point Likert scale with points labelled "Did not work" (1) through "Worked excellent" (7). Among the 93 (85%) participating emergency service professionals, 48% confirmed the existence of a major incident triage system in their service, whereas 27% had access to triage tags. The simulations without TAS-triage resulted in a mean over- and undertriage of 12%. When TAS-Triage was used, no mistriage was found. The average time from "scene secured to all patients triaged" was 22 minutes (range 15-32) without TAS-triage vs. 10 minutes (range 5-21) with TAS-triage. The participants replied to "How did interdisciplinary cooperation of triage work?" with mean 4.9 (95% CI 4.7-5.2) before the course vs. mean 5.8 (95% CI 5.6-6.0) after the course, p < 0.001. 
Our modified triage Sieve tool is feasible, time-efficient and accurate in allocating priority during simulated bus accidents and may serve as a candidate for a future national standard for major incident triage.
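    The triage Sieve model that the TAS concept builds on is a short decision cascade. Below is a simplified sketch of one commonly published adult variant; thresholds differ between local protocols, and this is not the TAS-specific version with its modified tags, so treat the cutoffs as illustrative assumptions.

```python
def triage_sieve(walking, breathing_after_airway_opened, resp_rate,
                 cap_refill_s):
    """Simplified adult triage Sieve cascade (one published variant):
    walking -> T3 delayed; not breathing after airway opening -> dead;
    respiratory rate < 10 or > 29 -> T1 immediate; capillary refill
    > 2 s -> T1 immediate; otherwise T2 urgent."""
    if walking:
        return "T3"
    if not breathing_after_airway_opened:
        return "DEAD"
    if resp_rate < 10 or resp_rate > 29:
        return "T1"
    if cap_refill_s > 2.0:
        return "T1"
    return "T2"
```

Because each casualty needs at most four quick observations, the cascade is fast enough to apply at scene, which is what makes the timing comparison in the study meaningful.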

  12. Multi-fidelity methods for uncertainty quantification in transport problems

    NASA Astrophysics Data System (ADS)

    Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.

    2016-12-01

    We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, a re-scaled Multi Level Monte Carlo (rMLMC) method. The rMLMC is based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods and discuss the advantages of each approach.
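    The core MLMC idea referenced above, many cheap coarse samples plus a few fine-minus-coarse correction samples that share the same random input, can be sketched as follows. The toy `simulate` function (a midpoint-rule integral with a random decay parameter) merely stands in for a flow/transport solve; all names and values are illustrative.

```python
import numpy as np

def simulate(level, xi, n_cells_base=8):
    """Toy quantity of interest at a given resolution: midpoint-rule
    integral of exp(-xi*x) on [0, 1], with n_cells_base * 2**level
    cells. Stands in for a flow/transport solve with random input xi."""
    n = n_cells_base * 2**level
    x = (np.arange(n) + 0.5) / n
    return np.mean(np.exp(-xi * x))

def mlmc_estimate(max_level, samples_per_level, rng):
    """Multi Level Monte Carlo: E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}].
    Each correction term reuses the same random input on both levels, so
    its variance (and hence required sample count) shrinks with level."""
    est = np.mean([simulate(0, rng.uniform(0.5, 1.5))
                   for _ in range(samples_per_level[0])])
    for level in range(1, max_level + 1):
        diffs = []
        for _ in range(samples_per_level[level]):
            xi = rng.uniform(0.5, 1.5)   # shared between the two levels
            diffs.append(simulate(level, xi) - simulate(level - 1, xi))
        est += np.mean(diffs)
    return est

q = mlmc_estimate(max_level=2, samples_per_level=[400, 100, 25],
                  rng=np.random.default_rng(0))
```

The decreasing sample counts `[400, 100, 25]` reflect the MLMC cost balance: most of the work happens at the cheapest resolution.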

  13. Multi-scale gyrokinetic simulation of Alcator C-Mod tokamak discharges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howard, N. T., E-mail: nthoward@psfc.mit.edu; White, A. E.; Greenwald, M.

    2014-03-15

    Alcator C-Mod tokamak discharges have been studied with nonlinear gyrokinetic simulation simultaneously spanning both ion and electron spatiotemporal scales. These multi-scale simulations utilized the gyrokinetic model implemented in the GYRO code [J. Candy and R. E. Waltz, J. Comput. Phys. 186, 545 (2003)] and the approximation of reduced electron mass (μ = (m_D/m_e)^(1/2) = 20.0) to qualitatively study a pair of Alcator C-Mod discharges: a low-power discharge, previously demonstrated (using realistic-mass, ion-scale simulation) to display an under-prediction of the electron heat flux, and a high-power discharge displaying agreement with both ion and electron heat flux channels [N. T. Howard et al., Nucl. Fusion 53, 123011 (2013)]. These multi-scale simulations demonstrate the importance of electron-scale turbulence in the core of conventional tokamak discharges and suggest it is a viable candidate for explaining the observed under-prediction of electron heat flux. In this paper, we investigate the coupling of turbulence at the ion (k_θ ρ_s ~ O(1.0)) and electron (k_θ ρ_e ~ O(1.0)) scales for experimental plasma conditions exhibiting both strong (high-power) and marginally stable (low-power) low-k (k_θ ρ_s < 1.0) turbulence. It is found that reduced-mass simulation of the plasma exhibiting marginally stable low-k turbulence fails to provide even qualitative insight into the turbulence present in the realistic plasma conditions. In contrast, multi-scale simulation of the plasma condition exhibiting strong turbulence provides valuable insight into the coupling of the ion and electron scales.
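    The reduced-electron-mass approximation quoted above (μ = (m_D/m_e)^(1/2) set to 20.0 rather than its physical value) can be checked with two lines of arithmetic; the physical masses below are standard CODATA values.

```python
# Physical masses (CODATA values, kg); only their ratio matters here.
m_e = 9.1093837015e-31   # electron
m_D = 3.3435837724e-27   # deuteron

mu_real = (m_D / m_e) ** 0.5   # ~60.6 at realistic deuterium mass
mu_reduced = 20.0              # value used in the cited simulations

# The reduced value compresses the ion/electron scale separation by
# roughly a factor of three, which is what makes simultaneous ion- and
# electron-scale resolution computationally affordable.
scale_compression = mu_real / mu_reduced
```

This is why the abstract describes the reduced-mass runs as qualitative: the electron gyroradius scales are artificially moved about 3x closer to the ion scales.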

  14. Translating patient reported outcome measures: methodological issues explored using cognitive interviewing with three rheumatoid arthritis measures in six European languages.

    PubMed

    Hewlett, Sarah; Nicklin, Joanna; Bode, Chistina; Carmona, Loreto; Dures, Emma; Engelbrecht, Matthias; Hagel, Sofia; Kirwan, John; Molto, Anna; Redondo, Marta; Gossec, Laure

    2016-06-01

    Cross-cultural translation of patient-reported outcome measures (PROMs) is a lengthy process, often performed professionally. Cognitive interviewing assesses patient comprehension of PROMs. The objective was to evaluate the usefulness of cognitive interviewing to assess translations and compare professional (full) with non-professional (simplified) translation processes. A full protocol used for the Bristol RA Fatigue Multi-dimensional Questionnaire and Numerical Rating Scale (BRAF-MDQ, BRAF-NRS) was compared with a simplified protocol used for the RA Impact of Disease scale (RAID). RA patients in the UK, France, the Netherlands, Germany, Spain and Sweden completed the PROMs during cognitive interviewing (BRAFs in the UK were omitted as these were performed during development). Transcripts were deductively analysed for understanding, information retrieval, judgement and response options. Usefulness of cognitive interviewing was assessed by the nature of problems identified, and translation processes by percentage of consistently problematic items (⩾40% patients per country with similar concerns). Sixty patients participated (72% women). For the BRAFs (full protocol) one problematic item was identified (of 23 items × 5 languages, 1/115 = 0.9%). For the RAID (simplified protocol) two problematic items were identified (of 7 items × 6 languages, 2/42 = 4.8%), of which one was revised (Dutch). Coping questions were problematic in both PROMs. Conceptual and cultural challenges though rare were important, as identified by formal evaluation, demonstrating that cognitive interviewing is crucial in PROM translations. Proportionately fewer problematic items were found for the full than for the simplified translation procedure, suggesting that while both are acceptable, professional PROM translation might be preferable. Coping may be a particularly challenging notion cross-culturally.

  15. Blade Displacement Measurements of the Full-Scale UH-60A Airloads Rotor

    NASA Technical Reports Server (NTRS)

    Barrows, Danny A.; Burner, Alpheus W.; Abrego, Anita I.; Olson, Lawrence E.

    2011-01-01

    Blade displacement measurements were acquired during a wind tunnel test of the full-scale UH-60A Airloads rotor. The test was conducted in the 40- by 80-Foot Wind Tunnel of the National Full-Scale Aerodynamics Complex at NASA Ames Research Center. Multi-camera photogrammetry was used to measure the blade displacements of the four-bladed rotor. These measurements encompass a range of test conditions that include advance ratios from 0.15 to unique slowed-rotor simulations as high as 1.0, thrust coefficient to rotor solidity ratios from 0.01 to 0.13, and rotor shaft angles from -10.0 to 8.0 degrees. The objective of these measurements is to provide a benchmark blade displacement database to be utilized in the development and validation of rotorcraft computational tools. The methodology, system development, measurement techniques, and preliminary sample blade displacement measurements are presented.

  16. Comparison of multi-fluid moment models with particle-in-cell simulations of collisionless magnetic reconnection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Liang, E-mail: liang.wang@unh.edu; Germaschewski, K.; Hakim, Ammar H.

    2015-01-15

    We introduce an extensible multi-fluid moment model in the context of collisionless magnetic reconnection. This model evolves the full Maxwell equations and simultaneously moments of the Vlasov-Maxwell equation for each species in the plasma. Effects like electron inertia and pressure gradient are self-consistently embedded in the resulting multi-fluid moment equations, without the need to explicitly solve a generalized Ohm's law. Two limits of the multi-fluid moment model are discussed, namely, the five-moment limit that evolves a scalar pressure for each species and the ten-moment limit that evolves the full anisotropic, non-gyrotropic pressure tensor for each species. We first demonstrate analytically and numerically that the five-moment model reduces to the widely used Hall magnetohydrodynamics (Hall MHD) model under the assumptions of vanishing electron inertia, infinite speed of light, and quasi-neutrality. Then, we compare ten-moment and fully kinetic particle-in-cell (PIC) simulations of a large-scale Harris sheet reconnection problem, where the ten-moment equations are closed with a local linear collisionless approximation for the heat flux. The ten-moment simulation gives reasonable agreement with the PIC results regarding the structures and magnitudes of the electron flows, the polarities and magnitudes of elements of the electron pressure tensor, and the decomposition of the generalized Ohm's law. Possible ways to improve the simple local closure towards a nonlocal, fully three-dimensional closure are also discussed.
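    As a concrete illustration of the five-moment limit, the sketch below applies only the Lorentz-force source terms for a single species: momentum gains q·n·(E + u×B) and total energy gains q·n·(u·E) per unit time. The hyperbolic flux terms and the Maxwell field update that a full five-moment solver needs are omitted, and the numbers are toy values, not from the paper.

```python
import numpy as np

GAMMA = 5.0 / 3.0  # adiabatic index for a scalar-pressure species

def lorentz_source_update(n, u, p, q, m, E, B, dt):
    """Explicit update of the five-moment Lorentz-force source terms for
    one species (number density n, bulk velocity u, scalar pressure p).
    Momentum gains dt*q*n*(E + u x B); total energy gains dt*q*n*(u.E).
    Flux terms and the field solve are omitted in this sketch."""
    rho = m * n
    mom = rho * u
    energy = p / (GAMMA - 1.0) + 0.5 * rho * np.dot(u, u)

    mom = mom + dt * q * n * (E + np.cross(u, B))
    energy = energy + dt * q * n * np.dot(u, E)

    u_new = mom / rho
    p_new = (GAMMA - 1.0) * (energy - 0.5 * rho * np.dot(u_new, u_new))
    return u_new, p_new

# Toy numbers: electrons drifting across crossed E and B fields.
u_new, p_new = lorentz_source_update(
    n=1.0e18, u=np.array([1.0e5, 0.0, 0.0]), p=1.0e3,
    q=-1.602e-19, m=9.109e-31,
    E=np.array([0.0, 1.0e3, 0.0]), B=np.array([0.0, 0.0, 1.0e-3]),
    dt=1.0e-12)
```

Updating pressure via the total energy, as here, is what keeps the five-moment system thermodynamically consistent: magnetic forces rotate the momentum without changing the internal energy.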

  17. FAST: A multi-processed environment for visualization of computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon V.; Merritt, Fergus J.; Plessel, Todd C.; Kelaita, Paul G.; Mccabe, R. Kevin

    1991-01-01

    Three-dimensional, unsteady, multi-zoned fluid dynamics simulations over full scale aircraft are typical of the problems being investigated at NASA Ames' Numerical Aerodynamic Simulation (NAS) facility on CRAY2 and CRAY-YMP supercomputers. With multiple processor workstations available in the 10-30 Mflop range, we feel that these new developments in scientific computing warrant a new approach to the design and implementation of analysis tools. These larger, more complex problems create a need for new visualization techniques not possible with the existing software or systems available as of this writing. The visualization techniques will change as the supercomputing environment, and hence the scientific methods employed, evolves even further. The Flow Analysis Software Toolkit (FAST), an implementation of a software system for fluid mechanics analysis, is discussed.

  18. Simulation of Left Atrial Function Using a Multi-Scale Model of the Cardiovascular System

    PubMed Central

    Pironet, Antoine; Dauby, Pierre C.; Paeme, Sabine; Kosta, Sarah; Chase, J. Geoffrey; Desaive, Thomas

    2013-01-01

    During a full cardiac cycle, the left atrium successively behaves as a reservoir, a conduit and a pump. This complex behavior makes it unrealistic to apply the time-varying elastance theory to characterize the left atrium, first, because this theory has known limitations, and second, because it is still uncertain whether the load independence hypothesis holds. In this study, we aim to bypass this uncertainty by relying on another kind of mathematical model of the cardiac chambers. In the present work, we describe both the left atrium and the left ventricle with a multi-scale model. The multi-scale property of this model comes from the fact that pressure inside a cardiac chamber is derived from a model of the sarcomere behavior. Macroscopic model parameters are identified from reference dog hemodynamic data. The multi-scale model of the cardiovascular system including the left atrium is then simulated to show that the physiological roles of the left atrium are correctly reproduced. This includes a biphasic pressure wave and an eight-shaped pressure-volume loop. We also test the validity of our model in non-basal conditions by reproducing a preload reduction experiment by inferior vena cava occlusion with the model. We compute the variation of eight indices before and after this experiment and obtain the same variation as experimentally observed for seven out of the eight indices. In summary, the multi-scale mathematical model presented in this work is able to correctly account for the three roles of the left atrium and also exhibits a realistic left atrial pressure-volume loop. Furthermore, the model has been previously presented and validated for the left ventricle. This makes it a proper alternative to the time-varying elastance theory if the focus is set on precisely representing the left atrial and left ventricular behaviors. PMID:23755183

  19. Development of an Efficient Meso-scale Multi-phase Flow Solver in Nuclear Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Taehun

    2015-10-20

    The proposed research aims at formulating a predictive high-order Lattice Boltzmann Equation for multi-phase flows relevant to nuclear energy applications, namely, saturated and sub-cooled boiling in reactors, and liquid-liquid mixing and extraction for fuel cycle separation. An efficient flow solver will be developed based on the Finite Element based Lattice Boltzmann Method (FE-LBM), accounting for phase-change heat transfer and capable of treating multiple phases over length scales from the submicron to the meter. A thermal LBM will be developed in order to handle adjustable Prandtl number, arbitrary specific heat ratio, a wide range of temperature variations, better numerical stability during liquid-vapor phase change, and full thermo-hydrodynamic consistency. Two-phase FE-LBM will be extended to liquid-liquid-gas multi-phase flows for application to high-fidelity simulations building up from the meso-scale to the equipment sub-component scale. While several relevant applications exist, the initial applications for demonstration of the efficient methods to be developed as part of this project include numerical investigations of Critical Heat Flux (CHF) phenomena in nuclear reactor fuel bundles, and liquid-liquid mixing and interfacial area generation for liquid-liquid separations. In addition, targeted experiments will be conducted for validation of this advanced multi-phase model.
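    For readers unfamiliar with lattice Boltzmann methods, the stream-and-collide skeleton underlying any LBM solver looks like the single-phase D2Q9 BGK sketch below. The finite-element, thermal, and multi-phase machinery described in the abstract goes far beyond this toy, which only shows the basic update cycle on a periodic grid.

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities and their standard weights.
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Second-order Maxwell-Boltzmann equilibrium distribution."""
    eu = np.einsum('qd,xyd->qxy', e, u)
    usq = np.einsum('xyd,xyd->xy', u, u)
    return w[:, None, None] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)

def lbm_step(f, tau=0.8):
    """One BGK stream-and-collide cycle with periodic boundaries."""
    rho = f.sum(axis=0)                                # density moment
    u = np.einsum('qd,qxy->xyd', e, f) / rho[..., None]  # velocity moment
    f = f + (equilibrium(rho, u) - f) / tau            # BGK collision
    for q in range(9):                                 # streaming
        f[q] = np.roll(f[q], shift=tuple(e[q]), axis=(0, 1))
    return f

nx = ny = 16
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny, 2)))
for _ in range(10):
    f = lbm_step(f)
rho = f.sum(axis=0)
```

Because collision is purely local and streaming is a fixed shift, the cycle parallelizes well, which is one reason LBM is attractive for the large multi-phase problems the project targets.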

  20. Multiscale Simulation of Blood Flow in Brain Arteries with an Aneurysm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leopold Grinberg; Vitali Morozov; Dmitry A. Fedosov

    2013-04-24

    Multi-scale modeling of arterial blood flow can shed light on the interaction between events happening at micro- and meso-scales (i.e., adhesion of red blood cells to the arterial wall, clot formation) and at macro-scales (i.e., change in flow patterns due to the clot). Coupled numerical simulations of such multi-scale flow require state-of-the-art computers and algorithms, along with techniques for multi-scale visualizations.This animation presents results of studies used in the development of a multi-scale visualization methodology. First we use streamlines to show the path the flow is taking as it moves through the system, including the aneurysm. Next we investigate themore » process of thrombus (blood clot) formation, which may be responsible for the rupture of aneurysms, by concentrating on the platelet blood cells, observing as they aggregate on the wall of the aneurysm.« less

  1. Simulation training for improving the quality of care for older people: an independent evaluation of an innovative programme for inter-professional education.

    PubMed

    Ross, Alastair J; Anderson, Janet E; Kodate, Naonori; Thomas, Libby; Thompson, Kellie; Thomas, Beth; Key, Suzie; Jensen, Heidi; Schiff, Rebekah; Jaye, Peter

    2013-06-01

    This paper describes the evaluation of a 2-day simulation training programme for staff designed to improve teamwork and inpatient care and compassion in an older persons' unit. The programme was designed to improve inpatient care for older people by using mixed modality simulation exercises to enhance teamwork and empathetic and compassionate care. Healthcare professionals took part in: (a) a 1-day human patient simulation course with six scenarios and (b) a 1-day ward-based simulation course involving five 1-h exercises with integrated debriefing. A mixed methods evaluation included observations of the programme, precourse and postcourse confidence rating scales and follow-up interviews with staff at 7-9 weeks post-training. Observations showed enjoyment of the course but some anxiety and apprehension about the simulation environment. Staff self-confidence improved after human patient simulation (t=9; df=56; p<0.001) and ward-based exercises (t=9.3; df=76; p<0.001). Thematic analysis of interview data showed learning in teamwork and patient care. Participants thought that simulation had been beneficial for team practices such as calling for help and verbalising concerns and for improved interaction with patients. Areas to address in future include widening participation across multi-disciplinary teams, enhancing post-training support and exploring further which aspects of the programme enhance compassion and care of older persons. The study demonstrated that simulation is an effective method for encouraging dignified care and compassion for older persons by teaching team skills and empathetic and sensitive communication with patients and relatives.
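    The evaluation above reports paired pre/post confidence statistics (e.g., t=9, df=56). As a minimal sketch of how such a paired t-statistic is computed, using hypothetical ratings (the study's raw data are not given):

```python
# Paired t-statistic for pre/post confidence ratings.
# The ratings below are hypothetical, not the study's data.
import math

def paired_t(pre, post):
    """Return (t statistic, degrees of freedom) for paired samples."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    t = mean / math.sqrt(var / n)
    return t, n - 1

pre  = [3, 4, 2, 5, 3, 4, 3, 2]   # hypothetical pre-course ratings
post = [5, 6, 4, 6, 5, 6, 5, 4]   # hypothetical post-course ratings
t, df = paired_t(pre, post)
```

In practice one would use a statistics library (e.g., `scipy.stats.ttest_rel`) to obtain the p-value as well.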

  2. Advanced computations in plasma physics

    NASA Astrophysics Data System (ADS)

    Tang, W. M.

    2002-05-01

    Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. In this paper we review recent progress and future directions for advanced simulations in magnetically confined plasmas with illustrative examples chosen from magnetic confinement research areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPPs). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments.
The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.

  3. Cost-effectiveness of simulation-based team training in obstetric emergencies (TOSTI study).

    PubMed

    van de Ven, J; van Baaren, G J; Fransen, A F; van Runnard Heimel, P J; Mol, B W; Oei, S G

    2017-09-01

    Team training is frequently applied in obstetrics. We aimed to evaluate the cost-effectiveness of obstetric multi-professional team training in a medical simulation centre. We performed a model-based cost-effectiveness analysis to evaluate four strategies for obstetric team training from a hospital perspective (no training, training without on-site repetition, and training with 6-month or 3-6-9-month repetition). Data were retrieved from the TOSTI study, a randomised controlled trial evaluating team training in a medical simulation centre. We calculated the incremental cost-effectiveness ratio (ICER), which represents the cost to prevent one adverse outcome, here (1) the composite outcome of obstetric complications and (2) specifically neonatal trauma due to shoulder dystocia. The mean cost of a one-day multi-professional team training in a medical simulation centre was €25,546 to train all personnel of one hospital. A single training in a medical simulation centre was less effective and more costly than strategies that included repetition training. Compared to no training, the ICERs to prevent the composite outcome of obstetric complications were €3432 for a single repetition training course on-site six months after the initial training and €5115 for a three-monthly repetition training course on-site during the year after the initial training. When we considered neonatal trauma due to shoulder dystocia, a three-monthly repetition training course on-site after the initial training had an ICER of €22,878. Multi-professional team training in a medical simulation centre is cost-effective in a scenario where repetition training sessions are performed on-site. Copyright © 2017 Elsevier B.V. All rights reserved.
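    The ICER reported above is simply incremental cost divided by incremental effect. A minimal sketch, with hypothetical figures chosen only to reproduce the reported €3432 per complication prevented (these are not the TOSTI trial's inputs):

```python
# ICER = (cost_A - cost_B) / (effect_A - effect_B): the extra cost to prevent
# one additional adverse outcome when moving from strategy B to strategy A.
# All figures below are hypothetical, not taken from the TOSTI trial.

def icer(cost_a, cost_b, prevented_a, prevented_b):
    """Incremental cost per additional outcome prevented (A vs. B)."""
    return (cost_a - cost_b) / (prevented_a - prevented_b)

no_training  = {"cost": 0,     "prevented": 0.0}
with_repeats = {"cost": 68640, "prevented": 20.0}  # hypothetical numbers

ratio = icer(with_repeats["cost"], no_training["cost"],
             with_repeats["prevented"], no_training["prevented"])
# ratio == 3432.0: euros per composite obstetric complication prevented
```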

  4. Assessing uncertainties in crop and pasture ensemble model simulations of productivity and N2O emissions.

    PubMed

    Ehrhardt, Fiona; Soussana, Jean-François; Bellocchi, Gianni; Grace, Peter; McAuliffe, Russel; Recous, Sylvie; Sándor, Renáta; Smith, Pete; Snow, Val; de Antoni Migliorati, Massimiliano; Basso, Bruno; Bhatia, Arti; Brilli, Lorenzo; Doltra, Jordi; Dorich, Christopher D; Doro, Luca; Fitton, Nuala; Giacomini, Sandro J; Grant, Brian; Harrison, Matthew T; Jones, Stephanie K; Kirschbaum, Miko U F; Klumpp, Katja; Laville, Patricia; Léonard, Joël; Liebig, Mark; Lieffering, Mark; Martin, Raphaël; Massad, Raia S; Meier, Elizabeth; Merbold, Lutz; Moore, Andrew D; Myrgiotis, Vasileios; Newton, Paul; Pattey, Elizabeth; Rolinski, Susanne; Sharp, Joanna; Smith, Ward N; Wu, Lianhai; Zhang, Qing

    2018-02-01

    Simulation models are extensively used to predict agricultural productivity and greenhouse gas emissions. However, the uncertainties of (reduced) model ensemble simulations have not been assessed systematically for variables affecting food security and climate change mitigation, within multi-species agricultural contexts. We report an international model comparison and benchmarking exercise, showing the potential of multi-model ensembles to predict productivity and nitrous oxide (N2O) emissions for wheat, maize, rice and temperate grasslands. Using a multi-stage modelling protocol, from blind simulations (stage 1) to partial (stages 2-4) and full calibration (stage 5), 24 process-based biogeochemical models were assessed individually or as an ensemble against long-term experimental data from four temperate grassland and five arable crop rotation sites spanning four continents. Comparisons were performed by reference to the experimental uncertainties of observed yields and N2O emissions. Results showed that across sites and crop/grassland types, 23%-40% of the uncalibrated individual models were within two standard deviations (SD) of observed yields, while 42% (rice) to 96% (grasslands) of the models were within 1 SD of observed N2O emissions. At stage 1, ensembles formed by the three models with the lowest prediction errors predicted both yields and N2O emissions within experimental uncertainties for 44% and 33% of the crop and grassland growth cycles, respectively. Partial model calibration (stages 2-4) markedly reduced prediction errors of the full model ensemble E-median for crop grain yields (from 36% at stage 1 down to 4% on average) and grassland productivity (from 44% to 27%), and to a lesser and more variable extent for N2O emissions. Yield-scaled N2O emissions (N2O emissions divided by crop yields) were ranked accurately by three-model ensembles across crop species and field sites. The potential of using process-based model ensembles to jointly predict productivity and N2O emissions at field scale is discussed. © 2017 John Wiley & Sons Ltd.
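    Two of the benchmarking statistics used above can be sketched directly: the share of individual models whose prediction falls within k standard deviations of the observation, and the relative error of the ensemble median (E-median). The yields below are made up for illustration:

```python
# Benchmarking statistics sketch; data are hypothetical, not the study's.

def within_k_sd(predictions, observed, sd, k=2):
    """Fraction of individual models within k SD of the observed value."""
    hits = [p for p in predictions if abs(p - observed) <= k * sd]
    return len(hits) / len(predictions)

def e_median_error(predictions, observed):
    """Relative error of the ensemble median (E-median) prediction."""
    s = sorted(predictions)
    n = len(s)
    median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    return abs(median - observed) / observed

yields = [6.1, 7.4, 5.2, 8.9, 6.8, 7.0, 4.1, 7.7]  # t/ha, hypothetical models
obs, sd = 7.0, 0.6                                  # observation and its SD
share = within_k_sd(yields, obs, sd)                # fraction within ±2 SD
err = e_median_error(yields, obs)                   # E-median relative error
```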

  5. A robust computational technique for model order reduction of two-time-scale discrete systems via genetic algorithms.

    PubMed

    Alsmadi, Othman M K; Abo-Hammour, Zaer S

    2015-01-01

    A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA) with the advantage of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing a fitness function derived from the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, where simulation results show the potential and advantages of the new approach.
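    The core of such a GA is the fitness evaluation: a candidate reduced model is scored by how closely its response tracks the full model's. A minimal sketch using scalar discrete systems and a step response (the paper's systems, transformation, and fitness form are not given, so everything here is illustrative):

```python
# Fitness-evaluation sketch for GA-based model order reduction.
# Systems are scalar (a, b, c, d) discrete state-space models for simplicity.

def step_response(a, b, c, d, n):
    """Step response of x[k+1] = a*x[k] + b*u[k], y[k] = c*x[k] + d*u[k]."""
    x, out = 0.0, []
    for _ in range(n):
        out.append(c * x + d * 1.0)
        x = a * x + b * 1.0
    return out

def fitness(full, reduced, n=50):
    """Inverse of summed squared response deviation (higher is better)."""
    yf = step_response(*full, n)
    yr = step_response(*reduced, n)
    err = sum((p - q) ** 2 for p, q in zip(yf, yr))
    return 1.0 / (1.0 + err)

full_model     = (0.9, 1.0, 0.5, 0.0)
good_candidate = (0.9, 1.0, 0.5, 0.0)  # matches the dominant dynamics exactly
poor_candidate = (0.5, 1.0, 0.5, 0.0)  # wrong dominant pole
```

A GA would evolve a population of candidates, selecting and mutating toward higher fitness.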

  6. A multi-scale model for geared transmission aero-thermodynamics

    NASA Astrophysics Data System (ADS)

    McIntyre, Sean M.

    A multi-scale, multi-physics computational tool for the simulation of high-performance gearbox aero-thermodynamics was developed and applied to equilibrium and pathological loss-of-lubrication performance simulation. The physical processes at play in these systems include multiphase compressible flow of the air and lubricant within the gearbox, meshing kinematics and tribology, as well as heat transfer by conduction, and free and forced convection. These physics are coupled across their representative space and time scales in the computational framework developed in this dissertation. These scales span eight orders of magnitude, from the thermal response of the full gearbox O(10^0 m; 10^2 s), through effects at the tooth passage time scale O(10^-2 m; 10^-4 s), down to tribological effects on the meshing gear teeth O(10^-6 m; 10^-6 s). Direct numerical simulation of these coupled physics and scales is intractable. Accordingly, a scale-segregated simulation strategy was developed by partitioning and treating the contributing physical mechanisms as sub-problems, each with associated space and time scales, and appropriate coupling mechanisms. These are: (1) the long time scale thermal response of the system, (2) the multiphase (air, droplets, and film) aerodynamic flow and convective heat transfer within the gearbox, (3) the high-frequency, time-periodic thermal effects of gear tooth heating while in mesh and its subsequent cooling through the rest of rotation, (4) meshing effects including tribology and contact mechanics. The overarching goal of this dissertation was to develop software and analysis procedures for gearbox loss-of-lubrication performance. To accommodate these four physical effects and their coupling, each is treated in the CFD code as a sub-problem. These physics modules are coupled algorithmically.
Specifically, the high-frequency conduction analysis derives its local heat transfer coefficient and near-wall air temperature boundary conditions from a quasi-steady cyclic-symmetric simulation of the internal flow. This high-frequency conduction solution is coupled directly with a model for the meshing friction, developed by a collaborator, which was adapted for use in a finite-volume CFD code. The local surface heat flux on solid surfaces is calculated by time-averaging the heat flux in the high-frequency analysis. This serves as a fixed-flux boundary condition in the long time scale conduction module. The temperature distribution from this long time scale heat transfer calculation serves as a boundary condition for the internal convection simulation, and as the initial condition for the high-frequency heat transfer module. Using this multi-scale model, simulations were performed for equilibrium and loss-of-lubrication operation of the NASA Glenn Research Center test stand. Results were compared with experimental measurements. In addition to the multi-scale model itself, several other specific contributions were made. Eulerian models for droplets and wall-films were developed and implemented in the CFD code. A novel approach to retaining liquid film on the solid surfaces, and strategies for its mass exchange with droplets, were developed and verified. Models for interfacial transfer between droplets and wall-film were implemented, and include the effects of droplet deposition, splashing, bouncing, as well as film breakup. These models were validated against airfoil data. To mitigate the observed slow convergence of CFD simulations of the enclosed aerodynamic flows within gearboxes, Fourier stability analysis was applied to the SIMPLE-C fractional-step algorithm. From this, recommendations to accelerate the convergence rate through enhanced pressure-velocity coupling were made. These were shown to be effective.
A fast-running finite-volume reduced-order model of the gearbox aero-thermodynamics was developed, and coupled with the tribology model to investigate the sensitivity of loss-of-lubrication predictions to various model and physical parameters. This sensitivity study was instrumental in guiding efforts toward improving the accuracy of the multi-scale model without undue increase in computational cost. In addition, the reduced-order model is now used extensively by a collaborator in tribology model development and testing. Experimental measurements of high-speed gear windage in partially and fully-shrouded configurations were performed to supplement the paucity of available validation data. This measurement program provided measurements of windage loss for a gear of design-relevant size and operating speed, as well as guidance for increasing the accuracy of future measurements.
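    The scale-segregation coupling described above hinges on one simple operation: time-averaging the high-frequency surface heat flux over a cycle to produce the fixed-flux boundary condition for the long-time-scale conduction module. A minimal sketch with a synthetic flux trace (the actual flux comes from the high-frequency CFD analysis):

```python
# Coupling sketch: average a periodic high-frequency flux over one period to
# obtain a steady boundary-condition flux. The flux signal below is synthetic.
import math

def time_averaged_flux(flux_fn, period, samples=1000):
    """Mean of a periodic flux signal over one period (rectangle rule)."""
    dt = period / samples
    return sum(flux_fn(i * dt) for i in range(samples)) * dt / period

# Synthetic tooth-passage flux: steady 2 kW/m^2 plus a high-frequency pulse
flux = lambda t: 2000.0 + 1500.0 * math.sin(2 * math.pi * t / 1e-4) ** 2
avg = time_averaged_flux(flux, period=1e-4)  # steady-state equivalent flux
```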

  7. Mechanical response of stainless steel subjected to biaxial load path changes: Cruciform experiments and multi-scale modeling

    DOE PAGES

    Upadhyay, Manas V.; Patra, Anirban; Wen, Wei; ...

    2018-05-08

    In this paper, we propose a multi-scale modeling approach that can simulate the microstructural and mechanical behavior of metal or alloy parts with complex geometries subjected to multi-axial load path changes. The model is used to understand the biaxial load path change behavior of 316L stainless steel cruciform samples. At the macroscale, a finite element approach is used to simulate the cruciform geometry and numerically predict the gauge stresses, which are difficult to obtain analytically. At each material point in the finite element mesh, the anisotropic viscoplastic self-consistent model is used to simulate the role of texture evolution on the mechanical response. At the single crystal level, a dislocation density based hardening law that appropriately captures the role of multi-axial load path changes on slip activity is used. The combined approach is experimentally validated using cruciform samples subjected to uniaxial load and unload followed by different biaxial reloads in the angular range [27°, 90°]. Polycrystalline yield surfaces before and after load path changes are generated using the full-field elasto-viscoplastic fast Fourier transform model to study the influence of the deformation history and reloading direction on the mechanical response, including the Bauschinger effect, of these cruciform samples. Results reveal that the Bauschinger effect is strongly dependent on the first loading direction and strain, intergranular and macroscopic residual stresses after first load, and the reloading angle. The microstructural origins of the mechanical response are discussed.

  9. Simulations of Tornadoes, Tropical Cyclones, MJOs, and QBOs, using GFDL's multi-scale global climate modeling system

    NASA Astrophysics Data System (ADS)

    Lin, Shian-Jiann; Harris, Lucas; Chen, Jan-Huey; Zhao, Ming

    2014-05-01

    A multi-scale High-Resolution Atmosphere Model (HiRAM) is being developed at NOAA/Geophysical Fluid Dynamics Laboratory. The model's dynamical framework is the non-hydrostatic extension of the vertically Lagrangian finite-volume dynamical core (Lin 2004, Monthly Wea. Rev.) constructed on a stretchable (via Schmidt transformation) cubed-sphere grid. Physical parametrizations originally designed for IPCC-type climate predictions are in the process of being modified and made more "scale-aware", in an effort to make the model suitable for multi-scale weather-climate applications, with horizontal resolution ranging from 1 km (near the target high-resolution region) to as coarse as 400 km (near the antipodal point). One of the main goals of this development is to enable simulation of high-impact weather phenomena (such as tornadoes, thunderstorms, category-5 hurricanes) within an IPCC-class climate modeling system, previously thought impossible. We will present preliminary results covering a very wide spectrum of temporal-spatial scales, ranging from simulation of tornado genesis (hours), Madden-Julian Oscillations (intra-seasonal), tropical cyclones (seasonal), to Quasi-Biennial Oscillations (intra-decadal), using the same global multi-scale modeling system.
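    The Schmidt transformation mentioned above stretches the grid conformally so that resolution concentrates near a target region. One common form remaps latitude (measured relative to the target point) with a stretch factor c; sign conventions vary by code, and this sketch is not necessarily GFDL's exact implementation:

```python
# Schmidt (1977) conformal stretching sketch: remap latitude with factor c.
# c = 1 leaves the grid unchanged; larger c crowds points toward one pole
# of the transformed grid. Conventions here are illustrative assumptions.
import math

def schmidt_latitude(lat, c):
    """Remap a latitude (radians, relative to the target axis)."""
    d = (1.0 - c * c) / (1.0 + c * c)
    s = (d + math.sin(lat)) / (1.0 + d * math.sin(lat))
    return math.asin(max(-1.0, min(1.0, s)))  # clamp against rounding

# With c = 3, transformed latitudes shift monotonically toward one pole
lats = [schmidt_latitude(math.radians(x), 3.0) for x in (-60, 0, 60)]
```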

  10. [Full-scale simulation in German medical schools and anesthesia residency programs : Status quo].

    PubMed

    Baschnegger, H; Meyer, O; Zech, A; Urban, B; Rall, M; Breuer, G; Prückner, S

    2017-01-01

    Simulation has been increasingly used in medicine. In 2003 German university departments of anesthesiology were provided with a full-scale patient simulator, designated for use with medical students. Meanwhile simulation courses are also offered to physicians and nurses. Currently, the national model curriculum for residency programs in anesthesiology is being revised, possibly to include mandatory simulation training. To assess the status quo of full-scale simulation training for medical school, residency and continuing medical education in German anesthesiology, all 38 German university chairs for anesthesiology as well as five arbitrarily chosen non-university facilities were invited to complete an online questionnaire regarding their centers' infrastructure and courses held between 2010 and 2012. The overall return rate was 86%. In university simulation centers seven non-student staff members, mainly physicians, were involved, adding up to a full-time equivalent of 1.2. All hours of work were paid by 61% of the centers. The median center size was 100 m² (range 20-500 m²), equipped with three patient simulators (1-32). Simulators of high or very high fidelity are available at 80% of the centers. Scripted scenarios were used by 91%, video debriefing by 69%. Of the participating university centers, 97% offered courses for medical students, 81% for the department's employees, 43% for other departments of their hospital, and 61% for external participants. In 2012 the median center reached 46% of eligible students (0-100), 39% of the department's physicians (8-96) and 16% of its nurses (0-56) once. For physicians and nurses from these departments that equals one simulation-based training every 2.6 and 6 years, respectively. 31% made simulation training mandatory for their residents, 29% for their nurses and 24% for their attending physicians.
The overall rates of staff ever exposed to simulation were 45% of residents (8-90), and 30% each of nurses (10-80) and attendings (0-100). Including external courses, the average center trained 59 (4-271) professionals overall in 2012. No clear trend could be observed over the three years polled. The results for the non-university centers were comparable. Important first steps have been taken to implement full-scale simulation in Germany. In addition to programs for medical students, courses for physicians and nurses are available today. To reach everyone clinically involved in German anesthesiology on a regular basis, the current capacities need to be dramatically increased. The basis for that to happen will be new concepts for funding, possibly supported by external requirements such as the national model curriculum for residency in anesthesiology.

  11. Multi-Scale Characterization of Orthotropic Microstructures

    DTIC Science & Technology

    2008-04-01

    D. Valiveti, S. J. Harris, J. Boileau, A domain partitioning based pre-processor for multi-scale modelling of cast aluminium alloys, Modelling and...SUPPLEMENTARY NOTES Journal article submitted to Modeling and Simulation in Materials Science and Engineering. PAO Case Number: WPAFB 08-3362...element for characterization or simulation to avoid misleading predictions of macroscopic deformation, fracture, or transport behavior. Likewise

  12. Comparison of sub-scale to full-scale aircraft in a simulation environment for air traffic management

    NASA Astrophysics Data System (ADS)

    Elbakary, Mohamed I.; Iftekharuddin, Khan M.; Papelis, Yiannis; Newman, Brett

    2017-05-01

    Air Traffic Management (ATM) concepts are commonly tested in simulation to obtain preliminary results and validate the concepts before adoption. Researchers have found, however, that simulation alone is not enough because of the complexity associated with ATM concepts. In other words, full-scale tests must eventually take place to provide compelling performance evidence before adopting full implementation. Testing using full-scale aircraft is a high-cost approach that yields high-confidence results, while simulation provides a low-risk, low-cost approach with reduced confidence in the results. One possible approach to increase the confidence of the results and simultaneously reduce the risk and the cost is using unmanned sub-scale aircraft in testing new concepts for ATM. This paper presents the simulation results of using unmanned sub-scale aircraft in implementing ATM concepts compared to full-scale aircraft. The results of simulation show that the performance of the sub-scale aircraft is quite comparable to that of the full-scale aircraft, which supports use of the sub-scale in testing new ATM concepts.
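    Relating a sub-scale test article to its full-scale counterpart typically relies on dynamic similarity. The paper's exact scaling laws are not given above; the sketch below shows the textbook Froude-scaling relations often used for geometrically similar sub-scale flight-test vehicles:

```python
# Froude (dynamic) scaling relations for a geometrically similar model.
# These are standard textbook relations, assumed here for illustration.
import math

def froude_scale(length_ratio):
    """Scale factors for a Froude-scaled vehicle of given length ratio."""
    n = length_ratio                  # model length / full-scale length
    return {
        "length":   n,
        "velocity": math.sqrt(n),     # equal Froude number V^2 / (g*L)
        "time":     math.sqrt(n),
        "mass":     n ** 3,           # equal density / load distribution
    }

s = froude_scale(1 / 16)              # a hypothetical 1/16-scale aircraft
approach_speed = 80.0 * s["velocity"] # full-scale 80 m/s maps to 20 m/s
```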

  13. The Effect of Lateral Boundary Values on Atmospheric Mercury Simulations with the CMAQ Model

    EPA Science Inventory

    Simulation results from three global-scale models of atmospheric mercury have been used to define three sets of initial condition and boundary condition (IC/BC) data for regional-scale model simulations over North America using the Community Multi-scale Air Quality (CMAQ) model. ...

  14. A Portrait of Non-Tenure-Track Faculty in Technical and Professional Communication: Results of a Pilot Study

    ERIC Educational Resources Information Center

    Meloncon, Lisa; England, Peter; Ilyasova, Alex

    2016-01-01

    We report the results of a pilot study that offers the field of technical and professional communication its first look at the material working conditions of contingent faculty, such as course loads, compensation, and professional support. Findings include that contingent faculty are more enduring, with stable full-time, multi-year contracts; they…

  15. Multi-Scale Modeling of Liquid Phase Sintering Affected by Gravity: Preliminary Analysis

    NASA Technical Reports Server (NTRS)

    Olevsky, Eugene; German, Randall M.

    2012-01-01

    A multi-scale simulation concept taking into account the impact of gravity on liquid phase sintering is described. The gravity influence can be included at both the micro- and macro-scales. At the micro-scale, the diffusion mass-transport is directionally modified in the framework of kinetic Monte-Carlo simulations to include the impact of gravity. The micro-scale simulations can provide the values of the constitutive parameters for macroscopic sintering simulations. At the macro-scale, we are attempting to embed a continuum model of sintering into a finite-element framework that includes the gravity forces and substrate friction. If successful, the finite element analysis will enable predictions relevant to space-based processing, including size, shape, and property predictions. Model experiments are underway to support the models via extraction of viscosity moduli versus composition, particle size, heating rate, temperature and time.
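    The directional modification of diffusion in the kinetic Monte Carlo step can be illustrated with a Metropolis-style move acceptance that adds a gravitational potential term, so downward hops are favored over upward ones. The energy model and units here are illustrative assumptions, not the authors' parameters:

```python
# Metropolis-style hop acceptance with a gravity bias, in the spirit of the
# directionally modified kinetic Monte Carlo above. Parameters are illustrative.
import math, random

def accept_move(delta_e_bond, dz, g_weight, kT=1.0, rng=random.random):
    """Accept a particle hop: bond-energy change plus a gravity term g*dz."""
    delta_e = delta_e_bond + g_weight * dz  # moving up (dz > 0) costs energy
    if delta_e <= 0:
        return True
    return rng() < math.exp(-delta_e / kT)

random.seed(0)
# With gravity on, downward hops (dz < 0) are accepted more often than upward
down = sum(accept_move(0.5, -1.0, 1.0) for _ in range(10000))
up   = sum(accept_move(0.5, +1.0, 1.0) for _ in range(10000))
```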

  16. Evaluation of Drogue Parachute Damping Effects Utilizing the Apollo Legacy Parachute Model

    NASA Technical Reports Server (NTRS)

    Currin, Kelly M.; Gamble, Joe D.; Matz, Daniel A.; Bretz, David R.

    2011-01-01

    Drogue parachute damping is required to dampen the Orion Multi Purpose Crew Vehicle (MPCV) crew module (CM) oscillations prior to deployment of the main parachutes. During the Apollo program, drogue parachute damping was modeled on the premise that the drogue parachute force vector aligns with the resultant velocity of the parachute attach point on the CM. Equivalent Cm_q and Cm_alpha equations for drogue parachute damping resulting from the Apollo legacy parachute damping model premise have recently been developed. The MPCV computer simulations ANTARES and Osiris have implemented high-fidelity two-body parachute damping models. However, high-fidelity model-based damping motion predictions do not match the damping observed during wind tunnel and full-scale free-flight oscillatory motion. This paper will present the methodology for comparing and contrasting the Apollo legacy parachute damping model with full-scale free-flight oscillatory motion. The analysis shows agreement between the Apollo legacy parachute damping model and full-scale free-flight oscillatory motion.

  17. Efficacy of simulation-based trauma team training of non-technical skills. A systematic review.

    PubMed

    Gjeraa, K; Møller, T P; Østergaard, D

    2014-08-01

    Trauma resuscitation is a complex situation, and most organisations have multi-professional trauma teams. Non-technical skills are challenged during trauma resuscitation, and they play an important role in the prevention of critical incidents. Simulation-based training of these is recommended. Our research question was: Does simulation-based trauma team training of non-technical skills have an effect on reaction, learning, behaviour or patient outcome? The authors searched PubMed, EMBASE and the Cochrane Library and found 13 studies eligible for analysis. We described and compared the educational interventions and the evaluations of effect according to the four Kirkpatrick levels: reaction, learning (knowledge, skills, attitudes), behaviour (in a clinical setting) and patient outcome. No studies were randomised, controlled and blinded, resulting in a moderate to high risk of bias. The multi-professional trauma teams had positive reactions to simulation-based training of non-technical skills. Knowledge and skills improved in all studies evaluating the effect on learning. Three studies found improvements in team performance (behaviour) in the clinical setting. One of these found difficulties in maintaining these skills. Two studies evaluated patient outcome; neither showed improvements in mortality, complication rate or duration of hospitalisation. A significant effect on learning was found after simulation-based training of the multi-professional trauma team in non-technical skills. Three studies demonstrated significantly increased clinical team performance. No effect on patient outcome was found. All studies had a moderate to high risk of bias. More comprehensive randomised studies are needed to evaluate the effect on patient outcome. © 2014 The Acta Anaesthesiologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  18. Challenge toward the prediction of typhoon behaviour and downpour

    NASA Astrophysics Data System (ADS)

    Takahashi, K.; Onishi, R.; Baba, Y.; Kida, S.; Matsuda, K.; Goto, K.; Fuchigami, H.

    2013-08-01

    Mechanisms of interaction among phenomena at different scales play important roles in forecasting weather and climate. The Multi-scale Simulator for the Geoenvironment (MSSG), which deals with multi-scale multi-physics phenomena, is a coupled non-hydrostatic atmosphere-ocean model designed to run efficiently on the Earth Simulator. We present simulation results at the world's highest horizontal resolution of 1.9 km for the entire globe, regional heavy rain at 1 km horizontal resolution, and urban-area simulation at 5 m horizontal/vertical resolution. To gain high performance by exploiting the system capabilities, we apply performance evaluation metrics, introduced in previous studies, that incorporate the effects of the data caching mechanism between CPU and memory. With a code optimization guideline based on these metrics, we demonstrate that MSSG can achieve an excellent peak performance ratio of 32.2% on the Earth Simulator, with single-core performance found to be key to a reduced time-to-solution.

  19. Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy

    NASA Astrophysics Data System (ADS)

    Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li

    2018-03-01

    In the simulation of dendritic growth, computational efficiency and achievable problem scales strongly affect the usefulness of three-dimensional phase-field models. Seeking high-performance calculation methods to improve computational efficiency and expand problem scales is therefore of great significance to research on the microstructure of materials. A high-performance calculation method based on an MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to run quantitative numerical simulations of a three-dimensional phase-field model of a binary alloy under coupled multi-physical processes. The acceleration achieved by different numbers of GPU nodes at different calculation scales is explored. On the foundation of the introduced multi-GPU calculation model, two optimization schemes are proposed: non-blocking communication, and overlap of MPI communication with GPU computing. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculations show that the multi-GPU model markedly improves the computational efficiency of the three-dimensional phase-field model, running 13 times faster than a single GPU, and the problem scale has been expanded to 8193. Both optimization schemes are shown to be feasible, and the overlap of MPI communication with GPU computing performs better, running 1.7 times faster than the basic multi-GPU model when 21 GPUs are used.
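    The headline speedups above follow a simple overlap model: a blocking halo exchange adds its full cost to every time step, whereas a non-blocking exchange can be hidden behind the interior update. A minimal sketch of that reasoning, with made-up timings (the 8/2/6 ms figures are illustrative, not from the paper):

```python
def step_time_no_overlap(t_comp, t_comm):
    # Blocking halo exchange: communication waits for all computation.
    return t_comp + t_comm

def step_time_overlap(t_interior, t_boundary, t_comm):
    # Boundary cells are computed first; their halo exchange then proceeds
    # concurrently with the interior update, so the step ends when the
    # slower of the two finishes.
    return t_boundary + max(t_interior, t_comm)

# Illustrative (invented) timings in milliseconds for one phase-field step.
t_interior, t_boundary, t_comm = 8.0, 2.0, 6.0
t_plain = step_time_no_overlap(t_interior + t_boundary, t_comm)   # 16.0 ms
t_hidden = step_time_overlap(t_interior, t_boundary, t_comm)      # 10.0 ms
print(t_plain / t_hidden)  # 1.6x, comparable in spirit to the reported 1.7x
```

    When communication is fully hidden (t_comm ≤ t_interior), the step cost reduces to the compute time alone, which is why overlapping outperforms plain non-blocking communication.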

  20. A Multi-scale Modeling System with Unified Physics to Study Precipitation Processes

    NASA Astrophysics Data System (ADS)

    Tao, W. K.

    2017-12-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), and (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF). The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interactive processes are applied throughout this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator that uses NASA high-resolution satellite data to identify the strengths and weaknesses of the cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study precipitation processes and their sensitivity to model resolution and microphysics schemes will be presented. The use of the multi-satellite simulator to improve precipitation processes will also be discussed.

  1. Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Chern, J.; Lang, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2011-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interactive processes are applied throughout this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator that uses NASA high-resolution satellite data to identify the strengths and weaknesses of the cloud and precipitation processes simulated by the model. In this talk, recent developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study precipitating systems and hurricanes/typhoons will be presented. High-resolution spatial and temporal visualization will be utilized to show the evolution of precipitation processes. The use of the multi-satellite simulator to improve precipitation processes will also be discussed.

  2. Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Chern, J.; Lang, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2010-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interactive processes are applied throughout this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator that uses NASA high-resolution satellite data to identify the strengths and weaknesses of the cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using multi-scale modeling systems to study the interactions between clouds, precipitation, and aerosols will be presented. The use of the multi-satellite simulator to improve precipitation processes will also be discussed.

  3. Using Multi-Scale Modeling Systems to Study the Precipitation Processes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2010-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interactive processes are applied throughout this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator that uses NASA high-resolution satellite data to identify the strengths and weaknesses of the cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols will be presented. The use of the multi-satellite simulator to improve precipitation processes will also be discussed.

  4. Shingle 2.0: generalising self-consistent and automated domain discretisation for multi-scale geophysical models

    NASA Astrophysics Data System (ADS)

    Candy, Adam S.; Pietrzak, Julie D.

    2018-01-01

    The approaches taken to describe and develop spatial discretisations of the domains required for geophysical simulation models are commonly ad hoc, model- or application-specific, and under-documented. This is particularly acute for simulation models that are flexible in their use of multi-scale, anisotropic, fully unstructured meshes, where a relatively large number of heterogeneous parameters are required to constrain their full description. As a consequence, it can be difficult to reproduce simulations and to ensure provenance in model data handling and initialisation, and it is a challenge to conduct model intercomparisons rigorously. This paper takes a novel approach to spatial discretisation, considering it much like a numerical simulation model problem in its own right. It introduces a generalised, extensible, self-documenting approach to describe, carefully and necessarily fully, the constraints over the heterogeneous parameter space that determine how a domain is spatially discretised. This additionally provides a method to accurately record these constraints, using high-level, natural-language-based abstractions that enable full accounts of provenance, sharing, and distribution. Together with this description, a generalised, consistent approach to unstructured mesh generation for geophysical models is developed that is automated, robust and repeatable, quick-to-draft, rigorously verified, and consistent with the source data throughout. This interprets the description above to execute a self-consistent spatial discretisation process, which is automatically validated against expected discrete characteristics and metrics. Library code, verification tests, and examples are available in the repository at https://github.com/shingleproject/Shingle. Further details of the project are presented at http://shingleproject.org.

  5. Designing and implementing a skills program using a clinically integrated, multi-professional approach: Using evaluation to drive curriculum change

    PubMed Central

    Carr, Sandra E.; Celenza, Antonio; Lake, Fiona

    2009-01-01

    The essential procedural skills that newly graduated doctors require are rarely defined, do not take into account pre-vocational employer expectations, and differ between universities. This paper describes how one faculty used local evaluation data to drive curriculum change and implement a clinically integrated, multi-professional skills program. A curriculum restructure included a review of all undergraduate procedural skills training by academic staff and clinical departments, resulting in a curriculum skills map. Undergraduate training was then linked with postgraduate expectations using the Delphi process to identify the skills requiring structured, standardised training. The skills program was designed and implemented without a dedicated simulation center. This paper shows the benefits of an alternative model in which clinical integration of training and multi-professional collaboration encouraged broad ownership of the program and, in turn, influenced the clinical experience obtained. PMID:20165528

  6. Thermo-Oxidative Induced Damage in Polymer Composites: Microstructure Image-Based Multi-Scale Modeling and Experimental Validation

    NASA Astrophysics Data System (ADS)

    Hussein, Rafid M.; Chandrashekhara, K.

    2017-11-01

    A multi-scale modeling approach is presented to simulate and validate the thermo-oxidation shrinkage and cracking damage of a high-temperature polymer composite. The multi-scale approach couples transient diffusion-reaction and static structural analyses from the macro- to the micro-scale. The micro-scale shrinkage deformation and cracking damage are simulated and validated using 2D and 3D simulations. Localized shrinkage displacement boundary conditions for the micro-scale simulations are determined from the respective meso- and macro-scale simulations, conducted for a cross-ply laminate. The meso-scale geometrical domain and the micro-scale geometry and mesh are developed using the object-oriented finite element (OOF) framework. The macro-scale shrinkage and weight loss are measured using unidirectional coupons and used to build the macro-shrinkage model. The cross-ply coupons are used to validate the macro-shrinkage model against the shrinkage profiles acquired from scanning electron images of the cracked surface. The macro-shrinkage model deformation shows a discrepancy when the micro-scale image-based cracking is computed; the discrepancy is minimized when the local maximum shrinkage strain is assumed to be 13 times the maximum macro-shrinkage strain of 2.5 × 10⁻⁵. The microcrack damage of the composite is modeled using a static elastic analysis with extended finite elements and cohesive surfaces, accounting for the spatial evolution of the modulus. The 3D shrinkage displacements are fed to the model using node-wise boundary/domain conditions for the respective oxidized region. The simulated microcrack length, meander, and opening closely match the crack in the area of interest in the scanning electron images.

  7. Full multi grid method for electric field computation in point-to-plane streamer discharge in air at atmospheric pressure

    NASA Astrophysics Data System (ADS)

    Kacem, S.; Eichwald, O.; Ducasse, O.; Renon, N.; Yousfi, M.; Charrada, K.

    2012-01-01

    Streamer dynamics are characterized by the fast propagation of ionized shock waves at the nanosecond scale under very sharp space-charge variations. Modelling streamer dynamics requires solving charged-particle transport equations coupled to the elliptic Poisson equation. The latter has to be solved at each time step of the streamer's evolution in order to follow the propagation of the resulting space-charge electric field. In the present paper, full multigrid (FMG) and multigrid (MG) methods have been adapted to solve Poisson's equation for streamer discharge simulations between asymmetric electrodes. The validity of the FMG method for the computation of the potential field is first shown by direct comparison with the analytic solution of the Laplacian potential in a point-to-plane geometry. The efficiency of the method is also compared with the classical successive over-relaxation (SOR) method and the MUltifrontal Massively Parallel Solver (MUMPS). The MG method is then applied to the simulation of positive streamer propagation, and its efficiency is evaluated against the SOR and MUMPS methods in the chosen point-to-plane configuration. Very good agreement is obtained between the three methods for all electro-hydrodynamic characteristics of the streamer during its propagation in the inter-electrode gap. In our simulation conditions, however, the MG method solves Poisson's equation at least two times faster.
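    For readers unfamiliar with the multigrid idea the record relies on, the sketch below solves a 1D model Poisson problem (-u'' = f with homogeneous Dirichlet boundaries) using damped-Jacobi smoothing, full-weighting restriction, and linear prolongation in a recursive V-cycle. This is a generic textbook scheme, not the authors' point-to-plane streamer implementation:

```python
import math

def smooth(u, f, h, iters=3, omega=2/3):
    # Damped Jacobi relaxation on -u'' = f (boundary values stay fixed).
    for _ in range(iters):
        new = u[:]
        for i in range(1, len(u) - 1):
            new[i] = (1 - omega) * u[i] + omega * 0.5 * (u[i-1] + u[i+1] + h*h*f[i])
        u[:] = new
    return u

def residual(u, f, h):
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2*u[i] - u[i-1] - u[i+1]) / (h*h)
    return r

def restrict(r):
    # Full weighting from 2^k + 1 points down to 2^(k-1) + 1 points.
    m = (len(r) - 1) // 2 + 1
    rc = [0.0] * m
    for i in range(1, m - 1):
        rc[i] = 0.25 * r[2*i - 1] + 0.5 * r[2*i] + 0.25 * r[2*i + 1]
    return rc

def prolong(e):
    # Linear interpolation back to the fine grid.
    ef = [0.0] * (2 * (len(e) - 1) + 1)
    for i in range(len(e)):
        ef[2*i] = e[i]
    for i in range(1, len(ef), 2):
        ef[i] = 0.5 * (ef[i-1] + ef[i+1])
    return ef

def v_cycle(u, f, h):
    if len(u) <= 3:
        # Coarsest grid: solve the single interior unknown exactly.
        u[1] = 0.5 * (u[0] + u[2] + h*h*f[1])
        return u
    smooth(u, f, h)                                  # pre-smoothing
    ec = v_cycle([0.0] * ((len(u)-1)//2 + 1) and restrict(residual(u, f, h)) and [0.0] * ((len(u)-1)//2 + 1), restrict(residual(u, f, h)), 2*h)
    ef = prolong(ec)
    for i in range(len(u)):
        u[i] += ef[i]                                # coarse-grid correction
    smooth(u, f, h)                                  # post-smoothing
    return u

# Model problem: -u'' = pi^2 sin(pi x), exact solution u = sin(pi x).
n = 64
h = 1.0 / n
x = [i * h for i in range(n + 1)]
f = [math.pi**2 * math.sin(math.pi * xi) for xi in x]
u = [0.0] * (n + 1)
for _ in range(10):
    v_cycle(u, f, h)
err = max(abs(u[i] - math.sin(math.pi * x[i])) for i in range(n + 1))
print(err)  # close to the O(h^2) discretization error
```

    A handful of V-cycles drives the algebraic error below the discretization error, which is the property that makes MG/FMG attractive against point-iterative solvers such as SOR.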

  8. Simulating the Multi-Disciplinary Care Team Approach: Enhancing Student Understanding of Anatomy through an Ultrasound-Anchored Interprofessional Session

    ERIC Educational Resources Information Center

    Luetmer, Marianne T.; Cloud, Beth A.; Youdas, James W.; Pawlina, Wojciech; Lachman, Nirusha

    2018-01-01

    Quality of healthcare delivery is dependent on collaboration between professional disciplines. Integrating opportunities for interprofessional learning in health science education programs prepares future clinicians to function as effective members of a multi-disciplinary care team. This study aimed to create a modified team-based learning (TBL)…

  9. Exploring a multi-scale method for molecular simulation in continuum solvent model: Explicit simulation of continuum solvent as an incompressible fluid.

    PubMed

    Xiao, Li; Luo, Ray

    2017-12-07

    We explored a multi-scale algorithm for the Poisson-Boltzmann continuum solvent model for more robust simulations of biomolecules. In this method, the continuum solvent/solute interface is explicitly simulated with a numerical fluid dynamics procedure, which is tightly coupled to the solute molecular dynamics simulation. There are multiple benefits to adopting such a strategy, as presented below. At this stage of the development, only nonelectrostatic interactions, i.e., van der Waals and hydrophobic interactions, are included in the algorithm, in order to assess the quality of the solvent-solute interface generated by the new method. Nevertheless, numerical challenges exist in accurately interpolating the highly nonlinear van der Waals term when solving the finite-difference fluid dynamics equations. We were able to bypass the challenge rigorously by merging the van der Waals potential and pressure together when solving the fluid dynamics equations and by considering its contribution in the free-boundary condition analytically. The multi-scale simulation method was first validated by reproducing the solute-solvent interface of a single atom with the analytical solution. Next, we performed a relaxation simulation of a restrained symmetrical monomer and observed a symmetrical solvent interface at equilibrium, with detailed surface features resembling those found on the solvent-excluded surface. Four typical small molecular complexes were then tested, with both volume and force balancing analyses showing that these simple complexes can reach equilibrium within the simulation time window. Finally, we studied the quality of the multi-scale solute-solvent interfaces for the four tested dimer complexes and found that they agree well with the boundaries sampled in explicit water simulations.

  10. Reliability of Multi-Category Rating Scales

    ERIC Educational Resources Information Center

    Parker, Richard I.; Vannest, Kimberly J.; Davis, John L.

    2013-01-01

    The use of multi-category scales is increasing for the monitoring of IEP goals, classroom and school rules, and Behavior Improvement Plans (BIPs). Although they require greater inference than traditional data counting, little is known about the inter-rater reliability of these scales. This simulation study examined the performance of nine…

  11. CFD analysis of a full-scale ceramic kiln module under actual operating conditions

    NASA Astrophysics Data System (ADS)

    Milani, Massimo; Montorsi, Luca; Stefani, Matteo; Venturelli, Matteo

    2017-11-01

    The paper focuses on the CFD analysis of a full-scale module of an industrial ceramic kiln under actual operating conditions. The multi-dimensional analysis includes the real geometry of a ceramic kiln module employed in the preheating and firing sections and investigates the heat transfer between the tiles and the burners' flames as well as the many components that comprise the module. Particular attention is devoted to the simulation of the convective flow field in the upper and lower chambers, and the effects of radiation on the different materials are addressed. Assessing the radiation contribution to the tile temperature is paramount to improving the performance of the kiln in terms of energy efficiency and fuel consumption. The CFD analysis is combined with a lumped and distributed parameter model of the entire kiln in order to simulate the module's behaviour at the boundaries under actual operating conditions. Finally, the CFD simulation is employed to address the effects of the module operating conditions on the tiles' temperature distribution in order to improve temperature uniformity, enhance the energy efficiency of the system, and thus reduce fuel consumption.

  12. A Goddard Multi-Scale Modeling System with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, W.K.; Anderson, D.; Atlas, R.; Chern, J.; Houser, P.; Hou, A.; Lang, S.; Lau, W.; Peters-Lidard, C.; Kakar, R.; hide

    2008-01-01

    Numerical cloud-resolving models (CRMs), which are based on the non-hydrostatic equations of motion, have been extensively applied to cloud-scale and mesoscale processes during the past four decades. Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that CRMs agree with observations in simulating various types of clouds and cloud systems from different geographic locations. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that Numerical Weather Prediction (NWP) and regional-scale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. A coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) is required to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM enables global coverage, and the use of a CRM allows for better and more sophisticated physical parameterization. NASA satellites and field campaigns can provide initial conditions as well as validation through the Earth Satellite simulators. At Goddard, we have developed a multi-scale modeling system with unified physics. The modeling system consists of a coupled GCM-CRM (or MMF), a state-of-the-art Weather Research and Forecasting model (WRF), and a cloud-resolving model (the Goddard Cumulus Ensemble model). In these models, the same microphysical schemes (2ICE, several 3ICE), radiation (including explicitly calculated cloud optical properties), and surface models are applied. In addition, a comprehensive unified Earth Satellite simulator has been developed at GSFC, designed to fully utilize the multi-scale modeling system. A brief review of the multi-scale modeling system with unified physics and simulator, together with examples, is presented in this article.

  13. 49 CFR 239.105 - Debriefing and critique.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... emergency situation or full-scale simulation to determine the effectiveness of its emergency preparedness... passenger train emergency situation or full-scale simulation. (b) Exceptions. (1) No debriefing and critique...; (2) How much time elapsed between the occurrence of the emergency situation or full-scale simulation...

  14. 49 CFR 239.105 - Debriefing and critique.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... emergency situation or full-scale simulation to determine the effectiveness of its emergency preparedness... passenger train emergency situation or full-scale simulation. (b) Exceptions. (1) No debriefing and critique...; (2) How much time elapsed between the occurrence of the emergency situation or full-scale simulation...

  15. 49 CFR 239.105 - Debriefing and critique.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... emergency situation or full-scale simulation to determine the effectiveness of its emergency preparedness... passenger train emergency situation or full-scale simulation. (b) Exceptions. (1) No debriefing and critique...; (2) How much time elapsed between the occurrence of the emergency situation or full-scale simulation...

  16. 49 CFR 239.105 - Debriefing and critique.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... emergency situation or full-scale simulation to determine the effectiveness of its emergency preparedness... passenger train emergency situation or full-scale simulation. (b) Exceptions. (1) No debriefing and critique...; (2) How much time elapsed between the occurrence of the emergency situation or full-scale simulation...

  17. Data fusion of multi-scale representations for structural damage detection

    NASA Astrophysics Data System (ADS)

    Guo, Tian; Xu, Zili

    2018-01-01

    Despite extensive research into structural health monitoring (SHM) in the past decades, few methods can detect multiple sites of slight damage in noisy environments. Here, we introduce a new hybrid method that utilizes multi-scale space theory and a data fusion approach for multiple-damage detection in beams and plates. A cascade filtering approach provides a multi-scale space for noisy mode shapes and filters out the fluctuations caused by measurement noise. In multi-scale space, a series of amplification and data fusion algorithms are utilized to search for damage features across all possible scales. We verify the effectiveness of the method by numerical simulation using damaged beams and plates with various types of boundary conditions. Monte Carlo simulations are conducted to illustrate the effectiveness and noise immunity of the proposed method. The applicability is further validated via laboratory case studies focusing on different damage scenarios. Both sets of results demonstrate that the proposed method has superior noise tolerance, as well as damage sensitivity, without requiring knowledge of material properties or boundary conditions.
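    As a toy illustration of the cascade-filtering and fusion idea (not the authors' algorithm), the sketch below smooths a noisy mode shape at several scales, takes the curvature (second difference) at each scale after subtracting a heavily smoothed baseline, and fuses the per-scale indices multiplicatively so that only a feature persisting across scales survives. The beam mode shape, tent-shaped damage signature, and noise level are all invented for the demo:

```python
import math
import random

def moving_average(y, passes=1):
    # Cascade filter: repeated 3-point moving average; endpoints kept fixed.
    for _ in range(passes):
        y = [y[0]] + [(y[i-1] + y[i] + y[i+1]) / 3 for i in range(1, len(y) - 1)] + [y[-1]]
    return y

def curvature(y):
    # Central second difference, a standard damage-sensitive mode-shape feature.
    return [0.0] + [y[i-1] - 2*y[i] + y[i+1] for i in range(1, len(y) - 1)] + [0.0]

def damage_index(y, scale):
    # Feature at one scale: curvature of the smoothed shape minus a heavily
    # smoothed baseline, which removes the slowly varying healthy curvature.
    k = curvature(moving_average(y, scale))
    base = moving_average(k, 10)
    return [abs(a - b) for a, b in zip(k, base)]

def fuse(indices):
    # Multiplicative fusion: only locations flagged at every scale survive.
    fused = [1.0] * len(indices[0])
    for di in indices:
        m = max(di) or 1.0
        fused = [f * (d / m) for f, d in zip(fused, di)]
    return fused

# Synthetic noisy mode shape of a simply supported beam, slight damage at i = 60
# modelled as a small tent-shaped perturbation (a local curvature discontinuity).
random.seed(1)
n, d, w, amp = 100, 60, 4, 0.005
phi = [math.sin(math.pi * i / n)
       + amp * max(0.0, 1 - abs(i - d) / w)        # damage signature
       + random.uniform(-5e-5, 5e-5)               # measurement noise
       for i in range(n + 1)]

fused = fuse([damage_index(phi, s) for s in (1, 2, 3)])
peak = max(range(len(fused)), key=fused.__getitem__)
print(peak)  # peaks at (or immediately next to) the damage location i = 60
```

    Noise-induced curvature fluctuations shrink rapidly under smoothing while the damage feature persists across scales, which is why the fused index isolates the damage without any model of the healthy structure.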

  18. Development of high-resolution multi-scale modelling system for simulation of coastal-fluvial urban flooding

    NASA Astrophysics Data System (ADS)

    Comer, Joanne; Indiana Olbert, Agnieszka; Nash, Stephen; Hartnett, Michael

    2017-02-01

    Urban developments in coastal zones are often exposed to natural hazards such as flooding. In this research, a state-of-the-art multi-scale nested flood (MSN_Flood) model is applied to simulate complex coastal-fluvial urban flooding due to the combined effects of tides, surges and river discharges. Cork City on Ireland's southwest coast is the case study. The flood modelling system comprises a cascade of four dynamically linked models that resolve the hydrodynamics of Cork Harbour and/or its sub-regions at four scales: 90, 30, 6 and 2 m. Results demonstrate that the internalization of the nested boundary through the use of ghost cells, combined with a tailored adaptive interpolation technique, creates a highly dynamic moving boundary that permits flooding and drying of the nested boundary. This novel feature of MSN_Flood provides a high degree of choice regarding the location of the boundaries of the nested domain and therefore flexibility in model application. Through dynamic downscaling, the nested MSN_Flood model achieves significant improvements in the accuracy of model output without incurring the computational expense of high spatial resolution over the entire model domain. The urban flood model provides the full characteristics of water levels and flow regimes necessary for flood hazard identification and flood risk assessment.
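    The nested-boundary idea can be sketched in one dimension: ghost cells just outside the child domain take values interpolated from the parent grid, so the child's open boundary follows the parent solution. The sketch below uses plain linear interpolation (the paper's tailored adaptive interpolation is more sophisticated) and the 90 m to 30 m nesting ratio of 3; the function names and the linear water-level field are illustrative:

```python
def interp_coarse_to_fine(coarse, start, ratio, npts):
    # Linearly interpolate parent-grid values onto npts child-grid points,
    # beginning at fractional parent index `start`; ratio = parent dx / child dx.
    # Assumes every sampled position lies strictly inside the parent grid.
    vals = []
    for j in range(npts):
        pos = start + j / ratio        # child-point location in parent-index units
        i = int(pos)
        t = pos - i
        vals.append((1 - t) * coarse[i] + t * coarse[i + 1])
    return vals

# Parent (90 m) free-surface elevations: a simple linear slope for the demo.
coarse = [0.3 * i for i in range(6)]
ratio, i0, nfine, nghost = 3, 1, 7, 2   # child (30 m) grid spans parent cells 1..3

fine = interp_coarse_to_fine(coarse, i0, ratio, nfine)
# Ghost cells on each end take interpolated parent values, so the open
# boundary can flood and dry with the parent solution.
left = interp_coarse_to_fine(coarse, i0 - nghost / ratio, ratio, nghost)
right = interp_coarse_to_fine(coarse, i0 + nfine / ratio, ratio, nghost)
extended = left + fine + right
print(extended)  # a uniform 0.1-step ramp from 0.1 to 1.1 on a linear field
```

    On a linear field the interpolation is exact, which is a useful sanity check before exchanging real hydrodynamic fields across the nested boundary.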

  19. Coarse-graining and hybrid methods for efficient simulation of stochastic multi-scale models of tumour growth.

    PubMed

    de la Cruz, Roberto; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás

    2017-12-01

The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction-diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction-diffusion systems. The method is developed for a stochastic multi-scale model of tumour growth, i.e. a population-dynamical model which accounts for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. This approximation involves averaging out the age structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we neglect noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction-diffusion systems, we need to account for the age structure of the population when attempting to couple both descriptions. We exploit our coarse-graining model so that, within the mean-field region, the age distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently: when cells are transferred from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as the travelling wave velocity. 
We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge of the front, which cannot be accounted for by the coarse-grained model. Such fluctuations have non-trivial effects on the wave velocity. Beyond the development of a new hybrid method, we thus conclude that birth-rate fluctuations are central to a quantitatively accurate description of invasive phenomena such as tumour growth.
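The coupling step described above, where cells transferred from the mean-field to the stochastic region are assigned ages drawn from the equilibrium age distribution, can be sketched as follows. The exponential form of the distribution and the `birth_rate` parameter are illustrative assumptions, not the model's actual equilibrium age distribution:

```python
import random

def transfer_cells(n_cells, birth_rate, seed=42):
    """When n_cells cross from the mean-field to the stochastic region,
    assign each an age sampled from an (assumed exponential) equilibrium
    age distribution with mean 1/birth_rate."""
    rng = random.Random(seed)
    return [rng.expovariate(birth_rate) for _ in range(n_cells)]

ages = transfer_cells(1000, birth_rate=2.0)
mean_age = sum(ages) / len(ages)  # close to 1/birth_rate = 0.5
```

Sampling ages on transfer (rather than assigning a fixed age) is what keeps the stochastic region's age structure consistent with the coarse-grained mean-field description.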

  20. Coarse-graining and hybrid methods for efficient simulation of stochastic multi-scale models of tumour growth

    NASA Astrophysics Data System (ADS)

    de la Cruz, Roberto; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás

    2017-12-01

The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction-diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction-diffusion systems. The method is developed for a stochastic multi-scale model of tumour growth, i.e. a population-dynamical model which accounts for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. This approximation involves averaging out the age structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we neglect noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction-diffusion systems, we need to account for the age structure of the population when attempting to couple both descriptions. We exploit our coarse-graining model so that, within the mean-field region, the age distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently: when cells are transferred from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as the travelling wave velocity. 
We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge of the front, which cannot be accounted for by the coarse-grained model. Such fluctuations have non-trivial effects on the wave velocity. Beyond the development of a new hybrid method, we thus conclude that birth-rate fluctuations are central to a quantitatively accurate description of invasive phenomena such as tumour growth.

  1. Robust multi-site MR data processing: iterative optimization of bias correction, tissue classification, and registration.

    PubMed

    Young Kim, Eun; Johnson, Hans J

    2013-01-01

A robust multi-modal tool, for automated registration, bias correction, and tissue classification, has been implemented for large-scale heterogeneous multi-site longitudinal MR data analysis. This work focused on improving an iterative optimization framework between bias correction, registration, and tissue classification, inspired by previous work. The primary contributions are robustness improvements from the incorporation of the following four elements: (1) use of multi-modal and repeated scans, (2) incorporation of highly deformable registration, (3) use of an extended set of tissue definitions, and (4) use of multi-modal aware intensity-context priors. The benefits of these enhancements were investigated through a series of experiments with both a simulated brain data set (BrainWeb) and highly heterogeneous data from a 32-site imaging study, with quality assessed through expert visual inspection. The implementation of this tool is tailored for, but not limited to, large-scale processing of data with great variation, via a flexible interface. In this paper, we describe enhancements to joint registration, bias correction, and tissue classification that improve the generalizability and robustness of processing multi-modal longitudinal MR scans collected at multiple sites. The tool was evaluated using both simulated and human subject MRI images. With these enhancements, the results showed improved robustness for large-scale heterogeneous MRI processing.

  2. VLBI-resolution radio-map algorithms: Performance analysis of different levels of data-sharing on multi-socket, multi-core architectures

    NASA Astrophysics Data System (ADS)

    Tabik, S.; Romero, L. F.; Mimica, P.; Plata, O.; Zapata, E. L.

    2012-09-01

A broad area in astronomy focuses on simulating extragalactic objects based on Very Long Baseline Interferometry (VLBI) radio-maps. Several algorithms in this scope simulate what the observed radio-maps would be if emitted from a predefined extragalactic object. This work analyzes the performance and scaling of this kind of algorithm on multi-socket, multi-core architectures. In particular, we evaluate a sharing approach, a privatizing approach and a hybrid approach on systems with a complex memory hierarchy that includes a shared Last Level Cache (LLC). In addition, we investigate which manual processes can be systematized and then automated in future work. The experiments show that the data-privatizing model scales efficiently on medium-scale multi-socket, multi-core systems (up to 48 cores), while the sharing approach, regardless of algorithmic and scheduling optimizations, is unable to reach acceptable scalability on more than one socket. However, the hybrid model with a specific level of data-sharing provides the best scalability over all the multi-socket, multi-core systems used.
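The privatizing pattern contrasted in the abstract can be sketched as below: each worker accumulates into a private partial map and a reduction step merges them, avoiding contention on shared pixels. The pixel/flux layout and worker count are hypothetical, not the paper's actual radio-map code:

```python
from concurrent.futures import ThreadPoolExecutor

NPIX = 64  # pixels in a (toy) 1D radio-map

def emit(source_chunk):
    """Privatizing approach: each worker fills its own partial map,
    so no synchronization on individual pixels is needed."""
    partial = [0.0] * NPIX
    for pixel, flux in source_chunk:
        partial[pixel] += flux
    return partial

sources = [(i % NPIX, 1.0) for i in range(10_000)]
chunks = [sources[i::4] for i in range(4)]  # one chunk per worker

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(emit, chunks))

# Reduction step: merge the private maps into the final radio-map.
radio_map = [sum(p[k] for p in partials) for k in range(NPIX)]
```

The sharing approach would instead have all workers update one `radio_map` array directly, which is exactly where cache-line and lock contention limits cross-socket scaling.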

  3. Capturing remote mixing due to internal tides using multi-scale modeling tool: SOMAR-LES

    NASA Astrophysics Data System (ADS)

    Santilli, Edward; Chalamalla, Vamsi; Scotti, Alberto; Sarkar, Sutanu

    2016-11-01

Internal tides that are generated during the interaction of an oscillating barotropic tide with the bottom bathymetry dissipate only a fraction of their energy near the generation region. The rest is radiated away in the form of low- and high-mode internal tides. These internal tides dissipate energy at remote locations when they interact with the upper-ocean pycnocline, continental slope, and large-scale eddies. Capturing the wide range of length and time scales involved during the life cycle of internal tides is computationally very expensive. A recently developed multi-scale modeling tool called SOMAR-LES combines the adaptive grid refinement features of SOMAR with the turbulence modeling features of a Large Eddy Simulation (LES) to capture multi-scale processes at a reduced computational cost. Numerical simulations of internal tide generation at idealized bottom bathymetries are performed to demonstrate this multi-scale modeling technique. Although each of the remote mixing phenomena has been considered independently in previous studies, this work aims to capture remote mixing processes during the life cycle of an internal tide in more realistic settings, by allowing multi-level (coarse and fine) grids to co-exist and exchange information during the time-stepping process.

  4. The role of zonal flows in the saturation of multi-scale gyrokinetic turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Staebler, G. M.; Candy, J.; Howard, N. T.

    2016-06-15

The 2D spectrum of the saturated electric potential from gyrokinetic turbulence simulations that include both ion and electron scales (multi-scale) in axisymmetric tokamak geometry is analyzed. The paradigm that the turbulence is saturated when the zonal (axisymmetric) ExB flow shearing rate competes with linear growth is shown not to apply to the electron-scale turbulence. Instead, it is the mixing rate by the zonal ExB velocity spectrum with the turbulent distribution function that competes with linear growth. A model of this mechanism is shown to be able to capture the suppression of electron-scale turbulence by ion-scale turbulence and the threshold for the increase in electron-scale turbulence when the ion-scale turbulence is reduced. The model computes the strength of the zonal flow velocity and the saturated potential spectrum from the linear growth rate spectrum. The model for the saturated electric potential spectrum is applied to a quasilinear transport model and shown to accurately reproduce the electron and ion energy fluxes of the non-linear gyrokinetic multi-scale simulations. The zonal flow mixing saturation model is also shown to reproduce the non-linear upshift in the critical temperature gradient caused by zonal flows in ion-scale gyrokinetic simulations.
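The saturation rule described, in which the zonal ExB mixing rate rather than the shearing rate competes with linear growth, can be caricatured as a per-wavenumber balance: the saturated amplitude is where a mixing rate proportional to ky times the potential equals the linear growth rate. The coefficient and the model growth-rate spectrum below are illustrative assumptions, not the paper's actual quasilinear model:

```python
def saturated_potential(gamma, ky, c0=1.0):
    """Toy saturation balance: zonal ExB mixing rate ~ c0 * ky * phi
    competes with the linear growth rate gamma, so the saturated
    amplitude is phi_sat = gamma / (c0 * ky)."""
    return gamma / (c0 * ky)

# An illustrative growth-rate spectrum peaking at intermediate ky.
spectrum = {ky: saturated_potential(0.1 * ky / (1.0 + ky**2), ky)
            for ky in (0.1, 0.3, 1.0, 3.0, 10.0)}
```

Under this toy balance the saturated potential falls off toward high (electron-scale) ky, which is the qualitative behaviour the mixing-rate model is meant to capture.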

  5. Multi-scale simulations of space problems with iPIC3D

    NASA Astrophysics Data System (ADS)

    Lapenta, Giovanni; Bettarini, Lapo; Markidis, Stefano

The implicit Particle-in-Cell method for the computer simulation of space plasma, and its implementation in a three-dimensional parallel code, called iPIC3D, are presented. The implicit integration in time of the Vlasov-Maxwell system removes the numerical stability constraints and enables kinetic plasma simulations at magnetohydrodynamics scales. Simulations of magnetic reconnection in plasma are presented to show the effectiveness of the algorithm. In particular we will show a number of simulations done for large-scale 3D systems using the physical mass ratio for Hydrogen. Most notably, one simulation treats kinetically a box of tens of Earth radii in each direction and was conducted using about 16000 processors of the Pleiades NASA computer. The work is conducted in collaboration with the MMS-IDS theory team from University of Colorado (M. Goldman, D. Newman and L. Andersson). Reference: Stefano Markidis, Giovanni Lapenta, Rizwan-uddin, Multi-scale simulations of plasma with iPIC3D, Mathematics and Computers in Simulation, Available online 17 October 2009, http://dx.doi.org/10.1016/j.matcom.2009.08.038

  6. Blood Flow: Multi-scale Modeling and Visualization (July 2011)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2011-01-01

Multi-scale modeling of arterial blood flow can shed light on the interaction between events happening at micro- and meso-scales (i.e., adhesion of red blood cells to the arterial wall, clot formation) and at macro-scales (i.e., change in flow patterns due to the clot). Coupled numerical simulations of such multi-scale flow require state-of-the-art computers and algorithms, along with techniques for multi-scale visualization. This animation presents early results of two studies used in the development of a multi-scale visualization methodology. The first illustrates a flow of healthy (red) and diseased (blue) blood cells with a Dissipative Particle Dynamics (DPD) method. Each blood cell is represented by a mesh, small spheres show a sub-set of particles representing the blood plasma, while instantaneous streamlines and slices represent the ensemble average velocity. In the second we investigate the process of thrombus (blood clot) formation, which may be responsible for the rupture of aneurysms, by concentrating on the platelet blood cells, observing them as they aggregate on the wall of an aneurysm. Simulation was performed on Kraken at the National Institute for Computational Sciences. Visualization was produced using resources of the Argonne Leadership Computing Facility at Argonne National Laboratory.

  7. The role of zonal flows in the saturation of multi-scale gyrokinetic turbulence

    DOE PAGES

    Staebler, Gary M.; Candy, John; Howard, Nathan T.; ...

    2016-06-29

The 2D spectrum of the saturated electric potential from gyrokinetic turbulence simulations that include both ion and electron scales (multi-scale) in axisymmetric tokamak geometry is analyzed. The paradigm that the turbulence is saturated when the zonal (axisymmetric) ExB flow shearing rate competes with linear growth is shown not to apply to the electron-scale turbulence. Instead, it is the mixing rate by the zonal ExB velocity spectrum with the turbulent distribution function that competes with linear growth. A model of this mechanism is shown to be able to capture the suppression of electron-scale turbulence by ion-scale turbulence and the threshold for the increase in electron-scale turbulence when the ion-scale turbulence is reduced. The model computes the strength of the zonal flow velocity and the saturated potential spectrum from the linear growth rate spectrum. The model for the saturated electric potential spectrum is applied to a quasilinear transport model and shown to accurately reproduce the electron and ion energy fluxes of the non-linear gyrokinetic multi-scale simulations. Finally, the zonal flow mixing saturation model is also shown to reproduce the non-linear upshift in the critical temperature gradient caused by zonal flows in ion-scale gyrokinetic simulations.

  8. Simulation of reaction diffusion processes over biologically relevant size and time scales using multi-GPU workstations

    PubMed Central

    Hallock, Michael J.; Stone, John E.; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida

    2014-01-01

Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems. PMID:24882911
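The capacity-weighted spatial decomposition the abstract describes can be sketched in one dimension: lattice sites are apportioned to each GPU in proportion to its throughput, and a dynamic balancer would simply re-run the split with measured rates. The 1D partitioning and the capacity numbers are illustrative, not the MPD-RDME implementation:

```python
def decompose(n_sites, capacities):
    """Split a 1D lattice of n_sites among devices in proportion to
    their relative throughput; returns [start, end) bounds per device.
    Dynamic load balancing would re-run this with updated rates."""
    total = sum(capacities)
    bounds, start = [], 0
    for i, cap in enumerate(capacities):
        size = round(n_sites * cap / total)
        if i == len(capacities) - 1:
            size = n_sites - start  # last device absorbs rounding error
        bounds.append((start, start + size))
        start += size
    return bounds

# e.g. a workstation with one fast GPU and two slower ones
parts = decompose(1000, capacities=[2.0, 1.0, 1.0])
```

In the real 3D code each subvolume also exchanges boundary ("halo") lattice sites with its neighbours, which is where the CUDA peer-to-peer transfers come in.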

  9. Simulation of reaction diffusion processes over biologically relevant size and time scales using multi-GPU workstations.

    PubMed

    Hallock, Michael J; Stone, John E; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida

    2014-05-01

Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems.

  10. TOPICAL REVIEW: Advances and challenges in computational plasma science

    NASA Astrophysics Data System (ADS)

    Tang, W. M.; Chan, V. S.

    2005-02-01

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. 
This should produce the scientific excitement which will help to (a) stimulate enhanced cross-cutting collaborations with other fields and (b) attract the bright young talent needed for the future health of the field of plasma science.

  11. Advances and challenges in computational plasma science

    NASA Astrophysics Data System (ADS)

    Tang, W. M.

    2005-02-01

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. 
This should produce the scientific excitement which will help to (a) stimulate enhanced cross-cutting collaborations with other fields and (b) attract the bright young talent needed for the future health of the field of plasma science.

  12. Multi-innovation auto-constructed least squares identification for 4 DOF ship manoeuvring modelling with full-scale trial data.

    PubMed

    Zhang, Guoqing; Zhang, Xianku; Pang, Hongshuai

    2015-09-01

This research is concerned with the problem of 4 degrees of freedom (DOF) ship manoeuvring identification modelling with full-scale trial data. To avoid the multi-innovation matrix inversion in the conventional multi-innovation least squares (MILS) algorithm, a new transformed multi-innovation least squares (TMILS) algorithm is first developed by virtue of the coupling identification concept, and much effort is made to guarantee uniform ultimate convergence. Furthermore, an auto-constructed TMILS scheme is derived for ship manoeuvring motion identification by combination with a statistical index. Compared with existing results, the proposed scheme has a significant computational advantage and is able to estimate the model structure. The illustrative examples demonstrate the effectiveness of the proposed algorithm, in particular its application to identification with full-scale trial data. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
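For orientation, the baseline that MILS/TMILS-type algorithms extend is recursive least squares, which updates parameter estimates one measurement at a time without batch matrix inversion. The sketch below is plain RLS on a hypothetical two-parameter linear model, not the TMILS algorithm itself:

```python
def rls_step(theta, P, phi, y, lam=1.0):
    """One standard recursive-least-squares update.
    theta: parameter estimates, P: covariance matrix,
    phi: regressor vector, y: measured output, lam: forgetting factor."""
    n = len(theta)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
    K = [v / denom for v in Pphi]                      # gain vector
    err = y - sum(theta[i] * phi[i] for i in range(n))  # prediction error
    theta = [theta[i] + K[i] * err for i in range(n)]
    P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(n)]
         for i in range(n)]
    return theta, P

# Identify y = 2*u1 - 3*u2 from noiseless toy data.
theta = [0.0, 0.0]
P = [[1000.0, 0.0], [0.0, 1000.0]]  # large P0 = weak prior
data = [([1.0, 0.5], 0.5), ([0.3, 1.0], -2.4),
        ([1.0, 1.0], -1.0), ([0.7, 0.2], 0.8)]
for phi, y in data:
    theta, P = rls_step(theta, P, phi, y)
# theta is now close to [2.0, -3.0]
```

The multi-innovation family processes a window of recent innovations per step instead of a single one; TMILS's contribution is avoiding the inversion of the resulting multi-innovation matrix.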

  13. Multi-filter spectrophotometry simulations

    NASA Technical Reports Server (NTRS)

    Callaghan, Kim A. S.; Gibson, Brad K.; Hickson, Paul

    1993-01-01

    To complement both the multi-filter observations of quasar environments described in these proceedings, as well as the proposed UBC 2.7 m Liquid Mirror Telescope (LMT) redshift survey, we have initiated a program of simulated multi-filter spectrophotometry. The goal of this work, still very much in progress, is a better quantitative assessment of the multiband technique as a viable mechanism for obtaining useful redshift and morphological class information from large scale multi-filter surveys.

  14. The development of estimated methodology for interfacial adhesion of semiconductor coatings having an enormous mismatch extent

    NASA Astrophysics Data System (ADS)

    Lee, Chang-Chun; Huang, Pei-Chen

    2018-05-01

The long-term reliability of multi-stacked coatings under bending or rolling loads is a severe challenge for extending the lifespan of such structures. In addition, the adhesive strength between dissimilar materials is regarded as the major mechanical reliability concern among multi-stacked films. However, the significant scale mismatch, from several nanometers to micrometers, among the multi-stacked coatings causes numerical accuracy and convergence issues in fracture-based simulation approaches. For these reasons, this study proposed FEA-based multi-level submodeling and multi-point constraint (MPC) techniques to overcome the foregoing scale-mismatch issue. The results indicated that the selected region of first- and second-order submodeling can achieve a small error of 1.27% compared with the experimental result while significantly reducing mesh density and computing time. Moreover, the MPC method adopted in the FEA simulation showed only a 0.54% error when the boundary of the selected local region was away from the critical region of concern, following the Saint-Venant principle. In this investigation, two FEA-based approaches were used to overcome the evident scale-mismatch issue when the adhesive strengths of micro- and nano-scale multi-stacked coatings were taken into account.

  15. Hybrid Parallelization of Adaptive MHD-Kinetic Module in Multi-Scale Fluid-Kinetic Simulation Suite

    DOE PAGES

    Borovikov, Sergey; Heerikhuisen, Jacob; Pogorelov, Nikolai

    2013-04-01

    The Multi-Scale Fluid-Kinetic Simulation Suite has a computational tool set for solving partially ionized flows. In this paper we focus on recent developments of the kinetic module which solves the Boltzmann equation using the Monte-Carlo method. The module has been recently redesigned to utilize intra-node hybrid parallelization. We describe in detail the redesign process, implementation issues, and modifications made to the code. Finally, we conduct a performance analysis.

  16. Laser-plasma interactions for fast ignition

    NASA Astrophysics Data System (ADS)

    Kemp, A. J.; Fiuza, F.; Debayle, A.; Johzaki, T.; Mori, W. B.; Patel, P. K.; Sentoku, Y.; Silva, L. O.

    2014-05-01

In the electron-driven fast-ignition (FI) approach to inertial confinement fusion, petawatt laser pulses are required to generate MeV electrons that deposit several tens of kilojoules in the compressed core of an imploded DT shell. We review recent progress in the understanding of intense laser-plasma interactions (LPI) relevant to FI. Increases in computational and modelling capabilities, as well as algorithmic developments, have enhanced our ability to perform multi-dimensional particle-in-cell simulations of LPI at relevant scales. We discuss the physics of the interaction in terms of the laser absorption fraction, the laser-generated electron spectra, divergence, and their temporal evolution. Scaling with irradiation conditions such as laser intensity is considered, as well as the dependence on plasma parameters. Different numerical modelling approaches and configurations are addressed, providing an overview of the modelling capabilities and limitations. In addition, we discuss the comparison of simulation results with experimental observables. In particular, we address the question of the surrogacy of today's experiments for the full-scale FI problem.

  17. Simulation-based team training for multi-professional obstetric care teams to improve patient outcome: a multicentre, cluster randomised controlled trial.

    PubMed

    Fransen, A F; van de Ven, J; Schuit, E; van Tetering, Aac; Mol, B W; Oei, S G

    2017-03-01

To investigate whether simulation-based obstetric team training in a simulation centre improves patient outcome. Multicentre, open, cluster randomised controlled trial. Obstetric units in the Netherlands. Women with a singleton pregnancy beyond 24 weeks of gestation. Random allocation of obstetric units to a 1-day, multi-professional, simulation-based team training focusing on crew resource management (CRM) in a simulation centre or to no such team training. Intention-to-treat analyses were performed at the cluster level, including a measurement 1 year prior to the intervention. Primary outcome was a composite outcome of obstetric complications during the first year post-intervention, including low Apgar score, severe postpartum haemorrhage, trauma due to shoulder dystocia, eclampsia and hypoxic-ischaemic encephalopathy. Maternal and perinatal mortality were also registered. Each study group included 12 units with a median unit size of 1224 women, combining for a total of 28 657 women. In total, 471 medical professionals received the training course. The composite outcome of obstetric complications did not differ between study groups [odds ratio (OR) 1.0, 95% confidence interval (CI) 0.80-1.3]. Team training reduced trauma due to shoulder dystocia (OR 0.50, 95% CI 0.25-0.99) and increased invasive treatment for severe postpartum haemorrhage (OR 2.2, 95% CI 1.2-3.9) compared with no intervention. Other outcomes did not differ between study groups. A 1-day, off-site, simulation-based team training, focusing on teamwork skills, did not reduce a composite of obstetric complications. © 2016 Royal College of Obstetricians and Gynaecologists.
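Odds ratios and Wald confidence intervals of the kind reported above are conventionally computed from 2x2 event tables. A minimal sketch, using hypothetical counts rather than the trial's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a 95% Wald confidence interval from a 2x2 table:
    a, b = events / non-events in the intervention arm;
    c, d = events / non-events in the control arm."""
    odds_ratio = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log odds ratio
    lo = math.exp(math.log(odds_ratio) - z * se)
    hi = math.exp(math.log(odds_ratio) + z * se)
    return odds_ratio, lo, hi

# Hypothetical counts: 20/1000 events vs 40/1000 events.
or_, lo, hi = odds_ratio_ci(20, 980, 40, 960)
```

A CI excluding 1.0 (as for shoulder dystocia, OR 0.50, 95% CI 0.25-0.99) is read as a statistically significant effect; one spanning 1.0 (the composite outcome) is not.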

  18. A Multi-Scale Method for Dynamics Simulation in Continuum Solvent Models I: Finite-Difference Algorithm for Navier-Stokes Equation.

    PubMed

    Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray

    2014-11-25

    A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of solute with a fluid mechanics treatment of solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two mechanics and develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design.
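A minimal sketch of the kind of finite-difference update such a solver performs, restricted here to the viscous (diffusion) term of the Navier-Stokes equations on a small 2D grid with fixed boundaries; the paper's algorithm is 3D and also treats advection and pressure:

```python
def diffuse_step(u, nu, dt, dx):
    """Explicit finite-difference update for du/dt = nu * Laplacian(u)
    on a 2D grid; boundary values are held fixed (no-slip). This is one
    sub-step of a Navier-Stokes solver, not the full algorithm."""
    n, m = len(u), len(u[0])
    r = nu * dt / dx**2  # must be small enough for explicit stability
    new = [row[:] for row in u]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            lap = (u[i+1][j] + u[i-1][j] + u[i][j+1] + u[i][j-1]
                   - 4 * u[i][j])
            new[i][j] = u[i][j] + r * lap
    return new

# A velocity "hot spot" diffusing outward on a 5x5 grid.
u = [[0.0] * 5 for _ in range(5)]
u[2][2] = 1.0
u1 = diffuse_step(u, nu=0.1, dt=0.1, dx=1.0)
```

With `r = 0.01`, one step moves one percent of the peak to each neighbour while conserving the interior total, the basic sanity check for any such stencil.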

  19. Computer Aided Battery Engineering Consortium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pesaran, Ahmad

A multi-national-laboratory collaborative team was assembled, including experts from academia and industry, to enhance the recently developed Computer-Aided Battery Engineering for Electric Drive Vehicles (CAEBAT)-II battery crush modeling tools and to develop microstructure models for electrode design, both computationally efficient. Task 1. The new Multi-Scale Multi-Domain model framework (GH-MSMD) provides 100x to 1,000x computation speed-up in battery electrochemical/thermal simulation while retaining modularity of particles and electrode-, cell-, and pack-level domains. The increased speed enables direct use of the full model in parameter identification. Task 2. Mechanical-electrochemical-thermal (MECT) models for mechanical abuse simulation were coupled, enabling simultaneous modeling of electrochemical reactions during the short circuit, when necessary. The interactions between mechanical failure and battery cell performance were studied, and the flexibility of the model for various battery structures and loading conditions was improved. Model validation is ongoing to compare with test data from Sandia National Laboratories. The ABDT tool was established in ANSYS. Task 3. Microstructural modeling was conducted to enhance next-generation electrode designs. This 3-year project will validate models for a variety of electrodes, complementing Advanced Battery Research programs. Prototype tools have been developed for electrochemical simulation and geometric reconstruction.

  20. Full-scale evaluation of a multi-component additive for efficient control of activated sludge filamentous bulking.

    PubMed

    Seka, M A; Van DeWiele, T; Verstraete, W

    2002-01-01

    A multi-component additive formulated for a more efficient control of activated sludge filamentous bulking was evaluated at a full-scale treatment plant experiencing severe filamentous bulking. It was found that, besides offering an immediate improvement of sludge settling, the multi-component additive was able to eliminate the filamentous bacteria causing the bulking. Hence, contrary to ordinary additives, this novel additive yielded immediate as well as long-term improvements in sludge sedimentation upon a few additions. Preliminary lab-scale toxicity tests showed that the treatment of the sludge by the additive should not impart any toxicity to the resulting effluent.

  1. Goal-oriented robot navigation learning using a multi-scale space representation.

    PubMed

    Llofriu, M; Tejera, G; Contreras, M; Pelc, T; Fellous, J M; Weitzenfeld, A

    2015-12-01

    There has been extensive research in recent years on the multi-scale nature of hippocampal place-cell and entorhinal grid-cell encoding, which has led to many speculations on their role in spatial cognition. In this paper we focus on the multi-scale nature of place cells and how they contribute to faster learning during goal-oriented navigation when compared to a spatial cognition system composed of single-scale place cells. The task consists of a circular arena with a fixed goal location, in which a robot is trained to find the shortest path to the goal after a number of learning trials. Synaptic connections are modified using a reinforcement learning paradigm adapted to the place cells' multi-scale architecture. The model is evaluated in both simulation and physical robots. We find that larger-scale and combined multi-scale representations favor learning of the goal-oriented navigation task. Copyright © 2015 Elsevier Ltd. All rights reserved.
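
    As a rough illustration of the idea (not the authors' implementation), a linear temporal-difference learner over Gaussian place-cell features at two spatial scales can be sketched as follows; the cell counts, field widths, and learning parameters here are arbitrary choices:

```python
import numpy as np

def place_cell_activity(pos, centers, scale):
    """Gaussian place-cell activations for a 2D position."""
    d2 = np.sum((centers - pos) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * scale ** 2))

# Two populations tiling a 1x1 arena: a coarse and a fine scale.
rng = np.random.default_rng(0)
coarse = rng.uniform(0.0, 1.0, size=(16, 2))
fine = rng.uniform(0.0, 1.0, size=(64, 2))

def features(pos):
    # Multi-scale representation: concatenate both populations.
    return np.concatenate([place_cell_activity(pos, coarse, 0.3),
                           place_cell_activity(pos, fine, 0.1)])

n_actions = 4
w = np.zeros((n_actions, 16 + 64))  # linear Q-function weights

def q_values(pos):
    return w @ features(pos)

def td_update(pos, action, reward, next_pos, alpha=0.1, gamma=0.95):
    """One temporal-difference update of the linear weights."""
    target = reward + gamma * np.max(q_values(next_pos))
    delta = target - q_values(pos)[action]
    w[action] += alpha * delta * features(pos)
    return delta
```

    The coarse population generalizes a reward over a wide region after a single trial, while the fine population refines the path near the goal, which is one intuition for the faster learning reported above.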

  2. Training the elderly in pedestrian safety: Transfer effect between two virtual reality simulation devices.

    PubMed

    Maillot, Pauline; Dommes, Aurélie; Dang, Nguyen-Thong; Vienne, Fabrice

    2017-02-01

    A virtual-reality training program has been developed to help older pedestrians make safer street-crossing decisions in two-way traffic situations. The aim was to develop a small-scale, affordable, and transportable simulation device whose training effects transfer to a full-scale device involving actual walking. 20 younger adults and 40 older participants first took part in a pre-test phase to assess their street crossings using both full-scale and small-scale simulation devices. Then, a trained older group (20 participants) completed two 1.5-h training sessions with the small-scale device, whereas an older control group (19 participants) received no training. Thereafter, the 39 older trained and untrained participants took part in a 1.5-h post-test phase, again with both devices. Pre-test results suggested significant differences between the two devices in the group of older participants only. Unlike younger participants, older participants accepted crossings more often and had more collisions on the small-scale simulation device than on the full-scale one. Post-test results showed that training older participants on the small-scale device produced a significant global decrease in the percentage of accepted crossings and collisions on both simulation devices. However, specific improvements in how participants took into account the speed of approaching cars and of vehicles in the far lane were notable only on the full-scale simulation device. The findings suggest that the small-scale simulation device triggers a greater number of unsafe decisions than a full-scale one that allows actual crossing; they nevertheless indicate that such a small-scale device could be a good means of improving the safety of street-crossing decisions and behaviors among older pedestrians, suggesting a transfer-of-learning effect between the two simulation devices, from training with a miniature device to measuring specific progress with a full-scale one. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Relationships between Organizations and Publics: Development of a Multi-Dimensional Organization-Public Relationship Scale.

    ERIC Educational Resources Information Center

    Bruning, Stephen D.; Ledingham, John A.

    1999-01-01

    Attempts to design a multiple-item, multiple-dimension organization/public relationship scale. Finds that organizations and key publics have three types of relationships: professional, personal, and community. Provides an instrument that can be used to measure the influence that perceptions of the organization/public relationship have on consumer…

  4. Multi-Dimensional Full Boltzmann-Neutrino-Radiation Hydrodynamic Simulations and Their Detailed Comparisons with Monte-Carlo Methods in Core Collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Nagakura, H.; Richers, S.; Ott, C. D.; Iwakami, W.; Furusawa, S.; Sumiyoshi, K.; Yamada, S.; Matsufuru, H.; Imakura, A.

    2016-10-01

    We have developed a 7-dimensional full Boltzmann neutrino-radiation-hydrodynamics code and carried out ab initio axisymmetric CCSNe simulations. I will present the main results of our simulations and discuss current ongoing projects.

  5. Three-dimensional Dendritic Needle Network model with application to Al-Cu directional solidification experiments

    NASA Astrophysics Data System (ADS)

    Tourret, D.; Karma, A.; Clarke, A. J.; Gibbs, P. J.; Imhoff, S. D.

    2015-06-01

    We present a three-dimensional (3D) extension of a previously proposed multi-scale Dendritic Needle Network (DNN) approach for the growth of complex dendritic microstructures. Using a new formulation of the DNN dynamics equations for dendritic paraboloid-branches of a given thickness, one can directly extend the DNN approach to 3D modeling. We validate this new formulation against known scaling laws and analytical solutions that describe the early transient and steady-state growth regimes, respectively. Finally, we compare the predictions of the model to in situ X-ray imaging of Al-Cu alloy solidification experiments. The comparison shows a very good quantitative agreement between 3D simulations and thin sample experiments. It also highlights the importance of full 3D modeling to accurately predict the primary dendrite arm spacing that is significantly over-estimated by 2D simulations.

  6. 49 CFR 239.105 - Debriefing and critique.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... emergency situation or full-scale simulation to determine the effectiveness of its emergency preparedness... passenger train emergency situation or full-scale simulation. To the extent practicable, all on-board...-scale simulation shall participate in the session either: (1) In person; (2) Offsite via teleconference...

  7. A Combined Eulerian-Lagrangian Data Representation for Large-Scale Applications.

    PubMed

    Sauer, Franz; Xie, Jinrong; Ma, Kwan-Liu

    2017-10-01

    The Eulerian and Lagrangian reference frames each provide a unique perspective when studying and visualizing results from scientific systems. As a result, many large-scale simulations produce data in both formats, and analysis tasks that simultaneously utilize information from both representations are becoming increasingly popular. However, due to their fundamentally different nature, drawing correlations between these data formats is a computationally difficult task, especially in a large-scale setting. In this work, we present a new data representation which combines both reference frames into a joint Eulerian-Lagrangian format. By reorganizing Lagrangian information according to the Eulerian simulation grid into a "unit cell" based approach, we can provide an efficient out-of-core means of sampling, querying, and operating with both representations simultaneously. We also extend this design to generate multi-resolution subsets of the full data to suit the viewer's needs and provide a fast flow-aware trajectory construction scheme. We demonstrate the effectiveness of our method using three large-scale real world scientific datasets and provide insight into the types of performance gains that can be achieved.
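
    The "unit cell" reorganization can be illustrated with a minimal sketch (not the authors' out-of-core implementation): Lagrangian particle records are grouped by the Eulerian grid cell that contains them, so a cell-local query touches both representations at once.

```python
import numpy as np

def bin_particles(positions, grid_min, cell_size, dims):
    """Group Lagrangian particles by the Eulerian cell containing them.

    Returns a dict mapping a cell-index tuple to particle row indices,
    enabling cell-local queries over both representations simultaneously.
    """
    idx = np.floor((positions - grid_min) / cell_size).astype(int)
    idx = np.clip(idx, 0, np.array(dims) - 1)  # clamp boundary particles
    cells = {}
    for row, cell in enumerate(map(tuple, idx)):
        cells.setdefault(cell, []).append(row)
    return cells

# Example: 1,000 tracer particles on a 4x4x4 grid spanning [0, 1)^3.
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, size=(1000, 3))
cells = bin_particles(pts, grid_min=0.0, cell_size=0.25, dims=(4, 4, 4))

# A cell-local query: particles co-located with Eulerian cell (0, 0, 0).
local = pts[cells[(0, 0, 0)]]
```
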

  8. Microphysics in the Multi-Scale Modeling Systems with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2011-01-01

    In recent years, exponentially increasing computer power has extended Cloud-Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems, where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions and 1,000 x 1,000 km2 in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction (NWP) models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, and land processes, together with explicit cloud-radiation and cloud-surface interactive processes, are applied throughout this multi-scale modeling system. The system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the microphysics developments of the multi-scale modeling system will be presented, in particular results from using the system to study heavy precipitation processes.

  9. Atomistic- and Meso-Scale Computational Simulations for Developing Multi-Timescale Theory for Radiation Degradation in Electronic and Optoelectronic Devices

    DTIC Science & Technology

    2017-02-13

    Report AFRL-RV-PS-TR-2016-0161, Air Force Research Laboratory, Kirtland AFB, NM 87117-5776. Approved for public release; distribution is unlimited.

  10. 3D ion-scale dynamics of BBFs and their associated emissions in Earth's magnetotail using 3D hybrid simulations and MMS multi-spacecraft observations

    NASA Astrophysics Data System (ADS)

    Breuillard, H.; Aunai, N.; Le Contel, O.; Catapano, F.; Alexandrova, A.; Retino, A.; Cozzani, G.; Gershman, D. J.; Giles, B. L.; Khotyaintsev, Y. V.; Lindqvist, P. A.; Ergun, R.; Strangeway, R. J.; Russell, C. T.; Magnes, W.; Plaschke, F.; Nakamura, R.; Fuselier, S. A.; Turner, D. L.; Schwartz, S. J.; Torbert, R. B.; Burch, J.

    2017-12-01

    Transient and localized jets of hot plasma, also known as Bursty Bulk Flows (BBFs), play a crucial role in Earth's magnetotail dynamics because the energy input from the solar wind is partly dissipated in their vicinity, notably at their embedded dipolarization front (DF). This dissipation takes the form of strong low-frequency waves that can heat and accelerate energetic particles up to the high-latitude plasma sheet. The ion-scale dynamics of BBFs have been revealed by the Cluster and THEMIS multi-spacecraft missions. However, the dynamics of BBF propagation in the magnetotail are still under debate due to instrumental limitations and spacecraft separation distances, as well as simulation limitations. The NASA/MMS fleet, which features unprecedented high-time-resolution instruments and four spacecraft separated by kinetic-scale distances, has recently shown that the DF normal dynamics and its associated emissions are below the ion gyroradius scale in this region. Large variations in the dawn-dusk direction were also observed. However, most large-scale simulations use the MHD approach and assume a 2D geometry in the XZ plane. Thus, in this study we take advantage of both multi-spacecraft observations by MMS and large-scale 3D hybrid simulations to investigate the 3D ion-scale dynamics of BBFs and their associated emissions in Earth's magnetotail, and their impact on particle heating and acceleration.

  11. Parallel Multi-Step/Multi-Rate Integration of Two-Time Scale Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Chang, Johnny T.; Ploen, Scott R.; Sohl, Garett A.; Martin, Bryan J.

    2004-01-01

    Increasing fidelity demands for real-time and high-fidelity simulations are stressing the capacity of modern processors. New integration techniques are required that provide maximum efficiency for systems that are parallelizable. However, many current techniques make assumptions that are at odds with non-cascadable systems. A new serial multi-step/multi-rate integration algorithm for dual-timescale continuous-state systems, applicable to such systems, is presented and then extended to a parallel multi-step/multi-rate algorithm. The superior performance of both algorithms is demonstrated through a representative example.
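
    A minimal sketch of the multi-rate idea (not the paper's multi-step, parallelizable algorithm): the fast state is advanced with several small forward-Euler substeps per macro step while the slow state is held frozen, a common simplification; production schemes interpolate the slow state instead.

```python
def multirate_euler(f_slow, f_fast, x_s, x_f, H, m, n_steps):
    """Multi-rate forward Euler: the fast state takes m substeps of
    size H/m per macro step of size H, with the slow state frozen
    during the substeps."""
    h = H / m
    for _ in range(n_steps):
        for _ in range(m):
            x_f = x_f + h * f_fast(x_s, x_f)
        x_s = x_s + H * f_slow(x_s, x_f)
    return x_s, x_f

# Two-time-scale linear test problem: x_f decays 100x faster than x_s.
f_slow = lambda xs, xf: -1.0 * xs + 0.1 * xf
f_fast = lambda xs, xf: -100.0 * xf
xs, xf = multirate_euler(f_slow, f_fast, x_s=1.0, x_f=1.0,
                         H=0.01, m=10, n_steps=100)
```

    The payoff is that the expensive small step size is only paid by the fast subsystem, which is what makes such schemes attractive for parallel real-time simulation.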

  12. Multi-scale modeling of microstructure dependent intergranular brittle fracture using a quantitative phase-field based method

    DOE PAGES

    Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.

    2015-12-07

    The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field-based fracture model is utilized to capture the complex crack growth behavior in the microstructure, and the related parameters are calibrated from lower-length-scale atomistic simulations instead of engineering-scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO2 and comparing the predictions with experiments.

  14. "See One, Sim One, Do One"- A National Pre-Internship Boot-Camp to Ensure a Safer "Student to Doctor" Transition.

    PubMed

    Minha, Sa'ar; Shefet, Daphna; Sagi, Doron; Berkenstadt, Haim; Ziv, Amitai

    2016-01-01

    The transition from medical student to fully functioning intern is accompanied by considerable stress and a sense of unpreparedness. Simulation-based workshops have previously been reported to be effective in improving the readiness of interns and residents for their daily skills, but only a few programs have been implemented on a large scale. A nationally endorsed and mandated pre-internship simulation-based workshop is reported. We hypothesized that this intervention would have a meaningful and sustained impact on trainees' perception of their readiness for internship with regard to patient safety and quality-of-care skills. The main outcome measure was the workshop's contribution to professional training in general, and to critical skills and error prevention in particular, as perceived by participants. Between 2004 and 2011, 85 workshops were conducted for a total of 4,172 trainees. Eight hundred and six of the 2,700 participants approached by e-mail returned feedback evaluation forms, which were analyzed. Eighty-five percent of trainees perceived the workshop as an essential component of their professional training, and 87% agreed it should be mandatory. These ratings peaked during internship and were generally sustained 3 years after the workshop. The contribution to emergency care skills was ranked especially highly (83%). Implementation of a mandatory, simulation-based, pre-internship workshop on a national scale had a significant perceived impact on interns and residents. The sustained impact should encourage adopting this approach to facilitate the student-to-doctor transition.

  15. Teaching communication and supporting autonomy with a team-based operative simulator.

    PubMed

    Cook, Mackenzie R; Deal, Shanley B; Scott, Jessica M; Moren, Alexis M; Kiraly, Laszlo N

    2016-09-01

    Changing residency structure emphasizes the need for formal instruction on team leadership and intraoperative teaching skills. A high fidelity, multi-learner surgical simulation may offer opportunities for senior learners (SLs) to learn these skills while teaching technical skills to junior learners (JLs). We designed and optimized a low-cost inguinal hernia model that paired JLs and SLs as an operative team. This was tested in 3 pilot simulations. Participants' feedback was analyzed using qualitative methods. JL feedback to SLs included the themes "guiding and instructing" and "allowing autonomy." Senior Learner feedback to JLs focused on "mechanics," "knowledge," and "perspective/flow." Both groups focused on "communication" and "professionalism." A multi-learner simulation can successfully meet the technical learning needs of JLs and the teaching and communication learning needs of SLs. This model of resident-driven simulation may illustrate future opportunities for operative simulation. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. [The research on bidirectional reflectance computer simulation of forest canopy at pixel scale].

    PubMed

    Song, Jin-Ling; Wang, Jin-Di; Shuai, Yan-Min; Xiao, Zhi-Qiang

    2009-08-01

    Computer simulation based on computer graphics can generate a realistic 3D structural scene of vegetation and simulate the canopy radiation regime using the radiosity method. In the present paper, we extend the computer simulation model to simulate forest canopy bidirectional reflectance at the pixel scale. Trees, however, are complex structures: they are tall and have many branches, so hundreds of thousands or even millions of facets are needed to build a realistic structural scene for a forest, and it is difficult for the radiosity method to compute so many facets. To make the radiosity method workable for forest scenes at the pixel scale, we propose simplifying the structure of forest crowns by abstracting the crowns as ellipsoids. Based on the optical characteristics of the tree components and the internal energy transmission of photons in a real crown, we assigned the optical characteristics of the ellipsoid surface facets. In the computer simulation of the forest, following the idea of geometric-optics models, a gap model is used to obtain the forest canopy bidirectional reflectance at the pixel scale. Comparing the computer simulation results with the GOMS model and Multi-angle Imaging SpectroRadiometer (MISR) multi-angle remote sensing data, the simulation results agree with the GOMS simulation and the MISR BRF, although some problems remain to be solved. We conclude that the study has important value for the application of multi-angle remote sensing and the inversion of vegetation canopy structure parameters.
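
    Gap models of the kind mentioned above are commonly based on a Beer's-law gap probability; a minimal sketch under a spherical leaf-angle distribution (the paper's exact formulation is not given in the abstract, so the projection factor here is an assumption) is:

```python
import math

def gap_fraction(lai, view_zenith_deg, g=0.5):
    """Canopy gap probability from a Beer's-law geometric-optics model:
    P_gap = exp(-G * LAI / cos(theta)), where G is the leaf projection
    factor (0.5 for a spherical leaf-angle distribution) and theta is
    the view zenith angle."""
    theta = math.radians(view_zenith_deg)
    return math.exp(-g * lai / math.cos(theta))
```

    Denser canopies and more oblique view angles both reduce the gap probability, which drives the angular signature that the pixel-scale simulation reproduces.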

  17. SCALE-UP OF RAPID SMALL-SCALE ADSORPTION TESTS TO FIELD-SCALE ADSORBERS: THEORETICAL AND EXPERIMENTAL BASIS

    EPA Science Inventory

    Design of full-scale adsorption systems typically includes expensive and time-consuming pilot studies to simulate full-scale adsorber performance. Accordingly, the rapid small-scale column test (RSSCT) was developed and evaluated experimentally. The RSSCT can simulate months of f...
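
    The RSSCT similitude idea can be sketched as follows; this uses the commonly cited diffusivity-scaling form of the relation, which is an assumption here and may differ in detail from the EPA study:

```python
def rssct_ebct(ebct_large_min, d_large_mm, d_small_mm, x=0.0):
    """Scaled-down empty-bed contact time (EBCT) for an RSSCT.

    Uses the similitude relation
        EBCT_sc = EBCT_lc * (d_sc / d_lc) ** (2 - x),
    where x = 0 assumes constant intraparticle diffusivity and x = 1
    assumes diffusivity proportional to particle size.
    """
    return ebct_large_min * (d_small_mm / d_large_mm) ** (2.0 - x)

# Example: a 10-min full-scale EBCT with 1.0 mm carbon, scaled to 0.1 mm
# grains under each diffusivity assumption.
ebct_cd = rssct_ebct(10.0, 1.0, 0.1)         # constant diffusivity
ebct_pd = rssct_ebct(10.0, 1.0, 0.1, x=1.0)  # proportional diffusivity
```

    The shortened contact time (and correspondingly shortened run time) is what lets a small column simulate months of full-scale adsorber operation in days.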

  18. The Parallel System for Integrating Impact Models and Sectors (pSIMS)

    NASA Technical Reports Server (NTRS)

    Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian

    2014-01-01

    We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impacts models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impacts studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.

  19. Effectiveness of a poverty simulation in Second Life®: changing nursing student attitudes toward poor people.

    PubMed

    Menzel, Nancy; Willson, Laura Helen; Doolen, Jessica

    2014-03-11

    Social justice is a fundamental value of the nursing profession, challenging educators to instill this professional value when caring for the poor. This randomized controlled trial examined whether an interactive virtual poverty simulation created in Second Life® would improve nursing students' empathy with and attributions for people living in poverty, compared to a self-study module. We created a multi-user virtual environment populated with families and individual avatars that represented the demographics contributing to poverty and vulnerability. Participants (N = 51 baccalaureate nursing students) were randomly assigned to either Intervention or Control groups and completed the modified Attitudes toward Poverty Scale pre- and post-intervention. The 2.5-hour simulation was delivered three times over a 1-year period to students in successive community health nursing classes. The investigators conducted post-simulation debriefings following a script. While participants in the virtual poverty simulation developed significantly more favorable attitudes on five questions than the Control group, the total scores did not differ significantly. Whereas students readily learned how to navigate inside Second Life®, faculty facilitators required periodic coaching and guidance to be competent. While poverty simulations, whether virtual or face-to-face, have some ability to transform nursing student attitudes, faculty must incorporate social justice concepts throughout the curriculum to produce lasting change.

  20. Time and length scales within a fire and implications for numerical simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    TIESZEN,SHELDON R.

    2000-02-02

    A partial non-dimensionalization of the Navier-Stokes equations is used to obtain order-of-magnitude estimates of the rate-controlling transport processes in the reacting portion of a fire plume as a function of length scale. Over continuum length scales, buoyant time scales vary as the square root of the length scale, advection time scales vary as the length scale, and diffusion time scales vary as the square of the length scale. Due to this variation with length scale, each process is dominant over a given range. The relationship between buoyancy and baroclinic vorticity generation is highlighted. For numerical simulation, first-principles solution of fire problems will not be possible with foreseeable computational hardware in the near future. Filtered transport equations with subgrid modeling will be required, as only two to three decades of length scale can be captured by solution of discretized conservation equations. Whatever filtering process one employs, one must have humble expectations for the accuracy obtainable by numerical simulation of practical fire problems, which contain important multi-physics/multi-length-scale coupling with up to 10 orders of magnitude in length scale.
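
    The three scalings can be tabulated directly; the following sketch uses illustrative values for reduced gravity, velocity, and diffusivity (they are not taken from the report):

```python
import math

def time_scales(L, g_reduced=9.81, u=1.0, D=1e-5):
    """Order-of-magnitude process time scales at continuum length scale
    L (m): buoyancy ~ sqrt(L/g'), advection ~ L/u, diffusion ~ L**2/D,
    matching the scalings quoted in the abstract."""
    return {
        "buoyancy_s": math.sqrt(L / g_reduced),
        "advection_s": L / u,
        "diffusion_s": L ** 2 / D,
    }

# At micron scales diffusion is the fastest process; at metre scales
# buoyancy is, which is why each process dominates a different range.
micro = time_scales(1e-6)
metre = time_scales(1.0)
```
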

  1. How well do the GCMs/RCMs capture the multi-scale temporal variability of precipitation in the Southwestern United States?

    NASA Astrophysics Data System (ADS)

    Jiang, Peng; Gautam, Mahesh R.; Zhu, Jianting; Yu, Zhongbo

    2013-02-01

    Multi-scale temporal variability of precipitation has an established relationship with floods and droughts. In this paper, we present diagnostics on the ability of 16 General Circulation Models (GCMs) from the Bias Corrected and Downscaled (BCSD) World Climate Research Program (WCRP) Coupled Model Inter-comparison Project Phase 3 (CMIP3) projections and 10 Regional Climate Models (RCMs) that participated in the North American Regional Climate Change Assessment Program (NARCCAP) to represent the multi-scale temporal variability determined from observed station data. Four regions (Los Angeles, Las Vegas, Tucson, and Cimarron) in the Southwestern United States are selected, representing four different precipitation regions classified by a clustering method. We investigate how storm properties and seasonal, inter-annual, and decadal precipitation variability differ between GCMs/RCMs and observed records in these regions. We find that current GCMs/RCMs tend to simulate longer storm durations and lower storm intensities than observed records. Most GCMs/RCMs fail to produce the high-intensity summer storms caused by local convective heat transport associated with the summer monsoon. Both inter-annual and decadal bands are present in the GCM/RCM-simulated precipitation time series; however, these do not line up with the patterns of large-scale ocean oscillations such as the El Niño/La Niña-Southern Oscillation (ENSO) and the Pacific Decadal Oscillation (PDO). Our results show that the studied GCMs/RCMs can capture the long-term monthly mean, as the examined data are bias-corrected and downscaled, but fail to simulate the multi-scale precipitation variability, including flood-generating extreme events, which suggests their inadequacy for studies of floods and droughts that are strongly associated with multi-scale temporal precipitation variability.
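
    One simple storm-property definition of the kind used in such diagnostics (the paper's exact event-separation criterion is not stated in the abstract, so the dry-spell threshold here is an assumption) splits an hourly series at dry spells:

```python
import numpy as np

def storm_properties(precip_mm, min_dry_hours=6):
    """Split an hourly precipitation series into storms separated by at
    least `min_dry_hours` dry hours; return (duration_h, mean intensity
    in mm/h) per storm, for comparing GCM/RCM output against gauges."""
    storms = []
    start, last_wet, dry = None, None, 0
    for i, p in enumerate(precip_mm):
        if p > 0:
            if start is None:
                start = i
            last_wet, dry = i, 0
        elif start is not None:
            dry += 1
            if dry >= min_dry_hours:
                dur = last_wet - start + 1
                depth = float(np.sum(precip_mm[start:last_wet + 1]))
                storms.append((dur, depth / dur))
                start, dry = None, 0
    if start is not None:  # close a storm running to the series end
        dur = last_wet - start + 1
        depth = float(np.sum(precip_mm[start:last_wet + 1]))
        storms.append((dur, depth / dur))
    return storms
```

    Applying the same event definition to observed and simulated series makes the "longer duration, lower intensity" bias reported above directly measurable.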

  2. A Multi-Scale Method for Dynamics Simulation in Continuum Solvent Models I: Finite-Difference Algorithm for Navier-Stokes Equation

    PubMed Central

    Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray

    2014-01-01

    A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of solute with a fluid mechanics treatment of solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two mechanics and develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design. PMID:25404761
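
    As a toy fragment of such a finite-difference treatment (not the article's 3D Navier-Stokes algorithm), an explicit step of the viscous diffusion term on a 2D grid looks like this:

```python
import numpy as np

def diffuse_velocity(u, nu, dx, dt):
    """One explicit finite-difference step of the viscous term,
    du/dt = nu * laplacian(u), with no-slip (zero) walls."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx ** 2
    un = u + dt * nu * lap
    un[0, :] = un[-1, :] = 0.0
    un[:, 0] = un[:, -1] = 0.0
    return un

# Explicit stability requires dt <= dx**2 / (4 * nu) for this scheme.
nu, dx = 1e-3, 1.0 / 32
dt = 0.2 * dx ** 2 / nu
u = np.zeros((32, 32))
u[16, 16] = 1.0  # point disturbance
for _ in range(10):
    u = diffuse_velocity(u, nu, dx, dt)
```

    The full solver must of course also handle advection and the pressure/incompressibility constraint; this fragment only shows the stencil-update pattern that finite-difference fluid treatments share.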

  3. Cognitive and Psychomotor Entrustable Professional Activities: Can Simulators Help Assess Competency in Trainees?

    PubMed

    Dwyer, Tim; Wadey, Veronica; Archibald, Douglas; Kraemer, William; Shantz, Jesse Slade; Townley, John; Ogilvie-Harris, Darrell; Petrera, Massimo; Ferguson, Peter; Nousiainen, Markku

    2016-04-01

    An entrustable professional activity describes a professional task that postgraduate residents must master during their training. The use of simulation to assess performance of entrustable professional activities requires further investigation. (1) Is simulation-based assessment of resident performance of entrustable professional activities reliable? (2) Is there evidence of important differences between Postgraduate Year (PGY)-1 and PGY-4 residents when performing simulated entrustable professional activities? Three entrustable professional activities were chosen from a list of competencies: management of the patient for total knee arthroplasty (TKA); management of the patient with an intertrochanteric hip fracture; and management of the patient with an ankle fracture. Each assessment of entrustable professional activity was 40 minutes long with three components: preoperative management of a patient (history-taking, examination, image interpretation); performance of a technical procedure on a sawbones model; and postoperative management of a patient (postoperative orders, management of complications). Residents were assessed by six faculty members who used checklists based on a modified Delphi technique, an overall global rating scale as well as a previously validated global rating scale for the technical procedure component of each activity. Nine PGY-1 and nine PGY-4 residents participated in our simulated assessment. We assessed reliability by calculating the internal consistency of the mean global rating for each activity as well as the interrater reliability between the faculty assessment and blinded review of videotaped encounters. We sought evidence of a difference in performance between PGY-1 and PGY-4 residents on the overall global rating scale for each station of each entrustable professional activity. 
    The reliability (Cronbach's α) was 0.88 for the hip fracture activity, 0.89 for the ankle fracture activity, and 0.84 for the TKA activity. A strong correlation was seen between blinded observer video review and faculty scores (mean 0.87 [0.07], p < 0.001). For the hip fracture entrustable professional activity, the PGY-4 group had a higher mean global rating scale than the PGY-1 group for preoperative management (3.56 [0.5] versus 2.33 [0.5], p < 0.001), postoperative management (3.67 [0.5] versus 2.22 [0.7], p < 0.001), and technical procedures (3.11 [0.3] versus 3.67 [0.5], p = 0.015). For the TKA activity, the PGY-4 group scored higher for postoperative management (3.5 [0.8] versus 2.67 [0.5], p = 0.016) and technical procedures (3.22 [0.9] versus 2.22 [0.9], p = 0.04) than the PGY-1 group, but no difference for preoperative management with the numbers available (PGY-4, 3.44 [0.7] versus PGY-1 2.89 [0.8], p = 0.14). For the ankle fracture activity, the PGY-4 group scored higher for postoperative management (3.22 [0.8] versus 2.33 [0.7], p = 0.18) and technical procedures (3.22 [1.2] versus 2.0 [0.7], p = 0.018) than the PGY-1 group, but no difference for preoperative management with the numbers available (PGY-4, 3.22 [0.8] versus PGY-1, 2.78 [0.7], p = 0.23). The results of our study show that simulated assessment of entrustable professional activities may be used to determine the ability of a resident to perform professional tasks that are critical components of medical training. In this manner, educators can ensure that competent performance of these skills in the simulated setting occurs before actual practice with patients in the clinical setting.
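
    The reliability figures quoted are Cronbach's α, which can be computed from a subjects-by-items score matrix as follows (the standard formula, not the study's own code):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)),
    where k is the number of items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Perfectly consistent items (every item gives the same ranking) yield
# the maximum alpha of 1.0.
alpha = cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3]])
```
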

  4. Computer simulations and real-time control of ELT AO systems using graphical processing units

    NASA Astrophysics Data System (ADS)

    Wang, Lianqi; Ellerbroek, Brent

    2012-07-01

    The adaptive optics (AO) simulations at the Thirty Meter Telescope (TMT) have been carried out using the efficient, C-based multi-threaded adaptive optics simulator (MAOS, http://github.com/lianqiw/maos). By porting time-critical parts of MAOS to graphics processing units (GPUs) using NVIDIA CUDA technology, we achieved a 10-fold speed-up for each GTX 580 GPU used, compared to a modern quad-core CPU. Each time step of a full-scale end-to-end simulation for the TMT narrow field infrared AO system (NFIRAOS) takes only 0.11 seconds on a desktop with two GTX 580s. We also demonstrate that the TMT minimum-variance reconstructor can be assembled in matrix-vector multiply (MVM) format in 8 seconds with 8 GTX 580 GPUs, meeting the TMT requirement for updating the reconstructor. Analysis shows that it is also possible to apply the MVM using 8 GTX 580s within the required latency.
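
    The real-time step the abstract describes reduces to one matrix-vector multiply: actuator commands are the precomputed reconstructor matrix applied to the wavefront-sensor gradient vector. The sketch below uses deliberately small toy dimensions (the real NFIRAOS problem is orders of magnitude larger), so the sizes here are assumptions for illustration only:

```python
import numpy as np

# Toy sizes for illustration (the actual NFIRAOS gradient/actuator
# counts are much larger and are not reproduced here).
n_grad, n_act = 3000, 700

rng = np.random.default_rng(0)
R = rng.standard_normal((n_act, n_grad)).astype(np.float32)  # reconstructor matrix
g = rng.standard_normal(n_grad).astype(np.float32)           # WFS gradient vector

a = R @ g  # one real-time step: actuator commands via matrix-vector multiply

# Rough FLOP count for one MVM (2 flops per multiply-add); real-time
# feasibility follows from dividing this by the GPUs' sustained throughput.
flops = 2 * n_act * n_grad
print(a.shape, flops)  # → (700,) 4200000
```

    On GPUs the same operation is memory-bandwidth bound, which is why the paper partitions the MVM across 8 devices to meet the latency requirement.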

  5. Attitudes of Mental Health Workers and Psychologists Regarding the Usefulness of Mediated Learning Experience as a Supplement to Multi Systemic Treatment.

    ERIC Educational Resources Information Center

    Posy, Yosef

    This study compares the attitudes of two groups of professionals involved in adolescent drug and alcohol treatment regarding the usefulness of Mediated Learning Experience as a supplement to Multi Systemic Treatment (MST) for substance abuse. Fifteen social workers and 15 school psychologists completed a rating scale to record their opinions of…

  6. Scaling Fiber Lasers to Large Mode Area: An Investigation of Passive Mode-Locking Using a Multi-Mode Fiber

    PubMed Central

    Ding, Edwin; Lefrancois, Simon; Kutz, Jose Nathan; Wise, Frank W.

    2011-01-01

    The mode-locking of dissipative soliton fiber lasers using large mode area fiber supporting multiple transverse modes is studied experimentally and theoretically. The averaged mode-locking dynamics in a multi-mode fiber are studied using a distributed model. The co-propagation of multiple transverse modes is governed by a system of coupled Ginzburg–Landau equations. Simulations show that stable and robust mode-locked pulses can be produced. However, the mode-locking can be destabilized by excessive higher-order mode content. Experiments using large core step-index fiber, photonic crystal fiber, and chirally-coupled core fiber show that mode-locking can be significantly disturbed in the presence of higher-order modes, resulting in lower maximum single-pulse energies. In practice, spatial mode content must be carefully controlled to achieve full pulse energy scaling. This paper demonstrates that mode-locking performance is very sensitive to the presence of multiple waveguide modes when compared to systems such as amplifiers and continuous-wave lasers. PMID:21731106
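
    The coupled Ginzburg–Landau system mentioned above can be written in a generic form; the exact gain, dispersion, and mode-overlap coefficients used in the paper are not reproduced here, so the symbols below are illustrative:

```latex
\frac{\partial A_m}{\partial z} =
  g_m A_m
  + \left(\frac{g_m}{\Omega_g^{2}} - \frac{i\,\beta_{2,m}}{2}\right)
    \frac{\partial^{2} A_m}{\partial t^{2}}
  + i\gamma \sum_{n,p,q} Q_{mnpq}\, A_n A_p A_q^{*},
```

    where $A_m$ is the envelope of transverse mode $m$, $g_m$ the saturable gain, $\Omega_g$ the gain bandwidth, $\beta_{2,m}$ the modal group-velocity dispersion, $\gamma$ the Kerr nonlinearity, and $Q_{mnpq}$ a mode-overlap tensor. The destabilizing effect of higher-order mode content enters through the cross-coupling terms of the sum.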

  7. Scaling Fiber Lasers to Large Mode Area: An Investigation of Passive Mode-Locking Using a Multi-Mode Fiber.

    PubMed

    Ding, Edwin; Lefrancois, Simon; Kutz, Jose Nathan; Wise, Frank W

    2011-01-01

    The mode-locking of dissipative soliton fiber lasers using large mode area fiber supporting multiple transverse modes is studied experimentally and theoretically. The averaged mode-locking dynamics in a multi-mode fiber are studied using a distributed model. The co-propagation of multiple transverse modes is governed by a system of coupled Ginzburg-Landau equations. Simulations show that stable and robust mode-locked pulses can be produced. However, the mode-locking can be destabilized by excessive higher-order mode content. Experiments using large core step-index fiber, photonic crystal fiber, and chirally-coupled core fiber show that mode-locking can be significantly disturbed in the presence of higher-order modes, resulting in lower maximum single-pulse energies. In practice, spatial mode content must be carefully controlled to achieve full pulse energy scaling. This paper demonstrates that mode-locking performance is very sensitive to the presence of multiple waveguide modes when compared to systems such as amplifiers and continuous-wave lasers.

  8. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    NASA Astrophysics Data System (ADS)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  9. Coupling biomechanics to a cellular level model: an approach to patient-specific image driven multi-scale and multi-physics tumor simulation.

    PubMed

    May, Christian P; Kolokotroni, Eleni; Stamatakos, Georgios S; Büchler, Philippe

    2011-10-01

    Modeling of tumor growth has been performed according to various approaches addressing different biocomplexity levels and spatiotemporal scales. Mathematical treatments range from partial differential equation based diffusion models to rule-based cellular level simulators, aiming at both improving our quantitative understanding of the underlying biological processes and, in the mid- and long term, constructing reliable multi-scale predictive platforms to support patient-individualized treatment planning and optimization. The aim of this paper is to establish a multi-scale and multi-physics approach to tumor modeling taking into account both the cellular and the macroscopic mechanical level. Therefore, an already developed biomodel of clinical tumor growth and response to treatment is self-consistently coupled with a biomechanical model. Results are presented for the free growth case of the imageable component of an initially point-like glioblastoma multiforme tumor. The composite model leads to significant tumor shape corrections that are achieved through the utilization of environmental pressure information and the application of biomechanical principles. Using the ratio of smallest to largest moment of inertia of the tumor material to quantify the effect of our coupled approach, we have found a tumor shape correction of 20% by coupling biomechanics to the cellular simulator as compared to a cellular simulation without preferred growth directions. We conclude that the integration of the two models provides additional morphological insight into realistic tumor growth behavior. Therefore, it might be used for the development of an advanced oncosimulator focusing on tumor types for which morphology plays an important role in surgical and/or radio-therapeutic treatment planning. Copyright © 2011 Elsevier Ltd. All rights reserved.
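
    The shape metric used above, the ratio of smallest to largest principal moment of inertia, can be evaluated from the coordinates of the simulated tumor's occupied voxels. The following is a minimal sketch with a hypothetical point cloud, not the authors' oncosimulator code:

```python
import numpy as np

def inertia_ratio(coords: np.ndarray) -> float:
    """Ratio of smallest to largest principal moment of inertia
    for unit-mass points (e.g. tumor voxel centers); 1.0 is sphere-like."""
    c = coords - coords.mean(axis=0)                 # center of mass at origin
    x, y, z = c[:, 0], c[:, 1], c[:, 2]
    I = np.array([                                   # inertia tensor
        [np.sum(y**2 + z**2), -np.sum(x * y),       -np.sum(x * z)],
        [-np.sum(x * y),       np.sum(x**2 + z**2), -np.sum(y * z)],
        [-np.sum(x * z),      -np.sum(y * z),        np.sum(x**2 + y**2)],
    ])
    moments = np.linalg.eigvalsh(I)                  # ascending eigenvalues
    return moments[0] / moments[-1]

# Hypothetical elongated "tumor": points stretched 3x along one axis
rng = np.random.default_rng(1)
pts = rng.standard_normal((5000, 3)) * np.array([3.0, 1.0, 1.0])
print(inertia_ratio(pts) < 1.0)  # → True
```

    A 20% change in this ratio, as reported in the study, thus corresponds to a measurably more (or less) elongated growth morphology.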

  10. Scale Interactions in the Tropics from a Simple Multi-Cloud Model

    NASA Astrophysics Data System (ADS)

    Niu, X.; Biello, J. A.

    2017-12-01

    Our lack of a complete understanding of the interaction between moisture convection and equatorial waves remains an impediment to the numerical simulation of large-scale organization, such as the Madden-Julian Oscillation (MJO). The aim of this project is to understand interactions across spatial scales in the tropics within a simplified framework for scale interactions, using a similarly simplified description of the basic features of moist convection. Using multiple asymptotic scales, Biello and Majda [1] derived a multi-scale model of moist tropical dynamics (IMMD), which separates three regimes: the planetary-scale climatology, the synoptic-scale waves, and the planetary-scale anomalies. The scales and strength of the observed MJO place it in the regime of planetary-scale anomalies, which are themselves forced by non-linear upscale fluxes from the synoptic-scale waves. In order to close this model and determine whether it provides a self-consistent theory of the MJO, a model for diabatic heating due to moist convection must be implemented along with the IMMD. The multi-cloud parameterization proposed by Khouider and Majda [2] describes the three basic cloud types (congestus, deep, and stratiform) that are most responsible for tropical diabatic heating. We implement a simplified version of the multi-cloud model based on results derived from large eddy simulations of convection [3]. We present this simplified multi-cloud model and show results of numerical experiments beginning with a variety of convective forcing states. Preliminary results on upscale fluxes, from synoptic scales to planetary-scale anomalies, will be presented. [1] Biello J A, Majda A J. Intraseasonal multi-scale moist dynamics of the tropical atmosphere. Communications in Mathematical Sciences, 2010, 8(2): 519-540. [2] Khouider B, Majda A J. A simple multicloud parameterization for convectively coupled tropical waves. Part I: Linear analysis. Journal of the Atmospheric Sciences, 2006, 63(4): 1308-1323. [3] Dorrestijn J, Crommelin D T, Biello J A, et al. A data-driven multi-cloud model for stochastic parametrization of deep convection. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2013, 371(1991): 20120374.

  11. Progress in fast, accurate multi-scale climate simulations

    DOE PAGES

    Collins, W. D.; Johansen, H.; Evans, K. J.; ...

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  12. A French validation study of the Coma Recovery Scale-Revised (CRS-R).

    PubMed

    Schnakers, Caroline; Majerus, Steve; Giacino, Joseph; Vanhaudenhuyse, Audrey; Bruno, Marie-Aurelie; Boly, Melanie; Moonen, Gustave; Damas, Pierre; Lambermont, Bernard; Lamy, Maurice; Damas, Francois; Ventura, Manfredi; Laureys, Steven

    2008-09-01

    The aim of the present study was to explore the concurrent validity, inter-rater agreement and diagnostic sensitivity of a French adaptation of the Coma Recovery Scale-Revised (CRS-R) as compared to other coma scales such as the Glasgow Coma Scale (GCS), the Full Outline of UnResponsiveness scale (FOUR) and the Wessex Head Injury Matrix (WHIM). Multi-centric prospective study. To test concurrent validity and diagnostic sensitivity, the four behavioural scales were administered in a randomized order in 77 vegetative and minimally conscious patients. Twenty-four clinicians with different professional backgrounds, levels of expertise and CRS-R experience were recruited to assess inter-rater agreement. Good concurrent validity was obtained between the CRS-R and the three other standardized behavioural scales. Inter-rater reliability for the CRS-R total score and sub-scores was good, indicating that the scale yields reproducible findings across examiners and does not appear to be systematically biased by profession, level of expertise or CRS-R experience. Finally, the CRS-R demonstrated a significantly higher sensitivity to detect MCS patients, as compared to the GCS, the FOUR and the WHIM. The results show that the French version of the CRS-R is a valid and sensitive scale which can be used in severely brain damaged patients by all members of the medical staff.

  13. A hierarchical pyramid method for managing large-scale high-resolution drainage networks extracted from DEM

    NASA Astrophysics Data System (ADS)

    Bai, Rui; Tiejian, Li; Huang, Yuefei; Jiaye, Li; Wang, Guangqian; Yin, Dongqin

    2015-12-01

    The increasing resolution of Digital Elevation Models (DEMs) and the development of drainage network extraction algorithms make it possible to develop high-resolution drainage networks for large river basins. These vector networks contain massive numbers of river reaches with associated geographical features, including topological connections and topographical parameters. These features create challenges for efficient map display and data management. Of particular interest are the requirements of data management for multi-scale hydrological simulations using multi-resolution river networks. In this paper, a hierarchical pyramid method is proposed, which generates coarsened vector drainage networks from the originals iteratively. The method is based on the Horton-Strahler's (H-S) order schema. At each coarsening step, the river reaches with the lowest H-S order are pruned, and their related sub-basins are merged. At the same time, the topological connections and topographical parameters of each coarsened drainage network are inherited from the former level using formulas that are presented in this study. The method was applied to the original drainage networks of a watershed in the Huangfuchuan River basin extracted from a 1-m-resolution airborne LiDAR DEM and applied to the full Yangtze River basin in China, which was extracted from a 30-m-resolution ASTER GDEM. In addition, a map-display and parameter-query web service was published for the Mississippi River basin, and its data were extracted from the 30-m-resolution ASTER GDEM. The results presented in this study indicate that the developed method can effectively manage and display massive amounts of drainage network data and can facilitate multi-scale hydrological simulations.
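
    The coarsening step described above, pruning the reaches with the lowest Horton-Strahler order and merging their sub-basins, can be sketched as follows. The tree representation (a node-to-upstream-children mapping) and the tiny example network are assumptions for illustration, not the paper's data structures:

```python
def strahler_order(children, node, order=None):
    """Compute Horton-Strahler orders for a river tree given a
    node -> list-of-upstream-children mapping (outlet as root)."""
    if order is None:
        order = {}
    kids = children.get(node, [])
    if not kids:
        order[node] = 1                       # headwater reach
        return order
    for k in kids:
        strahler_order(children, k, order)
    top = max(order[k] for k in kids)
    ties = sum(1 for k in kids if order[k] == top)
    order[node] = top + 1 if ties >= 2 else top   # H-S order rule
    return order

def prune_lowest(children, order):
    """One coarsening step: drop all reaches with the minimum order."""
    lowest = min(order.values())
    return {n: [k for k in kids if order[k] > lowest]
            for n, kids in children.items() if order[n] > lowest}

# Hypothetical network: outlet 'O' fed by a confluent branch 'A' and reach 'B'
children = {'O': ['A', 'B'], 'A': ['a1', 'a2'], 'B': []}
order = strahler_order(children, 'O')
print(order['O'], sorted(prune_lowest(children, order)))  # → 2 ['A', 'O']
```

    Iterating `prune_lowest` reproduces the pyramid: each level is a coarser network whose topology is inherited from the level below.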

  14. Multi-scale diffuse interface modeling of multi-component two-phase flow with partial miscibility

    NASA Astrophysics Data System (ADS)

    Kou, Jisheng; Sun, Shuyu

    2016-08-01

    In this paper, we introduce a diffuse interface model to simulate multi-component two-phase flow with partial miscibility based on a realistic equation of state (e.g. the Peng-Robinson equation of state). Because of partial miscibility, thermodynamic relations are used to model not only interfacial properties but also bulk properties, including density, composition, pressure, and realistic viscosity. To our knowledge, this is the first use of diffuse interface modeling based on an equation of state for multi-component two-phase flow with partial miscibility. In numerical simulation, the key issue is to resolve the high contrast of scales from the microscopic interface composition to the macroscale bulk fluid motion, since the interface has only a nanoscale thickness. To efficiently solve this challenging problem, we develop a multi-scale simulation method. At the microscopic scale, we deduce a reduced interfacial equation under reasonable assumptions, and then we propose a formulation of capillary pressure that is consistent with the macroscale flow equations. Moreover, we show that the Young-Laplace equation is an approximation of this capillarity formulation, and that this formulation is also consistent with the concept of the Tolman length, which is a correction to the Young-Laplace equation. At the macroscopic scale, the interfaces are treated as discontinuous surfaces separating two phases of fluids. Our approach differs from conventional sharp-interface two-phase flow models in that we use the capillary pressure directly instead of a combination of surface tension and the Young-Laplace equation, because capillarity can be calculated from our proposed capillarity formulation. A compatible condition is also derived for the pressure in the flow equations. Furthermore, based on the proposed capillarity formulation, we design an efficient numerical method for directly computing the capillary pressure between two fluids composed of multiple components. 
Finally, numerical tests are carried out to verify the effectiveness of the proposed multi-scale method.
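
    For reference, the classical Young-Laplace relation and its Tolman-corrected form, which the abstract identifies as limiting cases of the paper's more general capillarity formulation, are the standard expressions

```latex
p_c = \frac{2\sigma_\infty}{R},
\qquad
p_c \;\approx\; \frac{2\sigma_\infty}{R}\left(1 - \frac{2\delta}{R} + \cdots\right),
```

    where $p_c$ is the capillary pressure, $R$ the droplet radius, $\sigma_\infty$ the planar surface tension, and $\delta$ the Tolman length. The correction matters precisely in the nanoscale-interface regime the multi-scale method targets.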

  15. Multi-scale diffuse interface modeling of multi-component two-phase flow with partial miscibility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kou, Jisheng; Sun, Shuyu, E-mail: shuyu.sun@kaust.edu.sa; School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049

    2016-08-01

    In this paper, we introduce a diffuse interface model to simulate multi-component two-phase flow with partial miscibility based on a realistic equation of state (e.g. the Peng–Robinson equation of state). Because of partial miscibility, thermodynamic relations are used to model not only interfacial properties but also bulk properties, including density, composition, pressure, and realistic viscosity. To our knowledge, this is the first use of diffuse interface modeling based on an equation of state for multi-component two-phase flow with partial miscibility. In numerical simulation, the key issue is to resolve the high contrast of scales from the microscopic interface composition to the macroscale bulk fluid motion, since the interface has only a nanoscale thickness. To efficiently solve this challenging problem, we develop a multi-scale simulation method. At the microscopic scale, we deduce a reduced interfacial equation under reasonable assumptions, and then we propose a formulation of capillary pressure that is consistent with the macroscale flow equations. Moreover, we show that the Young–Laplace equation is an approximation of this capillarity formulation, and that this formulation is also consistent with the concept of the Tolman length, which is a correction to the Young–Laplace equation. At the macroscopic scale, the interfaces are treated as discontinuous surfaces separating two phases of fluids. Our approach differs from conventional sharp-interface two-phase flow models in that we use the capillary pressure directly instead of a combination of surface tension and the Young–Laplace equation, because capillarity can be calculated from our proposed capillarity formulation. A compatible condition is also derived for the pressure in the flow equations. Furthermore, based on the proposed capillarity formulation, we design an efficient numerical method for directly computing the capillary pressure between two fluids composed of multiple components. 
Finally, numerical tests are carried out to verify the effectiveness of the proposed multi-scale method.

  16. A full-spectrum analysis of high-speed train interior noise under multi-physical-field coupling excitations

    NASA Astrophysics Data System (ADS)

    Zheng, Xu; Hao, Zhiyong; Wang, Xu; Mao, Jie

    2016-06-01

    High-speed-railway-train interior noise at low, medium, and high frequencies can be simulated by finite element analysis (FEA) or boundary element analysis (BEA), hybrid finite element-statistical energy analysis (FEA-SEA), and statistical energy analysis (SEA), respectively. First, a new method named statistical acoustic energy flow (SAEF) is proposed, which can be applied to full-spectrum HST interior noise simulation (covering low, medium, and high frequencies) with only one model. In an SAEF model, the corresponding multi-physical-field coupling excitations are first fully considered and coupled to excite the interior noise. The interior noise attenuated by the sound insulation panels of the carriage is simulated by modeling the inflow of acoustic energy from the exterior excitations into the interior acoustic cavities. Rigid multi-body dynamics, fast multi-pole BEA, and large-eddy simulation with indirect boundary element analysis are employed to extract the multi-physical-field excitations, which include the wheel-rail interaction forces/secondary suspension forces, the wheel-rail rolling noise, and the aerodynamic noise, respectively. All the peak values and their frequency bands of the simulated acoustic excitations are validated against those from a noise source identification test. In addition, the measured equipment noise inside the equipment compartment is used as one of the excitation sources contributing to the interior noise. Second, a fully trimmed FE carriage model is constructed, and the simulated modal shapes and frequencies agree well with the measured ones, which validates the global FE carriage model as well as the local FE models of the aluminum alloy-trim composite panel. Thus, the sound transmission loss model of any composite panel has been indirectly validated. Finally, the SAEF model of the carriage is constructed based on the accurate FE model and driven by the multi-physical-field excitations. 
The results show that the trend of the simulated 1/3-octave-band sound pressure spectrum agrees well with that of the on-site measurement. The deviation between the simulated and measured overall sound pressure level (SPL) is 2.6 dB(A), well within the engineering tolerance limit, which validates the SAEF model for full-spectrum analysis of high-speed-train interior noise.

  17. Multi-scale image segmentation and numerical modeling in carbonate rocks

    NASA Astrophysics Data System (ADS)

    Alves, G. C.; Vanorio, T.

    2016-12-01

    Numerical methods based on computational simulations can be an important tool for estimating the physical properties of rocks. These can complement experimental results, especially when time constraints and sample availability are a problem. However, computational models created at different scales can yield results that conflict with laboratory measurements. This problem is exacerbated in carbonate rocks because of their heterogeneity at all scales. We developed a multi-scale approach that performs segmentation of the rock images and numerical modeling across several scales, accounting for those heterogeneities. As a first step, we measured the porosity and the elastic properties of a group of carbonate samples with varying micrite content. The samples were then imaged by Scanning Electron Microscope (SEM) as well as optical microscope at different magnifications. We applied three different image segmentation techniques to create numerical models from the SEM images and performed numerical simulations of the elastic wave equation. Our results show that a multi-scale approach can efficiently account for micro-porosity in tight, micrite-supported samples, yielding acoustic velocities comparable to those obtained experimentally. Nevertheless, in high-porosity samples characterized by a larger grain/micrite ratio, results show that SEM-scale images tend to overestimate velocities, mostly because of their inability to capture macro- and/or intragranular porosity. This suggests that, for high-porosity carbonate samples, optical microscope images are better suited for numerical simulations.

  18. A Unified Multi-scale Model for Cross-Scale Evaluation and Integration of Hydrological and Biogeochemical Processes

    NASA Astrophysics Data System (ADS)

    Liu, C.; Yang, X.; Bailey, V. L.; Bond-Lamberty, B. P.; Hinkle, C.

    2013-12-01

    Mathematical representations of hydrological and biogeochemical processes in soil, plant, aquatic, and atmospheric systems vary with scale. Process-rich models are typically used to describe hydrological and biogeochemical processes at the pore and small scales, while empirical, correlation-based approaches are often used at the watershed and regional scales. A major challenge for multi-scale modeling is that water flow, biogeochemical processes, and reactive transport are described by different physical laws and/or expressions at the different scales. For example, flow is governed by the Navier-Stokes equations at the pore scale in soils, by Darcy's law in soil columns and aquifers, and by the Navier-Stokes equations again in open water bodies (ponds, lakes, rivers) and the atmospheric surface layer. This research explores whether the physical laws at the different scales and in different physical domains can be combined into a unified multi-scale model (UMSM) to systematically investigate the cross-scale, cross-domain behavior of fundamental processes. This presentation will discuss our research on the concept, mathematical equations, and numerical execution of the UMSM. Three-dimensional, multi-scale hydrological processes at the Disney Wilderness Preservation (DWP) site, Florida, will be used as an example application of the UMSM. In this research, the UMSM was used to simulate hydrological processes in rooting zones at the pore and small scales, including water migration in soils under saturated and unsaturated conditions, root-induced hydrological redistribution, and the role of rooting-zone biogeochemical properties (e.g., root exudates and microbial mucilage) in water storage and wetting/draining. The small-scale simulation results were used to estimate effective water retention properties in soil columns, which were superimposed on the bulk soil water retention properties at the DWP site. 
The UMSM, parameterized from the smaller-scale simulations, was then used to simulate coupled flow and moisture migration in soils in the saturated and unsaturated zones, surface water-groundwater exchange, and surface water flow in streams and lakes at the DWP site under dynamic precipitation conditions. Laboratory measurements of soil hydrological and biogeochemical properties are used to parameterize the UMSM at the small scales, and field measurements are used to evaluate it.

  19. Simulation of orographic effects with a Quasi-3-D Multiscale Modeling Framework: Basic algorithm and preliminary results

    DOE PAGES

    Jung, Joon -Hee

    2016-10-11

    Here, the global atmospheric models based on the Multi-scale Modeling Framework (MMF) are able to explicitly resolve subgrid-scale processes by using embedded 2-D Cloud-Resolving Models (CRMs). Up to now, however, those models do not include the orographic effects on the CRM grid scale. This study shows that the effects of CRM grid-scale orography can be simulated reasonably well by the Quasi-3-D MMF (Q3D MMF), which has been developed as a second-generation MMF. In the Q3D framework, the surface topography can be included in the CRM component by using a block representation of the mountains, so that no smoothing of the topographic height is necessary. To demonstrate the performance of such a model, the orographic effects over a steep mountain are simulated in an idealized experimental setup with each of the Q3D MMF and the full 3-D CRM. The latter is used as a benchmark. Comparison of the results shows that the Q3D MMF is able to reproduce the horizontal distribution of orographic precipitation and the flow changes around mountains as simulated by the 3-D CRM, even though the embedded CRMs of the Q3D MMF recognize only some aspects of the complex 3-D topography. It is also shown that the use of 3-D CRMs in the Q3D framework, rather than 2-D CRMs, has positive impacts on the simulation of wind fields but does not substantially change the simulated precipitation.

  20. Simulation of orographic effects with a Quasi-3-D Multiscale Modeling Framework: Basic algorithm and preliminary results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, Joon -Hee

    Here, the global atmospheric models based on the Multi-scale Modeling Framework (MMF) are able to explicitly resolve subgrid-scale processes by using embedded 2-D Cloud-Resolving Models (CRMs). Up to now, however, those models do not include the orographic effects on the CRM grid scale. This study shows that the effects of CRM grid-scale orography can be simulated reasonably well by the Quasi-3-D MMF (Q3D MMF), which has been developed as a second-generation MMF. In the Q3D framework, the surface topography can be included in the CRM component by using a block representation of the mountains, so that no smoothing of the topographic height is necessary. To demonstrate the performance of such a model, the orographic effects over a steep mountain are simulated in an idealized experimental setup with each of the Q3D MMF and the full 3-D CRM. The latter is used as a benchmark. Comparison of the results shows that the Q3D MMF is able to reproduce the horizontal distribution of orographic precipitation and the flow changes around mountains as simulated by the 3-D CRM, even though the embedded CRMs of the Q3D MMF recognize only some aspects of the complex 3-D topography. It is also shown that the use of 3-D CRMs in the Q3D framework, rather than 2-D CRMs, has positive impacts on the simulation of wind fields but does not substantially change the simulated precipitation.

  1. Simulation of orographic effects with a Quasi-3-D Multiscale Modeling Framework: Basic algorithm and preliminary results

    NASA Astrophysics Data System (ADS)

    Jung, Joon-Hee

    2016-12-01

    The global atmospheric models based on the Multi-scale Modeling Framework (MMF) are able to explicitly resolve subgrid-scale processes by using embedded 2-D Cloud-Resolving Models (CRMs). Up to now, however, those models do not include the orographic effects on the CRM grid scale. This study shows that the effects of CRM grid-scale orography can be simulated reasonably well by the Quasi-3-D MMF (Q3D MMF), which has been developed as a second-generation MMF. In the Q3D framework, the surface topography can be included in the CRM component by using a block representation of the mountains, so that no smoothing of the topographic height is necessary. To demonstrate the performance of such a model, the orographic effects over a steep mountain are simulated in an idealized experimental setup with each of the Q3D MMF and the full 3-D CRM. The latter is used as a benchmark. Comparison of the results shows that the Q3D MMF is able to reproduce the horizontal distribution of orographic precipitation and the flow changes around mountains as simulated by the 3-D CRM, even though the embedded CRMs of the Q3D MMF recognize only some aspects of the complex 3-D topography. It is also shown that the use of 3-D CRMs in the Q3D framework, rather than 2-D CRMs, has positive impacts on the simulation of wind fields but does not substantially change the simulated precipitation.

  2. Strategies for efficient numerical implementation of hybrid multi-scale agent-based models to describe biological systems

    PubMed Central

    Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.

    2015-01-01

    Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228
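The scale-linking pattern reviewed above, a discrete agent layer exchanging information with a continuum field on each step, can be sketched as follows (the grid size, diffusion coefficient, and uptake rate are illustrative assumptions; this is not the authors' M. tuberculosis model):

```python
import numpy as np

def diffuse(field, D, dt, dx):
    """One explicit finite-difference diffusion step with no-flux edges."""
    p = np.pad(field, 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * field) / dx**2
    return field + D * dt * lap

def step(field, agents, D=1.0, dt=0.01, dx=1.0, uptake=0.5):
    """Link the scales: advance the continuum PDE, then let each
    discrete agent deplete the chemical in its own grid cell."""
    field = diffuse(field, D, dt, dx)
    for i, j in agents:
        field[i, j] = max(0.0, field[i, j] - uptake * dt)
    return field

field = np.full((20, 20), 1.0)          # continuum chemical concentration
agents = [(5, 5), (10, 10), (15, 5)]    # fixed agent positions (ABM layer)
for _ in range(100):
    field = step(field, agents)
```

Real hybrid ABMs let agents move, divide, and respond to the local field; the point here is only the alternating update that exchanges information across scales on every step.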

  3. PIONEER VENUS 2 MULTI-PROBE PARACHUTE TESTS IN THE VAB SHOWS OPEN PARACHUTE

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A parachute system, designed to carry an instrument-laden probe down through the dense atmosphere of torrid, cloud-shrouded Venus, was tested in KSC's Vehicle Assembly Building. The tests are in preparation for a Pioneer multi-probe mission to Venus scheduled for launch from KSC in 1978. Full-scale (12-foot diameter) parachutes with simulated pressure vessels weighing up to 45 pounds were dropped from heights of up to 450 feet to the floor of the VAB, where the impact was cushioned by a honeycomb cardboard impact arrestor. The VAB offers an ideal, wind-free testing facility at no additional construction cost and was used for similar tests of the parachute system for the twin Viking spacecraft scheduled for launch toward Mars in August.

  4. PIONEER VENUS 2 MULTI-PROBE PARACHUTE TESTS IN VAB WITH PARACHUTE HOISTED HIGH

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A parachute system, designed to carry an instrument-laden probe down through the dense atmosphere of torrid, cloud-shrouded Venus, was tested in KSC's Vehicle Assembly Building. The tests are in preparation for a Pioneer multi-probe mission to Venus scheduled for launch from KSC in 1978. Full-scale (12-foot diameter) parachutes with simulated pressure vessels weighing up to 45 pounds were dropped from heights of up to 450 feet to the floor of the VAB, where the impact was cushioned by a honeycomb cardboard impact arrestor. The VAB offers an ideal, wind-free testing facility at no additional construction cost and was used for similar tests of the parachute system for the twin Viking spacecraft scheduled for launch toward Mars in August.

  5. PIONEER VENUS 2 MULTI-PROBE PARACHUTE TESTS IN VAB PRIOR TO ATTACHING PRESSURE VESSEL

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A parachute system, designed to carry an instrument-laden probe down through the dense atmosphere of torrid, cloud-shrouded Venus, was tested in KSC's Vehicle Assembly Building. The tests are in preparation for a Pioneer multi-probe mission to Venus scheduled for launch from KSC in 1978. Full-scale (12-foot diameter) parachutes with simulated pressure vessels weighing up to 45 pounds were dropped from heights of up to 450 feet to the floor of the VAB, where the impact was cushioned by a honeycomb cardboard impact arrestor. The VAB offers an ideal, wind-free testing facility at no additional construction cost and was used for similar tests of the parachute system for the twin Viking spacecraft scheduled for launch toward Mars in August.

  6. PIONEER VENUS 2 MULTI-PROBE PARACHUTE TESTS IN THE VEHICLE ASSEMBLY BUILDING

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A parachute system, designed to carry an instrument-laden probe down through the dense atmosphere of torrid, cloud-shrouded Venus, was tested in KSC's Vehicle Assembly Building. The tests are in preparation for a Pioneer multi-probe mission to Venus scheduled for launch from KSC in 1978. Full-scale (12-foot diameter) parachutes with simulated pressure vessels weighing up to 45 pounds were dropped from heights of up to 450 feet to the floor of the VAB, where the impact was cushioned by a honeycomb cardboard impact arrestor. The VAB offers an ideal, wind-free testing facility at no additional construction cost and was used for similar tests of the parachute system for the twin Viking spacecraft scheduled for launch toward Mars in August.

  7. Simulating neural systems with Xyce.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiek, Richard Louis; Thornquist, Heidi K.; Mei, Ting

    2012-12-01

    Sandia's parallel circuit simulator, Xyce, can address large-scale neuron simulations in a new way, extending the range within which one can perform high-fidelity, multi-compartment neuron simulations. This report documents the implementation of neuron devices in Xyce and their use in the simulation and analysis of neuron systems.

  8. Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems

    NASA Astrophysics Data System (ADS)

    Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo

    With the currently available methods of computational fluid dynamics (CFD), the task of simulating full-scale circulating fluidized bed combustors is very challenging. In order to simulate the complex fluidization process, the calculation cells must be small and the calculation must be transient with a small time step. For full-scale systems, these requirements lead to very large meshes and very long calculation times, making simulation impractical. This study investigates the cell size and time step size required for accurate simulations, and the filtering effects caused by a coarser mesh and a longer time step. A modeling study of a full-scale CFB furnace is presented and the model results are compared with experimental data.
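The coupling between cell size and time-step size that the abstract describes follows from a convective CFL constraint: refining the mesh forces a proportionally smaller time step, so cost grows rapidly. A minimal sketch (the velocity and CFL number are illustrative assumptions):

```python
def cfl_time_step(dx, velocity, cfl=0.5):
    """Largest stable time step for explicit convection at cell size dx."""
    return cfl * dx / velocity

# Halving the cell size halves the admissible time step, so the cost of
# simulating a fixed physical time grows at least quadratically in 1/dx
# once the extra cells are counted as well.
coarse_dt = cfl_time_step(dx=0.10, velocity=5.0)   # 10 cm cells
fine_dt = cfl_time_step(dx=0.01, velocity=5.0)     # 1 cm cells
steps_coarse = 10.0 / coarse_dt                    # steps for 10 s of operation
steps_fine = 10.0 / fine_dt                        # 10x as many steps
```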

  9. Multi-Dimensional Quantum Tunneling and Transport Using the Density-Gradient Model

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A.; Yu, Zhi-Ping; Ancona, Mario; Rafferty, Conor; Saini, Subhash (Technical Monitor)

    1999-01-01

    We show that quantum effects are likely to significantly degrade the performance of MOSFETs (metal oxide semiconductor field effect transistor) as these devices are scaled below 100 nm channel length and 2 nm oxide thickness over the next decade. A general and computationally efficient electronic device model including quantum effects would allow us to monitor and mitigate these effects. Full quantum models are too expensive in multi-dimensions. Using a general but efficient PDE solver called PROPHET, we implemented the density-gradient (DG) quantum correction to the industry-dominant classical drift-diffusion (DD) model. The DG model efficiently includes quantum carrier profile smoothing and tunneling in multi-dimensions and for any electronic device structure. We show that the DG model reduces DD model error from as much as 50% down to a few percent in comparison to thin oxide MOS capacitance measurements. We also show the first DG simulations of gate oxide tunneling and transverse current flow in ultra-scaled MOSFETs. The advantages of rapid model implementation using the PDE solver approach will be demonstrated, as well as the applicability of the DG model to any electronic device structure.
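The density-gradient correction the abstract refers to augments classical drift-diffusion with a quantum potential proportional to the second derivative of the square root of the carrier density, divided by that square root. A 1-D numerical sketch (not the PROPHET implementation; the coefficient b_n, the density profile, and the boundary handling are placeholders):

```python
import numpy as np

def dg_correction(n, dx, b_n=1.0):
    """Quantum correction potential ~ 2 * b_n * lap(sqrt(n)) / sqrt(n)."""
    s = np.sqrt(n)
    lap = (np.roll(s, -1) - 2.0 * s + np.roll(s, 1)) / dx**2
    lap[0] = lap[-1] = 0.0          # crude boundary handling for the sketch
    return 2.0 * b_n * lap / s

x = np.linspace(-1.0, 1.0, 201)
n = np.exp(-50.0 * x**2) + 1e-6     # sharply peaked carrier density + floor
corr = dg_correction(n, dx=x[1] - x[0])
```

Where the density varies rapidly the correction is large; this is the mechanism that smooths carrier profiles near barriers in the corrected drift-diffusion equations.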

  10. Structural and Practical Identifiability Issues of Immuno-Epidemiological Vector-Host Models with Application to Rift Valley Fever.

    PubMed

    Tuncer, Necibe; Gulbudak, Hayriye; Cannataro, Vincent L; Martcheva, Maia

    2016-09-01

    In this article, we discuss the structural and practical identifiability of a nested immuno-epidemiological model of arbovirus diseases, where host-vector transmission rate, host recovery, and disease-induced death rates are governed by the within-host immune system. We incorporate the newest ideas and the most up-to-date features of numerical methods to fit multi-scale models to multi-scale data. For an immunological model, we use Rift Valley Fever Virus (RVFV) time-series data obtained from livestock under laboratory experiments, and for an epidemiological model we incorporate a human compartment to the nested model and use the number of human RVFV cases reported by the CDC during the 2006-2007 Kenya outbreak. We show that the immunological model is not structurally identifiable for the measurements of time-series viremia concentrations in the host. Thus, we study the non-dimensionalized and scaled versions of the immunological model and prove that both are structurally globally identifiable. After fixing estimated parameter values for the immunological model derived from the scaled model, we develop a numerical method to fit observable RVFV epidemiological data to the nested model for the remaining parameter values of the multi-scale system. For the given (CDC) data set, Monte Carlo simulations indicate that only three parameters of the epidemiological model are practically identifiable when the immune model parameters are fixed. Alternatively, we fit the multi-scale data to the multi-scale model simultaneously. Monte Carlo simulations for the simultaneous fitting suggest that the parameters of the immunological model and the parameters of the immuno-epidemiological model are practically identifiable. We suggest that analytic approaches for studying the structural identifiability of nested models are a necessity, so that identifiable parameter combinations can be derived to reparameterize the nested model to obtain an identifiable one. 
This is a crucial step in developing multi-scale models which explain multi-scale data.
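The Monte Carlo check for practical identifiability described above (simulate noisy data sets from known parameters, refit, and inspect the spread of the estimates) can be sketched on a deliberately simple model; the exponential-decay model, noise level, and sample sizes below are illustrative assumptions, not the RVFV model:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.1, 5.0, 25)
a_true, b_true = 4.0, 0.8
clean = a_true * np.exp(-b_true * t)        # viremia-like exponential decay

estimates = []
for _ in range(500):
    noisy = clean * np.exp(rng.normal(0.0, 0.05, t.size))  # 5% log-normal noise
    slope, intercept = np.polyfit(t, np.log(noisy), 1)     # log-linear refit
    estimates.append((np.exp(intercept), -slope))

a_hat, b_hat = np.array(estimates).mean(axis=0)
spread_b = np.array(estimates)[:, 1].std()
# A small spread relative to b_true suggests b is practically identifiable
# from this kind of data; a large spread would flag non-identifiability.
```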

  11. Application of large-scale, multi-resolution watershed modeling framework using the Hydrologic and Water Quality System (HAWQS)

    USDA-ARS?s Scientific Manuscript database

    In recent years, large-scale watershed modeling has been implemented broadly in the field of water resources planning and management. Complex hydrological, sediment, and nutrient processes can be simulated by sophisticated watershed simulation models for important issues such as water resources all...

  12. Predicting agricultural impacts of large-scale drought: 2012 and the case for better modeling

    USDA-ARS?s Scientific Manuscript database

    We present an example of a simulation-based forecast for the 2012 U.S. maize growing season produced as part of a high-resolution, multi-scale, predictive mechanistic modeling study designed for decision support, risk management, and counterfactual analysis. The simulations undertaken for this analy...

  13. Multi-Scale Models for the Scale Interaction of Organized Tropical Convection

    NASA Astrophysics Data System (ADS)

    Yang, Qiu

    Assessing the upscale impact of organized tropical convection from small spatial and temporal scales is a research imperative, not only for having a better understanding of the multi-scale structures of dynamical and convective fields in the tropics, but also for eventually helping in the design of new parameterization strategies to improve the next-generation global climate models. Here self-consistent multi-scale models are derived systematically by following the multi-scale asymptotic methods and used to describe the hierarchical structures of tropical atmospheric flows. The advantages of using these multi-scale models lie in isolating the essential components of multi-scale interaction and providing assessment of the upscale impact of the small-scale fluctuations onto the large-scale mean flow through eddy flux divergences of momentum and temperature in a transparent fashion. Specifically, this thesis includes three research projects about multi-scale interaction of organized tropical convection, involving tropical flows at different scaling regimes and utilizing different multi-scale models correspondingly. Inspired by the observed variability of tropical convection on multiple temporal scales, including daily and intraseasonal time scales, the goal of the first project is to assess the intraseasonal impact of the diurnal cycle on the planetary-scale circulation such as the Hadley cell. As an extension of the first project, the goal of the second project is to assess the intraseasonal impact of the diurnal cycle over the Maritime Continent on the Madden-Julian Oscillation. In the third project, the goals are to simulate the baroclinic aspects of the ITCZ breakdown and assess its upscale impact on the planetary-scale circulation over the eastern Pacific. These simple multi-scale models should be useful to understand the scale interaction of organized tropical convection and help improve the parameterization of unresolved processes in global climate models.
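In these multi-scale models, the upscale impact of small-scale fluctuations enters the large-scale mean equations through eddy flux divergences. A minimal sketch of that diagnostic on synthetic data (the fields, shapes, and noise levels are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
z = np.linspace(0.0, 1.0, 50)                             # height coordinate
u = 10.0 * z[None, :] + rng.normal(0.0, 1.0, (1000, 50))  # (time, height) wind
w = rng.normal(0.0, 0.5, (1000, 50))                      # vertical velocity

u_mean = u.mean(axis=0)                                   # large-scale mean flow
w_mean = w.mean(axis=0)
eddy_flux = ((u - u_mean) * (w - w_mean)).mean(axis=0)    # <u'w'>(z)
forcing = -np.gradient(eddy_flux, z)                      # -d<u'w'>/dz on the mean
```

With uncorrelated synthetic fluctuations the flux is near zero; in the multi-scale models it is precisely this term, evaluated from the resolved small-scale dynamics, that feeds back on the planetary-scale circulation.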

  14. [Learning together for working together: interprofessionalism in simulation training for collaborative skills development].

    PubMed

    Policard, Florence

    2014-06-01

    The use of simulation as an educational tool is becoming more widespread in healthcare. Such training brings doctors and nurses together, a rare opportunity in this sector. The present research focuses on the contribution of inter-professional training to the development of collaborative skills when managing an emergency situation in the context of anesthesia or intensive care. Based on direct observations of post-simulation debriefing sessions and interviews with learners in postgraduate or in-service training, in either single- or multi-professional groups, this study shows that these sessions, grounded in experiential learning and reflective practice, help to build a shared vision of the problem and of common operative patterns, supporting better communication and the "ability to work in a team".

  15. Reforming primary healthcare: from public policy to organizational change.

    PubMed

    Gilbert, Frédéric; Denis, Jean-Louis; Lamothe, Lise; Beaulieu, Marie-Dominique; D'amour, Danielle; Goudreau, Johanne

    2015-01-01

    Governments everywhere are implementing reform to improve primary care. However, the existence of a high degree of professional autonomy makes large-scale change difficult to achieve. The purpose of this paper is to elucidate the change dynamics and the involvement of professionals in a primary healthcare reform initiative carried out in the Canadian province of Quebec. An empirical approach was used to investigate change processes from the inception of a public policy to the execution of changes in professional practices. The data were analysed from a multi-level, combined contextualist-processual perspective. Results are based on a longitudinal multiple-case study of five family medicine groups, which was informed by over 100 interviews, questionnaires, and documentary analysis. The results illustrate the multiple processes observed with the introduction of planned large-scale change in primary care services. The analysis of change content revealed that similar post-change states concealed variations between groups in the scale of their respective changes. The analysis also demonstrated more precisely how change evolved through the introduction of "intermediate change" and how cycles of prescribed and emergent mechanisms distinctively drove change process and change content, from the emergence of the public policy to the change in primary care service delivery. This research was conducted among a limited number of early policy adopters. However, given the international interest in turning to the medical profession to improve primary care, the results offer avenues for both policy development and implementation. The findings offer practical insights for those studying and managing large-scale transformations. They provide a better understanding of how deliberate reforms coexist with professional autonomy through an intertwining of change content and processes. 
This research is one of few studies to examine a primary care reform from emergence to implementation using a longitudinal multi-level design.

  16. Are Physics-Based Simulators Ready for Prime Time? Comparisons of RSQSim with UCERF3 and Observations.

    NASA Astrophysics Data System (ADS)

    Milner, K. R.; Shaw, B. E.; Gilchrist, J. J.; Jordan, T. H.

    2017-12-01

    Probabilistic seismic hazard analysis (PSHA) is typically performed by combining an earthquake rupture forecast (ERF) with a set of empirical ground motion prediction equations (GMPEs). ERFs have typically relied on observed fault slip rates and scaling relationships to estimate the rate of large earthquakes on pre-defined fault segments, either ignoring or relying on expert opinion to set the rates of multi-fault or multi-segment ruptures. Version 3 of the Uniform California Earthquake Rupture Forecast (UCERF3) is a significant step forward, replacing expert opinion and fault segmentation with an inversion approach that matches observations better than prior models while incorporating multi-fault ruptures. UCERF3 is a statistical model, however, and doesn't incorporate the physics of earthquake nucleation, rupture propagation, and stress transfer. We examine the feasibility of replacing UCERF3, or components therein, with physics-based rupture simulators such as the Rate-State Earthquake Simulator (RSQSim), developed by Dieterich & Richards-Dinger (2010). RSQSim simulations on the UCERF3 fault system produce catalogs of seismicity that match long term rates on major faults, and produce remarkable agreement with UCERF3 when carried through to PSHA calculations. Averaged over a representative set of sites, the RSQSim-UCERF3 hazard-curve differences are comparable to the small differences between UCERF3 and its predecessor, UCERF2. The hazard-curve agreement between the empirical and physics-based models provides substantial support for the PSHA methodology. RSQSim catalogs include many complex multi-fault ruptures, which we compare with the UCERF3 rupture-plausibility metrics as well as recent observations. Complications in generating physically plausible kinematic descriptions of multi-fault ruptures have thus far prevented us from using UCERF3 in the CyberShake physics-based PSHA platform, which replaces GMPEs with deterministic ground motion simulations. 
RSQSim produces full slip/time histories that can be directly implemented as sources in CyberShake, without relying on the conditional hypocenter and slip distributions needed for the UCERF models. We also compare RSQSim with time-dependent PSHA calculations based on multi-fault renewal models.

  17. Engineering design of sub-micron topographies for simultaneously adherent and reflective metal-polymer interfaces

    NASA Technical Reports Server (NTRS)

    Brown, Christopher A.

    1993-01-01

    The approach of the project is to base the design of multi-function, reflective topographies on the theory that topographically dependent phenomena react with surfaces and interfaces at certain scales. The first phase of the project emphasizes the development of methods for understanding the sizes of topographic features which influence reflectivity. Subsequent phases, if necessary, will address the scales of interaction for adhesion and manufacturing processes. A simulation of the interaction of electromagnetic radiation, or light, with a reflective surface is performed using specialized software. Reflectivity of the surface as a function of scale is evaluated and the results from the simulation are compared with reflectivity measurements made on multi-function, reflective surfaces.

  18. 78 FR 71785 - Passenger Train Emergency Systems II

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-29

    ... in debriefing and critique sessions following emergency situations and full-scale simulations. DATES... Session Following Emergency Situations and Full-Scale Simulations V. Section-by-Section Analysis A... and simulations. As part of these amendments, FRA is incorporating by reference three American Public...

  19. DSC: software tool for simulation-based design of control strategies applied to wastewater treatment plants.

    PubMed

    Ruano, M V; Ribes, J; Seco, A; Ferrer, J

    2011-01-01

    This paper presents a computer tool called DSC (Simulation based Controllers Design) that enables an easy design of control systems and strategies applied to wastewater treatment plants. Although the control systems are developed and evaluated by simulation, this tool aims to facilitate the direct implementation of the designed control system on the PC of the full-scale WWTP (wastewater treatment plant). The designed control system can be programmed in a dedicated control application and can be connected to either the simulation software or the SCADA of the plant. To this end, the developed DSC incorporates an OPC server (OLE for Process Control), which facilitates an open-standard communication protocol for different industrial process applications. The potential capabilities of the DSC tool are illustrated through the example of a full-scale application. An aeration control system applied to a nutrient-removing WWTP was designed, tuned, and evaluated with the DSC tool before its implementation in the full-scale plant. The control parameters obtained by simulation were suitable for the full-scale plant with only a few modifications to improve the control performance. With the DSC tool, control system performance can be easily evaluated by simulation. Once developed and tuned by simulation, the control systems can be directly applied to the full-scale WWTP.
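The workflow the abstract describes, tuning a controller against a plant model before deployment, can be illustrated with a deliberately simple sketch (the one-state dissolved-oxygen model, gains, and rates below are assumptions for illustration, not part of the DSC tool):

```python
def simulate(kp, ki, setpoint=2.0, dt=1.0, steps=600):
    """PI aeration control of a one-state dissolved-oxygen (DO) model:
    aeration transfers oxygen toward saturation; a constant load consumes it."""
    do, integral, load, do_sat = 0.0, 0.0, 0.05, 10.0
    for _ in range(steps):
        error = setpoint - do
        integral += error * dt
        airflow = max(0.0, kp * error + ki * integral)  # blower cannot reverse
        do += (0.1 * airflow * (do_sat - do) - load) * dt
    return do

# "Tune by simulation first": verify the closed loop settles at the setpoint
# before considering deployment on plant hardware.
final_do = simulate(kp=0.5, ki=0.01)
```

The integral term removes the steady-state offset that a proportional-only controller would leave against the constant oxygen load; that is the kind of behavior one wants confirmed in simulation before touching the full-scale plant.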

  20. 'In situ simulation' versus 'off site simulation' in obstetric emergencies and their effect on knowledge, safety attitudes, team performance, stress, and motivation: study protocol for a randomized controlled trial

    PubMed Central

    2013-01-01

    Background Unexpected obstetric emergencies threaten the safety of pregnant women. As emergencies are rare, they are difficult to learn. Therefore, simulation-based medical education (SBME) seems relevant. In non-systematic reviews on SBME, medical simulation has been suggested to be associated with improved learner outcomes. However, many questions on how SBME can be optimized remain unanswered. One unresolved issue is how 'in situ simulation' (ISS) versus 'off site simulation' (OSS) impact learning. ISS means simulation-based training in the actual patient care unit (in other words, the labor room and operating room). OSS means training in facilities away from the actual patient care unit, either at a simulation centre or in hospital rooms that have been set up for this purpose. Methods and design The objective of this randomized trial is to study the effect of ISS versus OSS on individual learning outcome, safety attitude, motivation, stress, and team performance amongst multi-professional obstetric-anesthesia teams. The trial is a single-centre randomized superiority trial including 100 participants. The inclusion criteria were health-care professionals employed at the department of obstetrics or anesthesia at Rigshospitalet, Copenhagen, who were working on shifts and gave written informed consent. Exclusion criteria were managers with staff responsibilities, and staff who were actively taking part in preparation of the trial. The same obstetric multi-professional training was conducted in the two simulation settings. The experimental group was exposed to training in the ISS setting, and the control group in the OSS setting. The primary outcome is the individual score on a knowledge test. 
Exploratory outcomes are individual scores on a safety attitudes questionnaire, a stress inventory, salivary cortisol levels, an intrinsic motivation inventory, results from a questionnaire evaluating perceptions of the simulation and suggested changes needed in the organization, a team-based score on video-assessed team performance and on selected clinical performance. Discussion The perspective is to provide new knowledge on contextual effects of different simulation settings. Trial registration ClinicalTrials.gov NCT01792674. PMID:23870501

  1. 'In situ simulation' versus 'off site simulation' in obstetric emergencies and their effect on knowledge, safety attitudes, team performance, stress, and motivation: study protocol for a randomized controlled trial.

    PubMed

    Sørensen, Jette Led; Van der Vleuten, Cees; Lindschou, Jane; Gluud, Christian; Østergaard, Doris; LeBlanc, Vicki; Johansen, Marianne; Ekelund, Kim; Albrechtsen, Charlotte Krebs; Pedersen, Berit Woetman; Kjærgaard, Hanne; Weikop, Pia; Ottesen, Bent

    2013-07-17

    Unexpected obstetric emergencies threaten the safety of pregnant women. As emergencies are rare, they are difficult to learn. Therefore, simulation-based medical education (SBME) seems relevant. In non-systematic reviews on SBME, medical simulation has been suggested to be associated with improved learner outcomes. However, many questions on how SBME can be optimized remain unanswered. One unresolved issue is how 'in situ simulation' (ISS) versus 'off site simulation' (OSS) impact learning. ISS means simulation-based training in the actual patient care unit (in other words, the labor room and operating room). OSS means training in facilities away from the actual patient care unit, either at a simulation centre or in hospital rooms that have been set up for this purpose. The objective of this randomized trial is to study the effect of ISS versus OSS on individual learning outcome, safety attitude, motivation, stress, and team performance amongst multi-professional obstetric-anesthesia teams.The trial is a single-centre randomized superiority trial including 100 participants. The inclusion criteria were health-care professionals employed at the department of obstetrics or anesthesia at Rigshospitalet, Copenhagen, who were working on shifts and gave written informed consent. Exclusion criteria were managers with staff responsibilities, and staff who were actively taking part in preparation of the trial. The same obstetric multi-professional training was conducted in the two simulation settings. The experimental group was exposed to training in the ISS setting, and the control group in the OSS setting. The primary outcome is the individual score on a knowledge test. 
Exploratory outcomes are individual scores on a safety attitudes questionnaire, a stress inventory, salivary cortisol levels, an intrinsic motivation inventory, results from a questionnaire evaluating perceptions of the simulation and suggested changes needed in the organization, a team-based score on video-assessed team performance and on selected clinical performance. The perspective is to provide new knowledge on contextual effects of different simulation settings. ClinicalTrials.gov NCT01792674.

  2. Dynamics analysis of the fast-slow hydro-turbine governing system with different time-scale coupling

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Chen, Diyi; Wu, Changzhi; Wang, Xiangyu

    2018-01-01

    Multi-time-scale modeling of the hydro-turbine governing system is crucial for precise modeling of a hydropower plant and provides support for the stability analysis of the system. Considering the inertia and response time of the hydraulic servo system, the hydro-turbine governing system is transformed into a fast-slow hydro-turbine governing system. The effects of the time scale on the dynamical behavior of the system are analyzed, and the fast-slow dynamical behaviors of the system are investigated at different time scales. Furthermore, a theoretical analysis of the stable regions is presented. The influences of the time scale on the stable region are analyzed by simulation, and the simulation results confirm the theoretical analysis. More importantly, the methods and results of this paper provide a perspective on multi-time-scale modeling of hydro-turbine governing systems and contribute to the optimization analysis and control of the system.
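The two-time-scale structure described above can be illustrated with a generic fast-slow system (the equations and epsilon below are illustrative, not the hydro-turbine governing model): when epsilon is small, the fast variable snaps onto a slow manifold and the trajectory alternates slow drifts with rapid jumps.

```python
import numpy as np

def integrate(eps, dt=1e-4, steps=200_000):
    """Euler integration of a fast-slow system in Lienard form:
    eps * x' = y - x**3 + x   (fast),    y' = -x   (slow)."""
    x, y = 0.5, 0.0
    xs = np.empty(steps)
    for n in range(steps):
        x += dt * (y - x**3 + x) / eps
        y += dt * (-x)
        xs[n] = x
    return xs

xs = integrate(eps=0.05)   # small eps => pronounced time-scale separation
```

The trajectory drifts along the stable branches of y = x**3 - x and jumps rapidly between them at the folds, producing relaxation oscillations; shrinking eps sharpens the jumps, which is the kind of time-scale effect a stability analysis of a fast-slow system has to handle.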

  3. Multi-fluid Dynamics for Supersonic Jet-and-Crossflows and Liquid Plug Rupture

    NASA Astrophysics Data System (ADS)

    Hassan, Ezeldin A.

    Multi-fluid dynamics simulations require appropriate numerical treatments based on the main flow characteristics, such as flow speed, turbulence, thermodynamic state, and time and length scales. In this thesis, two distinct problems are investigated: supersonic jet and crossflow interactions; and liquid plug propagation and rupture in an airway. Gaseous non-reactive ethylene jet and air crossflow simulation represents essential physics for fuel injection in SCRAMJET engines. The regime is highly unsteady, involving shocks, turbulent mixing, and large-scale vortical structures. An eddy-viscosity-based multi-scale turbulence model is proposed to resolve turbulent structures consistent with grid resolution and turbulence length scales. Predictions of the time-averaged fuel concentration from the multi-scale model are improved over Reynolds-averaged Navier-Stokes models originally derived for stationary flow. The improvement from the multi-scale model alone is, however, limited in cases where the vortical structures are small and scattered, which would require prohibitively expensive grids to resolve the flow field accurately. Statistical information related to turbulent fluctuations is utilized to estimate an effective turbulent Schmidt number, which is shown to vary strongly in space. Accordingly, an adaptive turbulent Schmidt number approach is proposed, allowing the resolved field to adaptively influence the value of the turbulent Schmidt number in the multi-scale turbulence model. The proposed model estimates a time-averaged turbulent Schmidt number adapted to the computed flowfield, instead of the constant value common to eddy-viscosity-based Navier-Stokes models. This approach is assessed using a grid-refinement study for the normal injection case, and tested with 30 degree injection, showing improved results over the constant turbulent Schmidt number model in both the mean and variance of fuel concentration predictions.
For the incompressible liquid plug propagation and rupture study, numerical simulations are conducted using an Eulerian-Lagrangian approach with a continuous-interface method. A reconstruction scheme is developed to allow topological changes during plug rupture by altering the connectivity information of the interface mesh. Rupture time is shown to be delayed as the initial precursor film thickness increases. During the plug rupture process, a sudden increase of mechanical stresses on the tube wall is recorded, which can cause tissue damage.

  4. Multi-scale and multi-physics simulations using the multi-fluid plasma model

    DTIC Science & Technology

    2017-04-25

    This record consists of extracted presentation slides; the recoverable content follows. Test problem: 512 second-order elements with Bz = 1.0, Te = Ti = 0.01, ui = ue = 0, and ne = ni = 1.0 + e^(-10(x-6)^2) (cf. Baboolal, Math. and Comp. Sim. 55). Summary: the blended finite element method (BFEM) is presented, pairing a discontinuous Galerkin (DG) spatial discretization with explicit Runge-Kutta time integration for the ion and neutral equations (i+, n) with a continuous Galerkin (CG) spatial discretization with implicit Crank-Nicolson integration for the electron and field equations (e-, fields). DG captures shocks and discontinuities; CG is efficient and robust for ...

  5. A large-scale forest landscape model incorporating multi-scale processes and utilizing forest inventory data

    Treesearch

    Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian Yang

    2013-01-01

    Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10^7 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...

  6. DIFFUSION IN THE VICINITY OF STANDARD-DESIGN NUCLEAR POWER PLANTS-I. WIND-TUNNEL EVALUATION OF DIFFUSIVE CHARACTERISTICS OF A SIMULATED SUBURBAN NEUTRAL ATMOSPHERIC BOUNDARY LAYER

    EPA Science Inventory

    A large meteorological wind tunnel was used to simulate a suburban atmospheric boundary layer. The model-prototype scale was 1:300 and the roughness length was approximately 1.0 m full scale. The model boundary layer simulated full scale dispersion from ground-level and elevated ...

  7. "See One, Sim One, Do One" - A National Pre-Internship Boot-Camp to Ensure a Safer "Student to Doctor" Transition

    PubMed Central

    Sagi, Doron; Berkenstadt, Haim; Ziv, Amitai

    2016-01-01

    Introduction: The transition from being a medical student to a fully functioning intern is accompanied by considerable stress and a sense of unpreparedness. Simulation-based workshops have previously been reported to be effective in preparing interns and residents for the skills their daily work requires, but only a few programs have been implemented on a large scale. Methods: A nationally endorsed and mandated pre-internship simulation-based workshop is reported. We hypothesized that this intervention would have a meaningful and sustained impact on trainees' perception of their readiness for internship with regard to patient safety and quality-of-care skills. The main outcome measure was the workshop's contribution to professional training in general, and to critical skills and error prevention in particular, as perceived by participants. Results: Between 2004 and 2011, 85 workshops were conducted for a total of 4,172 trainees. Eight hundred and six of the 2,700 participants approached by e-mail returned feedback evaluation forms, which were analyzed. Eighty-five percent of trainees perceived the workshop as an essential component of their professional training, and 87% agreed it should be mandatory. These ratings peaked during internship and were generally sustained 3 years after the workshop. The contribution to emergency care skills was ranked especially highly (83%). Conclusion: Implementation of a mandatory, simulation-based, pre-internship workshop on a national scale had a significant perceived impact on interns and residents. The sustained impact should encourage adoption of this approach to facilitate the student-to-doctor transition. PMID:26934593

  8. Experimental and analytical studies of advanced air cushion landing systems

    NASA Technical Reports Server (NTRS)

    Lee, E. G. S.; Boghani, A. B.; Captain, K. M.; Rutishauser, H. J.; Farley, H. L.; Fish, R. B.; Jeffcoat, R. L.

    1981-01-01

    Several concepts are developed for air cushion landing systems (ACLS) which have the potential for improving performance characteristics (roll stiffness, heave damping, and trunk flutter) and reducing fabrication cost and complexity. After an initial screening, the following five concepts were evaluated in detail: damped trunk, filled trunk, compartmented trunk, segmented trunk, and roll feedback control. The evaluation was based on tests performed on scale models. An ACLS dynamic simulation developed earlier is updated so that it can be used to predict the performance of full-scale ACLS incorporating these refinements; the simulation was validated through scale-model tests. A full-scale ACLS based on the segmented trunk concept was fabricated and installed on the NASA ACLS test vehicle, where it is used to support advanced system development. A geometrically scaled model (one-third full scale) of the NASA test vehicle was fabricated and tested. This model, evaluated by means of a series of static and dynamic tests, is used to investigate scaling relationships between reduced- and full-scale models. The analytical model developed earlier is applied to simulate both the one-third-scale and the full-scale response.

  9. Probabilistic Multi-Scale, Multi-Level, Multi-Disciplinary Analysis and Optimization of Engine Structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2000-01-01

    Aircraft engines are assemblies of dynamically interacting components. Engine updates to keep present aircraft flying safely, and engines for new aircraft, are progressively required to operate under more demanding technological and environmental requirements. Designs that effectively meet those requirements are necessarily collections of multi-scale, multi-level, multi-disciplinary analysis and optimization methods, and probabilistic methods are necessary to quantify the respective uncertainties. These types of methods are the only ones that can formally evaluate advanced composite designs which satisfy those progressively demanding requirements while assuring minimum cost, maximum reliability and maximum durability. Recent research activities at NASA Glenn Research Center have focused on developing multi-scale, multi-level, multidisciplinary analysis and optimization methods. Multi-scale refers to formal methods which describe complex material behavior, metal or composite; multi-level refers to integration of participating disciplines to describe a structural response at the scale of interest; multidisciplinary refers to an open-ended framework for the various existing and yet-to-be-developed discipline constructs required to formally predict/describe a structural response in engine operating environments. For example, these include but are not limited to: multi-factor models for material behavior, multi-scale composite mechanics, general purpose structural analysis, progressive structural fracture for evaluating durability and integrity, noise and acoustic fatigue, emission requirements, hot fluid mechanics, heat transfer and probabilistic simulations. Many of these, as well as others, are encompassed in an integrated computer code identified as the Engine Structures Technology Benefits Estimator (EST/BEST) or Multi-faceted/Engine Structures Optimization (MP/ESTOP). 
The discipline modules integrated in MP/ESTOP include: engine cycle (thermodynamics), engine weights, internal fluid mechanics, cost, mission and coupled structural/thermal, various composite property simulators, and probabilistic methods to evaluate uncertainty effects (scatter ranges) in all the design parameters. The objective of the paper is to briefly describe a multi-faceted design analysis and optimization capability for coupled multi-discipline engine structures optimization. Results are presented for engine and aircraft type metrics to illustrate the versatility of that capability. Results are also presented for reliability, noise and fatigue to illustrate its inclusiveness. For example, replacing metal rotors with composites reduces engine weight by 20 percent, reduces noise by 15 percent, and improves reliability by an order of magnitude. Composite designs exist that increase fatigue life by at least two orders of magnitude compared to state-of-the-art metals.

  10. Integrating macro and micro scale approaches in the agent-based modeling of residential dynamics

    NASA Astrophysics Data System (ADS)

    Saeedi, Sara

    2018-06-01

    With the advancement of computational modeling and simulation (M&S) methods as well as data collection technologies, urban dynamics modeling has improved substantially over the last several decades. Complex urban dynamics processes are most effectively modeled not at the macro-scale, but following a bottom-up approach, by simulating the decisions of individual entities, or residents. Agent-based modeling (ABM) provides the key to a dynamic M&S framework that is able to integrate socioeconomic and environmental models, and to operate at both micro and macro geographical scales. In this study, a multi-agent system is proposed to simulate residential dynamics by considering spatiotemporal land use changes. In the proposed ABM, macro-scale land use change prediction is modeled by an Artificial Neural Network (ANN) and deployed as the agent environment, while micro-scale residential dynamics behaviors are autonomously implemented by household agents. These two levels of simulation interact and jointly drive the urbanization process in an urban area of Tehran city in Iran. The model simulates the behavior of individual households in finding ideal locations to dwell. The household agents are divided into three main groups based on their income rank, and they are further classified into different categories based on a number of attributes. These attributes determine the households' preferences for finding new dwellings and change with time. The ABM environment is represented by a land-use map in which the properties of the land parcels change dynamically over the simulation time. The outputs of this model are a set of maps showing the pattern of different groups of households in the city. These patterns can be used by city planners to find optimum locations for building new residential units or adding new services to the city. 
The simulation results show that combining macro- and micro-level simulation can fully exploit the potential of the ABM to reveal the driving mechanism of urbanization and provide decision-making support for urban management.

  11. Science based integrated approach to advanced nuclear fuel development - integrated multi-scale multi-physics hierarchical modeling and simulation framework Part III: cladding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tome, Carlos N; Caro, J A; Lebensohn, R A

    2010-01-01

    Advancing the performance of Light Water Reactors, Advanced Nuclear Fuel Cycles, and Advanced Reactors, such as the Next Generation Nuclear Power Plants, requires enhancing our fundamental understanding of fuel and materials behavior under irradiation. The capability to accurately model the nuclear fuel systems to develop predictive tools is critical. Not only are fabrication and performance models needed to understand specific aspects of the nuclear fuel, fully coupled fuel simulation codes are required to achieve licensing of specific nuclear fuel designs for operation. The backbone of these codes, models, and simulations is a fundamental understanding and predictive capability for simulating the phase and microstructural behavior of the nuclear fuel system materials and matrices. In this paper we review the current status of the advanced modeling and simulation of nuclear reactor cladding, with emphasis on what is available and what is to be developed in each scale of the project, how we propose to pass information from one scale to the next, and what experimental information is required for benchmarking and advancing the modeling at each scale level.

  12. Urban Flow and Pollutant Dispersion Simulation with Multi-scale coupling of Meteorological Model with Computational Fluid Dynamic Analysis

    NASA Astrophysics Data System (ADS)

    Liu, Yushi; Poh, Hee Joo

    2014-11-01

    Computational Fluid Dynamics (CFD) analysis has become increasingly important in modern urban planning in order to create highly livable cities. This paper presents a multi-scale modeling methodology which couples the Weather Research and Forecasting (WRF) model with the open-source CFD simulation tool OpenFOAM. This coupling enables the simulation of wind flow and pollutant dispersion in urban built-up areas with a high-resolution mesh. In this methodology, the meso-scale model WRF provides the boundary conditions for the micro-scale CFD model OpenFOAM. The advantage is that realistic weather conditions are taken into account in the CFD simulation, and the complexity of the building layout can be handled with ease by the meshing utilities of OpenFOAM. The result is validated against the Joint Urban 2003 Tracer Field Tests in Oklahoma City, and there is reasonably good agreement between the CFD simulation and field observations. The coupling of WRF and OpenFOAM provides urban planners with a reliable environmental modeling tool for actual urban built-up areas, and it can be further extended to consider future weather conditions in scenario studies of climate change impact.
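One minimal way to hand a meso-scale profile to a micro-scale inlet is to fit a log-law to the coarse WRF wind speeds and evaluate it on the fine CFD mesh. The sketch below uses synthetic WRF values and hypothetical inlet heights; the actual WRF-OpenFOAM coupling passes full time-varying boundary fields rather than a fitted profile.

```python
import numpy as np

kappa = 0.41  # von Karman constant

# Coarse "WRF" wind speeds at a few vertical levels: synthetic values
# following a log profile with u* = 0.5 m/s, z0 = 1 m, plus small noise.
z_wrf = np.array([40.0, 80.0, 160.0, 320.0])
u_wrf = 0.5 / kappa * np.log(z_wrf / 1.0) + np.array([0.05, -0.03, 0.02, -0.01])

# Least-squares fit of u = (u*/kappa) ln(z/z0), i.e. u = a ln z + b.
a, b = np.polyfit(np.log(z_wrf), u_wrf, 1)
u_star = a * kappa          # recovered friction velocity
z0 = np.exp(-b / a)         # recovered roughness length

# Evaluate on a fine CFD inlet mesh (hypothetical patch heights).
z_cfd = np.linspace(2.0, 300.0, 150)
u_inlet = u_star / kappa * np.log(z_cfd / z0)
print(round(u_star, 2), round(z0, 2))
```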

  13. A review on design of experiments and surrogate models in aircraft real-time and many-query aerodynamic analyses

    NASA Astrophysics Data System (ADS)

    Yondo, Raul; Andrés, Esther; Valero, Eusebio

    2018-01-01

    Full-scale aerodynamic wind tunnel testing, numerical simulation of high-dimensional (full-order) aerodynamic models, and flight testing are some of the fundamental but complex steps in the various design phases of recent civil transport aircraft. Current aircraft aerodynamic designs have increased in complexity (multidisciplinary, multi-objective or multi-fidelity) and need to address the challenges posed by the nonlinearity of the objective functions and constraints, uncertainty quantification in aerodynamic problems, and restrained computational budgets. With the aim of reducing the computational burden and generating low-cost but accurate models that mimic full-order models at different values of the design variables, recent progress has seen the introduction, in real-time and many-query analyses, of surrogate-based approaches as rapid and cheaper-to-simulate models. In this paper, a comprehensive state-of-the-art survey of common surrogate modeling techniques and surrogate-based optimization methods is given, with an emphasis on model selection and validation, dimensionality reduction, sensitivity analyses, constraint handling, and infill and stopping criteria. Benefits, drawbacks, and comparative discussions in applying those methods are described. Furthermore, the paper familiarizes readers with surrogate models that have been successfully applied to the general field of fluid dynamics, but not yet in the aerospace industry. Additionally, the review revisits the most popular sampling strategies used in conducting physical and simulation-based experiments in aircraft aerodynamic design. Attractive or smart designs infrequently used in the field, and discussions of advanced sampling methodologies, are presented to give a glance at the various efficient possibilities to sample the parameter space a priori. 
Closing remarks focus on future perspectives, challenges, and shortcomings associated with the use of surrogate models by aircraft industrial aerodynamicists, despite their increased interest among the research communities.

  14. Optimization and parallelization of the thermal–hydraulic subchannel code CTF for high-fidelity multi-physics applications

    DOE PAGES

    Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.

    2014-11-23

    This study describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high-fidelity multi-physics simulation tools for nuclear energy design and analysis.

  15. Development of a parallel FE simulator for modeling the whole trans-scale failure process of rock from meso- to engineering-scale

    NASA Astrophysics Data System (ADS)

    Li, Gen; Tang, Chun-An; Liang, Zheng-Zhao

    2017-01-01

    Multi-scale high-resolution modeling of the rock failure process is a powerful means in modern rock mechanics studies to reveal complex failure mechanisms and to evaluate engineering risks. However, multi-scale continuous modeling of rock, from deformation and damage to failure, places high demands on the design, implementation scheme and computational capacity of the numerical software system. This study is aimed at developing a parallel finite element procedure, a parallel rock failure process analysis (RFPA) simulator, that is capable of modeling the whole trans-scale failure process of rock. Based on the statistical meso-damage mechanical method, the RFPA simulator is able to construct heterogeneous rock models with multiple mechanical properties and to represent the trans-scale propagation of cracks, in which the stress and strain fields are solved for the damage evolution analysis of the representative volume element by the parallel finite element method (FEM) solver. This paper describes the theoretical basis of the approach and provides the details of the parallel implementation on a Windows-Linux interactive platform. A numerical model is built to test the parallel performance of the FEM solver. Numerical simulations are then carried out on a laboratory-scale uniaxial compression test, a field-scale net fracture spacing example, and an engineering-scale rock slope example, respectively. The simulation results indicate that relatively high speedup and computational efficiency can be achieved by the parallel FEM solver with a reasonable boot process. In the laboratory-scale simulation, well-known physical phenomena, such as the macroscopic fracture pattern and stress-strain responses, can be reproduced. In the field-scale simulation, the formation process of net fracture spacing from initiation, propagation to saturation can be revealed completely. In the engineering-scale simulation, the whole progressive failure process of the rock slope can be well modeled. 
It is shown that the parallel FE simulator developed in this study is an efficient tool for modeling the whole trans-scale failure process of rock from meso- to engineering-scale.

  16. Healthcare Students' Perceptions of a Simulated Interprofessional Consultation in an Outpatient Clinic

    ERIC Educational Resources Information Center

    Pitout, H.; Human, A.; Treadwell, I.; Sobantu, N. A.

    2016-01-01

    Newly graduated healthcare workers should appreciate the importance of teamwork and each profession's unique role in a multi-disciplinary team. At Medunsa, an institution for higher education of healthcare professionals, each profession's teaching occurs independently. This study explores the perceptions of healthcare students and their…

  17. Full-Scale Accelerated Testing of Multi-axial Geogrid Stabilized Flexible Pavements

    DTIC Science & Technology

    2017-06-01

    costs and reduced budgets, transportation officials are often tasked with applying innovative solutions to pavement design and construction projects... pavement designers. 1.2 Objective: The objective of this effort was to construct and traffic full-scale flexible pavement sections to provide...Development Center (ERDC) constructed the full-scale test section as designed by Tensar under shelter in its Hangar 2 Pavement Test Facility. During

  18. A fast method for finding bound systems in numerical simulations: Results from the formation of asteroid binaries

    NASA Astrophysics Data System (ADS)

    Leinhardt, Zoë M.; Richardson, Derek C.

    2005-08-01

    We present a new code (companion) that identifies bound systems of particles in O(N log N) time. Simple binaries consisting of pairs of mutually bound particles and complex hierarchies consisting of collections of mutually bound particles are identifiable with this code. In comparison, brute-force binary search methods scale as O(N^2), while full hierarchy searches can be substantially more expensive, making analysis highly inefficient for multiple data sets at large N. A simple test case is provided to illustrate the method. Timing tests demonstrating O(N log N) scaling with the new code on real data are presented. We apply our method to data from asteroid satellite simulations [Durda et al., 2004. Icarus 167, 382-396; Erratum: Icarus 170, 242; reprinted article: Icarus 170, 243-257] and note interesting multi-particle configurations. The code is available at http://www.astro.umd.edu/zoe/companion/ and is distributed under the terms and conditions of the GNU Public License.
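The binding test applied to candidate pairs can be sketched directly from the two-body orbital energy; the brute-force version below is O(N^2) for clarity, whereas a code like companion prunes candidates with a tree to reach O(N log N). The masses and states are made-up test values, not data from the paper.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def bound_pairs(m, r, v):
    """Return index pairs whose two-body orbital energy is negative (bound).

    Brute-force O(N^2) illustration of the binding criterion only; a tree-based
    candidate search is what brings the cost down to O(N log N).
    """
    pairs = []
    n = len(m)
    for i in range(n):
        for j in range(i + 1, n):
            mu = m[i] * m[j] / (m[i] + m[j])       # reduced mass
            v_rel = np.linalg.norm(v[i] - v[j])
            d = np.linalg.norm(r[i] - r[j])
            energy = 0.5 * mu * v_rel**2 - G * m[i] * m[j] / d
            if energy < 0.0:
                pairs.append((i, j))
    return pairs

# Two bodies on a mutual circular orbit (bound) plus a fast escaper (unbound).
m = np.array([1.0e15, 1.0e15, 1.0e15])
r = np.array([[0.0, 0.0, 0.0], [1000.0, 0.0, 0.0], [50000.0, 0.0, 0.0]])
v_circ = np.sqrt(G * 2e15 / 1000.0)  # relative circular speed at 1 km separation
v = np.array([[0.0, 0.0, 0.0], [0.0, v_circ, 0.0], [0.0, 0.0, 1000.0]])
print(bound_pairs(m, r, v))
```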

  19. Using Reconstructed POD Modes as Turbulent Inflow for LES Wind Turbine Simulations

    NASA Astrophysics Data System (ADS)

    Nielson, Jordan; Bhaganagar, Kiran; Juttijudata, Vejapong; Sirisup, Sirod

    2016-11-01

    Currently, in order to capture realistic atmospheric turbulence effects, wind turbine LES simulations require computationally expensive precursor simulations. At times, the precursor simulation is more computationally expensive than the wind turbine simulation itself. The precursor simulations are important because they capture turbulence in the atmosphere, and turbulence impacts the power production estimate. On the other hand, POD analysis has been shown to be capable of capturing turbulent structures. The current study was performed to determine the plausibility of using lower-dimension models from POD analysis of LES simulations as turbulent inflow to wind turbine LES simulations. The study will aid the wind energy community by lowering the computational cost of full-scale wind turbine LES simulations, while maintaining a high level of turbulent information and making it possible to quickly apply the turbulent inflow to multi-turbine wind farms. This is done by comparing a pure LES precursor wind turbine simulation with simulations that use reduced POD mode inflow conditions. The study shows the feasibility of using lower-dimension models as turbulent inflow for LES wind turbine simulations. Overall, the power production estimate and the velocity field of the wind turbine wake are well captured, with small errors.
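The reduced-order inflow idea rests on standard snapshot POD: decompose precursor snapshots with an SVD and keep only the most energetic modes. A minimal sketch, with a synthetic snapshot matrix standing in for LES precursor data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic snapshot matrix: 200 "inflow plane" points x 50 time snapshots,
# built from two coherent structures plus noise (stand-in for LES data).
x = np.linspace(0.0, 2.0 * np.pi, 200)[:, None]
t = np.linspace(0.0, 1.0, 50)[None, :]
snapshots = (np.sin(x) * np.cos(8 * np.pi * t)
             + 0.5 * np.sin(2 * x) * np.sin(4 * np.pi * t)
             + 0.05 * rng.normal(size=(200, 50)))

# Thin SVD of the mean-subtracted data: columns of U are the POD modes.
mean_flow = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_flow, full_matrices=False)

k = 2  # retain the two most energetic modes as the low-order inflow model
low_order = mean_flow + U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

energy_captured = np.sum(s[:k]**2) / np.sum(s**2)
print(f"{energy_captured:.3f}")  # fraction of fluctuation energy retained
```

The low-order reconstruction keeps nearly all of the coherent fluctuation energy while discarding the small-scale noise, which is the property that makes reduced POD inflow attractive.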

  20. Accelerating three-dimensional FDTD calculations on GPU clusters for electromagnetic field simulation.

    PubMed

    Nagaoka, Tomoaki; Watanabe, Soichi

    2012-01-01

    Electromagnetic simulation with an anatomically realistic computational human model using the finite-difference time-domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and realize large-scale computing with the computational human model, we adapted three-dimensional FDTD code to a multi-GPU cluster environment with Compute Unified Device Architecture and Message Passing Interface. Our multi-GPU cluster system consists of three nodes, with seven GPU boards (NVIDIA Tesla C2070) mounted on each node. We examined the performance of the FDTD calculation in the multi-GPU cluster environment. We confirmed that the FDTD calculation on the multi-GPU cluster is faster than that on a multi-GPU single workstation, and we also found that the GPU cluster system calculates faster than a vector supercomputer. In addition, our GPU cluster system allowed us to perform large-scale FDTD calculations because we were able to use GPU memory of over 100 GB.

  1. Airfoil Ice-Accretion Aerodynamics Simulation

    NASA Technical Reports Server (NTRS)

    Bragg, Michael B.; Broeren, Andy P.; Addy, Harold E.; Potapczuk, Mark G.; Guffond, Didier; Montreuil, E.

    2007-01-01

    NASA Glenn Research Center, ONERA, and the University of Illinois are conducting a major research program whose goal is to improve our understanding of the aerodynamic scaling of ice accretions on airfoils. The program when it is completed will result in validated scaled simulation methods that produce the essential aerodynamic features of the full-scale iced-airfoil. This research will provide some of the first, high-fidelity, full-scale, iced-airfoil aerodynamic data. An initial study classified ice accretions based on their aerodynamics into four types: roughness, streamwise ice, horn ice, and spanwise-ridge ice. Subscale testing using a NACA 23012 airfoil was performed in the NASA IRT and University of Illinois wind tunnel to better understand the aerodynamics of these ice types and to test various levels of ice simulation fidelity. These studies are briefly reviewed here and have been presented in more detail in other papers. Based on these results, full-scale testing at the ONERA F1 tunnel using cast ice shapes obtained from molds taken in the IRT will provide full-scale iced airfoil data from full-scale ice accretions. Using these data as a baseline, the final step is to validate the simulation methods in scale in the Illinois wind tunnel. Computational ice accretion methods including LEWICE and ONICE have been used to guide the experiments and are briefly described and results shown. When full-scale and simulation aerodynamic results are available, these data will be used to further develop computational tools. Thus the purpose of the paper is to present an overview of the program and key results to date.

  2. Multi-scale heterogeneity of the 2011 Great Tohoku-oki Earthquake from dynamic simulations

    NASA Astrophysics Data System (ADS)

    Aochi, H.; Ide, S.

    2011-12-01

    In order to explain the scaling issues of earthquakes of different sizes, a multi-scale heterogeneity concept is necessary to characterize earthquake faulting properties (Ide and Aochi, JGR, 2005; Aochi and Ide, JGR, 2009). The 2011 Great Tohoku-oki earthquake (M9) is characterized by a slow initial phase of about M7, an M8-class deep rupture, and an M9 main rupture with quite large slip near the trench (e.g. Ide et al., Science, 2011), as well as the presence of foreshocks. We dynamically model these features based on the multi-scale concept. We suppose a significantly large fracture energy (corresponding to a slip-weakening distance of 3.2 m) over most of the fault dimension to represent the M9 rupture. However, we introduce local heterogeneity through relatively small circular patches of smaller fracture energy, assuming a linear scaling relation between patch radius and fracture energy. The calculation is carried out using a 3D Boundary Integral Equation Method. We first begin only with the mainshock (Aochi and Ide, EPS, 2011), but later find it important to take into account the series of foreshocks since 9 March (M7.4). The smaller patches including the foreshock area are necessary to launch the M9 rupture area of large fracture energy. We then simulate the ground motion at low frequencies using a Finite Difference Method. Qualitatively, the observed tendency is consistent with our simulations, in terms of the transition from the central part to the southern part at low frequencies (10-20 sec). At higher frequencies (1-10 sec), further small asperities are inferred from the observed signals, and this feature matches well with our multi-scale concept.

  3. Comparison of self, physician, and simulated patient ratings of pharmacist performance in a family practice simulator.

    PubMed

    Lau, Elaine; Dolovich, Lisa; Austin, Zubin

    2007-03-01

    The Family Practice Simulator (FPS) was piloted as a teaching, learning, and assessment opportunity for pharmacists making the transition into primary care practice. During this one-day simulation of a typical day in a family physician's office, nine pharmacists rotated through a series of 13 OSCE stations where they interacted with physicians, patients, nurses and office staff while completing primary care activities and receiving performance evaluations. Pharmacists' performance ratings from self, physician, and standardized patient evaluations were compared using Global Rating Scales (GRS) scores and station-specific key points checklists. The mean (SD) overall GRS scores obtained by pharmacists across all stations in the FPS were 4.56 (SD = 0.60) from standardized patients, 3.95 (SD = 0.63) from physicians, and 3.60 (SD = 0.63) from self-assessment (out of a maximum score of 5). Agreement between pharmacists' and patients' GRS ratings ranged from moderate to good (generalizability coefficient (G) = 0.45 to 0.72) for all except one station. Agreement in GRS scores between pharmacists and physicians was at most fair for every station (G = 0.02 - 0.26). There was fair agreement on key points scores between pharmacists and patients (weighted kappa = 27%; 95% CI 7%, 47%) and moderate agreement between pharmacists and physicians (weighted kappa = 45%; 95% CI 21%, 70%). Although there was at best moderate agreement in rating scores between pharmacists, standardized patients, and physicians, the FPS provided an important opportunity to measure expectations regarding the professional role, responsibilities, and performance of pharmacists from a multi-professional perspective, thus better preparing pharmacists for integration into primary care practice. Differences in agreement may have been due to different preconceptions and expectations among raters.
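The weighted kappa statistic reported for the key-points agreement can be computed directly from the confusion matrix of two raters. A small illustrative implementation follows, with made-up 5-point GRS scores (not the study's data):

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat, weight="linear"):
    """Linear- or quadratic-weighted Cohen's kappa for two raters scoring
    the same items on an ordinal scale with categories 0..n_cat-1."""
    O = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        O[a, b] += 1
    O /= O.sum()                                  # observed joint proportions
    p1, p2 = O.sum(axis=1), O.sum(axis=0)
    E = np.outer(p1, p2)                          # chance-expected proportions
    i, j = np.indices((n_cat, n_cat))
    W = np.abs(i - j) if weight == "linear" else (i - j) ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()

# Two raters scoring 10 encounters on a 1-5 global rating scale (toy data).
self_scores = [3, 4, 3, 2, 4, 3, 3, 5, 4, 3]
patient_scores = [4, 5, 4, 3, 5, 4, 4, 5, 5, 4]
print(round(weighted_kappa(np.array(self_scores) - 1,
                           np.array(patient_scores) - 1, 5), 2))
```

Here one rater systematically scores one category higher, so chance-corrected agreement is low even though the raw scores look close; a pattern of this kind is one way self and patient GRS ratings can diverge.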

  4. Multi-Hazard Interactions in Guatemala

    NASA Astrophysics Data System (ADS)

    Gill, Joel; Malamud, Bruce D.

    2017-04-01

    In this paper, we combine physical and social science approaches to develop a multi-scale regional framework for natural hazard interactions in Guatemala. The identification and characterisation of natural hazard interactions is an important input for comprehensive multi-hazard approaches to disaster risk reduction at a regional level. We use five transdisciplinary evidence sources to organise and populate our framework: (i) internationally-accessible literature; (ii) civil protection bulletins; (iii) field observations; (iv) stakeholder interviews (hazard and civil protection professionals); and (v) stakeholder workshop results. These five evidence sources are synthesised to determine an appropriate natural hazard classification scheme for Guatemala (6 hazard groups, 19 hazard types, and 37 hazard sub-types). For a national spatial extent (Guatemala), we construct and populate a "21×21" hazard interaction matrix, identifying 49 possible interactions between 21 hazard types. For a sub-national spatial extent (Southern Highlands, Guatemala), we construct and populate a "33×33" hazard interaction matrix, identifying 112 possible interactions between 33 hazard sub-types. Evidence sources are also used to constrain anthropogenic processes that could trigger natural hazards in Guatemala, and characterise possible networks of natural hazard interactions (cascades). The outcomes of this approach are among the most comprehensive interaction frameworks for national and sub-national spatial scales in the published literature. These can be used to support disaster risk reduction and civil protection professionals in better understanding natural hazards and potential disasters at a regional scale.
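A hazard interaction matrix of this kind is, computationally, a directed adjacency matrix, and cascade networks correspond to reachability in that graph. A toy sketch with hypothetical triggering links (not the paper's 21x21 Guatemala matrix):

```python
import numpy as np

# Toy hazard interaction matrix: entry [i][j] = 1 means hazard i can trigger
# or increase the probability of hazard j (links are illustrative only).
hazards = ["earthquake", "landslide", "volcanic eruption", "flood", "tsunami"]
M = np.array([
    [0, 1, 0, 0, 1],   # earthquake -> landslide, tsunami
    [0, 0, 0, 1, 0],   # landslide  -> flood (e.g. river damming)
    [0, 1, 0, 0, 0],   # eruption   -> landslide (slope failure)
    [0, 1, 0, 0, 0],   # flood      -> landslide (bank erosion)
    [0, 0, 0, 1, 0],   # tsunami    -> flood
], dtype=int)

print("possible interactions:", int(M.sum()))

def cascade(start, matrix):
    """Hazards reachable from an initial trigger (transitive closure)."""
    seen, frontier = {start}, [start]
    while frontier:
        i = frontier.pop()
        for j in np.flatnonzero(matrix[i]):
            if j not in seen:
                seen.add(j)
                frontier.append(j)
    return sorted(seen)

print([hazards[i] for i in cascade(0, M)])
```

Counting the nonzero entries gives the "possible interactions" figure for a matrix, and the reachability search enumerates one candidate cascade network per initial trigger.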

  5. GPU-based Space Situational Awareness Simulation utilising Parallelism for Enhanced Multi-sensor Management

    NASA Astrophysics Data System (ADS)

    Hobson, T.; Clarkson, V.

    2012-09-01

    As a result of continual space activity since the 1950s, there are now a large number of man-made Resident Space Objects (RSOs) orbiting the Earth. Because of the large number of items and their relative speeds, the possibility of destructive collisions involving important space assets is now of significant concern to users and operators of space-borne technologies. As a result, a growing number of international agencies are researching methods for improving techniques to maintain Space Situational Awareness (SSA). Computer simulation is a method commonly used by many countries to validate competing methodologies prior to full-scale adoption. The use of supercomputing and/or reduced-scale testing is often necessary to effectively simulate such a complex problem on today's computers. Recently the authors presented a simulation aimed at reducing the computational burden by selecting the minimum level of fidelity necessary for contrasting methodologies and by utilising multi-core CPU parallelism for increased computational efficiency. The resulting simulation runs on a single PC while maintaining the ability to effectively evaluate competing methodologies. Nonetheless, the ability to control the scale and expand upon the computational demands of the sensor management system is limited. In this paper, we examine the advantages of increasing the parallelism of the simulation by means of General Purpose computing on Graphics Processing Units (GPGPU). As many sub-processes pertaining to SSA management are independent, we demonstrate how parallelisation via GPGPU has the potential to significantly enhance not only research into techniques for maintaining SSA, but also the level of sophistication of existing space surveillance sensors and sensor management systems. Nonetheless, the use of GPGPU imposes certain limitations and adds to the implementation complexity, both of which require consideration to achieve an effective system. 
We discuss these challenges and how they can be overcome. We further describe an application of the parallelised system where visibility prediction is used to enhance sensor management. This facilitates significant improvement in maximum catalogue error when RSOs become temporarily unobservable. The objective is to demonstrate the enhanced scalability and increased computational capability of the system.
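Many of the per-object computations described above are independent, which is what makes them amenable to GPGPU parallelisation; the visibility test for each RSO-sensor pair is one such sub-process. A minimal serial sketch of a visibility predicate, assuming a spherical Earth, ECEF coordinates in metres, and a hypothetical 10° elevation mask (not the authors' actual implementation):

```python
import numpy as np

def elevation_deg(sensor_ecef, rso_ecef):
    """Elevation of an RSO above a ground sensor's local horizon
    (spherical-Earth approximation)."""
    sensor = np.asarray(sensor_ecef, float)
    los = np.asarray(rso_ecef, float) - sensor      # line-of-sight vector
    up = sensor / np.linalg.norm(sensor)            # local zenith direction
    sin_el = np.dot(los, up) / np.linalg.norm(los)
    return np.degrees(np.arcsin(sin_el))

def visible(sensor_ecef, rso_ecef, min_elevation=10.0):
    """An RSO is observable only while above the sensor's elevation mask."""
    return elevation_deg(sensor_ecef, rso_ecef) >= min_elevation
```

On a GPU this predicate would be evaluated for the whole catalogue in parallel; the serial form above only illustrates the geometry.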

  6. Based on a multi-agent system for multi-scale simulation and application of household's LUCC: a case study for Mengcha village, Mizhi county, Shaanxi province.

    PubMed

    Chen, Hai; Liang, Xiaoying; Li, Rui

    2013-01-01

Multi-Agent Systems (MAS) offer a conceptual approach for including multi-actor decision making in models of land use change. Through MAS-based simulation, this paper demonstrates the application of MAS to micro-scale LUCC and reveals the transformation mechanism across scales. The paper starts with a description of the context of MAS research. It then adopts the Nested Spatial Choice (NSC) method to construct a multi-scale LUCC decision-making model, and reports a case study for Mengcha village, Mizhi County, Shaanxi Province. Finally, the potentials and drawbacks of the approach are discussed. From our design and implementation of the MAS in the multi-scale model, a number of observations and conclusions can be drawn regarding implementation and future research directions. (1) The LUCC decision-making and multi-scale transformation framework provides, in our view, a more realistic model of the multi-scale decision-making process. (2) Using a continuous function, rather than a discrete one, to construct household decision-making reflects its effects more realistically. (3) Attempts have been made to analyse household interaction quantitatively, providing the premise and foundation for research on communication and learning among households. (4) The scale-transformation architecture constructed in this paper helps to accumulate theory and experience for research on the interaction between micro-scale land use decision-making and the macro-scale land use landscape pattern. Our future research will focus on: (1) how to rationally apply the risk-aversion principle and incorporate rules on rotation between household parcels into the model; (2) exploring methods for studying household decision-making over long periods, allowing us to bridge long-term LUCC data and short-term household decision-making; (3) researching quantitative methods and models, especially scenario-analysis models that reflect the interactions among different household types.

  7. Multiscale modeling and simulation of brain blood flow

    NASA Astrophysics Data System (ADS)

    Perdikaris, Paris; Grinberg, Leopold; Karniadakis, George Em

    2016-02-01

    The aim of this work is to present an overview of recent advances in multi-scale modeling of brain blood flow. In particular, we present some approaches that enable the in silico study of multi-scale and multi-physics phenomena in the cerebral vasculature. We discuss the formulation of continuum and atomistic modeling approaches, present a consistent framework for their concurrent coupling, and list some of the challenges that one needs to overcome in achieving a seamless and scalable integration of heterogeneous numerical solvers. The effectiveness of the proposed framework is demonstrated in a realistic case involving modeling the thrombus formation process taking place on the wall of a patient-specific cerebral aneurysm. This highlights the ability of multi-scale algorithms to resolve important biophysical processes that span several spatial and temporal scales, potentially yielding new insight into the key aspects of brain blood flow in health and disease. Finally, we discuss open questions in multi-scale modeling and emerging topics of future research.

  8. A coupling method for a cardiovascular simulation model which includes the Kalman filter.

    PubMed

    Hasegawa, Yuki; Shimayoshi, Takao; Amano, Akira; Matsuda, Tetsuya

    2012-01-01

Multi-scale models of the cardiovascular system provide new insight that was unavailable with in vivo and in vitro experiments. For the cardiovascular system, multi-scale simulations provide a valuable perspective in analyzing the interaction of three phenomena occurring at different spatial scales: circulatory hemodynamics, ventricular structural dynamics, and myocardial excitation-contraction. In order to simulate these interactions, multi-scale cardiovascular simulation systems couple models that simulate different phenomena. However, coupling methods require a significant amount of calculation, since a system of non-linear equations must be solved for each timestep. Therefore, we proposed a coupling method which decreases the amount of calculation by using the Kalman filter. In our method, the Kalman filter calculates approximations for the solution to the system of non-linear equations at each timestep. The approximations are then used as initial values for solving the system of non-linear equations. The proposed method decreases the number of iterations required by 94.0% compared to the conventional strong coupling method. When compared with a smoothing spline predictor, the proposed method required 49.4% fewer iterations.
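The paper's predictor is a Kalman filter; to illustrate why a good predicted initial guess cuts the per-timestep Newton iteration count, the sketch below uses simple linear extrapolation of the two previous solutions as a stand-in predictor on a hypothetical scalar problem x + sin(x) = d(t) (all functions and constants here are illustrative, not the authors' model):

```python
import math

def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton's method; returns (root, iterations used)."""
    x = x0
    for i in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, i
        x -= fx / df(x)
    return x, max_iter

def solve_timesteps(predict):
    """Solve x + sin(x) = d(t) at each timestep; `predict` maps the two
    previous solutions to the initial guess for the next solve."""
    total_iters, hist = 0, [0.0, 0.0]
    for step in range(1, 50):
        d = 0.05 * step                                  # slowly varying forcing
        x0 = predict(hist[-1], hist[-2])
        x, used = newton(lambda v: v + math.sin(v) - d,
                         lambda v: 1.0 + math.cos(v), x0)
        total_iters += used
        hist.append(x)
    return total_iters

iters_plain = solve_timesteps(lambda x1, x2: x1)         # reuse last solution
iters_pred  = solve_timesteps(lambda x1, x2: 2*x1 - x2)  # extrapolating predictor
```

The predictor-based run needs no more total iterations than reusing the previous solution, mirroring the iteration savings the authors report for their Kalman-based approximation.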

  9. Numerical Simulation of DC Coronal Heating

    NASA Astrophysics Data System (ADS)

    Dahlburg, Russell B.; Einaudi, G.; Taylor, Brian D.; Ugarte-Urra, Ignacio; Warren, Harry; Rappazzo, A. F.; Velli, Marco

    2016-05-01

    Recent research on observational signatures of turbulent heating of a coronal loop will be discussed. The evolution of the loop is is studied by means of numerical simulations of the fully compressible three-dimensional magnetohydrodynamic equations using the HYPERION code. HYPERION calculates the full energy cycle involving footpoint convection, magnetic reconnection, nonlinear thermal conduction and optically thin radiation. The footpoints of the loop magnetic field are convected by random photospheric motions. As a consequence the magnetic field in the loop is energized and develops turbulent nonlinear dynamics characterized by the continuous formation and dissipation of field-aligned current sheets: energy is deposited at small scales where heating occurs. Dissipation is non-uniformly distributed so that only a fraction of thecoronal mass and volume gets heated at any time. Temperature and density are highly structured at scales which, in the solar corona, remain observationally unresolved: the plasma of the simulated loop is multi thermal, where highly dynamical hotter and cooler plasma strands are scattered throughout the loop at sub-observational scales. Typical simulated coronal loops are 50000 km length and have axial magnetic field intensities ranging from 0.01 to 0.04 Tesla. To connect these simulations to observations the computed number densities and temperatures are used to synthesize the intensities expected in emission lines typically observed with the Extreme ultraviolet Imaging Spectrometer (EIS) on Hinode. These intensities are then employed to compute differential emission measure distributions, which are found to be very similar to those derived from observations of solar active regions.

  10. Multifractal characterisation of a simulated surface flow: A case study with Multi-Hydro in Jouy-en-Josas, France

    NASA Astrophysics Data System (ADS)

    Gires, Auguste; Abbes, Jean-Baptiste; da Silva Rocha Paz, Igor; Tchiguirinskaia, Ioulia; Schertzer, Daniel

    2018-03-01

In this paper we propose the innovative use of scaling laws, and more specifically Universal Multifractals (UM), to analyse simulated surface runoff and compare the retrieved scaling features with those of the rainfall. The methodology is tested on a 3 km2 semi-urbanised study area with a steep slope, located in the Paris area along the Bièvre River. First, Multi-Hydro, a fully distributed model, is validated on this catchment for four rainfall events measured with the help of a C-band radar. The uncertainty associated with small-scale unmeasured rainfall, i.e. occurring below the 1 km × 1 km × 5 min observation scale, is quantified with the help of stochastically downscaled rainfall fields. It is rather significant for simulated flow and more limited for overland water depth for these rainfall events. Overland depth is found to exhibit a scaling behaviour over small scales (10 m-80 m) which can be related to fractal features of the sewer network. No direct and obvious dependency between the overland depth multifractal features (quality of the scaling and UM parameters) and the rainfall ones was found.
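A UM analysis typically starts from trace moments: the field is aggregated over doubling box sizes and the moment scaling exponent K(q) is estimated from the log-log slope of the q-th moment versus the scale ratio λ. A 1-D illustration of that procedure (the study itself works on 2-D maps; this is a generic sketch, not the authors' code):

```python
import numpy as np

def trace_moment_slope(field, q):
    """Estimate the moment scaling exponent K(q): aggregate a 1-D field over
    doubling box sizes and fit <eps_lambda**q> ~ lambda**K(q)."""
    eps = np.asarray(field, float)
    eps = eps / eps.mean()                           # conservative normalisation
    n = int(np.log2(eps.size))
    log_lam, log_mom = [], []
    for level in range(n + 1):
        box = 2 ** level
        coarse = eps.reshape(-1, box).mean(axis=1)   # field at scale ratio lam
        lam = eps.size // box
        log_lam.append(np.log(lam))
        log_mom.append(np.log((coarse ** q).mean()))
    return np.polyfit(log_lam, log_mom, 1)[0]        # slope = K(q)
```

Two analytic checks: a homogeneous field gives K(q) = 0 (no multifractality), and a field concentrated in a single box gives K(q) = q − 1.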

  11. Mesoscale Effective Property Simulations Incorporating Conductive Binder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trembacki, Bradley L.; Noble, David R.; Brunini, Victor E.

Lithium-ion battery electrodes are composed of active material particles, binder, and conductive additives that form an electrolyte-filled porous particle composite. The mesoscale (particle-scale) interplay of electrochemistry, mechanical deformation, and transport through this tortuous multi-component network dictates the performance of a battery at the cell level. Effective electrode properties connect mesoscale phenomena with computationally feasible battery-scale simulations. We utilize published tomography data to reconstruct a large subsection (1000+ particles) of an NMC333 cathode into a computational mesh and extract electrode-scale effective properties from finite element continuum-scale simulations. We present a novel method to preferentially place a composite binder phase throughout the mesostructure, a necessary approach due to the difficulty of distinguishing between non-active phases in tomographic data. We compare stress generation and effective thermal, electrical, and ionic conductivities across several binder placement approaches. Isotropic lithiation-dependent mechanical swelling of the NMC particles and the consideration of strain-dependent composite binder conductivity significantly impact the resulting effective property trends and stresses generated. Lastly, our results suggest that composite binder location significantly affects mesoscale behavior, indicating that a binder coating on active particles is not sufficient and that more accurate approaches should be used when calculating effective properties that will inform battery-scale models in this inherently multi-scale battery simulation challenge.
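Effective-property extraction replaces the resolved mesostructure with a single homogenized coefficient. Whatever placement approach is used, the computed effective conductivity must fall between the classical Wiener bounds, which treat the phases as purely in parallel or purely in series. A sketch of those analytic bounds (a sanity check, not the paper's finite element procedure):

```python
import numpy as np

def wiener_bounds(fractions, sigmas):
    """Absolute bounds on the effective conductivity of a multi-phase mix:
    phases in parallel (arithmetic mean) and phases in series (harmonic mean)."""
    f = np.asarray(fractions, float)   # volume fractions, summing to 1
    s = np.asarray(sigmas, float)      # phase conductivities
    upper = float(np.sum(f * s))       # parallel arrangement
    lower = float(1.0 / np.sum(f / s)) # series arrangement
    return lower, upper
```

For a hypothetical 50/50 two-phase mix with conductivities 1 and 3, the effective value must lie between 1.5 and 2.0.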

  12. Mesoscale Effective Property Simulations Incorporating Conductive Binder

    DOE PAGES

    Trembacki, Bradley L.; Noble, David R.; Brunini, Victor E.; ...

    2017-07-26

Lithium-ion battery electrodes are composed of active material particles, binder, and conductive additives that form an electrolyte-filled porous particle composite. The mesoscale (particle-scale) interplay of electrochemistry, mechanical deformation, and transport through this tortuous multi-component network dictates the performance of a battery at the cell level. Effective electrode properties connect mesoscale phenomena with computationally feasible battery-scale simulations. We utilize published tomography data to reconstruct a large subsection (1000+ particles) of an NMC333 cathode into a computational mesh and extract electrode-scale effective properties from finite element continuum-scale simulations. We present a novel method to preferentially place a composite binder phase throughout the mesostructure, a necessary approach due to the difficulty of distinguishing between non-active phases in tomographic data. We compare stress generation and effective thermal, electrical, and ionic conductivities across several binder placement approaches. Isotropic lithiation-dependent mechanical swelling of the NMC particles and the consideration of strain-dependent composite binder conductivity significantly impact the resulting effective property trends and stresses generated. Lastly, our results suggest that composite binder location significantly affects mesoscale behavior, indicating that a binder coating on active particles is not sufficient and that more accurate approaches should be used when calculating effective properties that will inform battery-scale models in this inherently multi-scale battery simulation challenge.

  13. Study on launch scheme of space-net capturing system.

    PubMed

    Gao, Qingyu; Zhang, Qingbin; Feng, Zhiwei; Tang, Qiangang

    2017-01-01

With the continuous progress in active debris-removal technology, scientists are paying increasing attention to the concept of the space-net capturing system. The space-net capturing system is a long-range-launch flexible capture system with great potential to capture non-cooperative targets such as inactive satellites and upper stages. In this work, the launch scheme is studied by experiment and simulation, including two-step ejection and multi-point-traction analyses. The numerical model of the tether/net is based on the finite element method and is verified by a full-scale ground experiment. The results of the ground experiment and numerical simulation show that the two-step ejection and six-point traction scheme of the space-net system is superior to the traditional one-step ejection and four-point traction launch scheme.
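Finite element tether/net models of the kind validated here are commonly built from lumped masses joined by links that resist stretching but not compression (a slack tether carries no load). A minimal 2-D sketch with hypothetical stiffness, segment length, and symplectic Euler integration, illustrating the modeling idea rather than the authors' code:

```python
import numpy as np

def step(pos, vel, m, k, l0, dt):
    """One symplectic-Euler step for a lumped-mass tether in 2-D.
    Links generate tension only when stretched beyond rest length l0."""
    f = np.zeros_like(pos)
    for i in range(len(pos) - 1):
        d = pos[i + 1] - pos[i]
        length = np.linalg.norm(d)
        if length > l0:                        # taut link: linear elastic tension
            t = k * (length - l0) * d / length
            f[i] += t                          # equal and opposite forces
            f[i + 1] -= t
    vel = vel + dt * f / m[:, None]
    pos = pos + dt * vel
    return pos, vel
```

Because link forces are internal and pairwise opposite, total momentum is conserved exactly, which is a useful check on any such integrator.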

  14. Study on launch scheme of space-net capturing system

    PubMed Central

    Zhang, Qingbin; Feng, Zhiwei; Tang, Qiangang

    2017-01-01

With the continuous progress in active debris-removal technology, scientists are paying increasing attention to the concept of the space-net capturing system. The space-net capturing system is a long-range-launch flexible capture system with great potential to capture non-cooperative targets such as inactive satellites and upper stages. In this work, the launch scheme is studied by experiment and simulation, including two-step ejection and multi-point-traction analyses. The numerical model of the tether/net is based on the finite element method and is verified by a full-scale ground experiment. The results of the ground experiment and numerical simulation show that the two-step ejection and six-point traction scheme of the space-net system is superior to the traditional one-step ejection and four-point traction launch scheme. PMID:28877187

  15. IsoMAP (Isoscape Modeling, Analysis, and Prediction)

    NASA Astrophysics Data System (ADS)

    Miller, C. C.; Bowen, G. J.; Zhang, T.; Zhao, L.; West, J. B.; Liu, Z.; Rapolu, N.

    2009-12-01

IsoMAP is a TeraGrid-based web portal aimed at building the infrastructure that brings together distributed multi-scale and multi-format geospatial datasets to enable statistical analysis and modeling of environmental isotopes. A typical workflow enabled by the portal includes (1) data source exploration and selection; (2) statistical analysis and model development; (3) predictive simulation of isotope distributions using models developed in (1) and (2); (4) analysis and interpretation of simulated spatial isotope distributions (e.g., comparison with independent observations, pattern analysis). The gridded models and data products created by one user can be shared and reused among users within the portal, enabling collaboration and knowledge transfer. This infrastructure and the research it fosters can lead to fundamental changes in our knowledge of the water cycle and ecological and biogeochemical processes through analysis of network-based isotope data, but it will be important A) that those with whom the data and models are shared can be sure of the origin, quality, inputs, and processing history of these products, and B) that the system be agile and intuitive enough to facilitate this sharing (rather than just ‘allow’ it). IsoMAP researchers are therefore building into the portal’s architecture several components meant to increase the amount of metadata about users’ products and to repurpose those metadata to make sharing and discovery more intuitive and robust for both expected, professional users and unforeseeable populations from other sectors.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kant, Deepender, E-mail: dkc@ceeri.ernet.in; Joshi, L. M.; Janyani, Vijay

The klystron is a well-known microwave amplifier which uses the kinetic energy of an electron beam for amplification of an RF signal. Conventional single-beam klystrons have limitations such as high operating voltage, low efficiency, and bulky size at higher power levels, which are handled very effectively in the Multi-Beam Klystron (MBK), which uses multiple low-perveance electron beams for RF interaction. Each beam propagates along its individual transit path through a resonant cavity structure. Multi-beam klystron cavity design is a critical task due to the asymmetric cavity structure and can be simulated only by a 3D code. The present paper discusses the design of multi-beam RF cavities for klystrons operating at 2856 MHz (S-band) and 5 GHz (C-band), respectively. The design approach uses scaling laws to derive the electron beam parameters of the multi-beam device from its single-beam counterpart. The scaled beam parameters are then used to determine the design parameters of the multi-beam cavities. The design of the desired multi-beam cavity can be optimized through iterative simulations in CST Microwave Studio.
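One textbook scaling relation behind such single-beam-to-multi-beam conversions (an illustration, not necessarily the authors' exact scaling laws) ties output power to per-beam perveance K = I/V^1.5: splitting the current over n beams of the same perveance lets the same power be produced at a markedly lower voltage, since P = η·n·K·V^2.5:

```python
def mbk_voltage(p_out, efficiency, n_beams, perveance):
    """Beam voltage needed for a given output power when the current is split
    over n_beams, each obeying I_beam = K * V**1.5 (Child's-law perveance K),
    so that P = eta * n * K * V**2.5  =>  V = (P / (eta*n*K))**0.4."""
    return (p_out / (efficiency * n_beams * perveance)) ** 0.4
```

For hypothetical numbers (1 MW output, 50% efficiency, 1 microperv per beam), six beams need a substantially lower voltage than one.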

  17. Mercury and methylmercury stream concentrations in a Coastal Plain watershed: A multi-scale simulation analysis

    USGS Publications Warehouse

    Knightes, Christopher D.; Golden, Heather E.; Journey, Celeste A.; Davis, Gary M.; Conrads, Paul; Marvin-DiPasquale, Mark; Brigham, Mark E.; Bradley, Paul M.

    2014-01-01

    Mercury is a ubiquitous global environmental toxicant responsible for most US fish advisories. Processes governing mercury concentrations in rivers and streams are not well understood, particularly at multiple spatial scales. We investigate how insights gained from reach-scale mercury data and model simulations can be applied at broader watershed scales using a spatially and temporally explicit watershed hydrology and biogeochemical cycling model, VELMA. We simulate fate and transport using reach-scale (0.1 km2) study data and evaluate applications to multiple watershed scales. Reach-scale VELMA parameterization was applied to two nested sub-watersheds (28 km2 and 25 km2) and the encompassing watershed (79 km2). Results demonstrate that simulated flow and total mercury concentrations compare reasonably to observations at different scales, but simulated methylmercury concentrations are out-of-phase with observations. These findings suggest that intricacies of methylmercury biogeochemical cycling and transport are under-represented in VELMA and underscore the complexity of simulating mercury fate and transport.

  18. CCSI and the role of advanced computing in accelerating the commercial deployment of carbon capture systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, David; Agarwal, Deborah A.; Sun, Xin

    2011-09-01

    The Carbon Capture Simulation Initiative is developing state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technology. The CCSI Toolset consists of an integrated multi-scale modeling and simulation framework, which includes extensive use of reduced order models (ROMs) and a comprehensive uncertainty quantification (UQ) methodology. This paper focuses on the interrelation among high performance computing, detailed device simulations, ROMs for scale-bridging, UQ and the integration framework.
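The ROM-plus-UQ workflow can be caricatured in a few lines: fit a cheap surrogate to a handful of runs of an expensive simulation, then push many Monte Carlo samples of an uncertain input through the surrogate instead of the full model. Everything below (the stand-in "expensive" model, the polynomial surrogate, the input distribution) is a hypothetical illustration, not the CCSI Toolset:

```python
import numpy as np

def expensive_model(x):
    # stand-in for a detailed, costly device simulation (hypothetical)
    return np.sin(2.0 * x) + 0.3 * x**2

# Reduced order model: a cheap polynomial surrogate fitted to a few runs
x_train = np.linspace(-1.0, 1.0, 9)
rom = np.polynomial.Polynomial.fit(x_train, expensive_model(x_train), deg=6)

# UQ: propagate an uncertain input through the cheap ROM
rng = np.random.default_rng(0)
samples = rng.normal(0.0, 0.2, 100_000)
out = rom(samples)
mean, std = out.mean(), out.std()
```

The surrogate makes the 100,000-sample uncertainty sweep affordable; the accuracy check against the full model at a held-out point is the scale-bridging step that must be validated.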

  19. CCSI and the role of advanced computing in accelerating the commercial deployment of carbon capture systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, D.; Agarwal, D.; Sun, X.

    2011-01-01

    The Carbon Capture Simulation Initiative is developing state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technology. The CCSI Toolset consists of an integrated multi-scale modeling and simulation framework, which includes extensive use of reduced order models (ROMs) and a comprehensive uncertainty quantification (UQ) methodology. This paper focuses on the interrelation among high performance computing, detailed device simulations, ROMs for scale-bridging, UQ and the integration framework.

  20. Dynamic Model Investigation of Water Pressures and Accelerations Encountered During Landings of the Apollo Spacecraft

    NASA Technical Reports Server (NTRS)

    Stubbs, Sandy M.

    1967-01-01

An experimental investigation was made to determine impact water pressures, accelerations, and landing dynamics of a 1/4-scale dynamic model of the command module of the Apollo spacecraft. A scaled-stiffness aft heat shield was used on the model to simulate the structural deflections of the full-scale heat shield. Tests were made on water to obtain impact pressure data at a simulated parachute letdown (vertical) velocity component of approximately 30 ft/sec (9.1 m/sec) full scale. Additional tests were made on water, sand, and hard clay-gravel landing surfaces at simulated vertical velocity components of 23 ft/sec (7.0 m/sec) full scale. Horizontal velocity components investigated ranged from 0 to 50 ft/sec (15 m/sec) full scale and the pitch attitudes ranged from -40 degrees to 29 degrees. Roll attitudes were 0 degrees, 90 degrees, and 180 degrees, and the yaw attitude was 0 degrees.
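The "simulated full scale" velocities quoted above follow Froude scaling, under which velocities and times for a model at length scale 1/λ map to full scale by a factor of √λ, while accelerations map 1:1 (gravity is not scaled). A sketch of the conversion:

```python
import math

def full_scale_velocity(model_velocity, scale_ratio):
    """Froude scaling: velocity scales with the square root of the length ratio."""
    return model_velocity * math.sqrt(scale_ratio)

def full_scale_time(model_time, scale_ratio):
    """Times scale the same way; accelerations are unchanged (g is not scaled)."""
    return model_time * math.sqrt(scale_ratio)
```

For the 1/4-scale Apollo model, a 15 ft/sec model drop velocity therefore corresponds to 30 ft/sec full scale.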

  1. Probabilistic Simulation of Multi-Scale Composite Behavior

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2012-01-01

A methodology is developed to computationally assess the non-deterministic composite response at all composite scales (from micro to structural) due to the uncertainties in the constituent (fiber and matrix) properties, in the fabrication process and in structural variables (primitive variables). The methodology is computationally efficient for simulating the probability distributions of composite behavior, such as material properties, laminate and structural responses. By-products of the methodology are probabilistic sensitivities of the composite primitive variables. The methodology has been implemented into the computer codes PICAN (Probabilistic Integrated Composite ANalyzer) and IPACS (Integrated Probabilistic Assessment of Composite Structures). The accuracy and efficiency of this methodology are demonstrated by simulating the uncertainties in typical composite laminates and comparing the results with the Monte Carlo simulation method. Available experimental data of composite laminate behavior at all scales fall within the scatters predicted by PICAN. Multi-scaling is extended to simulate probabilistic thermo-mechanical fatigue and to simulate the probabilistic design of a composite redome in order to illustrate its versatility. Results show that probabilistic fatigue can be simulated for different temperature amplitudes and for different cyclic stress magnitudes. Results also show that laminate configurations can be selected to increase the redome reliability by several orders of magnitude without increasing the laminate thickness--a unique feature of structural composites. The old reference denotes that nothing fundamental has been done since that time.
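The Monte Carlo baseline used for comparison can be sketched for a single micro-scale property: sample the constituent (fiber and matrix) moduli and fiber volume fraction, and propagate them through the rule of mixtures to get a distribution of the longitudinal ply modulus. The distributions below are hypothetical stand-ins, not PICAN's inputs:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical constituent scatter (means and spreads are illustrative only)
E_f = rng.normal(230e9, 10e9, n)    # fiber modulus, Pa
E_m = rng.normal(3.5e9, 0.3e9, n)   # matrix modulus, Pa
V_f = rng.normal(0.60, 0.02, n)     # fiber volume fraction

# Rule of mixtures for the longitudinal ply modulus
E_1 = V_f * E_f + (1.0 - V_f) * E_m

# Scatter band analogous to the predicted scatters mentioned in the abstract
p05, p95 = np.percentile(E_1, [5, 95])
```

Fast probabilistic methods like PICAN aim to reproduce such output distributions without the expense of brute-force sampling.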

  2. A point-by-point multi-scale surface temperature reconstruction method and tests by pseudo proxy experiments

    NASA Astrophysics Data System (ADS)

    Chen, X.

    2016-12-01

This study presents a multi-scale approach combining the Mode Decomposition and Variance Matching (MDVM) method with the basic process of the Point-by-Point Regression (PPR) method. Unlike the widely applied PPR method, the scanning radius for each grid box is re-calculated considering the impact of topography (i.e. mean altitude and its fluctuations), so that appropriate proxy records are selected as candidates for reconstruction. This multi-scale methodology provides not only the reconstructed gridded temperatures but also the corresponding uncertainties at the four typical timescales. A further advantage is that the spatial distribution of the uncertainty at different scales can be quantified. To demonstrate the necessity of scale separation in calibration, with proxy record locations over Eastern Asia, we perform two sets of pseudo proxy experiments (PPEs) based on different ensembles of climate model simulations. One consists of 7 simulation results from 5 models (BCC-CSM1-1, CSIRO-MK3L-1-2, HadCM3, MPI-ESM-P, and GISS-E2-R) of the "past1000" experiment from the Coupled Model Intercomparison Project Phase 5. The other is based on the simulations of the Community Earth System Model Last Millennium Ensemble (CESM-LME). The pseudo-proxy network was obtained by adding white noise, with the signal-to-noise ratio (SNR) increasing from 0.1 to 1.0, to the simulated true state; the locations mainly followed the PAGES-2k network in Asia. In total, 400 years (1601-2000) of simulation were used for calibration and 600 years (1001-1600) for verification. The reconstructed results were evaluated by three metrics: 1) root mean squared error (RMSE), 2) correlation, and 3) the reduction of error (RE) score. The PPE verification results show that, in comparison with an ordinary linear calibration method (variance matching), the RMSE and RE score of PPR-MDVM are improved, especially for areas with sparse proxy records. Notably, in some periods with large volcanic activity, the RMSE of MDVM becomes larger than that of VM for higher-SNR cases. It may be inferred that volcanic eruptions blur the intrinsic characteristics of the multi-scale variability of the climate system, so the MDVM method shows less advantage in that case.
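The reduction of error (RE) score used for verification compares the reconstruction's squared error against that of a trivial predictor, the calibration-period mean: RE = 1 indicates a perfect reconstruction and RE ≤ 0 indicates no skill beyond climatology. A minimal sketch:

```python
import numpy as np

def re_score(obs, rec, calib_mean):
    """Reduction of error: 1 - SSE(reconstruction) / SSE(calibration-period mean)."""
    obs = np.asarray(obs, float)
    rec = np.asarray(rec, float)
    sse = np.sum((obs - rec) ** 2)
    sse_clim = np.sum((obs - calib_mean) ** 2)
    return 1.0 - sse / sse_clim
```

A perfect reconstruction scores 1, and a reconstruction that merely repeats the calibration mean scores 0.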

  3. Virtual machine-based simulation platform for mobile ad-hoc network-based cyber infrastructure

    DOE PAGES

    Yoginath, Srikanth B.; Perumalla, Kayla S.; Henz, Brian J.

    2015-09-29

In modeling and simulating complex systems such as mobile ad-hoc networks (MANETs) in defense communications, it is a major challenge to reconcile multiple important considerations: the rapidity of unavoidable changes to the software (network layers and applications), the difficulty of modeling the critical, implementation-dependent behavioral effects, the need to sustain larger scale scenarios, and the desire for faster simulations. Here we present our approach in successfully reconciling them using a virtual time-synchronized virtual machine (VM)-based parallel execution framework that accurately lifts both the devices as well as the network communications to a virtual time plane while retaining full fidelity. At the core of our framework is a scheduling engine that operates at the level of a hypervisor scheduler, offering a unique ability to execute multi-core guest nodes over multi-core host nodes in an accurate, virtual time-synchronized manner. In contrast to other related approaches that suffer from either speed or accuracy issues, our framework provides MANET node-wise scalability, high fidelity of software behaviors, and time-ordering accuracy. The design and development of this framework is presented, and an actual implementation based on the widely used Xen hypervisor system is described. Benchmarks with synthetic and actual applications are used to identify the benefits of our approach. The time inaccuracy of traditional emulation methods is demonstrated, in comparison with the accurate execution of our framework verified by theoretically correct results expected from analytical models of the same scenarios.
In the largest high fidelity tests, we are able to perform virtual time-synchronized simulation of 64-node VM-based full-stack, actual software behaviors of MANETs containing a mix of static and mobile (unmanned airborne vehicle) nodes, hosted on a 32-core host, with full fidelity of unmodified ad-hoc routing protocols, unmodified application executables, and user-controllable physical layer effects including inter-device wireless signal strength, reachability, and connectivity.
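The essential guarantee of virtual-time synchronization is that events from all nodes are executed in global timestamp order, which wall-clock emulation cannot ensure. A toy sequential scheduler illustrating that ordering over per-node event lists (a drastic simplification of the hypervisor-level engine described above; node names and timestamps are hypothetical):

```python
import heapq

def run_events(nodes, end_time):
    """Merge per-node event lists so execution follows global virtual time.
    `nodes` maps a node name to its sorted list of event timestamps."""
    heap = [(ts[0], name, 0) for name, ts in nodes.items() if ts]
    heapq.heapify(heap)
    order = []
    while heap:
        t, name, i = heapq.heappop(heap)
        if t > end_time:
            break
        order.append((t, name))              # execute this node's event at time t
        ts = nodes[name]
        if i + 1 < len(ts):
            heapq.heappush(heap, (ts[i + 1], name, i + 1))
    return order
```

The execution trace is globally timestamp-ordered regardless of how the events interleave across nodes.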

  4. Virtual machine-based simulation platform for mobile ad-hoc network-based cyber infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B.; Perumalla, Kayla S.; Henz, Brian J.

In modeling and simulating complex systems such as mobile ad-hoc networks (MANETs) in defense communications, it is a major challenge to reconcile multiple important considerations: the rapidity of unavoidable changes to the software (network layers and applications), the difficulty of modeling the critical, implementation-dependent behavioral effects, the need to sustain larger scale scenarios, and the desire for faster simulations. Here we present our approach in successfully reconciling them using a virtual time-synchronized virtual machine (VM)-based parallel execution framework that accurately lifts both the devices as well as the network communications to a virtual time plane while retaining full fidelity. At the core of our framework is a scheduling engine that operates at the level of a hypervisor scheduler, offering a unique ability to execute multi-core guest nodes over multi-core host nodes in an accurate, virtual time-synchronized manner. In contrast to other related approaches that suffer from either speed or accuracy issues, our framework provides MANET node-wise scalability, high fidelity of software behaviors, and time-ordering accuracy. The design and development of this framework is presented, and an actual implementation based on the widely used Xen hypervisor system is described. Benchmarks with synthetic and actual applications are used to identify the benefits of our approach. The time inaccuracy of traditional emulation methods is demonstrated, in comparison with the accurate execution of our framework verified by theoretically correct results expected from analytical models of the same scenarios.
In the largest high fidelity tests, we are able to perform virtual time-synchronized simulation of 64-node VM-based full-stack, actual software behaviors of MANETs containing a mix of static and mobile (unmanned airborne vehicle) nodes, hosted on a 32-core host, with full fidelity of unmodified ad-hoc routing protocols, unmodified application executables, and user-controllable physical layer effects including inter-device wireless signal strength, reachability, and connectivity.

  5. IGMS: An Integrated ISO-to-Appliance Scale Grid Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan; Hale, Elaine; Hansen, Timothy M.

This paper describes the Integrated Grid Modeling System (IGMS), a novel electric power system modeling platform for integrated transmission-distribution analysis that co-simulates off-the-shelf tools on high performance computing (HPC) platforms to offer unprecedented resolution from ISO markets down to appliances and other end uses. Specifically, the system simultaneously models hundreds or thousands of distribution systems in co-simulation with detailed Independent System Operator (ISO) markets and AGC-level reserve deployment. IGMS uses a new MPI-based hierarchical co-simulation framework to connect existing sub-domain models. Our initial efforts integrate open-source tools for wholesale markets (FESTIV), bulk AC power flow (MATPOWER), and full-featured distribution systems including physics-based end-use and distributed generation models (many instances of GridLAB-D[TM]). The modular IGMS framework enables tool substitution and additions for multi-domain analyses. This paper describes the IGMS tool, characterizes its performance, and demonstrates the impacts of the coupled simulations for analyzing high-penetration solar PV and price responsive load scenarios.
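The transmission-distribution coupling in such a co-simulation can be reduced to a fixed-point exchange: the bulk system hands each feeder a bus voltage, the feeders return their aggregate demand, and the loop repeats until the interface values stop changing. A single-bus sketch with a hypothetical Thevenin equivalent and constant-power feeder models (in per-unit; not IGMS's MPI framework):

```python
def cosimulate(v_source, feeders, z_th, tol=1e-9, max_iter=100):
    """Fixed-point interface exchange: the transmission side is reduced to a
    Thevenin source behind impedance z_th; each feeder maps its bus voltage
    to the current it draws. Iterate until the bus voltage settles."""
    v = v_source
    for _ in range(max_iter):
        i_load = sum(f(v) for f in feeders)    # distribution side: demand at v
        v_new = v_source - z_th * i_load       # transmission side: voltage drop
        if abs(v_new - v) < tol:
            return v_new
        v = v_new
    return v
```

With one constant-power feeder drawing 0.5 pu, the converged voltage satisfies the interface power balance v² − v + z·P = 0.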

  6. Moose: An Open-Source Framework to Enable Rapid Development of Collaborative, Multi-Scale, Multi-Physics Simulation Tools

    NASA Astrophysics Data System (ADS)

    Slaughter, A. E.; Permann, C.; Peterson, J. W.; Gaston, D.; Andrs, D.; Miller, J.

    2014-12-01

    The Idaho National Laboratory (INL)-developed Multiphysics Object Oriented Simulation Environment (MOOSE; www.mooseframework.org) is an open-source, parallel computational framework for enabling the solution of complex, fully implicit multiphysics systems. MOOSE provides a set of computational tools that scientists and engineers can use to create sophisticated multiphysics simulations. Applications built using MOOSE have computed solutions for chemical reaction and transport equations, computational fluid dynamics, solid mechanics, heat conduction, mesoscale materials modeling, geomechanics, and others. To facilitate the coupling of diverse and highly coupled physical systems, MOOSE employs the Jacobian-free Newton-Krylov (JFNK) method when solving the coupled nonlinear systems of equations arising in multiphysics applications. The MOOSE framework is written in C++, and leverages other high-quality, open-source scientific software packages such as LibMesh, Hypre, and PETSc. MOOSE uses a "hybrid parallel" model which combines both shared memory (thread-based) and distributed memory (MPI-based) parallelism to ensure efficient resource utilization on a wide range of computational hardware. MOOSE-based applications are inherently modular, which allows for simulation expansion (via coupling of additional physics modules) and the creation of multi-scale simulations. Any application developed with MOOSE supports running (in parallel) any other MOOSE-based application. Each application can be developed independently, yet easily communicate with other applications (e.g., conductivity in a slope-scale model could be a constant input, or a complete phase-field micro-structure simulation) without additional code being written. This method of development has proven effective at INL and expedites the development of sophisticated, sustainable, and collaborative simulation tools.
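
The JFNK method mentioned above never forms the Jacobian explicitly: the Krylov iterations only need Jacobian-vector products, which are approximated by finite differences of the residual. A small sketch using SciPy's `newton_krylov` on an artificial two-field coupling (the residual equations are invented for illustration and are not MOOSE's physics):

```python
import numpy as np
from scipy.optimize import newton_krylov

# Two artificially coupled "physics" residuals: a temperature-like unknown u
# depends on a concentration-like unknown c, and vice versa.
def residual(x):
    u, c = x
    return np.array([
        u - 1.0 - 0.1 * c**2,   # "temperature" driven by "concentration"
        c - np.exp(-u),         # "concentration" driven by "temperature"
    ])

# Newton outer iteration with Krylov (GMRES-type) inner solves; the Jacobian
# is only ever applied matrix-free via finite-difference directional derivatives.
sol = newton_krylov(residual, np.array([1.0, 1.0]), f_tol=1e-10)
```

For fully implicit multiphysics, this matrix-free structure is what lets tightly coupled equation sets be solved together without assembling a monolithic Jacobian.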

  7. Ion kinetic effects on the ignition and burn of inertial confinement fusion targets: A multi-scale approach

    NASA Astrophysics Data System (ADS)

    Peigney, B. E.; Larroche, O.; Tikhonchuk, V.

    2014-12-01

    In this article, we study the hydrodynamics and burn of the thermonuclear fuel in inertial confinement fusion pellets at the ion kinetic level. The analysis is based on a two-velocity-scale Vlasov-Fokker-Planck kinetic model that is specially tailored to treat fusion products (suprathermal α-particles) in a self-consistent manner with the thermal bulk. The model assumes spherical symmetry in configuration space and axial symmetry in velocity space around the mean flow velocity. A typical hot-spot ignition design is considered. Compared with fluid simulations where a multi-group diffusion scheme is applied to model α transport, the full ion-kinetic approach reveals significant non-local effects on the transport of energetic α-particles. This has a direct impact on hydrodynamic spatial profiles during combustion: the hot spot reactivity is reduced, while the inner dense fuel layers are pre-heated by the escaping α-suprathermal particles, which are transported farther out of the hot spot. We show how the kinetic transport enhancement of fusion products leads to a significant reduction of the fusion yield.

  8. Multi-granularity Bandwidth Allocation for Large-Scale WDM/TDM PON

    NASA Astrophysics Data System (ADS)

    Gao, Ziyue; Gan, Chaoqin; Ni, Cuiping; Shi, Qiongling

    2017-12-01

    WDM (wavelength-division multiplexing)/TDM (time-division multiplexing) PON (passive optical network) is being viewed as a promising solution for delivering multiple services and applications, such as high-definition video, video conferencing and data traffic. Considering real-time transmission, QoS (quality of service) requirements and the differentiated services model, a multi-granularity dynamic bandwidth allocation (DBA) in both the wavelength and time domains for large-scale hybrid WDM/TDM PON is proposed in this paper. The proposed scheme achieves load balance by using bandwidth prediction. Based on the bandwidth prediction, wavelength assignment can be realized fairly and effectively to satisfy the different demands of the various classes. In particular, the allocation of residual bandwidth further augments the DBA and makes full use of bandwidth resources in the network. To further improve network performance, two schemes named extending the cycle of one free wavelength (ECoFW) and large bandwidth shrinkage (LBS) are proposed, which can prevent transmission from interruption when the user employs more than one wavelength. The simulation results show the effectiveness of the proposed scheme.

  9. Ion kinetic effects on the ignition and burn of inertial confinement fusion targets: A multi-scale approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peigney, B. E.; Larroche, O.; Tikhonchuk, V.

    2014-12-15

    In this article, we study the hydrodynamics and burn of the thermonuclear fuel in inertial confinement fusion pellets at the ion kinetic level. The analysis is based on a two-velocity-scale Vlasov-Fokker-Planck kinetic model that is specially tailored to treat fusion products (suprathermal α-particles) in a self-consistent manner with the thermal bulk. The model assumes spherical symmetry in configuration space and axial symmetry in velocity space around the mean flow velocity. A typical hot-spot ignition design is considered. Compared with fluid simulations where a multi-group diffusion scheme is applied to model α transport, the full ion-kinetic approach reveals significant non-local effects on the transport of energetic α-particles. This has a direct impact on hydrodynamic spatial profiles during combustion: the hot spot reactivity is reduced, while the inner dense fuel layers are pre-heated by the escaping α-suprathermal particles, which are transported farther out of the hot spot. We show how the kinetic transport enhancement of fusion products leads to a significant reduction of the fusion yield.

  10. Modeling Impact-induced Failure of Polysilicon MEMS: A Multi-scale Approach.

    PubMed

    Mariani, Stefano; Ghisi, Aldo; Corigliano, Alberto; Zerbini, Sarah

    2009-01-01

    Failure of packaged polysilicon micro-electro-mechanical systems (MEMS) subjected to impacts involves phenomena occurring at several length-scales. In this paper we present a multi-scale finite element approach to properly allow for: (i) the propagation of stress waves inside the package; (ii) the dynamics of the whole MEMS; (iii) the spreading of micro-cracking in the failing part(s) of the sensor. Through Monte Carlo simulations, some effects of polysilicon micro-structure on the failure mode are elucidated.

  11. DEVELOPMENT AND ANALYSIS OF AIR QUALITY MODELING SIMULATIONS FOR HAZARDOUS AIR POLLUTANTS

    EPA Science Inventory

    The concentrations of five hazardous air pollutants were simulated using the Community Multi Scale Air Quality (CMAQ) modeling system. Annual simulations were performed over the continental United States for the entire year of 2001 to support human exposure estimates. Results a...

  12. Multi-scale gyrokinetic simulations: Comparison with experiment and implications for predicting turbulence and transport

    NASA Astrophysics Data System (ADS)

    Howard, N. T.; Holland, C.; White, A. E.; Greenwald, M.; Candy, J.; Creely, A. J.

    2016-05-01

    To better understand the role of cross-scale coupling in experimental conditions, a series of multi-scale gyrokinetic simulations were performed on Alcator C-Mod, L-mode plasmas. These simulations, performed using all experimental inputs and a realistic ion to electron mass ratio ((m_i/m_e)^(1/2) = 60.0), simultaneously capture turbulence at the ion (k_θ ρ_s ~ O(1.0)) and electron scales (k_θ ρ_e ~ O(1.0)). Direct comparison with experimental heat fluxes and electron profile stiffness indicates that Electron Temperature Gradient (ETG) streamers and strong cross-scale turbulence coupling likely exist in both of the experimental conditions studied. The coupling between ion and electron scales exists in the form of energy cascades, modification of zonal flow dynamics, and the effective shearing of ETG turbulence by long wavelength, Ion Temperature Gradient (ITG) turbulence. The tightly coupled nature of ITG and ETG turbulence in these realistic plasma conditions is shown to have significant implications for the interpretation of experimental transport and fluctuations. Initial attempts are made to develop a "rule of thumb" based on linear physics, to help predict when cross-scale coupling plays an important role and to inform future modeling of experimental discharges. The details of the simulations, comparisons with experimental measurements, and implications for both modeling and experimental interpretation are discussed.

  13. Coupled multi-disciplinary simulation of composite engine structures in propulsion environment

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Singhal, Surendra N.

    1992-01-01

    A computational simulation procedure is described for the coupled response of multi-layered multi-material composite engine structural components which are subjected to simultaneous multi-disciplinary thermal, structural, vibration, and acoustic loadings including the effect of hostile environments. The simulation is based on a three dimensional finite element analysis technique in conjunction with structural mechanics codes and with acoustic analysis methods. The composite material behavior is assessed at the various composite scales, i.e., the laminate/ply/constituents (fiber/matrix), via a nonlinear material characterization model. Sample cases exhibiting nonlinear geometrical, material, loading, and environmental behavior of aircraft engine fan blades are presented. Results for deformed shape, vibration frequency, mode shapes, and acoustic noise emitted from the fan blade are discussed for their coupled effect in hot and humid environments. Results such as acoustic noise for coupled composite-mechanics/heat transfer/structural/vibration/acoustic analyses demonstrate the effectiveness of coupled multi-disciplinary computational simulation and the various advantages of composite materials compared to metals.

  14. Multiphase computer-generated holograms for full-color image generation

    NASA Astrophysics Data System (ADS)

    Choi, Kyong S.; Choi, Byong S.; Choi, Yoon S.; Kim, Sun I.; Kim, Jong Man; Kim, Nam; Gil, Sang K.

    2002-06-01

    Multi-phase and binary-phase computer-generated holograms were designed and demonstrated for full-color image generation. To optimize the phase profile of the hologram for each color image, we employed a simulated annealing method. The designed binary-phase hologram had a diffraction efficiency of 33.23 percent and a reconstruction error of 0.367 × 10⁻², and the eight-phase hologram had a diffraction efficiency of 67.92 percent and a reconstruction error of 0.273 × 10⁻². The designed BPH was fabricated by a micro-photolithographic technique with a minimum pixel width of 5 micrometers, and it was reconstructed using two Ar-ion lasers and a He-Ne laser. In addition, the color dispersion characteristic of the fabricated grating and the scaling problem of the reconstructed image were discussed.
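
Simulated annealing, as used above for the phase profile, accepts occasional uphill moves with a temperature-dependent probability so the search can escape local minima. A minimal sketch on a stand-in cost function (the 8-pixel binary "phase" array and target pattern are hypothetical; a real CGH design would score the reconstructed image instead):

```python
import math, random

random.seed(0)
target = [1, 0, 1, 1, 0, 1, 0, 0]                 # desired binary phase pattern
phase = [random.randint(0, 1) for _ in target]    # random starting "pixels"

def cost(p):
    # stand-in for the hologram reconstruction error
    return sum((a - b) ** 2 for a, b in zip(p, target))

current, temperature = cost(phase), 1.0
for _ in range(2000):
    i = random.randrange(len(phase))
    phase[i] ^= 1                                 # trial: flip one pixel's phase
    trial = cost(phase)
    if trial <= current or random.random() < math.exp((current - trial) / temperature):
        current = trial                           # accept (possibly uphill) move
    else:
        phase[i] ^= 1                             # reject: undo the flip
    temperature *= 0.995                          # geometric cooling schedule
```

As the temperature decays, uphill acceptances vanish and the search settles into a low-cost phase pattern.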

  15. Asynchronous adaptive time step in quantitative cellular automata modeling

    PubMed Central

    Zhu, Hao; Pang, Peter YH; Sun, Yan; Dhar, Pawan

    2004-01-01

    Background The behaviors of cells in metazoans are context-dependent; thus large-scale multi-cellular modeling is often necessary, for which cellular automata are natural candidates. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and, building on that, how to address the heavy time consumption of simulation. Results Based on a modified, language-based cellular automata system that we extended to allow ordinary differential equations in models, we introduce a method implementing an asynchronous adaptive time step in simulation that can considerably improve efficiency without a significant sacrifice of accuracy. An average speedup of 4–5 is achieved in the given example. Conclusions Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. A distributed and adaptive time step is a practical solution in a cellular automata environment. PMID:15222901
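
An asynchronous, per-cell adaptive time step can be sketched with an event queue: each cell schedules its own next update, so fast-changing cells are integrated often and slow ones rarely. The decay ODE and step-size rule below are invented for illustration and are not the paper's algorithm:

```python
import heapq, math

# Each "cell" integrates du/dt = -k*u and schedules its next update at a
# step inversely proportional to its rate constant k (hypothetical rule).
cells = {"fast": {"k": 10.0, "u": 1.0, "t": 0.0},
         "slow": {"k": 0.1,  "u": 1.0, "t": 0.0}}

def step_size(cell):
    return 0.1 / cell["k"]            # stiffer (faster) cells take smaller steps

events = [(step_size(c), name) for name, c in cells.items()]
heapq.heapify(events)
updates = {"fast": 0, "slow": 0}

t_end = 1.0
while events and events[0][0] <= t_end:
    t, name = heapq.heappop(events)
    c = cells[name]
    dt = t - c["t"]
    c["u"] *= math.exp(-c["k"] * dt)  # exact local update over [c["t"], t]
    c["t"] = t
    updates[name] += 1
    heapq.heappush(events, (t + step_size(c), name))
```

The speedup reported in the abstract comes from exactly this asymmetry: the slow cell is touched a handful of times while the fast cell is integrated at fine resolution.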

  16. An investigation into preserving spatially-distinct pore systems in multi-component rocks using a fossiliferous limestone example

    NASA Astrophysics Data System (ADS)

    Jiang, Zeyun; Couples, Gary D.; Lewis, Helen; Mangione, Alessandro

    2018-07-01

    Limestones containing abundant disc-shaped fossil Nummulites can form significant hydrocarbon reservoirs but they have a distinctly heterogeneous distribution of pore shapes, sizes and connectivities, which makes it particularly difficult to calculate petrophysical properties and consequent flow outcomes. The severity of the problem rests on the wide length-scale range from the millimetre scale of the fossil's pore space to the micron scale of rock matrix pores. This work develops a technique to incorporate multi-scale void systems into a pore network, which is used to calculate the petrophysical properties for subsequent flow simulations at different stages in the limestone's petrophysical evolution. While rock pore size, shape and connectivity can be determined, with varying levels of fidelity, using techniques such as X-ray computed tomography (CT) or scanning electron microscopy (SEM), this work represents a more challenging class where the rock of interest is insufficiently sampled or, as here, has been overprinted by extensive chemical diagenesis. The main challenge is integrating multi-scale void structures derived from both SEM and CT images into a single model or a pore-scale network while still honouring the nature of the connections across these length scales. Pore network flow simulations are used to illustrate the technique and, of equal importance, to demonstrate how supportable earlier-stage petrophysical property distributions can be used to assess the viability of several potential geological event sequences. The results of our flow simulations on generated models highlight the requirement for correct determination of the dominant pore scales (nm, μm, mm or cm), the spatial correlation and the cross-scale connections.

  17. Multi-scale properties of large eddy simulations: correlations between resolved-scale velocity-field increments and subgrid-scale quantities

    NASA Astrophysics Data System (ADS)

    Linkmann, Moritz; Buzzicotti, Michele; Biferale, Luca

    2018-06-01

    We provide analytical and numerical results concerning multi-scale correlations between the resolved velocity field and the subgrid-scale (SGS) stress tensor in large eddy simulations (LES). Following previous studies for the Navier-Stokes equations, we derive the exact hierarchy of LES equations governing the spatio-temporal evolution of velocity structure functions of any order. The aim is to assess the influence of the subgrid model on inertial range intermittency. We provide a series of predictions, within the multifractal theory, for the scaling of correlations involving the SGS stress, and we compare them against numerical results from high-resolution Smagorinsky LES and from a-priori filtered data generated from direct numerical simulations (DNS). We find that LES data generally agree very well with filtered DNS results and with the multifractal prediction for all leading terms in the balance equations. Discrepancies are measured for some of the sub-leading terms involving cross-correlations between resolved velocity increments and the SGS tensor or the SGS energy transfer, suggesting that there is room to improve the SGS modelling to further extend the inertial range properties at any fixed LES resolution.
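
For reference, the standard Smagorinsky closure used in the LES above models the deviatoric part of the SGS stress through the resolved strain rate, with C_s the Smagorinsky constant and Δ the filter width:

```latex
\tau_{ij} - \frac{\delta_{ij}}{3}\,\tau_{kk}
  = -2\,(C_s \Delta)^2\,|\bar{S}|\,\bar{S}_{ij},
\qquad
\bar{S}_{ij} = \frac{1}{2}\left(
  \frac{\partial \bar{u}_i}{\partial x_j}
  + \frac{\partial \bar{u}_j}{\partial x_i}\right),
\qquad
|\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}
```

The correlations studied in the paper involve exactly such products of the SGS stress with resolved velocity increments.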

  18. Multi-Scale Impact and Compression-After-Impact Modeling of Reinforced Benzoxazine/Epoxy Composites using Micromechanics Approach

    NASA Astrophysics Data System (ADS)

    Montero, Marc Villa; Barjasteh, Ehsan; Baid, Harsh K.; Godines, Cody; Abdi, Frank; Nikbin, Kamran

    A multi-scale micromechanics approach, along with a finite element (FE) predictive model, is developed to analyze the low-energy-impact damage footprint and compression-after-impact (CAI) behavior of composite laminates, and is tested and verified against experimental data. Effective fiber and matrix properties were reverse-engineered from lamina properties using an optimization algorithm and used to assess damage at the micro-level during impact and post-impact FE simulations. Progressive failure dynamic analysis (PFDA) was performed for a two-step process simulation. Damage mechanisms at the micro-level were continuously evaluated during the analyses. The contribution of each failure mode was tracked during the simulations, and the damage and delamination footprint size and shape were predicted to understand when, where and why failure occurred during both impact and CAI events. The composite laminate was manufactured by vacuum infusion of an aero-grade toughened Benzoxazine system into the fabric preform. The delamination footprint was measured using C-scan data from the impacted panels and compared with the predicted values obtained from the proposed multi-scale micromechanics coupled with FE analysis. Furthermore, the residual strength was predicted from the load-displacement curve and compared with the experimental values as well.

  19. Multi-Algorithm Particle Simulations with Spatiocyte.

    PubMed

    Arjunan, Satya N V; Takahashi, Koichi

    2017-01-01

    As quantitative biologists get more measurements of spatially regulated systems such as cell division and polarization, simulation of reaction and diffusion of proteins using the data is becoming increasingly relevant to uncover the mechanisms underlying the systems. Spatiocyte is a lattice-based stochastic particle simulator for biochemical reaction and diffusion processes. Simulations can be performed at single-molecule and compartment spatial scales simultaneously. Molecules can diffuse and react in 1D (filament), 2D (membrane), and 3D (cytosol) compartments. The implications of crowded regions in the cell can be investigated because each diffusing molecule has spatial dimensions. Spatiocyte adopts multi-algorithm and multi-timescale frameworks to simulate models that simultaneously employ deterministic, stochastic, and particle reaction-diffusion algorithms. Comparison of light microscopy images to simulation snapshots is supported by Spatiocyte microscopy visualization and molecule tagging features. Spatiocyte is open-source software and is freely available at http://spatiocyte.org.
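
The core lattice-particle idea can be sketched in a few lines: molecules occupy lattice voxels, diffuse by hopping to random neighbor voxels, and a hop into an occupied voxel is rejected, which is how excluded volume (crowding) emerges. A simple cubic lattice with periodic boundaries is used here purely for illustration; Spatiocyte itself uses a hexagonal close-packed lattice and event-driven scheduling:

```python
import random

random.seed(1)
SIZE = 8                                         # 8x8x8 periodic lattice
occupied = set()
particles = []
while len(particles) < 20:                       # place 20 molecules in distinct voxels
    p = tuple(random.randrange(SIZE) for _ in range(3))
    if p not in occupied:
        occupied.add(p)
        particles.append(p)

MOVES = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
for _ in range(1000):                            # random sequential diffusion steps
    i = random.randrange(len(particles))
    d = random.choice(MOVES)
    q = tuple((a + b) % SIZE for a, b in zip(particles[i], d))
    if q not in occupied:                        # reject hop into a crowded voxel
        occupied.remove(particles[i])
        occupied.add(q)
        particles[i] = q
```

Because each molecule occupies a whole voxel, crowded regions naturally slow diffusion without any extra modeling.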

  20. Blade Displacement Measurement Technique Applied to a Full-Scale Rotor Test

    NASA Technical Reports Server (NTRS)

    Abrego, Anita I.; Olson, Lawrence E.; Romander, Ethan A.; Barrows, Danny A.; Burner, Alpheus W.

    2012-01-01

    Blade displacement measurements using multi-camera photogrammetry were acquired during the full-scale wind tunnel test of the UH-60A Airloads rotor, conducted in the National Full-Scale Aerodynamics Complex 40- by 80-Foot Wind Tunnel. The objectives were to measure the blade displacement and deformation of the four rotor blades as they rotated through the entire rotor azimuth. These measurements are expected to provide a unique dataset to aid in the development and validation of rotorcraft prediction techniques. They are used to resolve the blade shape and position, including pitch, flap, lag and elastic deformation. Photogrammetric data encompass advance ratios from 0.15 to slowed rotor simulations of 1.0, thrust coefficient to rotor solidity ratios from 0.01 to 0.13, and rotor shaft angles from -10.0 to 8.0 degrees. An overview of the blade displacement measurement methodology and system development, descriptions of image processing, uncertainty considerations, preliminary results covering static and moderate advance ratio test conditions and future considerations are presented. Comparisons of experimental and computational results for a moderate advance ratio forward flight condition show good trend agreements, but also indicate significant mean discrepancies in lag and elastic twist. Blade displacement pitch measurements agree well with both the wind tunnel commanded and measured values.

  1. Collaborative Multi-Scale 3d City and Infrastructure Modeling and Simulation

    NASA Astrophysics Data System (ADS)

    Breunig, M.; Borrmann, A.; Rank, E.; Hinz, S.; Kolbe, T.; Schilcher, M.; Mundani, R.-P.; Jubierre, J. R.; Flurl, M.; Thomsen, A.; Donaubauer, A.; Ji, Y.; Urban, S.; Laun, S.; Vilgertshofer, S.; Willenborg, B.; Menninghaus, M.; Steuer, H.; Wursthorn, S.; Leitloff, J.; Al-Doori, M.; Mazroobsemnani, N.

    2017-09-01

    Computer-aided collaborative and multi-scale 3D planning are challenges for complex railway and subway track infrastructure projects in the built environment. Many legal, economic, environmental, and structural requirements have to be taken into account. The stringent use of 3D models in the different phases of the planning process facilitates communication and collaboration between stakeholders such as civil engineers, geological engineers, and decision makers. This paper presents concepts, developments, and experiences gained by an interdisciplinary research group from civil engineering informatics and geo-informatics, combining the skills of both the Building Information Modeling and the 3D GIS worlds. New approaches, including the development of a collaborative platform and 3D multi-scale modelling, are proposed for collaborative planning and simulation to improve the digital 3D planning of subway tracks and other infrastructures. Experiences during this research and lessons learned are presented, as well as an outlook on future research focusing on Building Information Modeling and 3D GIS applications for cities of the future.

  2. The Development of High-speed Full-function Storm Surge Model and the Case Study of 2013 Typhoon Haiyan

    NASA Astrophysics Data System (ADS)

    Tsai, Y. L.; Wu, T. R.; Lin, C. Y.; Chuang, M. H.; Lin, C. W.

    2016-02-01

    An ideal operational storm surge model should have the following features: 1. a large computational domain that covers the complete typhoon life cycle; 2. support for both parametric and atmospheric models; 3. the capability to calculate inundation areas for risk assessment; 4. the inclusion of tides for accurate inundation simulation. A literature review shows that few operational models reach these goals with fast calculation, and most models have limited functions. In this paper, the well-developed COMCOT (COrnell Multi-grid Coupled of Tsunami Model) tsunami model is chosen as the kernel to establish a storm surge model which solves the nonlinear shallow water equations on both spherical and Cartesian coordinates directly. The complete evolution of a storm surge, including large-scale propagation and small-scale offshore run-up, can be simulated by a nested-grid scheme. The global tide model TPXO 7.2, established by Oregon State University, is coupled to provide astronomical boundary conditions. The atmospheric model WRF (Weather Research and Forecasting Model) is also coupled to provide meteorological fields. The high-efficiency thin-film method is adopted to evaluate storm surge inundation. Our in-house model has been optimized with OpenMP (Open Multi-Processing), achieving performance 10 times faster than the original version, which makes it an early-warning storm surge model. In this study, a thorough simulation of 2013 Typhoon Haiyan is performed. The detailed results will be presented at the Ocean Sciences Meeting of 2016 in terms of surge propagation and high-resolution inundation areas.
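
The shallow water solver at the model's core can be illustrated, in heavily simplified linear 1D form, by a forward-backward finite-difference update on a periodic grid (all constants are arbitrary; COMCOT itself solves the nonlinear equations on nested spherical/Cartesian grids):

```python
import numpy as np

# eta: surface elevation, u: depth-averaged velocity; constant depth h.
g, h, dx, dt, n = 9.81, 50.0, 1000.0, 5.0, 200    # CFL = sqrt(g*h)*dt/dx ~ 0.11
x = np.arange(n) * dx
eta = np.exp(-((x - n * dx / 2) / (10 * dx)) ** 2)  # Gaussian initial hump
u = np.zeros(n)
mass0 = eta.sum()                                  # initial "mass" for bookkeeping

for _ in range(500):
    # momentum:   du/dt  = -g * d(eta)/dx   (forward difference)
    u -= dt * g * (np.roll(eta, -1) - eta) / dx
    # continuity: d(eta)/dt = -h * du/dx    (backward difference, flux form)
    eta -= dt * h * (u - np.roll(u, 1)) / dx
```

Because the continuity update is in flux form on a periodic domain, total elevation is conserved to round-off, a basic sanity check for any surge/tsunami kernel.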

  3. Scientific Visualization and Simulation for Multi-dimensional Marine Environment Data

    NASA Astrophysics Data System (ADS)

    Su, T.; Liu, H.; Wang, W.; Song, Z.; Jia, Z.

    2017-12-01

    With rising attention on the ocean and the rapid development of marine observation, there are growing demands for realistic simulation and interactive visualization of the marine environment in real time. Based on technologies such as GPU rendering, CUDA parallel computing and a rapid grid-oriented strategy, this paper proposes a series of efficient, high-quality visualization methods that can deal with large-scale, multi-dimensional marine data under different environmental circumstances. Firstly, a high-quality seawater simulation is realized by an FFT algorithm, bump mapping and texture animation technology. Secondly, large-scale multi-dimensional marine hydrological environmental data is visualized by 3D interactive technologies and volume rendering techniques. Thirdly, seabed terrain data is simulated with an improved Delaunay algorithm, a surface reconstruction algorithm, a dynamic LOD algorithm and GPU programming techniques. Fourthly, seamless real-time modelling of both ocean and land on a digital globe is achieved with the WebGL technique to meet the requirements of web-based applications. The experiments suggest that these methods not only achieve a satisfying marine environment simulation effect, but also meet the rendering requirements of global multi-dimensional marine data. Additionally, a simulation system for underwater oil spills is established with the OSG 3D rendering engine. It is integrated with the marine visualization methods mentioned above, and shows movement processes, physical parameters, current velocity and direction for different types of deep-water oil spill particles (oil spill particles, hydrate particles, gas particles, etc.) dynamically and simultaneously in multiple dimensions. With such an application, valuable reference and decision-making information can be provided for understanding the progress of an oil spill in deep water, which is helpful for ocean disaster forecasting, warning and emergency response.

  4. Interface-Resolving Simulation of Collision Efficiency of Cloud Droplets

    NASA Astrophysics Data System (ADS)

    Wang, Lian-Ping; Peng, Cheng; Rosa, Bodgan; Onishi, Ryo

    2017-11-01

    Small-scale air turbulence could enhance the geometric collision rate of cloud droplets, while large-scale air turbulence could augment the diffusional growth of cloud droplets. Air turbulence could also enhance the collision efficiency of cloud droplets. Accurate simulation of collision efficiency, however, requires capture of the multi-scale droplet-turbulence and droplet-droplet interactions, which has only been partially achieved in the recent past using the hybrid direct numerical simulation (HDNS) approach, in which a Stokes disturbance flow is assumed. The HDNS approach has two major drawbacks: (1) the short-range droplet-droplet interaction is not treated rigorously; (2) the finite-Reynolds-number correction to the collision efficiency is not included. In this talk, using two independent numerical methods, we will develop an interface-resolved simulation approach in which the disturbance flows are directly resolved numerically, combined with a rigorous lubrication correction model for near-field droplet-droplet interaction. This multi-scale approach is first used to study the effect of finite flow Reynolds numbers on the droplet collision efficiency in still air. Our simulation results show a significant finite-Re effect on collision efficiency when the droplets are of similar sizes. Preliminary results on integrating this approach in a turbulent flow laden with droplets will also be presented. This work is partially supported by the National Science Foundation.

  5. Accelerating large-scale simulation of seismic wave propagation by multi-GPUs and three-dimensional domain decomposition

    NASA Astrophysics Data System (ADS)

    Okamoto, Taro; Takenaka, Hiroshi; Nakamura, Takeshi; Aoki, Takayuki

    2010-12-01

    We adopted the GPU (graphics processing unit) to accelerate the large-scale finite-difference simulation of seismic wave propagation. The simulation can benefit from the high memory bandwidth of the GPU because it is a "memory intensive" problem. In a single-GPU case we achieved a performance of about 56 GFlops, which was about 45-fold faster than that achieved by a single core of the host central processing unit (CPU). We confirmed that the optimized use of fast shared memory and registers was essential for performance. In the multi-GPU case with three-dimensional domain decomposition, the non-contiguous memory alignment of the ghost zones was found to make data transfer between the GPU and the host node quite slow. This problem was solved by using contiguous memory buffers for the ghost zones. We achieved a performance of about 2.2 TFlops by using 120 GPUs and 330 GB of total memory: nearly (or more than) 2200 cores of host CPUs would be required to achieve the same performance. The weak scaling was nearly proportional to the number of GPUs. We therefore conclude that GPU computing for large-scale simulation of seismic wave propagation is a promising approach, as a faster simulation is possible with reduced computational resources compared to CPUs.
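
The ghost-zone fix above comes down to memory layout: a face of a 3D array sliced along the slow axes is a strided, non-contiguous view, so transferring it element-by-element or slice-by-slice is slow. Packing it into one contiguous buffer turns the exchange into a single bulk copy. A NumPy sketch of the same idea (the GPU version would pack into a pinned host buffer before the device transfer):

```python
import numpy as np

# A 64^3 wavefield; C order means the last axis is the fast (contiguous) one.
field = np.arange(64 * 64 * 64, dtype=np.float32).reshape(64, 64, 64)

ghost = field[:, :, 0]                 # x-y face: a strided view, NOT contiguous
assert not ghost.flags["C_CONTIGUOUS"]

buffer = np.ascontiguousarray(ghost)   # pack into one contiguous send buffer
```

A single memcpy-style transfer of `buffer` replaces 64*64 scattered reads, which is precisely why the paper's contiguous buffers recovered the multi-GPU performance.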

  6. An Expanded Multi-scale Monte Carlo Simulation Method for Personalized Radiobiological Effect Estimation in Radiotherapy: a feasibility study

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Feng, Yuanming; Wang, Wei; Yang, Chengwen; Wang, Ping

    2017-03-01

    A novel and versatile “bottom-up” approach is developed to estimate the radiobiological effect of clinical radiotherapy. The model consists of multi-scale Monte Carlo simulations from the organ to the cell level. At the cellular level, accumulated damages are computed using a spectrum-based accumulation algorithm and a predefined cellular damage database. The damage repair mechanism is modeled by an expanded reaction-rate two-lesion kinetic model, which was calibrated by replicating a radiobiological experiment. Multi-scale modeling is then performed on a lung cancer patient under conventional fractionated irradiation. The cell-killing effects of two representative voxels (the isocenter and a peripheral voxel of the tumor) are computed and compared. At the microscopic level, the nucleus dose and damage yields vary among the nuclei within each voxel. A slightly larger percentage of cDSB yield is observed for the peripheral voxel (55.0%) compared to the isocenter one (52.5%). For the isocenter voxel, the survival fraction increases monotonically as the oxygen level is reduced. Under an extreme anoxic condition (0.001%), the survival fraction is calculated to be 80% and the hypoxia reduction factor reaches a maximum value of 2.24. In conclusion, with biology-related variations, the proposed multi-scale approach is more versatile than existing approaches for evaluating personalized radiobiological effects in radiotherapy.

  7. Scale-dependent intrinsic entropies of complex time series.

    PubMed

    Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E

    2016-04-13

    Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach to apply the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractal Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease. © 2016 The Author(s).
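
The first step of the MSE procedure described above is coarse-graining: at scale factor s, the series is replaced by averages over non-overlapping windows of length s, and the entropy measure is then recomputed on each coarse-grained series. A minimal NumPy sketch (the function name is hypothetical):

```python
import numpy as np

def coarse_grain(x, scale):
    """Average a 1D series over non-overlapping windows of length `scale`,
    dropping any trailing samples that do not fill a window."""
    n = len(x) // scale
    return x[: n * scale].reshape(n, scale).mean(axis=1)

x = np.arange(12, dtype=float)   # toy series 0..11
y = coarse_grain(x, 3)           # windows [0,1,2], [3,4,5], [6,7,8], [9,10,11]
```

The EMD-based variant in the paper replaces this fixed-window averaging with detrending on the intrinsic time scales extracted from the signal itself.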

  8. CLOSED-CYCLE TEXTILE DYEING: FULL-SCALE HYPERFILTRATION DEMONSTRATION

    EPA Science Inventory

    The report gives results of a project of joining a full-scale dynamic-membrane hyperfiltration (HF) system with an operating dye range. (HF is a membrane separation technique that has been used successfully to desalinate natural water. The dye range is a multi-purpose unit with a...

  9. From Solidification Processing to Microstructure to Mechanical Properties: A Multi-scale X-ray Study of an Al-Cu Alloy Sample

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tourret, D.; Mertens, J. C. E.; Lieberman, E.

    We follow an Al-12 at. pct Cu alloy sample from the liquid state to mechanical failure, using in situ X-ray radiography during directional solidification and tensile testing, as well as three-dimensional computed tomography of the microstructure before and after mechanical testing. The solidification processing stage is simulated with a multi-scale dendritic needle network model, and the micromechanical behavior of the solidified microstructure is simulated using voxelized tomography data and an elasto-viscoplastic fast Fourier transform model. This study demonstrates the feasibility of direct in situ monitoring of a metal alloy microstructure from the liquid processing stage up to its mechanical failure, supported by quantitative simulations of microstructure formation and its mechanical behavior.

  10. From Solidification Processing to Microstructure to Mechanical Properties: A Multi-scale X-ray Study of an Al-Cu Alloy Sample

    DOE PAGES

    Tourret, D.; Mertens, J. C. E.; Lieberman, E.; ...

    2017-09-13

    We follow an Al-12 at. pct Cu alloy sample from the liquid state to mechanical failure, using in situ X-ray radiography during directional solidification and tensile testing, as well as three-dimensional computed tomography of the microstructure before and after mechanical testing. The solidification processing stage is simulated with a multi-scale dendritic needle network model, and the micromechanical behavior of the solidified microstructure is simulated using voxelized tomography data and an elasto-viscoplastic fast Fourier transform model. This study demonstrates the feasibility of direct in situ monitoring of a metal alloy microstructure from the liquid processing stage up to its mechanical failure, supported by quantitative simulations of microstructure formation and its mechanical behavior.

  11. From Solidification Processing to Microstructure to Mechanical Properties: A Multi-scale X-ray Study of an Al-Cu Alloy Sample

    NASA Astrophysics Data System (ADS)

    Tourret, D.; Mertens, J. C. E.; Lieberman, E.; Imhoff, S. D.; Gibbs, J. W.; Henderson, K.; Fezzaa, K.; Deriy, A. L.; Sun, T.; Lebensohn, R. A.; Patterson, B. M.; Clarke, A. J.

    2017-11-01

    We follow an Al-12 at. pct Cu alloy sample from the liquid state to mechanical failure, using in situ X-ray radiography during directional solidification and tensile testing, as well as three-dimensional computed tomography of the microstructure before and after mechanical testing. The solidification processing stage is simulated with a multi-scale dendritic needle network model, and the micromechanical behavior of the solidified microstructure is simulated using voxelized tomography data and an elasto-viscoplastic fast Fourier transform model. This study demonstrates the feasibility of direct in situ monitoring of a metal alloy microstructure from the liquid processing stage up to its mechanical failure, supported by quantitative simulations of microstructure formation and its mechanical behavior.

  12. Application of the Geophysical Scale Multi-Block Transport Modeling System to Hydrodynamic Forcing of Dredged Material Placement Sediment Transport within the James River Estuary

    NASA Astrophysics Data System (ADS)

    Kim, S. C.; Hayter, E. J.; Pruhs, R.; Luong, P.; Lackey, T. C.

    2016-12-01

    The hydrodynamics and transport of the James River estuary are influenced by the geophysical-scale circulation of the Mid-Atlantic Bight and by hydrologic inputs from the adjacent Chesapeake Bay watersheds and tributaries. Both barotropic and baroclinic transport govern the hydrodynamics of this partially stratified estuary. Modeling the placement of dredged sediment requires accommodating this wide spectrum of atmospheric and hydrodynamic scales. The Geophysical Scale Multi-Block (GSMB) Transport Modeling System is a collection of multiple well-established and USACE-approved process models. Taking advantage of the parallel computing capability of multi-block modeling, we performed a one-year, three-dimensional simulation of hydrodynamics to support modeling of dredged sediment placement, transport, and morphology changes. Model forcing includes spatially and temporally varying meteorological conditions and hydrological inputs from the watershed. Surface heat flux estimates were derived from the National Solar Radiation Database (NSRDB). The open-water boundary condition for water level was obtained from an ADCIRC model application for the U.S. East Coast. Temperature and salinity boundary conditions were obtained from the Environmental Protection Agency (EPA) Chesapeake Bay Program (CBP) long-term monitoring station database. Simulated water levels were calibrated and verified by comparison with National Oceanic and Atmospheric Administration (NOAA) tide gage records. A harmonic analysis of the modeled tides was performed and compared with NOAA tide prediction data. In addition, project-specific circulation was verified using U.S. Army Corps of Engineers (USACE) drogue data. Salinity and temperature transport was verified at seven CBP long-term monitoring stations along the navigation channel.
Simulation and analysis of model results suggest that GSMB is capable of resolving the long duration, multi-scale processes inherent to practical engineering problems such as dredged material placement stability.
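    The harmonic analysis step mentioned above reduces to linear least squares once the constituent frequencies are fixed. A minimal, hypothetical single-constituent example (the M2 tide only; real analyses fit many constituents simultaneously):

```python
import numpy as np

# Hypothetical example: recover the M2 tidal constituent (12.42 h period)
# from a noisy hourly water-level record by linear least squares.
M2_PERIOD_H = 12.42
t = np.arange(0.0, 30 * 24)                    # 30 days of hourly samples
omega = 2.0 * np.pi / M2_PERIOD_H
true_amp, true_phase = 0.8, 1.1                # metres, radians (synthetic)
rng = np.random.default_rng(42)
eta = true_amp * np.cos(omega * t - true_phase) + 0.05 * rng.standard_normal(t.size)

# eta = A*cos(wt) + B*sin(wt) + mean, so amplitude/phase follow from (A, B)
design = np.column_stack([np.cos(omega * t), np.sin(omega * t), np.ones_like(t)])
coef, *_ = np.linalg.lstsq(design, eta, rcond=None)
amp, phase = np.hypot(coef[0], coef[1]), np.arctan2(coef[1], coef[0])
print(f"amplitude ~ {amp:.2f} m, phase ~ {phase:.2f} rad")
```

    Comparing fitted amplitudes and phases against NOAA tide predictions, constituent by constituent, is the standard way such model calibrations are reported.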

  13. INTERDEPENDENCIES OF MULTI-POLLUTANT CONTROL SIMULATIONS IN AN AIR QUALITY MODEL

    EPA Science Inventory

    In this work, we use the Community Multi-Scale Air Quality (CMAQ) modeling system to examine the effect of several control strategies on simultaneous concentrations of ozone, PM2.5, and three important HAPs: formaldehyde, acetaldehyde and benzene.

  14. Analysis of the Empathic Concern Subscale of the Emotional Response Questionnaire in a Study Evaluating the Impact of a 3D Cultural Simulation.

    PubMed

    Everson, Naleya; Levett-Jones, Tracy; Pitt, Victoria; Lapkin, Samuel; Van Der Riet, Pamela; Rossiter, Rachel; Jones, Donovan; Gilligan, Conor; Courtney Pratt, Helen

    2018-04-25

    Background: Empathic concern has been found to decline in health professional students, yet few effective educational programs exist and validated scales are lacking. Previous analyses of the Empathic Concern subscale of the Emotional Response Questionnaire have reported both one and two latent constructs. Aim: To evaluate the impact of simulation on nursing students' empathic concern and to test the psychometric properties of the Empathic Concern scale. Methods: The study used a one-group pre-test/post-test design with a convenience sample of 460 nursing students. Empathic concern was measured pre- and post-simulation with the Empathic Concern scale, and factor analysis was undertaken to investigate the structure of the scale. Results: There was a statistically significant increase in Empathic Concern scores between pre-simulation (5.57, SD = 1.04) and post-simulation (6.10, SD = 0.95). Factor analysis of the Empathic Concern scale identified one latent dimension. Conclusion: Immersive simulation may promote empathic concern, and the Empathic Concern scale measured a single latent construct in this cohort.

  15. A History of Full-Scale Aircraft and Rotorcraft Crash Testing and Simulation at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Boitnott, Richard L.; Fasanella, Edwin L.; Jones, Lisa E.; Lyle, Karen H.

    2004-01-01

    This paper summarizes 2-1/2 decades of full-scale aircraft and rotorcraft crash testing performed at the Impact Dynamics Research Facility (IDRF) located at NASA Langley Research Center in Hampton, Virginia. The IDRF is a 240-ft.-high steel gantry that was built originally as a lunar landing simulator facility in the early 1960s. It was converted into a full-scale crash test facility for light aircraft and rotorcraft in the early 1970s. Since the first full-scale crash test was performed in February 1974, the IDRF has been used to conduct: 41 full-scale crash tests of General Aviation (GA) aircraft, including landmark studies to establish baseline crash performance data for metallic and composite GA aircraft; 11 full-scale crash tests of helicopters, including crash qualification tests of the Bell and Sikorsky Advanced Composite Airframe Program (ACAP) prototypes; 48 Wire Strike Protection System (WSPS) qualification tests of Army helicopters; 3 vertical drop tests of Boeing 707 transport aircraft fuselage sections; and 60+ crash tests of the F-111 crew escape module. For some of these tests, nonlinear transient dynamic codes were utilized to simulate the impact response of the airframe. These simulations were performed to evaluate the capabilities of the analytical tools, as well as to validate the models through test-analysis correlation. In September 2003, NASA Langley closed the IDRF facility, and plans are underway to demolish it in 2007. Consequently, it is important to document the contributions made to improve the crashworthiness of light aircraft and rotorcraft achieved through full-scale crash testing and simulation at the IDRF.

  16. Towards European-scale convection-resolving climate simulations with GPUs: a study with COSMO 4.19

    NASA Astrophysics Data System (ADS)

    Leutwyler, David; Fuhrer, Oliver; Lapillonne, Xavier; Lüthi, Daniel; Schär, Christoph

    2016-09-01

    The representation of moist convection in climate models represents a major challenge, due to the small scales involved. Using horizontal grid spacings of O(1 km), convection-resolving weather and climate models can explicitly resolve deep convection. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. Innovations in supercomputing have led to new hybrid node designs, mixing conventional multi-core hardware and accelerators such as graphics processing units (GPUs). One of the first atmospheric models that has been fully ported to these architectures is the COSMO (Consortium for Small-scale Modeling) model. Here we present the convection-resolving COSMO model on continental scales, using a version of the model capable of using GPU accelerators. The verification of a week-long simulation containing winter storm Kyrill shows that, for this case, convection-parameterizing simulations and convection-resolving simulations agree well. Furthermore, we demonstrate the applicability of the approach to longer simulations by conducting a 3-month-long simulation of the summer season of 2006. Its results corroborate findings obtained on smaller domains, such as the more credible representation of the diurnal cycle of precipitation in convection-resolving models and a tendency to produce more intense hourly precipitation events. Both simulations also show how the approach allows for the representation of interactions between synoptic-scale and meso-scale atmospheric circulations at scales ranging from 1000 to 10 km. This includes the formation of sharp cold-frontal structures, convection embedded in fronts and small eddies, and the formation and organization of propagating cold pools. Finally, we assess the performance gain from using heterogeneous hardware equipped with GPUs relative to multi-core hardware.
With the COSMO model, we now use a weather and climate model that has all the necessary modules required for real-case convection-resolving regional climate simulations on GPUs.

  17. High fidelity, low cost moulage as a valid simulation tool to improve burns education.

    PubMed

    Pywell, M J; Evgeniou, E; Highway, K; Pitt, E; Estela, C M

    2016-06-01

    Simulation allows the opportunity for repeated practice in controlled, safe conditions. Moulage uses materials such as makeup to simulate clinical presentations. Moulage fidelity can be assessed by face validity (realism) and content validity (appropriateness). The aim of this project is to compare the fidelity of professional moulage to non-professional moulage in the context of a burns management course. Four actors were randomly assigned to a professional make-up artist or a course faculty member for moulage preparation, such that two actors were in each group. Participants completed the actor-based burn management scenarios and answered a ten-question Likert-scale questionnaire on face and content validity. Mean scores and a linear mixed-effects model were used to compare professional and non-professional moulage, and Cronbach's alpha assessed internal consistency. Twenty participants experienced three out of four scenarios and, at the end of the course, had completed a total of 60 questionnaires. Professional moulage had higher average ratings for face (4.30 vs. 3.80; p=0.11) and content (4.30 vs. 4.00; p=0.06) validity. Internal consistency of the face (α=0.91) and content (α=0.85) validity questions was very good. The fidelity of professionally prepared moulage, as assessed by content validity, was higher than that of non-professionally prepared moulage. We have shown that, using professional techniques and low-cost materials, high-quality, high-fidelity moulage simulations can be prepared. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
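    The internal-consistency statistic used in studies like this, Cronbach's alpha, is straightforward to compute from a respondents-by-items score matrix. A generic sketch with synthetic data (not the study's):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x k_items) matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Synthetic 10-item questionnaire whose items share a common factor,
# so internal consistency should come out high
rng = np.random.default_rng(7)
trait = rng.normal(size=(60, 1))
answers = trait + 0.4 * rng.normal(size=(60, 10))
print(round(cronbach_alpha(answers), 2))
```

    Values around 0.85 to 0.91, as reported in the abstract, are conventionally read as very good internal consistency.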

  18. 49 CFR Appendix A to Part 239 - Schedule of Civil Penalties 1

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... emergency responders to participate in emergency simulations 3,000 6,000 (iii) Distribution of applicable... awareness information 3,500 7,000 239.103Failure to conduct a required full-scale simulation in accordance... debriefing and critique session after an emergency or full-scale simulation 4,000 7,500 (d)(1) Failure to...

  19. 49 CFR Appendix A to Part 239 - Schedule of Civil Penalties 1

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... emergency responders to participate in emergency simulations 3,000 6,000 (iii) Distribution of applicable... awareness information 3,500 7,000 239.103Failure to conduct a required full-scale simulation in accordance... debriefing and critique session after an emergency or full-scale simulation 4,000 7,500 (d)(1) Failure to...

  20. 49 CFR Appendix A to Part 239 - Schedule of Civil Penalties 1

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... emergency responders to participate in emergency simulations 3,000 6,000 (iii) Distribution of applicable... passengers with disabilities 2,500 5,000 239.103Failure to conduct a required full-scale simulation in... debriefing and critique session after an emergency or full-scale simulation 4,000 7,500 (c) Failure to design...

  1. 49 CFR Appendix A to Part 239 - Schedule of Civil Penalties 1

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... emergency responders to participate in emergency simulations 3,000 6,000 (iii) Distribution of applicable... awareness information 3,500 7,000 239.103Failure to conduct a required full-scale simulation in accordance... debriefing and critique session after an emergency or full-scale simulation 4,000 7,500 (d)(1) Failure to...

  2. 49 CFR Appendix A to Part 239 - Schedule of Civil Penalties 1

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... emergency responders to participate in emergency simulations 3,000 6,000 (iii) Distribution of applicable... awareness information 3,500 7,000 239.103Failure to conduct a required full-scale simulation in accordance... debriefing and critique session after an emergency or full-scale simulation 4,000 7,500 (d)(1) Failure to...

  3. Advanced Computation in Plasma Physics

    NASA Astrophysics Data System (ADS)

    Tang, William

    2001-10-01

    Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. This talk will review recent progress and future directions for advanced simulations in magnetically-confined plasmas, with illustrative examples chosen from areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales, together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPPs). A good example is the effective usage of the full power of multi-teraflop MPPs to produce 3-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for tens of thousands of time-steps, would not have been possible without access to powerful present-generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments.
The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.

  4. Fully non-linear multi-species Fokker-Planck-Landau collisions for gyrokinetic particle-in-cell simulations of fusion plasma

    NASA Astrophysics Data System (ADS)

    Hager, Robert; Yoon, E. S.; Ku, S.; D'Azevedo, E. F.; Worley, P. H.; Chang, C. S.

    2015-11-01

    We describe the implementation and application of a time-dependent, fully nonlinear multi-species Fokker-Planck-Landau collision operator based on the single-species work of Yoon and Chang [Phys. Plasmas 21, 032503 (2014)] in the full-function gyrokinetic particle-in-cell codes XGC1 [Ku et al., Nucl. Fusion 49, 115021 (2009)] and XGCa. XGC simulations include the pedestal and scrape-off layer, where significant deviations of the particle distribution function from a Maxwellian can occur. Thus, in order to describe collisional effects on neoclassical and turbulence physics accurately, the use of a non-linear collision operator is a necessity. Our collision operator is based on a finite volume method using the velocity-space distribution functions sampled from the marker particles. Since the same fine configuration-space mesh is used for collisions and the Poisson solver, the workload due to collisions can be comparable to or larger than the workload due to particle motion. We demonstrate that computing time spent on collisions can be kept affordable by applying advanced parallelization strategies while conserving mass, momentum, and energy to reasonable accuracy. We also show results of production-scale XGCa simulations in the H-mode pedestal and compare them to conventional theory. Work supported by US DOE OFES and OASCR.

  5. Mathematical modeling of the crack growth in linear elastic isotropic materials by conventional fracture mechanics approaches and by molecular dynamics method: crack propagation direction angle under mixed mode loading

    NASA Astrophysics Data System (ADS)

    Stepanova, Larisa; Bronnikov, Sergej

    2018-03-01

    The crack growth direction angles in an isotropic linear elastic plane with a central crack under mixed-mode loading conditions are found for the full range of the mixity parameter. Two fracture criteria of traditional linear fracture mechanics (the maximum tangential stress and minimum strain energy density criteria) are used. Atomistic simulations of the central crack growth process in an infinite plane medium under mixed-mode loading are performed using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), a classical molecular dynamics code. The inter-atomic potential used in this investigation is an Embedded Atom Method (EAM) potential. Plane specimens with an initial central crack were subjected to mixed-mode loadings; the simulation cell contains 400,000 atoms. The crack propagation direction angles for different values of the mixity parameter, from pure tensile loading to pure shear loading, are obtained and analyzed over a wide range of temperatures (from 0.1 K to 800 K). It is shown that the crack propagation direction angles obtained by the molecular dynamics method coincide with those given by the multi-parameter fracture criteria based on the strain energy density and the multi-parameter description of the crack-tip fields.
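    The maximum tangential stress (MTS) criterion referenced above has a closed-form kink angle: the root of K_I sin θ + K_II (3 cos θ − 1) = 0 in terms of the mode I/II stress intensity factors. A minimal sketch of that closed form (generic fracture-mechanics material, not the paper's multi-parameter extension):

```python
import math

def mts_kink_angle(k1, k2):
    """Kink angle (rad) from the maximum tangential stress criterion,
    i.e. the root of K_I*sin(t) + K_II*(3*cos(t) - 1) = 0, via the
    closed form tan(t/2) = (K_I - sqrt(K_I^2 + 8*K_II^2)) / (4*K_II)."""
    if k2 == 0.0:
        return 0.0  # pure mode I: straight-ahead growth
    half_tan = (k1 - math.sqrt(k1 * k1 + 8.0 * k2 * k2)) / (4.0 * k2)
    return 2.0 * math.atan(half_tan)

# Pure mode II reproduces the classical result of about -70.5 degrees
print(round(math.degrees(mts_kink_angle(0.0, 1.0)), 1))  # -> -70.5
```

    Sweeping K_II/K_I from 0 to infinity traces the angle-versus-mixity curve that atomistic results are typically compared against.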

  6. Simulating and mapping spatial complexity using multi-scale techniques

    USGS Publications Warehouse

    De Cola, L.

    1994-01-01

    A central problem in spatial analysis is the mapping of data for complex spatial fields using relatively simple data structures, such as those of a conventional GIS. This complexity can be measured using such indices as multi-scale variance, which reflects spatial autocorrelation, and multi-fractal dimension, which characterizes the values of fields. These indices are computed for three spatial processes: Gaussian noise, a simple mathematical function, and data for a random walk. Fractal analysis is then used to produce a vegetation map of the central region of California based on a satellite image. This analysis suggests that real world data lie on a continuum between the simple and the random, and that a major GIS challenge is the scientific representation and understanding of rapidly changing multi-scale fields. -Author
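    The fractal-dimension index mentioned above is commonly estimated by box counting; a minimal sketch for a binary 2-D field (a generic estimator, not necessarily the author's exact multi-fractal method):

```python
import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Box-counting dimension of a binary 2-D field: the slope of
    log N(s) versus log(1/s), where N(s) counts occupied s-by-s boxes."""
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in sizes:
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity checks: a filled plane is ~2-D, a straight line ~1-D
plane = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
print(round(box_count_dimension(plane), 2), round(box_count_dimension(line), 2))
```

    Real-world fields, as the abstract notes, fall between such simple limits and the fully random case, with non-integer dimensions in between.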

  7. Mathematical modeling and full-scale shaking table tests for multi-curve buckling restrained braces

    NASA Astrophysics Data System (ADS)

    Tsai, C. S.; Lin, Yungchang; Chen, Wenshin; Su, H. C.

    2009-09-01

    Buckling restrained braces (BRBs) have been widely applied in seismic mitigation since they were introduced in the 1970s. However, traditional BRBs have several disadvantages caused by using a steel tube to envelope the mortar to prevent the core plate from buckling, such as: complex interfaces between the materials used, uncertain precision, and time consumption during the manufacturing processes. In this study, a new device called the multi-curve buckling restrained brace (MC-BRB) is proposed to overcome these disadvantages. The new device consists of a core plate with multiple neck portions assembled to form multiple energy dissipation segments, and the enlarged segment, lateral support elements and constraining elements to prevent the BRB from buckling. The enlarged segment located in the middle of the core plate can be welded to the lateral support and constraining elements to increase buckling resistance and to prevent them from sliding during earthquakes. Component tests and a series of shaking table tests on a full-scale steel structure equipped with MC-BRBs were carried out to investigate the behavior and capability of this new BRB design for seismic mitigation. The experimental results illustrate that the MC-BRB possesses a stable mechanical behavior under cyclic loadings and provides good protection to structures during earthquakes. Also, a mathematical model has been developed to simulate the mechanical characteristics of BRBs.

  8. Testing of the Multi-Fluid Evaporator Engineering Development Unit

    NASA Technical Reports Server (NTRS)

    Quinn, Gregory; O'Connor, Ed; Riga, Ken; Anderson, Molly; Westheimer, David

    2007-01-01

    Hamilton Sundstrand is under contract with the NASA Johnson Space Center to develop a scalable, evaporative heat rejection system called the Multi-Fluid Evaporator (MFE). It is being designed to support the Orion Crew Module and to support future Constellation missions. The MFE would be used from Earth sea level conditions to the vacuum of space. The current Shuttle configuration utilizes an ammonia boiler and flash evaporator system to achieve cooling at all altitudes. The MFE system combines both functions into a single compact package with significant weight reduction and improved freeze-up protection. The heat exchanger core is designed so that radial flow of the evaporant provides increasing surface area to keep the back pressure low. The multiple layer construction of the core allows for efficient scale up to the desired heat rejection rate. The full scale MFE prototype will be constructed with four core sections that, combined with a novel control scheme, manage the risk of freezing the heat exchanger cores. A sub-scale MFE engineering development unit (EDU) has been built, and is identical to one of the four sections of a full scale prototype. The EDU has completed testing at Hamilton Sundstrand. The overall test objective was to determine the thermal performance of the EDU. The first set of tests simulated how each of the four sections of the prototype would perform by varying the chamber pressure, evaporant flow rate, coolant flow rate and coolant temperature. A second set of tests was conducted with an outlet steam header in place to verify that the outlet steam orifices prevent freeze-up in the core while also allowing the desired thermal turn-down ratio. This paper discusses the EDU tests and results.

  9. Fully kinetic 3D simulations of the Hermean magnetosphere under realistic conditions: a new approach

    NASA Astrophysics Data System (ADS)

    Amaya, Jorge; Gonzalez-Herrero, Diego; Lembège, Bertrand; Lapenta, Giovanni

    2017-04-01

    Simulations of the magnetosphere of planets are usually performed using the MHD and the hybrid approaches. However, these two methods still rely on approximations for the computation of the pressure tensor, and require the neutrality of the plasma at every point of the domain by construction. These approximations undermine the role of electrons in the emergence of plasma features in the magnetosphere of planets. The high mobility of electrons, their characteristic time and space scales, and the lack of perfect neutrality are the source of many observed phenomena in magnetospheres, including the turbulence energy cascade, magnetic reconnection, particle acceleration in the shock front, and the formation of current systems around the magnetosphere. Fully kinetic codes are extremely demanding of computing time, and have been unable to perform simulations of the full magnetosphere at the real scales of a planet with realistic plasma conditions. There are two main reasons for this: 1) explicit codes must resolve the electron scales, limiting the time and space discretisation, and 2) current versions of semi-implicit codes are unstable for cell sizes larger than a few Debye lengths. In this work we present new simulations performed with ECsim, an Energy Conserving semi-implicit method [1], that can overcome these two barriers. We compare the solutions obtained with ECsim with those obtained by the classic semi-implicit code iPic3D [2]. The new simulations with ECsim demand a larger computational effort, but the time and space discretisations are larger than those in iPic3D, allowing for a faster simulation of the full planetary environment. The new code, ECsim, can reach a resolution that captures significant large-scale physics without losing kinetic electron information, such as wave-electron interaction and non-Maxwellian electron velocity distributions [3].
The code is able to better capture the thickness of the different boundary layers of the magnetosphere of Mercury. Electron kinetics are consistent with the spatial and temporal scale resolutions. Simulations are compared with measurements from the MESSENGER spacecraft showing a better fit when compared against the classic fully kinetic code iPic3D. These results show that the new generation of Energy Conserving semi-implicit codes can be used for an accurate analysis and interpretation of particle data from magnetospheric missions like BepiColombo and MMS, including electron velocity distributions and electron temperature anisotropies. [1] Lapenta, G. (2016). Exactly Energy Conserving Implicit Moment Particle in Cell Formulation. arXiv preprint arXiv:1602.06326. [2] Markidis, S., & Lapenta, G. (2010). Multi-scale simulations of plasma with iPIC3D. Mathematics and Computers in Simulation, 80(7), 1509-1519. [3] Lapenta, G., Gonzalez-Herrero, D., & Boella, E. (2016). Multiple scale kinetic simulations with the energy conserving semi implicit particle in cell (PIC) method. arXiv preprint arXiv:1612.08289.

  10. Multiscale Modeling in the Clinic: Drug Design and Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clancy, Colleen E.; An, Gary; Cannon, William R.

    A wide range of length and time scales are relevant to pharmacology, especially in drug development, drug design and drug delivery. Therefore, multi-scale computational modeling and simulation methods and paradigms that advance the linkage of phenomena occurring at these multiple scales have become increasingly important. Multi-scale approaches present in silico opportunities to advance laboratory research to bedside clinical applications in pharmaceuticals research. This is achievable through the capability of modeling to reveal phenomena occurring across multiple spatial and temporal scales, which are not otherwise readily accessible to experimentation. The resultant models, when validated, are capable of making testable predictions to guide drug design and delivery. In this review we describe the goals, methods, and opportunities of multi-scale modeling in drug design and development. We demonstrate the impact of multiple scales of modeling in this field. We indicate the common mathematical techniques employed for multi-scale modeling approaches used in pharmacology and present several examples illustrating the current state-of-the-art regarding drug development for: Excitable Systems (Heart); Cancer (Metastasis and Differentiation); Cancer (Angiogenesis and Drug Targeting); Metabolic Disorders; and Inflammation and Sepsis. We conclude with a focus on barriers to successful clinical translation of drug development, drug design and drug delivery multi-scale models.

  11. Full PIC simulations of solar radio emission

    NASA Astrophysics Data System (ADS)

    Sgattoni, A.; Henri, P.; Briand, C.; Amiranoff, F.; Riconda, C.

    2017-12-01

    Solar radio emissions are electromagnetic (EM) waves emitted in the solar wind plasma as a consequence of electron beams accelerated during solar flares or interplanetary shocks such as ICMEs. To describe their origin, a multi-stage model was proposed in the 1960s which considers a succession of non-linear three-wave interaction processes. A good understanding of the process would allow one to infer the kinetic energy transferred from the electron beam to the EM waves, so that the radio waves recorded by spacecraft can be used as a diagnostic for the electron beam. Although the electrostatic problem has been extensively studied, full electromagnetic simulations were attempted only recently. Our large-scale 2D-3V electromagnetic PIC simulations allow us to identify the generation of both electrostatic and EM waves originating from the succession of plasma instabilities. We tested several configurations, varying the electron beam density and velocity while considering a background plasma of uniform density. For all the tested configurations, approximately 10⁻⁵ of the electron-beam kinetic energy is transferred into EM waves, emitted nearly isotropically in all directions. With this work we aim to design laboratory astrophysics experiments to reproduce the electromagnetic emission process and test its efficiency.

  12. Wind turbine rotor blade monitoring using digital image correlation: a comparison to aeroelastic simulations of a multi-megawatt wind turbine

    NASA Astrophysics Data System (ADS)

    Winstroth, J.; Schoen, L.; Ernst, B.; Seume, J. R.

    2014-06-01

    Optical full-field measurement methods such as Digital Image Correlation (DIC) provide a new opportunity for measuring deformations and vibrations with high spatial and temporal resolution. However, application to full-scale wind turbines is not trivial: elaborate preparation of the experiment is vital, and sophisticated post-processing of the DIC results is essential. In the present study, a rotor blade of a 3.2 MW wind turbine is equipped with a random black-and-white dot pattern at four different radial positions. Two cameras are located in front of the wind turbine, and the response of the rotor blade is monitored using DIC for different turbine operations. In addition, a Light Detection and Ranging (LiDAR) system is used to measure the wind conditions. Wind fields are created based on the LiDAR measurements and used to perform aeroelastic simulations of the wind turbine by means of advanced multibody codes. The results from the optical DIC system appear plausible when checked against common and expected results. In addition, the comparison of relative out-of-plane blade deflections shows good agreement between DIC results and aeroelastic simulations.
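    The core operation in DIC is locating each reference-image patch of the dot pattern in the deformed image by maximizing a correlation score. Below is a minimal sketch of integer-pixel patch tracking with zero-normalized cross-correlation (ZNCC); it is illustrative only, since a full-scale DIC pipeline such as the one in this study adds subpixel interpolation, camera calibration, and stereo triangulation:

```python
import numpy as np

def zncc(a, b):
    # zero-normalized cross-correlation of two equally sized patches
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))

def track_patch(ref, cur, top, left, size, search=5):
    # exhaustive search for the integer-pixel displacement maximizing ZNCC
    patch = ref[top:top + size, left:left + size]
    best = (-2.0, 0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = cur[top + dy:top + dy + size, left + dx:left + dx + size]
            score = zncc(patch, cand)
            if score > best[0]:
                best = (score, dy, dx)
    return best

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                     # synthetic speckle pattern
cur = np.roll(ref, (2, -3), axis=(0, 1))       # known shift: dy=2, dx=-3
score, dy, dx = track_patch(ref, cur, 20, 20, 16)
```

On this synthetic image the known shift is recovered exactly; real imagery would require refining the correlation peak to subpixel accuracy.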

  13. Simplified energy-balance model for pragmatic multi-dimensional device simulation

    NASA Astrophysics Data System (ADS)

    Chang, Duckhyun; Fossum, Jerry G.

    1997-11-01

    To pragmatically account for non-local carrier heating and hot-carrier effects such as velocity overshoot and impact ionization in multi-dimensional numerical device simulation, a new simplified energy-balance (SEB) model is developed and implemented in FLOODS [16] as a pragmatic option. In the SEB model, the energy-relaxation length is estimated from a pre-process drift-diffusion simulation using the carrier-velocity distribution predicted throughout the device domain, and is used without change in a subsequent simpler hydrodynamic (SHD) simulation. The new SEB model was verified by comparison of two-dimensional SHD and full HD DC simulations of a submicron MOSFET. The SHD simulations yield detailed distributions of carrier temperature, carrier velocity, and impact-ionization rate, which agree well with the full HD simulation results obtained with FLOODS. The most noteworthy feature of the new SEB/SHD model is its computational efficiency, which results from reduced Newton iteration counts caused by the enhanced linearity. Relative to full HD, SHD simulation times can be shorter by as much as an order of magnitude since larger voltage steps for DC sweeps and larger time steps for transient simulations can be used. The improved computational efficiency can enable pragmatic three-dimensional SHD device simulation as well, for which the SEB implementation would be straightforward as it is in FLOODS or any robust HD simulator.

  14. Comparison of Hyperspectral and Multispectral Satellites for Discriminating Land Cover in Northern California

    NASA Astrophysics Data System (ADS)

    Clark, M. L.; Kilham, N. E.

    2015-12-01

    Land-cover maps are important science products needed for natural resource and ecosystem service management, biodiversity conservation planning, and assessing human-induced and natural drivers of land change. Most land-cover maps at regional to global scales are produced with remote sensing techniques applied to multispectral satellite imagery with 30-500 m pixel sizes (e.g., Landsat, MODIS). Hyperspectral, or imaging spectrometer, imagery measuring the visible to shortwave infrared (VSWIR) regions of the spectrum has shown impressive capacity to map plant species and coarser land-cover associations, yet these techniques have not been widely tested at regional and greater spatial scales. The Hyperspectral Infrared Imager (HyspIRI) mission is a VSWIR hyperspectral and thermal satellite being considered for development by NASA. The goal of this study was to assess multi-temporal, HyspIRI-like satellite imagery for improved land-cover mapping relative to multispectral satellites. We mapped FAO Land Cover Classification System (LCCS) classes over 22,500 km² in the San Francisco Bay Area, California using 30-m HyspIRI, Landsat 8 and Sentinel-2 imagery simulated from data acquired by NASA's AVIRIS airborne sensor. Random Forests (RF) and Multiple-Endmember Spectral Mixture Analysis (MESMA) classifiers were applied to the simulated images, and accuracies were compared to those from real Landsat 8 images. The RF classifier was superior to MESMA, and multi-temporal data yielded higher accuracy than summer-only data. With RF, hyperspectral data had overall accuracies of 72.2% and 85.1% with the full 20-class and reduced 12-class schemes, respectively. Multispectral imagery had lower accuracy. For example, simulated and real Landsat data had 7.5% and 4.6% lower accuracy than HyspIRI data with 12 classes, respectively. 
In summary, our results indicate increased mapping accuracy using HyspIRI multi-temporal imagery, particularly in discriminating different natural vegetation types, such as spectrally-mixed woodlands and forests.
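    The overall-accuracy figures quoted above are standard summaries of a confusion matrix built by cross-tabulating reference labels against mapped classes. A minimal sketch with synthetic labels (the RF and MESMA classifiers themselves are not reproduced here):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    # rows: reference class, columns: mapped class
    cm = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(cm, (y_true, y_pred), 1)
    return cm

def overall_accuracy(cm):
    # fraction of pixels on the diagonal (correctly mapped)
    return cm.trace() / cm.sum()

# hypothetical reference and mapped labels for 8 pixels, 3 classes
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])
y_pred = np.array([0, 1, 1, 1, 2, 2, 0, 1])
cm = confusion_matrix(y_true, y_pred, 3)
acc = overall_accuracy(cm)
```

Per-class producer's and user's accuracies follow from the same matrix by normalizing rows and columns, respectively.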

  16. Dynamic evaluation of two decades of WRF-CMAQ ozone simulations over the contiguous United States

    EPA Science Inventory

    Dynamic evaluation of the fully coupled Weather Research and Forecasting (WRF)– Community Multi-scale Air Quality (CMAQ) model ozone simulations over the contiguous United States (CONUS) using two decades of simulations covering the period from 1990 to 2010 is conducted to assess...

  17. Mercury and methylmercury stream concentrations in a Coastal Plain watershed: A multi-scale simulation analysis

    EPA Science Inventory

    Mercury is a ubiquitous global environmental toxicant responsible for most US fish advisories. Processes governing mercury concentrations in rivers and streams are not well understood, particularly at multiple spatial scales. We investigate how insights gained from reach-scale me...

  18. A new heterogeneous asynchronous explicit-implicit time integrator for nonsmooth dynamics

    NASA Astrophysics Data System (ADS)

    Fekak, Fatima-Ezzahra; Brun, Michael; Gravouil, Anthony; Depale, Bruno

    2017-07-01

    In computational structural dynamics, particularly in the presence of nonsmooth behavior, the choice of the time step and the time integrator has a critical impact on the feasibility of the simulation. Furthermore, in some cases, as for a bridge crane under seismic loading, multiple time scales coexist in the same problem, and the use of multi-time-scale methods is appropriate. Here, we propose a new explicit-implicit heterogeneous asynchronous time integrator (HATI) for nonsmooth transient dynamics with frictionless unilateral contacts and impacts. Furthermore, we present a new explicit time integrator for contact/impact problems in which the contact constraints are enforced using a Lagrange multiplier method. In other words, the aim of this paper is to use an explicit time integrator with a fine time scale in the contact area to reproduce high-frequency phenomena, while an implicit time integrator is adopted in the other parts to reproduce lower-frequency phenomena and to optimize CPU time. As a first step, the explicit time integrator is tested on a one-dimensional example and compared to Moreau-Jean's event-capturing schemes. The explicit algorithm is found to be very accurate: it generally has a higher order of convergence than Moreau-Jean's schemes and also exhibits excellent energy behavior. Then, the two-time-scale explicit-implicit HATI is applied to the numerical example of a bridge crane under seismic loading. The results are validated against a fine-scale, fully explicit computation. The energy dissipated at the implicit-explicit interface is well controlled, and the computational time is lower than that of a fully explicit simulation.
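    As an illustration of velocity-level enforcement of a unilateral contact constraint inside an explicit scheme, the following deliberately simple sketch integrates a point mass falling onto a rigid floor, with the contact reaction (the discrete analogue of a Lagrange multiplier) cancelling the normal approach velocity. The paper's HATI scheme, which couples a fine-scale explicit subdomain to a coarse-scale implicit one, is of course far richer:

```python
def drop_on_floor(dt=1e-3, t_end=2.0):
    # Point mass dropped from q = 1 m onto a rigid floor at q = 0.
    # Explicit velocity/position updates; when the trial position would
    # penetrate, a contact impulse cancels the approach velocity
    # (inelastic impact, restitution e = 0).
    g, q, v = 9.81, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        v -= dt * g                 # free-flight velocity update
        q_trial = q + dt * v
        if q_trial < 0.0:           # unilateral constraint q >= 0 active
            v = 0.0                 # normal approach velocity removed
            q_trial = 0.0
        q = q_trial
    return q, v

q_end, v_end = drop_on_floor()      # mass ends at rest on the floor
```

Enforcing the constraint at the velocity level, rather than by a stiff penalty force, is what keeps the energy behavior controlled in schemes of this family.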

  19. Multi-Subband Ensemble Monte Carlo simulations of scaled GAA MOSFETs

    NASA Astrophysics Data System (ADS)

    Donetti, L.; Sampedro, C.; Ruiz, F. G.; Godoy, A.; Gamiz, F.

    2018-05-01

    We developed a Multi-Subband Ensemble Monte Carlo simulator for non-planar devices, taking into account two-dimensional quantum confinement. It self-consistently couples the solution of the 3D Poisson equation, the 2D Schrödinger equation, and the 1D Boltzmann transport equation solved with the Ensemble Monte Carlo method. This simulator was employed to study MOS devices based on ultra-scaled Gate-All-Around Si nanowires with diameters in the range from 4 nm to 8 nm and gate lengths from 8 nm to 14 nm. We studied the output and transfer characteristics, interpreting the behavior in the sub-threshold region and in the ON state in terms of the spatial charge distribution and the mobility computed with the same simulator. We analyzed the results, highlighting the contribution of different valleys and subbands and the effect of the gate bias on the energy and velocity profiles. Finally, the scaling behavior was studied, showing that only the devices with D = 4 nm maintain good control of the short-channel effects down to a gate length of 8 nm.

  20. Analysis of the Effect of Interior Nudging on Temperature and Precipitation Distributions of Multi-year Regional Climate Simulations

    NASA Astrophysics Data System (ADS)

    Nolte, C. G.; Otte, T. L.; Bowden, J. H.; Otte, M. J.

    2010-12-01

    There is disagreement in the regional climate modeling community as to the appropriateness of interior nudging. Some investigators argue that the regional model should be minimally constrained and allowed to respond to regional-scale forcing, while others have noted that in the absence of interior nudging, significant large-scale discrepancies develop between the regional model solution and the driving coarse-scale fields. These discrepancies lead to reduced confidence in the ability of regional climate models to dynamically downscale global climate model simulations under climate change scenarios, and detract from the usability of the regional simulations for impact assessments. The advantages and limitations of interior nudging schemes for regional climate modeling are investigated in this study. Multi-year simulations using the WRF model driven by reanalysis data over the continental United States at 36-km resolution are conducted using spectral nudging, grid-point nudging, and a base case without interior nudging. The means, distributions, and inter-annual variability of temperature and precipitation will be evaluated in comparison to regional analyses.
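    Grid-point (analysis) nudging of the kind compared here amounts to adding a Newtonian relaxation term, -(u - u_analysis)/tau, to each prognostic tendency. A toy scalar illustration with hypothetical drift and relaxation constants (not WRF's actual implementation):

```python
def integrate(u0, u_analysis, tau, dt=60.0, nsteps=1000, drift=1e-4):
    # Toy scalar "temperature": a constant spurious drift stands in for
    # model error; interior nudging relaxes u toward the analysis value
    # u_analysis with time constant tau (seconds). tau=None disables nudging.
    u = u0
    for _ in range(nsteps):
        tendency = drift                        # model's own tendency (assumed)
        if tau is not None:
            tendency -= (u - u_analysis) / tau  # Newtonian relaxation term
        u += dt * tendency
    return u

u_free = integrate(290.0, 290.0, None)      # drifts away from the analysis
u_nudged = integrate(290.0, 290.0, 3600.0)  # equilibrium offset ~ drift * tau
```

The trade-off debated in the abstract is visible even here: a short tau keeps the solution close to the driving fields but also suppresses any legitimate regional-scale departure from them.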

  1. Interprofessional education and collaboration: A simulation-based learning experience focused on common and complementary skills in an acute care environment.

    PubMed

    Cunningham, S; Foote, L; Sowder, M; Cunningham, C

    2018-05-01

    The purpose of this mixed-methods study was to explore, from the participants' perspective, the influence of an interprofessional simulation-based learning experience on understanding the roles and responsibilities of healthcare professionals in the acute care setting, interprofessional collaboration, and communication. Participating students from two professional programs completed the Readiness for Interprofessional Learning Scale (RIPLS) prior to and following the simulation experience to explore its influence on students' perceptions of readiness to learn together. A Wilcoxon signed-rank analysis was performed for each of the four subscales of the RIPLS: shared learning (p < .001), teamwork and collaboration (p < .001), professional identity (p = .042), and roles and responsibilities (p = .001). In addition, participating students were invited to take part in focus group interviews to discuss the effectiveness of the simulation experience. Three key themes were discovered: interprofessional teamwork, discovering roles and responsibilities, and increased confidence in treatment skills. The integration of interprofessional education through a simulation-based learning experience within the nursing and physical therapy professional programs provided a positive experience for the students. Simulation-based learning experiences may provide an opportunity for institutions to collaborate and provide additional engagement with healthcare professions that may not be represented within a single institution.
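    For readers unfamiliar with the statistic, a Wilcoxon signed-rank analysis of paired pre/post scale scores can be sketched as follows (simplified: zero differences are dropped and tied absolute differences are not given averaged ranks; a statistics package would also supply the p-values reported above):

```python
def wilcoxon_signed_rank(pre, post):
    # Paired-sample Wilcoxon signed-rank statistic W = min(W+, W-):
    # rank the absolute pre/post differences, then sum the ranks of
    # positive and negative differences separately.
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    w_pos = w_neg = 0.0
    for rank, i in enumerate(order, start=1):
        if diffs[i] > 0:
            w_pos += rank
        else:
            w_neg += rank
    return min(w_pos, w_neg)

# every pair improved -> all ranks on the positive side, so W = 0
w_improved = wilcoxon_signed_rank([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
# mixed signs -> W is the smaller of the two rank sums
w_mixed = wilcoxon_signed_rank([0, 0, 0, 0, 0], [1, -2, 3, -4, 5])
```

Small values of W relative to its null distribution indicate a systematic pre/post shift, which is how the subscale p-values above are obtained.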

  2. Evaluating 20th Century precipitation characteristics between multi-scale atmospheric models with different land-atmosphere coupling

    NASA Astrophysics Data System (ADS)

    Phillips, M.; Denning, A. S.; Randall, D. A.; Branson, M.

    2016-12-01

    Multi-scale models of the atmosphere provide an opportunity to investigate processes that are unresolved by traditional Global Climate Models (GCMs) while remaining computationally viable for climate-length time scales. The multi-scale modeling framework (MMF) represents a shift away from the large horizontal grid spacing in traditional GCMs, which leads to overabundant light precipitation and a lack of heavy events, toward a model where precipitation intensity is allowed to vary over a much wider range of values. Resolving atmospheric motions on the scale of 4 km makes it possible to recover features of precipitation, such as intense downpours, that were previously only obtained by computationally expensive regional simulations. These heavy precipitation events may have little impact on large-scale moisture and energy budgets, but are significant in terms of interaction with the land surface and potential impact on human life. Three versions of the Community Earth System Model were used in this study: the standard CESM; the multi-scale `Super-Parameterized' CESM (SP-CESM), where large-scale parameterizations have been replaced with a 2D cloud-permitting model (CRM); and a multi-instance land version of the SP-CESM, where each column of the 2D CRM is allowed to interact with an individual land unit. These simulations were carried out using prescribed sea surface temperatures for the period 1979-2006, with daily precipitation saved for all 28 years. Comparisons of the statistical properties of precipitation between model architectures and against observations from rain gauges were made, with specific focus on the detection and evaluation of extreme precipitation events.

  3. Higher-level simulations of turbulent flows

    NASA Technical Reports Server (NTRS)

    Ferziger, J. H.

    1981-01-01

    The fundamentals of large-eddy simulation are considered and the approaches to it are compared. Subgrid-scale models and the development of models for the Reynolds-averaged equations are discussed, as well as the use of full simulation in testing these models. Numerical methods used in simulating large eddies, the simulation of homogeneous flows, and results from full and large-eddy simulations of such flows are examined. Free shear flows are considered with emphasis on the mixing layer and wake simulation. Wall-bounded flow (channel flow) and recent work on the boundary layer are also discussed. Applications of large-eddy simulation and full simulation in meteorological and environmental contexts are included, along with a look at the direction in which work is proceeding and what can be expected from higher-level simulation in the future.

  4. MP-Pic simulation of CFB riser with EMMS-based drag model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, F.; Song, F.; Benyahia, S.

    2012-01-01

    The MP-PIC (multi-phase particle-in-cell) method combined with the EMMS (energy-minimization multi-scale) drag force model was implemented in the open-source program MFIX to simulate gas-solid flows in CFB (circulating fluidized bed) risers. The solid flux calculated with the EMMS drag model agrees well with the experimental value, while the traditional homogeneous drag model over-predicts it. The EMMS drag force model can also predict the macro- and meso-scale structures. Quantitative comparison between the results of the EMMS drag force model and the experimental measurements shows the high accuracy of the model. The effects of the number of particles per parcel and of the wall conditions on the simulation results have also been investigated in the paper. This work shows that MP-PIC combined with the EMMS drag model can successfully simulate fluidized flows in CFB risers and is a candidate for real-time simulation of industrial processes in the future.

  5. Using Multi-scale Dynamic Rupture Models to Improve Ground Motion Estimates: ALCF-2 Early Science Program Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ely, Geoffrey P.

    2013-10-31

    This project uses dynamic rupture simulations to investigate high-frequency seismic energy generation. The relevant phenomena (frictional breakdown, shear heating, effective normal-stress fluctuations, material damage, etc.) controlling rupture are strongly interacting and span many orders of magnitude in spatial scale, requiring high-resolution simulations that couple disparate physical processes (e.g., elastodynamics, thermal weakening, pore-fluid transport, and heat conduction). Compounding the computational challenge, we know that natural faults are not planar, but instead have roughness that can be approximated by power laws, potentially leading to large, multi-scale fluctuations in normal stress. The capacity to perform 3D rupture simulations that couple these processes will provide guidance for constructing appropriate source models for high-frequency ground motion simulations. The improved rupture models from our multi-scale dynamic rupture simulations will be used to conduct physics-based (3D waveform-modeling-based) probabilistic seismic hazard analysis (PSHA) for California. These calculations will provide numerous important seismic hazard results, including a state-wide extended earthquake rupture forecast with rupture variations for all significant events, a synthetic seismogram catalog for thousands of scenario events, and more than 5000 physics-based seismic hazard curves for California.

  6. 3D Vectorial Time Domain Computational Integrated Photonics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kallman, J S; Bond, T C; Koning, J M

    2007-02-16

    The design of integrated photonic structures poses considerable challenges. 3D-Time-Domain design tools are fundamental in enabling technologies such as all-optical logic, photonic bandgap sensors, THz imaging, and fast radiation diagnostics. Such technologies are essential to LLNL and WFO sponsors for a broad range of applications: encryption for communications and surveillance sensors (NSA, NAI and IDIV/PAT); high density optical interconnects for high-performance computing (ASCI); high-bandwidth instrumentation for NIF diagnostics; micro-sensor development for weapon miniaturization within the Stockpile Stewardship and DNT programs; and applications within HSO for CBNP detection devices. While there exist a number of photonics simulation tools on the market, they primarily model devices of interest to the communications industry. We saw the need to extend our previous software to match the Laboratory's unique emerging needs. These include modeling novel material effects (such as those of radiation induced carrier concentrations on refractive index) and device configurations (RadTracker bulk optics with radiation induced details, Optical Logic edge emitting lasers with lateral optical inputs). In addition we foresaw significant advantages to expanding our own internal simulation codes: parallel supercomputing could be incorporated from the start, and the simulation source code would be accessible for modification and extension. This work addressed Engineering's Simulation Technology Focus Area, specifically photonics. Problems addressed from the Engineering roadmap of the time included modeling the Auston switch (an important THz source/receiver), modeling Vertical Cavity Surface Emitting Lasers (VCSELs, which had been envisioned as part of fast radiation sensors), and multi-scale modeling of optical systems (for a variety of applications). 
    We proposed to develop novel techniques to numerically solve the 3D multi-scale propagation problem for both the microchip laser logic devices as well as devices characterized by electromagnetic (EM) propagation in nonlinear materials with time-varying parameters. The deliverables for this project were extended versions of the laser logic device code Quench2D and the EM propagation code EMsolve with new modules containing the novel solutions incorporated by taking advantage of the existing software interface and structured computational modules. Our approach was multi-faceted since no single methodology can always satisfy the tradeoff between model runtime and accuracy requirements. We divided the problems to be solved into two main categories: those that required Full Wave Methods and those that could be modeled using Approximate Methods. Full Wave techniques are useful in situations where Maxwell's equations are not separable (or the problem is small in space and time), while approximate techniques can treat many of the remaining cases.

  7. The Australian Computational Earth Systems Simulator

    NASA Astrophysics Data System (ADS)

    Mora, P.; Muhlhaus, H.; Lister, G.; Dyskin, A.; Place, D.; Appelbe, B.; Nimmervoll, N.; Abramson, D.

    2001-12-01

    Numerical simulation of the physics and dynamics of the entire earth system offers an outstanding opportunity for advancing earth system science and technology but represents a major challenge due to the range of scales and physical processes involved, as well as the magnitude of the software engineering effort required. However, new simulation and computer technologies are bringing this objective within reach. Under a special competitive national funding scheme to establish new Major National Research Facilities (MNRF), the Australian government together with a consortium of Universities and research institutions have funded construction of the Australian Computational Earth Systems Simulator (ACcESS). The Simulator or computational virtual earth will provide the research infrastructure to the Australian earth systems science community required for simulations of dynamical earth processes at scales ranging from microscopic to global. It will consist of thematic supercomputer infrastructure and an earth systems simulation software system. The Simulator models and software will be constructed over a five year period by a multi-disciplinary team of computational scientists, mathematicians, earth scientists, civil engineers and software engineers. The construction team will integrate numerical simulation models (3D discrete elements/lattice solid model, particle-in-cell large deformation finite-element method, stress reconstruction models, multi-scale continuum models etc) with geophysical, geological and tectonic models, through advanced software engineering and visualization technologies. 
When fully constructed, the Simulator aims to provide the software and hardware infrastructure needed to model solid earth phenomena including global scale dynamics and mineralisation processes, crustal scale processes including plate tectonics, mountain building, interacting fault system dynamics, and micro-scale processes that control the geological, physical and dynamic behaviour of earth systems. ACcESS represents a part of Australia's contribution to the APEC Cooperation for Earthquake Simulation (ACES) international initiative. Together with other national earth systems science initiatives including the Japanese Earth Simulator and US General Earthquake Model projects, ACcESS aims to provide a driver for scientific advancement and technological breakthroughs including: quantum leaps in understanding of earth evolution at global, crustal, regional and microscopic scales; new knowledge of the physics of crustal fault systems required to underpin the grand challenge of earthquake prediction; new understanding and predictive capabilities of geological processes such as tectonics and mineralisation.

  8. The Desired Concept Maps and Goal Setting for Assessing Professionalism in Medicine.

    PubMed

    Guraya, Salman Y; Guraya, Shaista S; Mahabbat, Nehal Anam; Fallatah, Khulood Yahya; Al-Ahmadi, Bashaer Ahmad; Alalawi, Hadeel Hadi

    2016-05-01

    Due to the multi-dimensional characteristics of professionalism, no single assessment modality has been shown to reliably assess it. This review aims to describe some of the popular tools used to assess professionalism, with a view to formulating a framework for the assessment of professionalism in medicine. In December 2015, the online research databases of MEDLINE, the Educational Resources Information Center (ERIC), Elton Bryson Stephens Company (EBSCO), SCOPUS, OVID and PsychINFO were searched for full-text English-language articles published during 2000 to 2015. The MeSH terms "professionalism" AND "duty" AND "assessment" OR "professionalism behavioural" AND "professionalism-cognitive" were used. Research articles that assessed professionalism across medical fields, along with other areas of competency, were included. A final list of 35 articles was selected for this review. Several assessment tools are available for assessing professionalism, including, but not limited to, the mini clinical evaluation exercise, standardised direct observation of procedural skills, the professionalism mini-evaluation exercise, multi-source feedback and 360-degree evaluation, and case-based discussions. Because professionalism is a complex construct, it is unlikely that a single assessment strategy will adequately measure it. Since every single assessment tool has its own weaknesses, triangulation involving multiple tools can compensate for the shortcomings associated with any single approach. Assessment of professionalism necessitates a combination of modalities at individual, interpersonal, societal, and institutional levels, and should be accompanied by feedback and motivational reflection that will, in turn, lead to behaviour and identity formation. The assessment of professionalism in medicine should meet the criteria of validity, reliability, feasibility and acceptability. 
Educators are urged to enhance the depth and quality of assessment instruments in the existing medical curricula for ensuring validity and reliability of assessment tools for professionalism.

  9. Simulating the Response of a Composite Honeycomb Energy Absorber. Part 2; Full-Scale Impact Testing

    NASA Technical Reports Server (NTRS)

    Fasanella, Edwin L.; Annett, Martin S.; Jackson, Karen E.; Polanco, Michael A.

    2012-01-01

    NASA has sponsored research to evaluate an externally deployable composite honeycomb designed to attenuate loads in the event of a helicopter crash. The concept, designated the Deployable Energy Absorber (DEA), is an expandable Kevlar® honeycomb. The DEA has a flexible hinge that allows the honeycomb to be stowed collapsed until needed during an emergency. Evaluation of the DEA began with material characterization of the Kevlar®-129 fabric/epoxy, and ended with a full-scale crash test of a retrofitted MD-500 helicopter. During each evaluation phase, finite element models of the test articles were developed and simulations were performed using the dynamic finite element code LS-DYNA®. The paper focuses on simulations of two full-scale impact tests involving the DEA: one with a mass simulator, and one a full-scale crash of an instrumented MD-500 helicopter. Isotropic (MAT24) and composite (MAT58) material models, which were assigned to the DEA shell elements, were compared. Based on simulation results, the MAT58 model showed better agreement with test data.

  10. Beyond bureaucracy: emerging trends in social care informatics.

    PubMed

    Wastell, David; White, Sue

    2014-09-01

    Existing information technology systems in much of UK social care have been designed to serve the interests of the bureaucracy rather than supporting professional practice or improving services to the public. The ill-starred Integrated Children's System in statutory children's services is typical: it is a system for form-filling, micro-managing professional practice through an enforced regime of standard processes and timescales. In this article, we argue against this dominant design. We provide several examples where technology has enabled alternative modes of support for professional work, based on socio-technical principles. One such system is Patchwork, which describes itself as a 'Facebook for Social Work'; its aim is to support multi-professional teams working with vulnerable families. © The Author(s) 2013.

  11. The value of information in a multi-agent market model. The luck of the uninformed

    NASA Astrophysics Data System (ADS)

    Tóth, B.; Scalas, E.; Huber, J.; Kirchler, M.

    2007-01-01

    We present an experimental and simulated model of a multi-agent stock market driven by a double-auction order-matching mechanism. Studying the effect of cumulative information on the performance of traders, we find a non-monotonic relationship between traders' net returns and their information levels, both in the experiments and in the simulations. In particular, traders with average levels of information perform worse than the uninformed, and only traders with high levels of information (insiders) are able to beat the market. The simulations and the experiments reproduce many stylized facts of tick-by-tick stock-exchange data, such as fast decay of the autocorrelation of returns, volatility clustering and fat-tailed distributions of returns. These results carry an important message for everyday life: they offer a possible explanation of why, on average, professional fund managers perform worse than the market index.
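    A double-auction order-matching mechanism of the kind driving this market model can be sketched as a price-time-priority order book in which an incoming order trades against the best marketable resting orders at their quoted prices (a minimal illustration, not the authors' experimental platform):

```python
import heapq

class DoubleAuction:
    # Minimal continuous double-auction book: price-time priority,
    # trades execute at the resting order's price.
    def __init__(self):
        self.bids = []    # max-heap via negated price: (-price, seq, qty)
        self.asks = []    # min-heap: (price, seq, qty)
        self.seq = 0
        self.trades = []  # list of (price, qty)

    def submit(self, side, price, qty):
        self.seq += 1
        if side == 'buy':
            # cross against resting asks while the bid is marketable
            while qty and self.asks and self.asks[0][0] <= price:
                ap, aseq, aq = heapq.heappop(self.asks)
                traded = min(qty, aq)
                self.trades.append((ap, traded))
                qty -= traded
                if aq > traded:
                    heapq.heappush(self.asks, (ap, aseq, aq - traded))
            if qty:  # remainder rests in the book
                heapq.heappush(self.bids, (-price, self.seq, qty))
        else:
            while qty and self.bids and -self.bids[0][0] >= price:
                nbp, bseq, bq = heapq.heappop(self.bids)
                traded = min(qty, bq)
                self.trades.append((-nbp, traded))
                qty -= traded
                if bq > traded:
                    heapq.heappush(self.bids, (-nbp, bseq, bq - traded))
            if qty:
                heapq.heappush(self.asks, (price, self.seq, qty))

book = DoubleAuction()
book.submit('sell', 101.0, 5)
book.submit('sell', 100.0, 5)
book.submit('buy', 100.5, 7)   # fills the 100.0 ask, remainder rests as a bid
```

In agent-based market models of this type, the sequence of trade prices generated by such a book is what produces the tick-by-tick return series whose stylized facts are analyzed.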

  12. High-resolution time-frequency representation of EEG data using multi-scale wavelets

    NASA Astrophysics Data System (ADS)

    Li, Yang; Cui, Wei-Gang; Luo, Mei-Lin; Li, Ke; Wang, Lina

    2017-09-01

    An efficient time-varying autoregressive (TVAR) modelling scheme that expands the time-varying parameters onto multi-scale wavelet basis functions is presented for modelling nonstationary signals, with applications to time-frequency analysis (TFA) of electroencephalogram (EEG) signals. In the new parametric modelling framework, the time-dependent parameters of the TVAR model are locally represented using a novel multi-scale wavelet decomposition scheme, which can capture smooth trends while simultaneously tracking abrupt changes in the time-varying parameters. A forward orthogonal least squares (FOLS) algorithm, aided by mutual information criteria, is then applied for sparse model term selection and parameter estimation. Two simulation examples illustrate that the proposed multi-scale wavelet basis functions outperform single-scale wavelet basis functions and the Kalman filter algorithm for many nonstationary processes. Furthermore, an application of the proposed method to a real EEG signal demonstrates that the new approach can provide highly time-dependent spectral resolution capability.
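    The underlying idea, expanding a time-varying coefficient onto basis functions and estimating the expansion by regression, can be sketched for a TVAR(1) process. In this sketch a simple polynomial basis stands in for the paper's multi-scale wavelets, and ordinary least squares for the FOLS algorithm:

```python
import numpy as np

def fit_tvar1(y, n_basis=2):
    # TVAR(1): y[t] = a_t * y[t-1] + e[t], with the coefficient expanded
    # on a polynomial basis, a_t = sum_k c_k * (t/N)**k. Expanding turns
    # the time-varying problem into an ordinary linear regression.
    N = len(y)
    t = np.arange(1, N) / N
    X = np.column_stack([(t ** k) * y[:-1] for k in range(n_basis)])
    c, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return c

rng = np.random.default_rng(1)
N = 4000
a = 0.3 + 0.4 * np.arange(N) / N         # slowly varying true coefficient
y = np.zeros(N)
for t in range(1, N):
    y[t] = a[t] * y[t - 1] + rng.standard_normal()
c = fit_tvar1(y)                         # c should be close to (0.3, 0.4)
```

A wavelet basis plays the same role as the polynomial here, but its multi-scale structure lets the fitted coefficient follow abrupt jumps as well as smooth trends.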

  13. Mercury and methylmercury stream concentrations in a Coastal Plain watershed: a multi-scale simulation analysis.

    PubMed

    Knightes, C D; Golden, H E; Journey, C A; Davis, G M; Conrads, P A; Marvin-DiPasquale, M; Brigham, M E; Bradley, P M

    2014-04-01

    Mercury is a ubiquitous global environmental toxicant responsible for most US fish advisories. Processes governing mercury concentrations in rivers and streams are not well understood, particularly at multiple spatial scales. We investigate how insights gained from reach-scale mercury data and model simulations can be applied at broader watershed scales using VELMA, a spatially and temporally explicit watershed hydrology and biogeochemical cycling model. We simulate fate and transport using reach-scale (0.1 km²) study data and evaluate applications to multiple watershed scales. The reach-scale VELMA parameterization was applied to two nested sub-watersheds (28 km² and 25 km²) and the encompassing watershed (79 km²). Results demonstrate that simulated flow and total mercury concentrations compare reasonably with observations at different scales, but simulated methylmercury concentrations are out of phase with observations. These findings suggest that the intricacies of methylmercury biogeochemical cycling and transport are under-represented in VELMA and underscore the complexity of simulating mercury fate and transport.

  14. Hydrodynamic parameters estimation from self-potential data in a controlled full scale site

    NASA Astrophysics Data System (ADS)

    Chidichimo, Francesco; De Biase, Michele; Rizzo, Enzo; Masi, Salvatore; Straface, Salvatore

    2015-03-01

    A multi-physical approach developed for the hydrodynamic characterization of porous media using hydrogeophysical information is presented. Several pumping tests were performed at the Hydrogeosite Laboratory, a controlled full-scale site designed and constructed at CNR-IMAA (Consiglio Nazionale delle Ricerche - Istituto di Metodologia per l'Analisi Ambientale) in Marsico Nuovo (Basilicata Region, Southern Italy), in order to obtain an intermediate stage between laboratory experiments and field surveys. The facility consists of a pool used to study water infiltration processes, to simulate the space and time dynamics of subsurface contamination phenomena, to improve and discover new relationships between geophysical and hydrogeological parameters, and to test and calibrate new geophysical techniques and instruments. The Hydrogeosite Laboratory therefore offers the advantages of controlled experiments, as in a flow cell or sandbox, but at a field-comparable scale. The data collected during the experiments were used to estimate the saturated hydraulic conductivity ks [m s⁻¹] using a coupled inversion model working in transient conditions, made up of the modified Richards equation describing water flow in a variably saturated porous medium and the Poisson equation providing the self-potential ϕ [V], which naturally occurs at points of the soil surface owing to the electric field produced by the motion of underground electrolytic fluids through porous systems. The result obtained by this multi-physical numerical approach, which removes the approximations adopted in previous works, makes it a useful instrument for characterizing real heterogeneous aquifers and for predictive analysis of their behavior.

  15. Multi-level Monte Carlo Methods for Efficient Simulation of Coulomb Collisions

    NASA Astrophysics Data System (ADS)

    Ricketson, Lee

    2013-10-01

    We discuss the use of multi-level Monte Carlo (MLMC) schemes, originally introduced by Giles for financial applications, for the efficient simulation of Coulomb collisions in the Fokker-Planck limit. The scheme is based on a Langevin treatment of collisions, and reduces the computational cost of achieving an RMS error scaling as ε from O(ε⁻³), for standard Langevin methods and binary collision algorithms, to the theoretically optimal scaling O(ε⁻²) for the Milstein discretization, and to O(ε⁻²(log ε)²) with the simpler Euler-Maruyama discretization. In practice, this speeds up simulation by factors of up to 100. We summarize standard MLMC schemes, describe some tricks for achieving the optimal scaling, present results from a test problem, and discuss the method's range of applicability. This work was performed under the auspices of the U.S. DOE by the University of California, Los Angeles, under grant DE-FG02-05ER25710, and by LLNL under contract DE-AC52-07NA27344.
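
    The MLMC telescoping idea referenced above can be illustrated on a simple scalar SDE (geometric Brownian motion with Euler-Maruyama, rather than the Langevin collision operator). Coupling the fine and coarse paths through shared Brownian increments is what makes the level corrections cheap; sample counts are fixed per level here for brevity, whereas Giles' method adapts them to the level variances.

```python
import numpy as np

def mlmc_gbm(L, M0, T=1.0, mu=0.05, sigma=0.2, x0=1.0, seed=1):
    """Multi-level Monte Carlo estimate of E[X_T] for the SDE
    dX = mu*X dt + sigma*X dW, discretized with Euler-Maruyama.

    Level l uses 2**l time steps. The fine and coarse paths on each
    level share the same Brownian increments, so their difference has
    small variance -- the source of the MLMC cost savings.
    """
    rng = np.random.default_rng(seed)
    estimate = 0.0
    for l in range(L + 1):
        n_f = 2 ** l                       # fine steps on this level
        dt_f = T / n_f
        dW = rng.standard_normal((M0, n_f)) * np.sqrt(dt_f)
        # Fine path.
        Xf = np.full(M0, x0)
        for k in range(n_f):
            Xf += mu * Xf * dt_f + sigma * Xf * dW[:, k]
        if l == 0:
            estimate += Xf.mean()
        else:
            # Coarse path: sum consecutive pairs of fine increments.
            Xc = np.full(M0, x0)
            dt_c = T / (n_f // 2)
            for k in range(n_f // 2):
                dWc = dW[:, 2 * k] + dW[:, 2 * k + 1]
                Xc += mu * Xc * dt_c + sigma * Xc * dWc
            estimate += (Xf - Xc).mean()   # telescoping correction
    return estimate

est = mlmc_gbm(L=6, M0=20000)
print(est, "vs exact", np.exp(0.05))       # E[X_T] = x0 * exp(mu*T)
```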

  16. A practical scale for Multi-Faceted Organizational Health Climate Assessment.

    PubMed

    Zweber, Zandra M; Henning, Robert A; Magley, Vicki J

    2016-04-01

    The current study sought to develop a practical scale measuring 3 facets of workplace health climate from the employee perspective as an important component of a healthy organization. The goal was to create a short, usable yet comprehensive scale that organizations and occupational health professionals could use to determine whether workplace health interventions were needed. The proposed Multi-faceted Organizational Health Climate Assessment (MOHCA) scale assesses facets that correspond to 3 organizational levels: (a) workgroup, (b) supervisor, and (c) organization. Ten items were developed and tested on 2 distinct samples, 1 cross-organization and 1 within-organization. Exploratory and confirmatory factor analyses yielded a 9-item, hierarchical 3-factor structure. Tests confirmed that MOHCA has convergent validity with related constructs, such as perceived organizational support and supervisor support, as well as discriminant validity with safety climate. Lastly, criterion-related validity was found between MOHCA and health-related outcomes. The multi-faceted nature of MOHCA provides a scale that has face validity and can be easily translated into practice, offering a means for diagnosing the shortcomings of an organization's or workgroup's health climate to better plan health and well-being interventions.

  17. Multi-Scale Low-Entropy Method for Optimizing the Processing Parameters during Automated Fiber Placement

    PubMed Central

    Han, Zhenyu; Sun, Shouzheng; Fu, Hongya; Fu, Yunzhong

    2017-01-01

    The automated fiber placement (AFP) process involves a variety of energy forms and multi-scale effects. This contribution proposes a novel multi-scale low-entropy method for optimizing processing parameters in an AFP process, in which multi-scale effects, energy consumption, energy utilization efficiency and the mechanical properties of the micro-system are taken into account jointly. Taking a carbon fiber/epoxy prepreg as an example, mechanical properties at the macro-meso scale are obtained by the Finite Element Method (FEM). A multi-scale energy transfer model is then established to pass the macroscopic results to the microscopic system as its boundary condition, allowing communication between scales. Furthermore, microscopic characteristics, mainly the micro-scale adsorption energy, diffusion coefficient and entropy-enthalpy values, are calculated under different processing parameters using the molecular dynamics method. A low-entropy region is then obtained from the interrelation among entropy-enthalpy values, microscopic mechanical properties (interface adsorbability and matrix fluidity) and processing parameters, so as to guarantee better fluidity, stronger adsorption, lower energy consumption and higher energy quality collaboratively. Finally, nine groups of experiments are carried out to verify the validity of the simulation results. The results show that the low-entropy optimization method can reduce void content effectively and further improve the mechanical properties of laminates. PMID:28869520

  19. Toward Improving Predictability of Extreme Hydrometeorological Events: the Use of Multi-scale Climate Modeling in the Northern High Plains

    NASA Astrophysics Data System (ADS)

    Munoz-Arriola, F.; Torres-Alavez, J.; Mohamad Abadi, A.; Walko, R. L.

    2014-12-01

    Our goal is to investigate possible sources of predictability of hydrometeorological extreme events in the Northern High Plains (NHP). Hydrometeorological extreme events are among the most costly natural phenomena. Water deficits and surpluses highlight how water-climate interdependence becomes crucial in areas where economies are driven by single activities, such as agriculture in the NHP. Although we recognize this water-climate interdependence and the regulatory role that human activities play, we still grapple to identify what sources of predictability could be added to flood and drought forecasts. To identify the benefit of multi-scale climate modeling and the role of initial conditions in flood and drought predictability in the NHP, we use the Ocean-Land-Atmosphere Model (OLAM). OLAM's dynamic core features a global geodesic grid with hexagonal (and variably refined) mesh cells, a finite volume discretization of the full compressible Navier-Stokes equations, and a cut-grid cell method for topography that reduces gradient computation errors and anomalous vertical dispersion. Our hypothesis is that wet initial conditions will drive OLAM's precipitation simulations toward wetter outcomes, affecting both flood and drought forecasts. To test this hypothesis we simulate precipitation during identified historical flood events followed by drought events in the NHP (i.e., 2011-2012). We initialized OLAM with CFS data 1-10 days prior to a flooding event (as initial conditions) to explore (1) short-term, high-resolution simulations of flood events and (2) long-term, coarse-resolution simulations of drought events. While floods are assessed during refined-mesh simulations of at most 15 days, drought is evaluated during the following 15 months.
    Simulated precipitation will be compared with the Sub-continental Observation Dataset, a gridded 1/16th-degree resolution dataset obtained from climatological stations in Canada, the US, and Mexico. This in-progress research will ultimately contribute to integrating the OLAM and VIC models and improving the predictability of extreme hydrometeorological events.

  20. Inferring multi-scale neural mechanisms with brain network modelling

    PubMed Central

    Schirner, Michael; McIntosh, Anthony Randal; Jirsa, Viktor; Deco, Gustavo

    2018-01-01

    The neurophysiological processes underlying non-invasive brain activity measurements are incompletely understood. Here, we developed a connectome-based brain network model that integrates individual structural and functional data with neural population dynamics to support multi-scale neurophysiological inference. Simulated populations were linked by structural connectivity and, as a novelty, driven by electroencephalography (EEG) source activity. Simulations not only predicted subjects' individual resting-state functional magnetic resonance imaging (fMRI) time series and spatial network topologies over 20 minutes of activity, but more importantly, they also revealed precise neurophysiological mechanisms that underlie and link six empirical observations from different scales and modalities: (1) resting-state fMRI oscillations, (2) functional connectivity networks, (3) excitation-inhibition balance, (4, 5) inverse relationships between α-rhythms, spike-firing and fMRI on short and long time scales, and (6) fMRI power-law scaling. These findings underscore the potential of this new modelling framework for general inference and integration of neurophysiological knowledge to complement empirical studies. PMID:29308767

  1. Evaluation of Complex Human Performance: The Promise of Computer-Based Simulation

    ERIC Educational Resources Information Center

    Newsom, Robert S.; And Others

    1978-01-01

    For the training and placement of professional workers, multiple-choice instruments are the norm for wide-scale measurement and evaluation efforts. These instruments contain fundamental problems. Computer-based management simulations may provide solutions to these problems, appear scoreable and reliable, offer increased validity, and are better…

  2. Not just another multi-professional course! Part 2: nuts and bolts of designing a transformed curriculum for multi-professional learning.

    PubMed

    Mayers, Pat; Alperstein, Melanie; Duncan, Madeleine; Olckers, Lorna; Gibbs, Trevor

    2006-03-01

    Multi-professional education has traditionally aimed to develop health professionals who are able to collaborate effectively in comprehensive healthcare delivery. The respective professions learn about their differences in order to work together, rather than developing unity in their commitment to a shared vision of professionalism and service. In this, the second of two papers, the 'nuts and bolts', or practicalities, of designing a transformed curriculum for a multi-professional course with a difference are described. Guidelines for the curriculum design process, which seeks to be innovative, grounded in theory and relevant to the learning of the students and, ultimately, the health of the patients, include: valuing education; gaining buy-in; securing buy-out; defining roles; seeking consensus; negotiating difference; and expediting decisions. The phases of the design process are described, as well as the educational outcomes envisaged during the process. Reflections of the designers, in particular on what it means to be a multi-professional team, and a reconceptualization of multi-professional education are presented as challenges for educators of health professionals.

  3. Capability Description for NASA's F/A-18 TN 853 as a Testbed for the Integrated Resilient Aircraft Control Project

    NASA Technical Reports Server (NTRS)

    Hanson, Curt

    2009-01-01

    The NASA F/A-18 tail number (TN) 853 full-scale Integrated Resilient Aircraft Control (IRAC) testbed has been designed with a full array of capabilities in support of the Aviation Safety Program. Highlights of the system's capabilities include: 1) a quad-redundant research flight control system for safely interfacing controls experiments to the aircraft's control surfaces; 2) a dual-redundant airborne research test system for hosting multi-disciplinary state-of-the-art adaptive control experiments; 3) a robust reversionary configuration for recovery from unusual attitudes and configurations; 4) significant research instrumentation, particularly in the area of static loads; 5) extensive facilities for experiment simulation, data logging, real-time monitoring and post-flight analysis capabilities; and 6) significant growth capability in terms of interfaces and processing power.

  4. Simulation Studies of Mechanical Properties of Novel Silica Nano-structures

    NASA Astrophysics Data System (ADS)

    Muralidharan, Krishna; Torras Costa, Joan; Trickey, Samuel B.

    2006-03-01

    Advances in nanotechnology and the importance of silica as a technological material continue to stimulate computational study of the properties of possible novel silica nanostructures. Thus we have done classical molecular dynamics (MD) and multi-scale quantum mechanical (QM/MD) simulation studies of the mechanical properties of single-wall and multi-wall silica nano-rods of varying dimensions. Such nano-rods have been predicted by Mallik et al. to be unusually strong in tensile failure. Here we compare failure mechanisms of such nano-rods under tension, compression, and bending. The concurrent multi-scale QM/MD studies use the general PUPIL system (Torras et al.). In this case, PUPIL provides automated interoperation of the MNDO Transfer Hamiltonian QM code (Taylor et al.) and a locally written MD code. Embedding of the QM-forces domain is via the scheme of Mallik et al. Work supported by NSF ITR award DMR-0325553.

  5. Future Directions for Space Transportation and Propulsion at NASA

    NASA Technical Reports Server (NTRS)

    Sackheim, Robert L.

    2005-01-01

    Contents include the following: Oxygen Compatible Materials. Manufacturing Technology Demonstrations. Turbopump Inducer Waterflow Test. Turbine Damping "Whirligig" Test. Single Element Preburner and Main Injector Test. 40K Multi-Element Preburner and MI. Full-Scale Battleship Preburner. Prototype Preburner Test Article. Full-Scale Prototype TCA. Turbopump Hot-Fire Test Article. Prototype Engine. Validated Analytical Models.

  6. On the Scaling Laws and Similarity Spectra for Jet Noise in Subsonic and Supersonic Flow

    NASA Technical Reports Server (NTRS)

    Kandula, Max

    2008-01-01

    The scaling laws for the simulation of noise from subsonic and ideally expanded supersonic jets are reviewed with regard to their applicability in deducing full-scale conditions from small-scale model testing. Important parameters of scale model testing for the simulation of jet noise are identified, and methods of estimating full-scale noise levels from simulated scale model data are addressed. The limitations of cold-jet data in estimating high-temperature supersonic jet noise levels are discussed. New results are presented showing the dependence of overall sound power level on the jet temperature ratio at various jet Mach numbers. A generalized similarity spectrum is also proposed, which accounts for convective Mach number and angle to the jet axis.
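
    A standard ingredient of such scale-model extrapolation is Strouhal-number invariance, St = fD/U: preserving St between model and full scale maps measured model frequencies to full-scale frequencies. The helper below is a hedged sketch of this single step only, not of the paper's complete scaling procedure.

```python
def full_scale_frequency(f_model, D_model, D_full, U_model, U_full):
    """Map a model-scale frequency to full scale by Strouhal-number
    invariance: St = f*D/U is preserved between scales, so
    f_full = f_model * (D_model / D_full) * (U_full / U_model)."""
    return f_model * (D_model / D_full) * (U_full / U_model)

# Example: a 5 cm model jet tone at 1 kHz corresponds, at matched jet
# velocity, to roughly 50 Hz on a 1 m full-scale nozzle (f scales as 1/D).
f_fs = full_scale_frequency(1000.0, 0.05, 1.0, 300.0, 300.0)
print(f_fs)
```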

  7. Investigation of flow dynamics of liquid phase in a pilot-scale trickle bed reactor using radiotracer technique.

    PubMed

    Pant, H J; Sharma, V K

    2016-10-01

    A radiotracer investigation was carried out to measure the residence time distribution (RTD) of the liquid phase in a trickle bed reactor (TBR). The main objectives of the investigation were to characterize the radial and axial mixing of the liquid phase and to evaluate the performance of the liquid distributor/redistributor at different operating conditions. Mean residence times (MRTs), holdups (H) and the fractions of flow along different quadrants were estimated. Analysis of the measured RTD curves indicated a radially non-uniform distribution of the liquid phase across the beds. The overall RTD of the liquid phase, measured at the exit of the reactor, was simulated using a multi-parameter axial dispersion with exchange model (ADEM), and model parameters were obtained. The model simulations indicated that the TBR behaved as a plug flow reactor at most of the operating conditions used in the investigation. The results of the investigation helped to improve the existing design as well as to design a full-scale industrial TBR for petroleum refining applications.
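
    The first moment of a measured tracer response gives the mean residence time, MRT = ∫t C(t) dt / ∫C(t) dt. A minimal sketch of that computation, using a synthetic two-tanks-in-series curve (for which MRT = 2τ) in place of the measured radiotracer data:

```python
import numpy as np

def mean_residence_time(t, c):
    """Mean residence time from a tracer response curve sampled at
    times t: MRT = integral(t*C dt) / integral(C dt), evaluated here
    with the trapezoidal rule."""
    trapz = lambda y: float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)
    return trapz(t * c) / trapz(c)

# Synthetic stand-in for the measured curve: the impulse response of two
# ideal tanks in series, each with time constant tau, so MRT = 2*tau.
tau = 10.0
t = np.linspace(0.0, 100.0, 2001)
c = t * np.exp(-t / tau)
mrt = mean_residence_time(t, c)
print(mrt)   # close to 20
```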

  8. Termination Shock Transition in Multi-ion Multi-fluid MHD Models of the Heliosphere

    NASA Astrophysics Data System (ADS)

    Zieger, B.; Opher, M.; Toth, G.

    2013-12-01

    As evidenced by Voyager 2 observations, pickup ions (PUIs) play a significant role in the termination shock (TS) transition of the solar wind [Richardson et al., Nature, 2008]. Recent kinetic simulations [Ariad and Gedalin, JGR, 2013] concluded that the contribution of the high-energy tail of PUIs is negligible at the shock transition; the Rankine-Hugoniot (R-H) relations are determined by the low-energy body of PUIs. Particle-in-cell simulations by Wu et al. [JGR, 2010] have shown that the sum of the thermal solar wind and non-thermal PUI distributions downstream of the TS can be approximated by a 2-Maxwellian distribution. It is important to note that this 2-Maxwellian distribution neglects the suprathermal tail population, which has a characteristic power-law distribution. These results justify the fluid description of PUIs in our large-scale multi-ion multi-fluid MHD simulations of the heliospheric interface [Prested et al., JGR, 2013; Zieger et al., GRL, 2013]. The closure of the multi-ion MHD equations could be implemented with separate momentum and energy equations for the different ion species (thermal solar wind and PUIs), where the transfer rates of momentum and energy between the two ion species are treated as source terms, as in Glocer et al. [JGR, 2009]. Another option is to solve the total energy equation with an additional equation for the PUI pressure, as suggested by Fahr and Chalov [A&A, 2008]. In this paper, we validate the energy conservation and the R-H relations across the TS in different numerical implementations of our latest multi-ion multi-fluid MHD model. We assume an instantaneous pickup process, where the convection velocities of the two ion fluids are the same, and the so-called strong scattering approximation, where newly born PUIs attain their spherical shell distribution within a short distance on fluid scales (spatial scales much larger than the respective ion gyroradius).

  9. Field-scale multi-phase LNAPL remediation: Validating a new computational framework against sequential field pilot trials.

    PubMed

    Sookhak Lari, Kaveh; Johnston, Colin D; Rayner, John L; Davis, Greg B

    2018-03-05

    Remediation of subsurface systems, including groundwater, soil and soil gas, contaminated with light non-aqueous phase liquids (LNAPLs) is challenging. Field-scale pilot trials of multi-phase remediation were undertaken at a site to determine the effectiveness of recovery options. Sequential LNAPL skimming and vacuum-enhanced skimming, with and without water table drawdown, were trialled over 78 days, in total extracting over 5 m³ of LNAPL. For the first time, a multi-component simulation framework (including the multi-phase multi-component code TMVOC-MP and processing codes) was developed and applied to simulate the broad range of multi-phase remediation and recovery methods used in the field trials. This framework was validated against the sequential pilot trials by comparing predicted and measured LNAPL mass removal rates and compositional changes. The framework was tested on both a Cray supercomputer and a cluster. Simulations mimicked trends in LNAPL recovery rates (from 0.14 to 3 mL/s) across all remediation techniques, each operating over periods of 4-14 days within the 78-day trial. The code also approximated the order-of-magnitude compositional changes of hazardous chemical concentrations in extracted gas during vacuum-enhanced recovery. The verified framework enables longer-term prediction of the effectiveness of remediation approaches, allowing better determination of remediation endpoints and long-term risks.

  10. The Desired Concept Maps and Goal Setting for Assessing Professionalism in Medicine

    PubMed Central

    Guraya, Shaista S.; Mahabbat, Nehal Anam; Fallatah, Khulood Yahya; Al-Ahmadi, Bashaer Ahmad; Alalawi, Hadeel Hadi

    2016-01-01

    Owing to the multi-dimensional character of professionalism, no single assessment modality has been shown to assess professionalism reliably. This review aims to describe some of the popular assessment tools used to assess professionalism, with a view to formulating a framework for the assessment of professionalism in medicine. In December 2015, the online research databases MEDLINE, the Educational Resources Information Center (ERIC), Elton Bryson Stephens Company (EBSCO), SCOPUS, OVID and PsycINFO were searched for full-text English-language articles published from 2000 to 2015. The MeSH terms "professionalism" AND "duty" AND "assessment" OR "professionalism behavioural" AND "professionalism-cognitive" were used. Research articles that assessed professionalism across medical fields along with other areas of competency were included. A final list of 35 articles was selected for this review. Several tools are available for assessing professionalism, including, but not limited to, the mini clinical evaluation exercise, standardised direct observation of procedural skills, the professionalism mini-evaluation exercise, multi-source feedback and 360-degree evaluation, and case-based discussions. Because professionalism is a complex construct, a single assessment strategy is unlikely to measure it adequately. Since every assessment tool has its own weaknesses, triangulation involving multiple tools can compensate for the shortcomings of any single approach. Assessment of professionalism necessitates a combination of modalities at the individual, interpersonal, societal and institutional levels and should be accompanied by feedback and motivational reflection that will, in turn, lead to behaviour and identity formation. The assessment of professionalism in medicine should meet the criteria of validity, reliability, feasibility and acceptability.
    Educators are urged to enhance the depth and quality of assessment instruments in existing medical curricula to ensure the validity and reliability of assessment tools for professionalism. PMID:27437247

  11. A numerical model of a HIL scaled roller rig for simulation of wheel-rail degraded adhesion condition

    NASA Astrophysics Data System (ADS)

    Conti, Roberto; Meli, Enrico; Pugi, Luca; Malvezzi, Monica; Bartolini, Fabio; Allotta, Benedetto; Rindi, Andrea; Toni, Paolo

    2012-05-01

    Scaled roller rigs used for railway applications play a fundamental role in the development of new technologies and devices, combining the benefits of hardware in the loop (HIL) with reduced economic investment. The main drawback of a scaled roller rig with respect to full-scale ones is the increased complexity due to the scaling factors. For this reason, before building the test rig, developing a software model of the HIL system can be useful for analysing the system behaviour in different operating conditions. One has to consider the multi-body behaviour of the scaled roller rig, the controller and the model of the virtual vehicle whose dynamics are to be reproduced on the rig. The main purpose of this work is the development of a complete model that satisfies these requirements and, in particular, the analysis of the performance of the controller and of the dynamic behaviour of the scaled roller rig when disturbances are simulated under low adhesion conditions. Since the scaled roller rig will be used to simulate degraded adhesion conditions, an accurate and realistic wheel-roller contact model also has to be included. The contact model consists of two parts: contact point detection and the adhesion model. The first part is based on a numerical method described in previous studies for the wheel-rail case, modified to simulate the three-dimensional contact between revolute surfaces (wheel-roller). The second part evaluates the contact forces by means of Hertz theory for the normal problem and Kalker theory for the tangential problem. Numerical tests were performed in which low adhesion conditions were simulated and bogie hunting and dynamic imbalance of the wheelsets were introduced. The tests were devoted to verifying the robustness of the control system with respect to some of the more frequent disturbances that may influence the roller rig dynamics.
    In particular, we verified that wheelset imbalance can significantly influence system performance; to reduce the effect of this disturbance, a multistate filter was designed.
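
    For the normal problem, Hertz theory gives a nonlinear stiffness of the form F = (4/3) E* √R_eff δ^(3/2). The sketch below uses the simplified spherical-contact formula with assumed steel properties and radii, rather than the elliptic wheel-roller contact geometry solved in the paper.

```python
import math

def hertz_normal_force(delta, R1, R2, E1, E2, nu1, nu2):
    """Hertzian normal contact force between two elastic spheres:
    F = (4/3) * E_eff * sqrt(R_eff) * delta**1.5,
    where R_eff combines the radii and E_eff the elastic constants.
    A simplified spherical-contact stand-in for elliptic contact."""
    if delta <= 0.0:          # no interpenetration, no contact force
        return 0.0
    R_eff = R1 * R2 / (R1 + R2)
    E_eff = 1.0 / ((1 - nu1 ** 2) / E1 + (1 - nu2 ** 2) / E2)
    return (4.0 / 3.0) * E_eff * math.sqrt(R_eff) * delta ** 1.5

# Assumed steel wheel (R = 0.1 m) on steel roller (R = 0.45 m),
# 10 micron indentation; gives a force on the order of a kilonewton.
F = hertz_normal_force(1e-5, 0.1, 0.45, 210e9, 210e9, 0.3, 0.3)
print(F)
```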

  12. Simulating multi-scale oceanic processes around Taiwan on unstructured grids

    NASA Astrophysics Data System (ADS)

    Yu, Hao-Cheng; Zhang, Yinglong J.; Yu, Jason C. S.; Terng, C.; Sun, Weiling; Ye, Fei; Wang, Harry V.; Wang, Zhengui; Huang, Hai

    2017-11-01

    We validate a 3D unstructured-grid (UG) model for simulating multi-scale processes in the Northwestern Pacific around Taiwan, using recently developed techniques (Zhang et al., Ocean Modelling, 102, 64-81, 2016) that require no bathymetry smoothing, even in this region of prevalent steep bottom slopes and many islands. The focus is on short-term forecasts of several months rather than long-term variability. Compared with satellite products, the errors for the simulated sea-surface height (SSH) and sea-surface temperature (SST) are similar to those of a reference data-assimilated global model. In the nearshore region, comparison with 34 tide gauges around Taiwan indicates an average RMSE of 13 cm for the tidal elevation. The average RMSE for SST at 6 coastal buoys is 1.2 °C. The mean transport and eddy kinetic energy compare reasonably with previously published values and with the reference model used to provide boundary and initial conditions. The model suggests a ∼2-day interruption of the Kuroshio east of Taiwan during a typhoon period. The effect of tidal mixing is shown to be significant nearshore. The multi-scale model is easily extendable to target regions of interest owing to its UG framework and a flexible vertical gridding system, which is shown to be superior to terrain-following coordinates.

  13. Simulation of DKIST solar adaptive optics system

    NASA Astrophysics Data System (ADS)

    Marino, Jose; Carlisle, Elizabeth; Schmidt, Dirk

    2016-07-01

    Solar adaptive optics (AO) simulations are a valuable tool to guide the design and optimization of current and future solar AO and multi-conjugate AO (MCAO) systems. Solar AO and MCAO systems rely on extended-object cross-correlating Shack-Hartmann wavefront sensors to measure the wavefront. Accurate solar AO simulations require computationally intensive operations, which until recently presented a prohibitive computational cost. We present an update on the status of a solar AO and MCAO simulation tool being developed at the National Solar Observatory. The simulation tool is a multi-threaded application written in C++ that takes advantage of current large multi-core CPU systems and fast Ethernet connections to provide accurate full simulation of solar AO and MCAO systems. It interfaces with KAOS, a state-of-the-art solar AO control software package developed by the Kiepenheuer-Institut fuer Sonnenphysik, which provides reliable AO control. We report on the latest results produced by the solar AO simulation tool.
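
    Extended-object Shack-Hartmann sensing estimates the local wavefront slope from the image shift between a reference and a live subaperture image. A minimal FFT cross-correlation sketch of that shift estimate (integer-pixel only; real sensors add subpixel peak interpolation, and all names here are illustrative):

```python
import numpy as np

def correlation_shift(ref, img):
    """Estimate the integer-pixel shift between a reference subaperture
    image and a live one via circular FFT cross-correlation."""
    # Remove the mean so the correlation peak reflects structure,
    # not overall intensity.
    ref = ref - ref.mean()
    img = img - img.mean()
    corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the peak location to a signed shift (wrap-around convention).
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

# Synthetic "granulation" scene cyclically shifted by (3, -2) pixels.
rng = np.random.default_rng(2)
scene = rng.random((64, 64))
shifted = np.roll(scene, (3, -2), axis=(0, 1))
print(correlation_shift(scene, shifted))   # (3, -2)
```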

  14. Numerical models for fluid-grains interactions: opportunities and limitations

    NASA Astrophysics Data System (ADS)

    Esteghamatian, Amir; Rahmani, Mona; Wachs, Anthony

    2017-06-01

    In the framework of a multi-scale approach, we develop numerical models for suspension flows. At the micro-scale level, we perform particle-resolved numerical simulations using a Distributed Lagrange Multiplier/Fictitious Domain approach. At the meso-scale level, we use a two-way Euler/Lagrange approach with a Gaussian filtering kernel to model fluid-solid momentum transfer. At both levels, particles are individually tracked in a Lagrangian way and all inter-particle collisions are computed by a Discrete Element/soft-sphere method. These numerical models have been extended to handle particles of arbitrary shape (non-spherical, angular and even non-convex) as well as to treat heat and mass transfer. All simulation tools are fully MPI-parallel with standard domain decomposition and run on supercomputers with satisfactory scalability on up to a few thousand cores. The main asset of the multi-scale approach is the ability to extend our comprehension of the dynamics of suspension flows from the knowledge acquired in high-fidelity micro-scale simulations, and to use that knowledge to improve the meso-scale model. We illustrate how we can benefit from this strategy for a fluidized bed, where we introduce a stochastic drag force model derived from micro-scale simulations to recover the proper level of particle fluctuations. Conversely, we discuss the limitations of these modelling tools, such as their limited ability to capture lubrication forces and boundary layers in highly inertial flows, and we suggest ways to overcome these limitations in order to further enhance the capabilities of the numerical models.
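The Discrete Element/soft-sphere collision treatment mentioned above resolves contacts by letting particles overlap slightly and applying a repulsive spring-dashpot force. A minimal sketch of the normal-force law follows (a linear model; the stiffness and damping values are illustrative, not those of the authors' code):

```python
import numpy as np

def contact_force(x_i, x_j, v_i, v_j, radius=0.5, k=1.0e4, eta=5.0):
    """Linear spring-dashpot normal contact force on particle i due to
    particle j; zero when the spheres do not overlap."""
    rij = x_i - x_j
    dist = np.linalg.norm(rij)
    overlap = 2.0 * radius - dist
    if overlap <= 0.0:
        return np.zeros_like(rij)
    n = rij / dist                       # unit normal pointing j -> i
    vn = np.dot(v_i - v_j, n)            # normal relative velocity
    fn = k * overlap - eta * vn          # elastic repulsion + damping
    return max(fn, 0.0) * n              # contact never attracts

# Two unit-diameter spheres overlapping by 0.1
f = contact_force(np.zeros(3), np.array([0.9, 0.0, 0.0]),
                  np.zeros(3), np.zeros(3))
print(f)   # repulsive force on i, directed along -x away from j
```

Production DEM codes add tangential friction, Hertzian stiffness, and history-dependent contacts; the structure of the force evaluation stays the same.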

  15. Multi-scale evaporator architectures for geothermal binary power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabau, Adrian S; Nejad, Ali; Klett, James William

    2016-01-01

    In this paper, novel heat exchanger architectures are proposed for evaporators used in Organic Rankine Cycles. A multi-scale heat exchanger concept was developed by employing successive plenums at several length-scale levels. Flow passages contain features at both the macro-scale and micro-scale, designed from Constructal Theory principles. Aside from pumping power and overall thermal resistance, several other factors were considered in order to fully assess the performance of the new heat exchangers, such as the weight of metal structures, surface area per unit volume, and total footprint. Component simulations based on laminar flow correlations for supercritical R134a were used to obtain performance indicators.

  16. Assessing teamwork performance in obstetrics: A systematic search and review of validated tools.

    PubMed

    Fransen, Annemarie F; de Boer, Liza; Kienhorst, Dieneke; Truijens, Sophie E; van Runnard Heimel, Pieter J; Oei, S Guid

    2017-09-01

    Teamwork performance is an essential component for the clinical efficiency of multi-professional teams in obstetric care. As patient safety is related to teamwork performance, it has become an important learning goal in simulation-based education. In order to improve teamwork performance, reliable assessment tools are required. These can be used to provide feedback during training courses, or to compare learning effects between different types of training courses. The aim of the current study is to (1) identify the available assessment tools to evaluate obstetric teamwork performance in a simulated environment, and (2) evaluate their psychometric properties in order to identify the most valuable tool(s) to use. We performed a systematic search in PubMed, MEDLINE, and EMBASE to identify articles describing assessment tools for the evaluation of obstetric teamwork performance in a simulated environment. In order to evaluate the quality of the identified assessment tools, the standards and grading rules were applied as recommended by the Accreditation Council for Graduate Medical Education (ACGME) Committee on Educational Outcomes. The included studies were also assessed according to the Oxford Centre for Evidence Based Medicine (OCEBM) levels of evidence. This search resulted in the inclusion of five articles describing the following six tools: Clinical Teamwork Scale, Human Factors Rating Scale, Global Rating Scale, Assessment of Obstetric Team Performance, Global Assessment of Obstetric Team Performance, and the Teamwork Measurement Tool. Based on the ACGME guidelines we assigned a Class 3, level C of evidence, to all tools. Regarding the OCEBM levels of evidence, a level 3b was assigned to two studies and a level 4 to four studies. The Clinical Teamwork Scale demonstrated the most comprehensive validation, and the Teamwork Measurement Tool demonstrated promising results; however, further investigation of its reliability is recommended. Copyright © 2017. Published by Elsevier B.V.

  17. Employing multi-GPU power for molecular dynamics simulation: an extension of GALAMOST

    NASA Astrophysics Data System (ADS)

    Zhu, You-Liang; Pan, Deng; Li, Zhan-Wei; Liu, Hong; Qian, Hu-Jun; Zhao, Yang; Lu, Zhong-Yuan; Sun, Zhao-Yan

    2018-04-01

    We describe an algorithm for employing multi-GPU power on the basis of Message Passing Interface (MPI) domain decomposition in a molecular dynamics code, GALAMOST, which is designed for the coarse-grained simulation of soft matter. The multi-GPU version is developed from our previous single-GPU version. In multi-GPU runs, each GPU takes charge of one domain and runs the single-GPU code path. The communication between neighbouring domains follows an algorithm similar to that of the CPU-based code LAMMPS, but is optimised specifically for GPUs. We employ a memory-saving design that enlarges the maximum system size attainable on the same hardware. An optimisation algorithm is employed to prolong the update period of the neighbour list. We demonstrate good performance of multi-GPU runs on a workstation for simulations of a Lennard-Jones liquid, a dissipative particle dynamics liquid, a polymer-nanoparticle composite, and two-patch particles. Good scaling across many cluster nodes is presented for the two-patch particles.
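Prolonging the neighbour-list update period, as mentioned above, is commonly done with a Verlet "skin": the list stays valid until some particle has moved more than half the skin since the last rebuild, so no pair can cross the interaction cutoff unseen. A sketch of that rebuild trigger (a standard criterion; GALAMOST's exact scheme may differ):

```python
import numpy as np

class NeighborListTrigger:
    """Rebuild criterion for a Verlet neighbour list with a skin
    distance: rebuild once any particle has moved more than half the
    skin since the last rebuild."""
    def __init__(self, positions, skin=0.4):
        self.skin = skin
        self.ref = positions.copy()      # positions at last rebuild

    def needs_rebuild(self, positions):
        disp = np.linalg.norm(positions - self.ref, axis=1)
        return bool(disp.max() > 0.5 * self.skin)

    def mark_rebuilt(self, positions):
        self.ref = positions.copy()

pos = np.zeros((100, 3))
trig = NeighborListTrigger(pos, skin=0.4)
pos[0, 0] = 0.15                         # less than skin/2 = 0.2
print(trig.needs_rebuild(pos))           # → False
pos[0, 0] = 0.25                         # more than skin/2
print(trig.needs_rebuild(pos))           # → True
```

A larger skin means rarer rebuilds but longer pair lists, so the skin is a tuning knob between list-build cost and per-step force cost.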

  18. Towards Data-Driven Simulations of Wildfire Spread using Ensemble-based Data Assimilation

    NASA Astrophysics Data System (ADS)

    Rochoux, M. C.; Bart, J.; Ricci, S. M.; Cuenot, B.; Trouvé, A.; Duchaine, F.; Morel, T.

    2012-12-01

    Real-time prediction of a propagating wildfire remains a challenging task because the problem involves both multiple physics and multiple scales. The propagation speed of wildfires, also called the rate of spread (ROS), is determined by complex interactions among pyrolysis, combustion, flow dynamics and atmospheric dynamics, occurring across vegetation, topographical and meteorological scales. Current operational fire spread models are mainly based on a semi-empirical parameterization of the ROS in terms of vegetation, topographical and meteorological properties. For fire spread simulation to be predictive and compatible with operational applications, the uncertainty in the ROS model should be reduced. As recent progress in remote sensing technology provides new ways to monitor the fire front position, a promising approach to overcome the difficulties found in wildfire spread simulations is to integrate fire modeling and fire sensing technologies using data assimilation (DA). For this purpose we have developed a prototype data-driven wildfire spread simulator that provides optimal estimates of poorly known model parameters [*]. The data-driven simulation capability is adapted for more realistic wildfire spread: it considers a regional-scale fire spread model that is informed by observations of the fire front location. An Ensemble Kalman Filter (EnKF) algorithm based on a parallel computing platform (OpenPALM) was implemented in order to perform a multi-parameter sequential estimation in which wind magnitude and direction are estimated in addition to vegetation properties (see attached figure). The EnKF algorithm shows a good ability to track a small-scale grassland fire experiment and accounts well for the sensitivity of the simulation outcomes to the control parameters. 
In conclusion, data assimilation is a promising approach to more accurately forecast time-varying wildfire spread conditions as new airborne-like observations of the fire front location become available. [*] Rochoux, M.C., Delmotte, B., Cuenot, B., Ricci, S., and Trouvé, A. (2012) "Regional-scale simulations of wildland fire spread informed by real-time flame front observations", Proc. Combust. Inst., 34, in press http://dx.doi.org/10.1016/j.proci.2012.06.090 Figure: EnKF-based tracking of a small-scale grassland fire experiment, with estimation of wind and fuel parameters.
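The EnKF analysis step described above corrects each ensemble member with a Kalman gain built from sample covariances. Below is a toy sketch estimating a single spread-rate parameter from an observed front position; it is a generic stochastic (perturbed-observation) EnKF, not the OpenPALM implementation, and all numbers are illustrative.

```python
import numpy as np

def enkf_update(ensemble, observations, obs_operator, obs_err_std, rng):
    """Stochastic EnKF analysis step.
    ensemble: (n_members, n_state); observations: (n_obs,)."""
    n = ensemble.shape[0]
    hx = np.array([obs_operator(m) for m in ensemble])   # (n, n_obs)
    X = ensemble - ensemble.mean(0)
    Y = hx - hx.mean(0)
    # Sample covariances and Kalman gain K = Pxy (Pyy + R)^-1
    Pxy = X.T @ Y / (n - 1)
    Pyy = Y.T @ Y / (n - 1) + np.eye(len(observations)) * obs_err_std**2
    K = Pxy @ np.linalg.inv(Pyy)
    # Perturbed observations keep the analysis spread consistent
    obs_pert = observations + rng.normal(0.0, obs_err_std,
                                         size=(n, len(observations)))
    return ensemble + (obs_pert - hx) @ K.T

# Toy problem: estimate a spread rate r from an observed front position
# x_front = r * t at t = 10 (true r = 0.5, so x_front = 5.0)
rng = np.random.default_rng(1)
prior = rng.normal(1.0, 0.3, size=(200, 1))              # biased prior
posterior = enkf_update(prior, np.array([5.0]),
                        lambda m: np.array([m[0] * 10.0]), 0.1, rng)
print(prior.mean(), posterior.mean())    # posterior pulled toward 0.5
```

In the wildfire application the state vector bundles wind magnitude, wind direction and fuel parameters, and the observation operator maps parameters to a simulated fire front position.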

  19. Multi-time scale Climate Informed Stochastic Hybrid Simulation-Optimization Model (McISH model) for Multi-Purpose Reservoir System

    NASA Astrophysics Data System (ADS)

    Lu, M.; Lall, U.

    2013-12-01

    In order to mitigate the impacts of climate change, proactive management strategies to operate reservoirs and dams are needed. A multi-time scale climate informed stochastic model is developed to optimize the operations for a multi-purpose single reservoir by simulating decadal, interannual, seasonal and sub-seasonal variability. We apply the model to a setting motivated by the largest multi-purpose dam in northern India, the Bhakhra reservoir on the Sutlej River, a tributary of the Indus. This leads to a focus on the timing and amplitude of the flows for the monsoon and snowmelt periods. The flow simulations are constrained by multiple sources of historical data and GCM future projections that are being developed through an NSF-funded project titled 'Decadal Prediction and Stochastic Simulation of Hydroclimate Over Monsoon Asia'. The model presented is a multilevel, nonlinear programming model that aims to optimize the reservoir operating policy on a decadal horizon and the operation strategy on an updated annual basis. The model is hierarchical: two optimization models designated for different time scales are nested like matryoshka dolls. The two optimization models have similar mathematical formulations, with some modifications to meet the constraints within each time frame. The first level provides an optimization solution for policy makers to determine contracted annual releases to different uses with a prescribed reliability; the second level is a within-the-period (e.g., year) operation optimization scheme that allocates the contracted annual releases on a subperiod (e.g., monthly) basis, with additional benefit for extra release and penalty for failure. The model maximizes the net benefit of irrigation, hydropower generation and flood control in each of the periods. The model design thus facilitates the consistent application of weather and climate forecasts to improve operations of reservoir systems. 
The decadal flow simulations are re-initialized every year with updated climate projections to improve the reliability of the operation rules for the next year, within which the seasonal operation strategies are nested. The multi-level structure can be repeated for monthly operation with weekly subperiods to take advantage of evolving weather forecasts and seasonal climate forecasts. As a result of the hierarchical structure, sub-seasonal and even weather-time-scale updates and adjustments can be achieved. Given an ensemble of these scenarios, the McISH reservoir simulation-optimization model is able to derive the desired reservoir storage levels, including minimum and maximum, as a function of calendar date, and the associated release patterns. The multi-time scale approach allows adaptive management of water supplies acknowledging the changing risks, meeting the objectives over the decade in expected value and controlling the near-term and planning-period risk through probabilistic reliability constraints. For the applications presented, the target season is the monsoon season from June to September. The model also includes a monthly flood volume forecast model, based on a copula density fit to the monthly flow and the flood volume. This is used to guide dynamic allocation of the flood control volume given the forecasts.
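The within-the-period allocation problem in the second level above can be illustrated with a small dynamic-programming sketch over a discretised storage grid. This is a toy single-reservoir, single-benefit version with illustrative numbers; the paper's multilevel nonlinear program with reliability constraints is far richer.

```python
import numpy as np

def optimize_releases(inflows, s0, s_max, r_max, benefit, n_grid=201):
    """Backward dynamic programming on a storage grid: choose period
    releases maximising total benefit subject to the mass balance
    s[t+1] = s[t] + inflow[t] - release[t] and capacity bounds."""
    S = np.linspace(0.0, s_max, n_grid)
    V = np.zeros(n_grid)                 # terminal value = 0
    policy = []
    for q in reversed(inflows):
        V_new, best_r = np.empty(n_grid), np.empty(n_grid)
        for i, s in enumerate(S):
            r = np.linspace(0.0, min(r_max, s + q), 50)
            s_next = np.clip(s + q - r, 0.0, s_max)
            j = np.clip(np.searchsorted(S, s_next), 0, n_grid - 1)
            val = benefit(r) + V[j]
            k = int(np.argmax(val))
            V_new[i], best_r[i] = val[k], r[k]
        V = V_new
        policy.insert(0, best_r)
    # Forward simulation from the initial storage with the policy
    s, releases = float(s0), []
    for t, q in enumerate(inflows):
        r = min(float(np.interp(s, S, policy[t])), s + q, r_max)
        releases.append(r)
        s = min(max(s + q - r, 0.0), s_max)
    return releases

# A concave benefit (diminishing returns) favours smoothing releases
rel = optimize_releases([10.0, 40.0, 40.0, 10.0], s0=20.0, s_max=60.0,
                        r_max=35.0, benefit=np.sqrt)
print([round(r, 1) for r in rel], round(sum(rel), 1))
```

With a concave benefit and no terminal value on storage, the optimal schedule spreads releases across the periods rather than passing the wet-season peak straight through.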

  20. Multi-scale Modeling and Analysis of Nano-RFID Systems on HPC Setup

    NASA Astrophysics Data System (ADS)

    Pathak, Rohit; Joshi, Satyadhar

    In this paper we address some of the complex modeling aspects, such as multi-scale modeling and MATLAB SUGAR-based modeling, and show the complexities involved in the analysis of Nano-RFID (Radio Frequency Identification) systems. We present modeling and simulation results, demonstrate some novel ideas, and describe library development for Nano-RFID. Multi-scale modeling plays a very important role for nanotech-enabled devices, whose properties sometimes cannot be explained by abstraction-level theories. Reliability and packaging remain among the major hindrances to the practical implementation of Nano-RFID devices, and modeling and simulation will play a very important role in addressing them. CNTs are a future low-power material that may replace CMOS, and their integration with CMOS and MEMS circuitry will play an important role in realizing the true power of Nano-RFID systems. RFID based on innovations in nanotechnology is presented, including MEMS modeling of the antenna and sensors and their integration into the circuitry. Incorporating these elements, a Nano-RFID can be designed for areas such as human implantation and complex banking applications. We propose modeling RFID with a multi-scale approach to predict its properties accurately, and we also model recently proposed MEMS devices with possible applications in RFID. We further cover the applications and advantages of Nano-RFID in various areas. RF MEMS is mature and its devices are being successfully commercialized, but pushing it to the limits of nano domains and integrating it with single-chip RFID requires a novel approach, which is proposed here. We have modeled a MEMS-based transponder and shown the distribution for multi-scale modeling of Nano-RFID.

  1. Chondrocyte Deformations as a Function of Tibiofemoral Joint Loading Predicted by a Generalized High-Throughput Pipeline of Multi-Scale Simulations

    PubMed Central

    Sibole, Scott C.; Erdemir, Ahmet

    2012-01-01

    Cells of the musculoskeletal system are known to respond to mechanical loading, and chondrocytes within the cartilage are not an exception. However, understanding how joint-level loads relate to cell-level deformations, e.g. in the cartilage, is not a straightforward task. In this study, a multi-scale analysis pipeline was implemented to post-process the results of a macro-scale finite element (FE) tibiofemoral joint model to provide joint-mechanics-based displacement boundary conditions to micro-scale cellular FE models of the cartilage, for the purpose of characterizing chondrocyte deformations in relation to tibiofemoral joint loading. It was possible to identify the load distribution within the knee among its tissue structures and ultimately within the cartilage among its extracellular matrix, pericellular environment and resident chondrocytes. Various cellular deformation metrics (aspect ratio change, volumetric strain, cellular effective strain and maximum shear strain) were calculated. To illustrate further utility of this multi-scale modeling pipeline, two micro-scale cartilage constructs were considered: an idealized single cell at the centroid of a 100×100×100 μm block, as commonly used in past research studies, and an anatomically based representation of the middle zone of tibiofemoral cartilage (an 11-cell model of the same volume). In both cases, chondrocytes experienced amplified deformations compared to those at the macro-scale, as predicted by simulating compressive loading of one body weight on the tibiofemoral joint. In the 11-cell case, all cells experienced less deformation than in the single-cell case, and deformation varied considerably among cells residing in the same block. The coupling method proved to be highly scalable due to micro-scale model independence, which allowed exploitation of distributed-memory computing architectures. 
The method's generalized nature also allows substitution of any macro-scale and/or micro-scale model, providing applicability to other multi-scale continuum mechanics problems. PMID:22649535
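The cellular deformation metrics listed above can all be derived from the deformation gradient F of a micro-scale element. A sketch of those computations from standard continuum-mechanics definitions (the pipeline's own post-processing and any thresholds are not reproduced; the example F is illustrative):

```python
import numpy as np

def cell_deformation_metrics(F):
    """Deformation metrics for a cell under deformation gradient F:
    volumetric strain from J = det(F); aspect-ratio change from the
    principal stretches of C = F^T F; max shear strain from the
    extreme principal Green-Lagrange strains."""
    J = np.linalg.det(F)
    C = F.T @ F
    stretches = np.sqrt(np.linalg.eigvalsh(C))   # ascending order
    E = 0.5 * (stretches**2 - 1.0)               # principal G-L strains
    return {
        "volumetric_strain": J - 1.0,
        "aspect_ratio": stretches[-1] / stretches[0],
        "max_shear_strain": 0.5 * (E[-1] - E[0]),
    }

# 20 % uniaxial compression with near-incompressible lateral bulging
F = np.diag([1.118, 1.118, 0.8])
m = cell_deformation_metrics(F)
print({k: round(v, 3) for k, v in m.items()})
```

For the nearly isochoric example above the volumetric strain is close to zero while the aspect ratio and shear strain are substantial, which is exactly the distinction the separate metrics are meant to capture.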

  2. Core-Collapse Supernovae Explored by Multi-D Boltzmann Hydrodynamic Simulations

    NASA Astrophysics Data System (ADS)

    Sumiyoshi, Kohsuke; Nagakura, Hiroki; Iwakami, Wakana; Furusawa, Shun; Matsufuru, Hideo; Imakura, Akira; Yamada, Shoichi

    We report the latest results of numerical simulations of core-collapse supernovae, obtained by solving multi-D neutrino-radiation hydrodynamics with Boltzmann equations. One of the longstanding issues in the explosion mechanism of supernovae has been uncertainty in the approximations of neutrino transfer in multi-D, such as the diffusion approximation and the ray-by-ray method. The neutrino transfer is essential, together with 2D/3D hydrodynamical instabilities, to evaluate the neutrino heating behind the shock wave for successful explosions and to predict the neutrino burst signals. We tackled this difficult problem by utilizing our solver of the 6D Boltzmann equation for neutrinos in 3D space and 3D neutrino momentum space, coupled with multi-D hydrodynamics and with special and general relativistic extensions. We have performed a set of 2D core-collapse simulations of 11 M⊙ and 15 M⊙ stars on the K computer in Japan, following long-term evolution over 400 ms after bounce to reveal the outcome of full Boltzmann hydrodynamic simulations with a sophisticated equation of state with multiple nuclear species and updated rates for electron captures on nuclei.

  3. A multi-species exchange model for fully fluctuating polymer field theory simulations.

    PubMed

    Düchs, Dominik; Delaney, Kris T; Fredrickson, Glenn H

    2014-11-07

    Field-theoretic models have been used extensively to study the phase behavior of inhomogeneous polymer melts and solutions, both in self-consistent mean-field calculations and in numerical simulations of the full theory capturing composition fluctuations. The models commonly used can be grouped into two categories, namely, species models and exchange models. Species models involve integrations of functionals that explicitly depend on fields originating both from species density operators and their conjugate chemical potential fields. In contrast, exchange models retain only linear combinations of the chemical potential fields. In the two-component case, development of exchange models has been instrumental in enabling stable complex Langevin (CL) simulations of the full complex-valued theory. No comparable stable CL approach has yet been established for field theories of the species type. Here, we introduce an extension of the exchange model to an arbitrary number of components, namely, the multi-species exchange (MSE) model, which greatly expands the classes of soft material systems that can be accessed by the complex Langevin simulation technique. We demonstrate the stability and accuracy of the MSE-CL sampling approach using numerical simulations of triblock and tetrablock terpolymer melts, and tetrablock quaterpolymer melts. This method should enable studies of a wide range of fluctuation phenomena in multiblock/multi-species polymer blends and composites.
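The stability question that motivates the exchange formulation can be illustrated with a toy complex Langevin run: for a one-variable Gaussian "action" S(z) = a z²/2 with complex a, the CL average of z² should converge to the exact value 1/a. The sketch below is a generic CL demonstration, not the field-theoretic MSE-CL scheme itself, and all parameters are illustrative.

```python
import numpy as np

def complex_langevin_z2(a, dt=2e-3, n_steps=500_000, seed=0):
    """Complex Langevin estimate of <z^2> for S(z) = a z^2 / 2.
    The drift is -dS/dz = -a z; the noise is real, but z wanders into
    the complex plane because a is complex.  Exact answer: 1/a."""
    rng = np.random.default_rng(seed)
    noise = np.sqrt(2.0 * dt) * rng.normal(size=n_steps)
    z = 0.0 + 0.0j
    samples = np.empty(n_steps, dtype=complex)
    for i in range(n_steps):
        z += -a * z * dt + noise[i]      # Euler-Maruyama CL step
        samples[i] = z
    # Discard the first 10 % as thermalisation
    return (samples[n_steps // 10:] ** 2).mean()

a = 1.0 + 0.5j
z2 = complex_langevin_z2(a)
print(z2, "exact:", 1.0 / a)             # exact is 0.8 - 0.4j
```

For this Gaussian case CL provably converges; the hard part addressed by the MSE model is keeping such trajectories stable for the interacting, many-field theory.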

  4. Facilitating higher-fidelity simulations of axial compressor instability and other turbomachinery flow conditions

    NASA Astrophysics Data System (ADS)

    Herrick, Gregory Paul

    The quest to accurately capture flow phenomena with length scales both short and long, and to accurately represent complex flow phenomena within disparately sized geometry, inspires a need for an efficient, high-fidelity, multi-block structured computational fluid dynamics (CFD) parallel computational scheme. This research presents and demonstrates a more efficient computational method by which to perform multi-block structured CFD parallel computational simulations, thus facilitating higher-fidelity solutions of complicated geometries (due to the inclusion of grids for "small" flow areas which are often merely modeled) and their associated flows. This computational framework offers greater flexibility and user control in allocating the resource balance between process count and wall-clock computation time. The principal modifications implemented in this revision consist of a "multiple grid blocks per processing core" software infrastructure and an analytic computation of viscous flux Jacobians. The development of this scheme is largely motivated by the desire to simulate axial compressor stall inception with more complete gridding of the flow passages (including rotor tip clearance regions) than has previously been done, while maintaining high computational efficiency (i.e., minimal consumption of computational resources); the paradigm is therefore demonstrated with an examination of instability in a transonic axial compressor. However, the paradigm presented herein facilitates CFD simulation of myriad previously impractical geometries and flows and is not limited to detailed analyses of axial compressor flows. 
While the simulations presented herein were technically possible under the previous structure of the subject software, they were much less computationally efficient and thus not pragmatically feasible. The previous research using this software to perform three-dimensional, full-annulus, time-accurate, unsteady, full-stage (with sliding interface) simulations of rotating stall inception in axial compressors utilized tip clearance periodic models, while the scheme here is demonstrated by a simulation of axial compressor stall inception utilizing gridded rotor tip clearance regions. As will be discussed, much previous research (experimental, theoretical, and computational) has suggested that understanding clearance flow behavior is critical to understanding stall inception, and previous computational research efforts which have used tip clearance models have raised the question, "What about the clearance flows?". This research begins to address that question.

  5. Using CellML with OpenCMISS to Simulate Multi-Scale Physiology

    PubMed Central

    Nickerson, David P.; Ladd, David; Hussan, Jagir R.; Safaei, Soroush; Suresh, Vinod; Hunter, Peter J.; Bradley, Christopher P.

    2014-01-01

    OpenCMISS is an open-source modeling environment aimed, in particular, at the solution of bioengineering problems. OpenCMISS consists of two main parts: a computational library (OpenCMISS-Iron) and a field manipulation and visualization library (OpenCMISS-Zinc). OpenCMISS is designed for the solution of coupled multi-scale, multi-physics problems in a general-purpose parallel environment. CellML is an XML format designed to encode biophysically based systems of ordinary differential equations and both linear and non-linear algebraic equations. A primary design goal of CellML is to allow mathematical models to be encoded in a modular and reusable format to aid reproducibility and interoperability of modeling studies. In OpenCMISS, we make use of CellML models to enable users to configure various aspects of their multi-scale physiological models. This avoids the need for users to be familiar with the OpenCMISS internal code in order to perform customized computational experiments. Examples of this are: cellular electrophysiology models embedded in tissue electrical propagation models; material constitutive relationships for mechanical growth and deformation simulations; time-varying boundary conditions for various problem domains; and fluid constitutive relationships and lumped-parameter models. In this paper, we provide implementation details describing how CellML models are integrated into multi-scale physiological models in OpenCMISS. The external interface OpenCMISS presents to users is also described, including specific examples exemplifying the extensibility and usability these tools provide to the physiological modeling and simulation community. We conclude with some thoughts on future extension of OpenCMISS to make use of other community-developed information standards, such as FieldML, SED-ML, and BioSignalML. 
Plans for the integration of accelerator code (graphical processing unit and field programmable gate array) generated from CellML models are also discussed. PMID:25601911

  6. Assessing the weighted multi-objective adaptive surrogate model optimization to derive large-scale reservoir operating rules with sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Jingwen; Wang, Xu; Liu, Pan; Lei, Xiaohui; Li, Zejun; Gong, Wei; Duan, Qingyun; Wang, Hao

    2017-01-01

    The optimization of a large-scale reservoir system is time-consuming due to its intrinsic characteristics of non-commensurable objectives and high dimensionality. One way to address the problem is to employ an efficient multi-objective optimization algorithm in the derivation of large-scale reservoir operating rules. In this study, the Weighted Multi-Objective Adaptive Surrogate Model Optimization (WMO-ASMO) algorithm is used. It consists of three steps: (1) simplifying the large-scale reservoir operating rules with an aggregation-decomposition model, (2) identifying the most sensitive parameters through multivariate adaptive regression splines (MARS) for dimension reduction, and (3) reducing computational cost and speeding up the search with WMO-ASMO, embedded with the weighted non-dominated sorting genetic algorithm II (WNSGAII). An intercomparison of the non-dominated sorting genetic algorithm II (NSGAII), WNSGAII and WMO-ASMO is conducted for the large-scale reservoir system of the Xijiang river basin in China. Results indicate that: (1) WNSGAII surpasses NSGAII in the median annual power generation, increased by 1.03% (from 523.29 to 528.67 billion kW h), and the median ecological index, improved by 3.87% (from 1.879 to 1.809) with 500 simulations, owing to the weighted crowding distance; and (2) WMO-ASMO outperforms NSGAII and WNSGAII in terms of better solutions (annual power generation of 530.032 billion kW h and ecological index of 1.675) with 1000 simulations, with computational time reduced by 25% (from 10 h to 8 h) with 500 simulations. The proposed method is therefore more efficient and provides a better Pareto frontier.
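The weighted crowding distance credited above for WNSGAII's gains modifies the standard NSGA-II diversity measure, which sums, per objective, the normalized gap between each solution's neighbours on the sorted front. A sketch of that computation with an optional per-objective weight (the exact WNSGAII weighting scheme is an assumption here, not taken from the paper):

```python
import numpy as np

def crowding_distance(objectives, weights=None):
    """NSGA-II crowding distance for a front of objective vectors
    (n_points, n_obj); boundary points get infinite distance so they
    are always preserved.  `weights` optionally scales the
    per-objective contributions, as in a weighted variant."""
    n, m = objectives.shape
    w = np.ones(m) if weights is None else np.asarray(weights, float)
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(objectives[:, j])
        f = objectives[order, j]
        dist[order[0]] = dist[order[-1]] = np.inf
        span = f[-1] - f[0]
        if span > 0:
            # Interior points: normalized neighbour gap, weighted
            dist[order[1:-1]] += w[j] * (f[2:] - f[:-2]) / span
    return dist

front = np.array([[0.0, 1.0], [0.2, 0.7], [0.5, 0.5], [1.0, 0.0]])
d = crowding_distance(front)
print(d)   # boundary points infinite, interior points finite
```

Selection then prefers larger crowding distance among equally ranked solutions, spreading the population along the Pareto front.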

  7. Statistic inversion of multi-zone transition probability models for aquifer characterization in alluvial fans

    DOE PAGES

    Zhu, Lin; Dai, Zhenxue; Gong, Huili; ...

    2015-06-12

    Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, explicit mathematical formulations of the transition probability models are constructed with different optimized integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
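The transition probability models fitted here relate hydrofacies mean lengths and volumetric proportions to a spatial Markov chain. A toy 1D sketch of building and sampling such a chain follows; the off-diagonal split by proportions is a common transiogram-style approximation, not the paper's calibrated multi-zone model, and all parameter values are illustrative.

```python
import numpy as np

def transition_matrix(mean_lengths, proportions, dz):
    """Vertical lag-dz transition probability matrix for K hydrofacies:
    the chance of staying in facies k decays with its mean length, and
    transitions are split among the other facies in proportion to
    their volumetric fractions."""
    L = np.asarray(mean_lengths, float)
    p = np.asarray(proportions, float)
    K = len(L)
    T = np.empty((K, K))
    for k in range(K):
        stay = np.exp(-dz / L[k])        # survival of facies k over dz
        others = p.copy()
        others[k] = 0.0
        T[k] = (1.0 - stay) * others / others.sum()
        T[k, k] = stay
    return T

rng = np.random.default_rng(2)
T = transition_matrix([2.0, 0.5, 1.0], [0.5, 0.2, 0.3], dz=0.1)
# Simulate a vertical facies column by walking the chain
column = [0]
for _ in range(5000):
    column.append(int(rng.choice(3, p=T[column[-1]])))
print(np.bincount(column, minlength=3) / len(column))
```

Conditioning such draws on borehole data at fixed depths is what turns this forward simulation into the indicator simulation used in the study.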

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakaguchi, Koichi; Leung, Lai-Yung R.; Zhao, Chun

    This study presents a diagnosis of a multi-resolution approach using the Model for Prediction Across Scales - Atmosphere (MPAS-A) for simulating regional climate. Four AMIP experiments are conducted for 1999-2009. In the first two experiments, MPAS-A is configured using global quasi-uniform grids at 120 km and 30 km grid spacing. In the other two experiments, MPAS-A is configured using variable-resolution (VR) mesh with local refinement at 30 km over North America and South America embedded inside a quasi-uniform domain at 120 km elsewhere. Precipitation and related fields in the four simulations are examined to determine how well the VR simulations reproduce the features simulated by the globally high-resolution model in the refined domain. In previous analyses of idealized aqua-planet simulations, the characteristics of the global high-resolution simulation in moist processes only developed near the boundary of the refined region. In contrast, the AMIP simulations with VR grids are able to reproduce the high-resolution characteristics across the refined domain, particularly in South America. This indicates the importance of finely resolved lower-boundary forcing such as topography and surface heterogeneity for the regional climate, and demonstrates the ability of the MPAS-A VR to replicate the large-scale moisture transport as simulated in the quasi-uniform high-resolution model. Outside of the refined domain, some upscale effects are detected through large-scale circulation but the overall climatic signals are not significant at regional scales. Our results provide support for the multi-resolution approach as a computationally efficient and physically consistent method for modeling regional climate.

  9. Modeling Solar Wind Flow with the Multi-Scale Fluid-Kinetic Simulation Suite

    DOE PAGES

    Pogorelov, N.V.; Borovikov, S. N.; Bedford, M. C.; ...

    2013-04-01

    Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS) is a package of numerical codes capable of performing adaptive mesh refinement simulations of complex plasma flows in the presence of discontinuities and charge exchange between ions and neutral atoms. The flow of the ionized component is described with the ideal MHD equations, while the transport of atoms is governed either by the Boltzmann equation or multiple Euler gas dynamics equations. We have enhanced the code with additional physical treatments for the transport of turbulence and acceleration of pickup ions in the interplanetary space and at the termination shock. In this article, we present the results of our numerical simulation of the solar wind (SW) interaction with the local interstellar medium (LISM) in different time-dependent and stationary formulations. Numerical results are compared with the Ulysses, Voyager, and OMNI observations. Finally, the SW boundary conditions are derived from in-situ spacecraft measurements and remote observations.

  10. Multi-scale simulations of droplets in generic time-dependent flows

    NASA Astrophysics Data System (ADS)

    Milan, Felix; Biferale, Luca; Sbragaglia, Mauro; Toschi, Federico

    2017-11-01

    We study the deformation and dynamics of droplets in time-dependent flows using a diffuse interface model for two immiscible fluids. The numerical simulations are at first benchmarked against analytical results of steady droplet deformation, and further extended to the more interesting case of time-dependent flows. The results of these time-dependent numerical simulations are compared against analytical models available in the literature, which assume the droplet shape to be an ellipsoid at all times, with time-dependent major and minor axes. In particular, we investigate the time-dependent deformation of a confined droplet in an oscillating Couette flow for the entire capillary range until droplet break-up. In this way, these multi-component simulations prove to be a useful tool to establish from ``first principles'' the dynamics of droplets in complex flows involving multiple scales. European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No 642069. European Research Council under the European Community's Seventh Framework Program, ERC Grant Agreement No 339032.

  11. Realistic Modeling of Multi-Scale MHD Dynamics of the Solar Atmosphere

    NASA Technical Reports Server (NTRS)

    Kitiashvili, Irina; Mansour, Nagi N.; Wray, Alan; Couvidat, Sebastian; Yoon, Seokkwan; Kosovichev, Alexander

    2014-01-01

    Realistic 3D radiative MHD simulations open new perspectives for understanding the turbulent dynamics of the solar surface, its coupling to the atmosphere, and the physical mechanisms of generation and transport of non-thermal energy. Traditionally, plasma eruptions and wave phenomena in the solar atmosphere are modeled by prescribing artificial driving mechanisms using magnetic or gas pressure forces that might arise from magnetic field emergence or reconnection instabilities. In contrast, our 'ab initio' simulations provide a realistic description of solar dynamics naturally driven by solar energy flow. By simulating the upper convection zone and the solar atmosphere, we can investigate in detail the physical processes of turbulent magnetoconvection, generation and amplification of magnetic fields, excitation of MHD waves, and plasma eruptions. We present recent simulation results of the multi-scale dynamics of quiet-Sun regions, and energetic effects in the atmosphere and compare with observations. For the comparisons we calculate synthetic spectro-polarimetric data to model observational data of SDO, Hinode, and New Solar Telescope.

  12. Progress in Validation of Wind-US for Ramjet/Scramjet Combustion

    NASA Technical Reports Server (NTRS)

    Engblom, William A.; Frate, Franco C.; Nelson, Chris C.

    2005-01-01

    Validation of the Wind-US flow solver against two sets of experimental data involving high-speed combustion is attempted. First, the well-known Burrows-Kurkov supersonic hydrogen-air combustion test case is simulated, and the sensitivity of ignition location and combustion performance to key parameters is explored. Second, a numerical model is developed for simulation of an X-43B candidate, full-scale, JP-7-fueled, internal flowpath operating in ramjet mode. Numerical results using an ethylene-air chemical kinetics model are directly compared against previously existing pressure-distribution data along the entire flowpath, obtained in direct-connect testing conducted at NASA Langley Research Center. Comparisons to derived quantities such as burn efficiency and thermal throat location are also made. Reasonable to excellent agreement with experimental data is demonstrated for key parameters in both simulation efforts. Additional Wind-US features needed to improve these simulation efforts are described herein, including maintaining stagnation conditions at inflow boundaries for multi-species flow. An open issue regarding the sensitivity of isolator unstart to key model parameters is briefly discussed.

  13. The Large-scale Structure of the Universe: Probes of Cosmology and Structure Formation

    NASA Astrophysics Data System (ADS)

    Noh, Yookyung

    The usefulness of large-scale structure as a probe of cosmology and structure formation is increasing as large deep surveys in multi-wavelength bands are becoming possible. The observational analysis of large-scale structure, guided by large-volume numerical simulations, is beginning to offer us complementary information and cross-checks of cosmological parameters estimated from the anisotropies in the Cosmic Microwave Background (CMB) radiation. Understanding structure formation and evolution, and even galaxy formation history, is also being aided by observations of different redshift snapshots of the Universe, using various tracers of large-scale structure. This dissertation work covers aspects of large-scale structure from the baryon acoustic oscillation scale to that of large-scale filaments and galaxy clusters. First, I discuss the use of large-scale structure for high-precision cosmology. I investigate the reconstruction of the Baryon Acoustic Oscillation (BAO) peak within the context of Lagrangian perturbation theory, testing its validity in a large suite of cosmological-volume N-body simulations. Then I consider galaxy clusters and the large-scale filaments surrounding them in a high-resolution N-body simulation. I investigate the geometrical properties of galaxy cluster neighborhoods, focusing on the filaments connected to clusters. Using mock observations of galaxy clusters, I explore the correlations of scatter in galaxy cluster mass estimates from multi-wavelength observations and different measurement techniques. I also examine the sources of the correlated scatter by considering the intrinsic and environmental properties of clusters.

  14. Scale-invariance underlying the logistic equation and its social applications

    NASA Astrophysics Data System (ADS)

    Hernando, A.; Plastino, A.

    2013-01-01

    On the basis of dynamical principles, we (i) advance a derivation of the Logistic Equation (LE), widely employed (among multiple applications) in the simulation of population growth, and (ii) demonstrate that scale-invariance and a mean-value constraint are sufficient and necessary conditions for obtaining it. We also generalize the LE to multi-component systems and show that the above dynamical mechanisms underlie a large number of scale-free processes. Examples are presented regarding city populations, diffusion in complex networks, and popularity of technological products, all of them obeying the multi-component logistic equation in either a stochastic or a deterministic way.
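    For reference, the single-species logistic equation and a common multi-component generalization can be written as follows. These are standard textbook forms, with growth rates r (or r_i) and carrying capacity K; the paper's own scale-invariant derivation may differ in detail:

```latex
% Conventional single-species logistic equation:
\frac{dN}{dt} = r\,N\left(1 - \frac{N}{K}\right)

% A common multi-component generalization with a shared carrying capacity:
\frac{dN_i}{dt} = r_i\,N_i\left(1 - \frac{1}{K}\sum_{j} N_j\right),
\qquad i = 1,\dots,n
```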

  15. An evaluation of noise reduction algorithms for particle-based fluid simulations in multi-scale applications

    NASA Astrophysics Data System (ADS)

    Zimoń, M. J.; Prosser, R.; Emerson, D. R.; Borg, M. K.; Bray, D. J.; Grinberg, L.; Reese, J. M.

    2016-11-01

    Filtering of particle-based simulation data can lead to reduced computational costs and enable more efficient information transfer in multi-scale modelling. This paper compares the effectiveness of various signal processing methods to reduce numerical noise and capture the structures of nano-flow systems. In addition, a novel combination of these algorithms is introduced, showing the potential of hybrid strategies to improve further the de-noising performance for time-dependent measurements. The methods were tested on velocity and density fields, obtained from simulations performed with molecular dynamics and dissipative particle dynamics. Comparisons between the algorithms are given in terms of performance, quality of the results and sensitivity to the choice of input parameters. The results provide useful insights on strategies for the analysis of particle-based data and the reduction of computational costs in obtaining ensemble solutions.
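    The kind of de-noising the abstract describes can be illustrated with a deliberately simple filter. The sketch below applies a centered moving average to a synthetic noisy velocity profile; it is a generic baseline, not one of the paper's algorithms, and every name and number in it is invented for illustration.

```python
# Hedged sketch: de-noising a 1D particle-averaged velocity profile with a
# centered moving average. A generic baseline filter, NOT the paper's methods.
import math
import random

def moving_average(signal, window=5):
    """Smooth a 1D signal with a centered moving average (edges truncated)."""
    half = window // 2
    smoothed = []
    for i in range(len(signal)):
        lo = max(0, i - half)
        hi = min(len(signal), i + half + 1)
        smoothed.append(sum(signal[lo:hi]) / (hi - lo))
    return smoothed

random.seed(0)
# Synthetic "measurement": a smooth profile plus Gaussian (thermal-like) noise.
x = [i / 99 for i in range(100)]
clean = [math.sin(2 * math.pi * xi) for xi in x]
noisy = [c + random.gauss(0, 0.3) for c in clean]
denoised = moving_average(noisy, window=9)

# The filtered signal should lie closer to the underlying profile.
err_noisy = sum((n - c) ** 2 for n, c in zip(noisy, clean))
err_denoised = sum((d - c) ** 2 for d, c in zip(denoised, clean))
print(err_denoised < err_noisy)
```

The window width trades noise suppression against smearing of the physical structure, which is essentially the sensitivity-to-input-parameters question the paper examines for more sophisticated algorithms.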

  16. PAM: Particle automata model in simulation of Fusarium graminearum pathogen expansion.

    PubMed

    Wcisło, Rafał; Miller, S Shea; Dzwinel, Witold

    2016-01-21

    The multi-scale nature and inherent complexity of biological systems are a great challenge for computer modeling and classical modeling paradigms. We present a novel particle automata modeling metaphor in the context of developing a 3D model of Fusarium graminearum infection in wheat. The system consisting of the host plant and Fusarium pathogen cells can be represented by an ensemble of discrete particles defined by a set of attributes. The cells-particles can interact with each other mimicking mechanical resistance of the cell walls and cell coalescence. The particles can move, while some of their attributes can be changed according to prescribed rules. The rules can represent cellular scales of a complex system, while the integrated particle automata model (PAM) simulates its overall multi-scale behavior. We show that due to the ability of mimicking mechanical interactions of Fusarium tip cells with the host tissue, the model is able to simulate realistic penetration properties of the colonization process reproducing both vertical and lateral Fusarium invasion scenarios. The comparison of simulation results with micrographs from laboratory experiments shows encouraging qualitative agreement between the two. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Information Presentation and Control in a Modern Air Traffic Control Tower Simulator

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Doubek, Sharon; Rabin, Boris; Harke, Stanton

    1996-01-01

    The proper presentation and management of information in America's largest and busiest (Level V) air traffic control towers calls for an in-depth understanding of many different human-computer considerations: user interface design for graphical, radar, and text; manual and automated data input hardware; information/display output technology; reconfigurable workstations; workload assessment; and many other related subjects. This paper discusses these subjects in the context of the Surface Development and Test Facility (SDTF) currently under construction at NASA's Ames Research Center, a full scale, multi-manned, air traffic control simulator which will provide the "look and feel" of an actual airport tower cab. Special emphasis will be given to the human-computer interfaces required for the different kinds of information displayed at the various controller and supervisory positions and to the computer-aided design (CAD) and other analytic, computer-based tools used to develop the facility.

  18. Automation of a N-S S and C Database Generation for the Harrier in Ground Effect

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Chaderjian, Neal M.; Pandya, Shishir; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A method of automating the generation of a time-dependent, Navier-Stokes static stability and control database for the Harrier aircraft in ground effect is outlined. Reusable, lightweight components are described which allow different facets of the computational fluid dynamic simulation process to utilize a consistent interface to a remote database. These components also allow changes and customizations to be easily incorporated into the solution process to enhance performance, without relying upon third-party support. An analysis of the multi-level parallel solver OVERFLOW-MLP is presented, and the results indicate that it is feasible to utilize large numbers of processors (≈100) even with a grid system with a relatively small number of cells (≈10^6). A more detailed discussion of the simulation process, as well as refined data for the scaling of the OVERFLOW-MLP flow solver, will be included in the full paper.

  19. The Multi-Step CADIS method for shutdown dose rate calculations and uncertainty propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Grove, Robert E.

    2015-12-01

    Shutdown dose rate (SDDR) analysis requires (a) a neutron transport calculation to estimate neutron flux fields, (b) an activation calculation to compute radionuclide inventories and associated photon sources, and (c) a photon transport calculation to estimate the final SDDR. In some applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for very large systems with massive amounts of shielding materials. However, these simulations are impractical because calculation of space- and energy-dependent neutron fluxes throughout the structural materials is needed to estimate the distribution of radioisotopes causing the SDDR. Biasing the neutron MC calculation using an importance function is not simple because it is difficult to explicitly express the response function, which depends on subsequent computational steps. Furthermore, typical SDDR calculations do not consider how uncertainties in the MC neutron calculation impact SDDR uncertainty, even though MC neutron calculation uncertainties usually dominate SDDR uncertainty.

  20. Characterisation of minimal-span plane Couette turbulence with pressure gradients

    NASA Astrophysics Data System (ADS)

    Sekimoto, Atsushi; Atkinson, Callum; Soria, Julio

    2018-04-01

    The turbulence statistics and dynamics in spanwise-minimal plane Couette flow with pressure gradients, the so-called Couette-Poiseuille (C-P) flow, are investigated using direct numerical simulation. The large-scale motion is limited by the spanwise box dimension, as in the minimal-span channel turbulence of Flores & Jiménez (Phys. Fluids, vol. 22, 2010, 071704). The effect of the top wall, where normal pressure-driven Poiseuille flow is realised, is distinguished from the events on the bottom wall, where the pressure gradient results in mild or almost-zero wall-shear stress. A proper scaling of turbulence statistics in minimal-span C-P flows is presented. The ‘shear-less’ wall-bounded turbulence, in which the Corrsin shear parameter is very weak compared to normal wall-bounded turbulence, also exhibits local separation, which is observed as spanwise streaks of reversed flow in full-size plane C-P turbulence. The local separation is a multi-scale event, which grows up to the order of the channel height even in the minimal-span geometry.

  1. 75 FR 28200 - Safety Zone; Washington State Department of Transportation Ferries Division Marine Rescue...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-20

    ... (M2R) Full-Scale Exercise for a Mass Rescue Incident (MRI) AGENCY: Coast Guard, DHS. ACTION: Temporary... simulate a mass rescue incident (MRI) and will involve an abandon ship scenario with multiple response... full scale exercise which will simulate a MRI to provide training in specific emergency response...

  2. Flow field prediction in full-scale Carrousel oxidation ditch by using computational fluid dynamics.

    PubMed

    Yang, Yin; Wu, Yingying; Yang, Xiao; Zhang, Kai; Yang, Jiakuan

    2010-01-01

    In order to optimize the flow field in a full-scale Carrousel oxidation ditch with many sets of disc aerators operating simultaneously, an experimentally validated numerical tool based on computational fluid dynamics (CFD) was proposed. A full-scale, closed-loop bioreactor (Carrousel oxidation ditch) in the Ping Dingshan Sewage Treatment Plant in Ping Dingshan City, a medium-sized city in Henan Province of China, was evaluated using CFD. A moving-wall model was created to simulate the many sets of disc aerators that create fluid motion in the ditch. The simulated results were acceptable compared with the experimental data, and the following results were obtained: (1) the new moving-wall model can simulate the flow field in a Carrousel oxidation ditch with many sets of disc aerators operating simultaneously; the total number of grid cells decreased significantly, and thus the computational cost decreased; and (2) CFD modeling generally characterized the flow pattern in the full-scale tank. 3D simulation could be a good supplement for improving the hydrodynamic performance in oxidation ditch designs.

  3. Finite Element Simulation of Three Full-Scale Crash Tests for Cessna 172 Aircraft

    NASA Technical Reports Server (NTRS)

    Mason, Brian H.; Warren, Jerry E., Jr.

    2017-01-01

    The NASA Emergency Locator Transmitter Survivability and Reliability (ELT-SAR) project was initiated in 2013 to assess the crash performance standards for the next generation of emergency locator transmitter (ELT) systems. Three Cessna 172 aircraft were acquired to perform crash testing at NASA Langley Research Center's Landing and Impact Research Facility. Full-scale crash tests were conducted in the summer of 2015 and each test article was subjected to severe, but survivable, impact conditions including a flare-to-stall during emergency landing, and two controlled-flight-into-terrain scenarios. Full-scale finite element analyses were performed using a commercial explicit solver, ABAQUS. The first test simulated impacting a concrete surface represented analytically by a rigid plane. Tests 2 and 3 simulated impacting a dirt surface represented analytically by an Eulerian grid of brick elements using a Mohr-Coulomb material model. The objective of this paper is to summarize the test and analysis results for the three full-scale crash tests. Simulation models of the airframe which correlate well with the tests are needed for future studies of alternate ELT mounting configurations.

  4. Scale effects in wind tunnel modeling of an urban atmospheric boundary layer

    NASA Astrophysics Data System (ADS)

    Kozmar, Hrvoje

    2010-03-01

    Precise urban atmospheric boundary layer (ABL) wind tunnel simulations are essential for a wide variety of atmospheric studies in built-up environments, including wind loading of structures and air pollutant dispersion. One of the key issues in addressing these problems is a proper choice of simulation length scale. In this study, an urban ABL was reproduced in a boundary layer wind tunnel at different scales to study possible scale effects. Two full-depth simulations and one part-depth simulation were carried out using a castellated barrier wall, vortex generators, and a fetch of roughness elements. Redesigned “Counihan” vortex generators were employed in the part-depth ABL simulation. A hot-wire anemometry system was used to measure mean velocity and velocity fluctuations. Experimental results are presented as mean velocity, turbulence intensity, Reynolds stress, integral length scale of turbulence, and power spectral density of velocity fluctuations. Results suggest that variations in the length-scale factor do not influence the generated ABL models when using the similarity criteria applied in this study. The part-depth ABL simulation compares well with the two full-depth ABL simulations, indicating that the truncated vortex generators developed for this study can be successfully employed in urban ABL part-depth simulations.

  5. Baseline process description for simulating plutonium oxide production for precalc project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pike, J. A.

    Savannah River National Laboratory (SRNL) started a multi-year project, the PreCalc Project, to develop a computational simulation of a plutonium oxide (PuO2) production facility with the objective to study the fundamental relationships between morphological and physicochemical properties. This report provides a detailed baseline process description to be used by SRNL personnel and collaborators to facilitate the initial design and construction of the simulation. The PreCalc Project team selected the HB-Line Plutonium Finishing Facility as the basis for a nominal baseline process since the facility is operational and significant model validation data can be obtained. The process boundary, as well as process and facility design details necessary for multi-scale, multi-physics models, are provided.

  6. 3D printing of tissue-simulating phantoms as a traceable standard for biomedical optical measurement

    NASA Astrophysics Data System (ADS)

    Dong, Erbao; Wang, Minjie; Shen, Shuwei; Han, Yilin; Wu, Qiang; Xu, Ronald

    2016-01-01

    Optical phantoms are commonly used to validate and calibrate biomedical optical devices in order to ensure accurate measurement of optical properties in biological tissue. However, commonly used optical phantoms are based on homogeneous materials that reflect neither the optical properties nor the multi-layer heterogeneities of biological tissue. Using these phantoms for optical calibration may result in significant bias in biological measurement. We propose to characterize and fabricate tissue-simulating phantoms that simulate not only the multi-layer heterogeneities but also the optical properties of biological tissue. The tissue characterization module detects tissue structural and functional properties in vivo. The phantom printing module generates 3D tissue structures at different scales by layer-by-layer deposition of phantom materials with different optical properties. The ultimate goal is to fabricate multi-layer tissue-simulating phantoms as a traceable standard for optimal calibration of biomedical optical spectral devices.

  7. Probabilistic simulation of multi-scale composite behavior

    NASA Technical Reports Server (NTRS)

    Liaw, D. G.; Shiao, M. C.; Singhal, S. N.; Chamis, Christos C.

    1993-01-01

    A methodology is developed to computationally assess the probabilistic composite material properties at all composite scale levels due to the uncertainties in the constituent (fiber and matrix) properties and in the fabrication process variables. The methodology is computationally efficient for simulating the probability distributions of material properties. The sensitivity of the probabilistic composite material property to each random variable is determined. This information can be used to reduce undesirable uncertainties in material properties at the macro scale of the composite by reducing the uncertainties in the most influential random variables at the micro scale. This methodology was implemented into the computer code PICAN (Probabilistic Integrated Composite ANalyzer). The accuracy and efficiency of this methodology are demonstrated by simulating the uncertainties in the material properties of a typical laminate and comparing the results with the Monte Carlo simulation method. The experimental data of composite material properties at all scales fall within the scatters predicted by PICAN.
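    A minimal sketch of the kind of Monte Carlo propagation the abstract compares against, assuming a generic rule-of-mixtures micromechanics model; the constituent statistics and the formula are textbook-style illustrations, not PICAN's actual models or data.

```python
# Hedged sketch: Monte Carlo propagation of constituent (fiber/matrix)
# uncertainty to a macro-scale composite property. Generic rule of mixtures
# with invented statistics -- NOT PICAN's formulation or data.
import random
import statistics

random.seed(42)

def longitudinal_modulus(E_fiber, E_matrix, Vf):
    """Rule of mixtures for the longitudinal modulus of a unidirectional ply."""
    return Vf * E_fiber + (1.0 - Vf) * E_matrix

samples = []
for _ in range(10_000):
    E_f = random.gauss(230.0, 10.0)   # fiber modulus, GPa (assumed scatter)
    E_m = random.gauss(3.5, 0.2)      # matrix modulus, GPa (assumed scatter)
    Vf = random.gauss(0.60, 0.02)     # fiber volume fraction (fabrication scatter)
    samples.append(longitudinal_modulus(E_f, E_m, Vf))

mean_E = statistics.mean(samples)
std_E = statistics.stdev(samples)
print(f"E11 ~ {mean_E:.1f} +/- {std_E:.1f} GPa")
```

Ranking each input's contribution to the output variance, as the abstract describes, would then point to the micro-scale variable (here, fiber modulus or volume fraction) whose uncertainty is most worth reducing.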

  8. Urology Residents' Experience and Attitude Toward Surgical Simulation: Presenting our 4-Year Experience With a Multi-institutional, Multi-modality Simulation Model.

    PubMed

    Chow, Alexander K; Sherer, Benjamin A; Yura, Emily; Kielb, Stephanie; Kocjancic, Ervin; Eggener, Scott; Turk, Thomas; Park, Sangtae; Psutka, Sarah; Abern, Michael; Latchamsetty, Kalyan C; Coogan, Christopher L

    2017-11-01

    To evaluate urology residents' attitudes toward and experience with surgical simulation in residency education using a multi-institutional, multi-modality model. Residents from 6 area urology training programs rotated through simulation stations in 4 consecutive sessions from 2014 to 2017. Workshops included GreenLight photovaporization of the prostate, ureteroscopic stone extraction, laparoscopic peg transfer, 3-dimensional laparoscopy rope pass, transobturator sling placement, intravesical injection, high definition video system trainer, vasectomy, and Urolift. Faculty members provided teaching assistance, objective scoring, and verbal feedback. Participants completed a nonvalidated questionnaire evaluating utility of the workshop and soliciting suggestions for improvement. Sixty-three of 75 participants (84%) (postgraduate years 1-6) completed the exit questionnaire. Median rating of exercise usefulness on a scale of 1-10 ranged from 7.5 to 9. On a scale of 0-10, cumulative median scores of the course remained high over 4 years: time limit per station (9; interquartile range [IQR] 2), faculty instruction (9, IQR 2), ease of use (9, IQR 2), face validity (8, IQR 3), and overall course (9, IQR 2). On multivariate analysis, there was no difference in rating of domains between postgraduate years. Sixty-seven percent (42/63) believe that simulation training should be a requirement of Urology residency. Ninety-seven percent (63/65) viewed the laboratory as beneficial to their education. This workshop model is a valuable training experience for residents. Most participants believe that surgical simulation is beneficial and should be a requirement for Urology residency. High ratings of usefulness for each exercise demonstrated excellent face validity provided by the course. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Turbulence sources, character, and effects in the stable boundary layer: Insights from multi-scale direct numerical simulations and new, high-resolution measurements

    NASA Astrophysics Data System (ADS)

    Fritts, Dave; Wang, Ling; Balsley, Ben; Lawrence, Dale

    2013-04-01

    A number of sources contribute to intermittent small-scale turbulence in the stable boundary layer (SBL). These include Kelvin-Helmholtz instability (KHI), gravity wave (GW) breaking, and fluid intrusions, among others. Indeed, such sources arise naturally in response to even very simple "multi-scale" superpositions of larger-scale GWs and smaller-scale GWs, mean flows, or fine structure (FS) throughout the atmosphere and the oceans. We describe here results of two direct numerical simulations (DNS) of these GW-FS interactions performed at high resolution and high Reynolds number that allow exploration of these turbulence sources and the character and effects of the turbulence that arises in these flows. Results include episodic turbulence generation, a broad range of turbulence scales and intensities, PDFs of dissipation fields exhibiting quasi-log-normal and more complex behavior, local turbulent mixing, and "sheet and layer" structures in potential temperature that closely resemble high-resolution measurements. Importantly, such multi-scale dynamics differ from their larger-scale, quasi-monochromatic gravity wave or quasi-horizontally homogeneous shear flow instabilities in significant ways. The ability to quantify such multi-scale dynamics with new, very high-resolution measurements is also advancing rapidly. New in-situ sensors on small, unmanned aerial vehicles (UAVs), balloons, or tethered systems are enabling definition of SBL (and deeper) environments and turbulence structure and dissipation fields with high spatial and temporal resolution and precision. These new measurement and modeling capabilities promise significant advances in understanding small-scale instability and turbulence dynamics, in quantifying their roles in mixing, transport, and evolution of the SBL environment, and in contributing to improved parameterizations of these dynamics in mesoscale, numerical weather prediction, climate, and general circulation models. We expect such measurement and modeling capabilities to also aid in the design of new and more comprehensive future SBL measurement programs.

  10. Multi-view L2-SVM and its multi-view core vector machine.

    PubMed

    Huang, Chengquan; Chung, Fu-lai; Wang, Shitong

    2016-03-01

    In this paper, a novel L2-SVM-based classifier, Multi-view L2-SVM, is proposed to address multi-view classification tasks. The proposed Multi-view L2-SVM classifier does not have any bias in its objective function and hence has flexibility like ν-SVC, in the sense that the number of the yielded support vectors can be controlled by a pre-specified parameter. The proposed Multi-view L2-SVM classifier can make full use of the coherence and the difference of different views by imposing consensus among multiple views to improve the overall classification performance. Besides, based on the generalized core vector machine (GCVM), the proposed Multi-view L2-SVM classifier is extended into its GCVM version, MvCVM, which realizes fast training on large-scale multi-view datasets, with asymptotic time complexity linear in the sample size and space complexity independent of the sample size. Our experimental results demonstrated the effectiveness of the proposed Multi-view L2-SVM classifier for small-scale multi-view datasets and the proposed MvCVM classifier for large-scale multi-view datasets. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Carbon storage, timber production, and biodiversity: comparing ecosystem services with multi-criteria decision analysis

    USGS Publications Warehouse

    Schwenk, W. Scott; Donovan, Therese; Keeton, William S.; Nunery, Jared S.

    2012-01-01

    Increasingly, land managers seek ways to manage forests for multiple ecosystem services and functions, yet considerable challenges exist in comparing disparate services and balancing trade-offs among them. We applied multi-criteria decision analysis (MCDA) and forest simulation models to simultaneously consider three objectives: (1) storing carbon, (2) producing timber and wood products, and (3) sustaining biodiversity. We used the Forest Vegetation Simulator (FVS) applied to 42 northern hardwood sites to simulate forest development over 100 years and to estimate carbon storage and timber production. We estimated biodiversity implications with occupancy models for 51 terrestrial bird species that were linked to FVS outputs. We simulated four alternative management prescriptions that spanned a range of harvesting intensities and forest structure retention. We found that silvicultural approaches emphasizing less frequent harvesting and greater structural retention could be expected to achieve the greatest net carbon storage but also produce less timber. More intensive prescriptions would enhance biodiversity because positive responses of early successional species exceeded negative responses of late successional species within the heavily forested study area. The combinations of weights assigned to objectives had a large influence on which prescriptions were scored as optimal. Overall, we found that a diversity of silvicultural approaches is likely to be preferable to any single approach, emphasizing the need for landscape-scale management to provide a full range of ecosystem goods and services. Our analytical framework that combined MCDA with forest simulation modeling was a powerful tool in understanding trade-offs among management objectives and how they can be simultaneously accommodated.
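    The weighted-sum aggregation at the heart of many MCDA formulations can be sketched as follows; the prescriptions, normalized scores, and weights below are invented for illustration and are not the paper's FVS outputs.

```python
# Hedged sketch of weighted-sum multi-criteria decision analysis (MCDA):
# scoring forest management prescriptions on normalized objectives.
# All names and numbers are hypothetical, not the paper's data.

# Normalized (0-1) performance of four hypothetical prescriptions on the
# three objectives from the abstract.
prescriptions = {
    "no_harvest":      {"carbon": 1.00, "timber": 0.00, "biodiversity": 0.40},
    "light_selection": {"carbon": 0.75, "timber": 0.40, "biodiversity": 0.60},
    "shelterwood":     {"carbon": 0.50, "timber": 0.70, "biodiversity": 0.85},
    "clearcut":        {"carbon": 0.20, "timber": 1.00, "biodiversity": 0.70},
}

def mcda_score(perf, weights):
    """Weighted-sum aggregation of normalized criterion scores."""
    return sum(weights[k] * perf[k] for k in weights)

weights = {"carbon": 0.4, "timber": 0.2, "biodiversity": 0.4}
ranked = sorted(prescriptions,
                key=lambda p: mcda_score(prescriptions[p], weights),
                reverse=True)
print(ranked[0])  # the "optimal" prescription under this particular weighting
```

Re-running the ranking with different weight vectors reproduces the abstract's observation that which prescription scores as optimal depends strongly on the weights assigned to the objectives.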

  12. A Multi-Scale, Multi-Physics Optimization Framework for Additively Manufactured Structural Components

    NASA Astrophysics Data System (ADS)

    El-Wardany, Tahany; Lynch, Mathew; Gu, Wenjiong; Hsu, Arthur; Klecka, Michael; Nardi, Aaron; Viens, Daniel

    This paper proposes an optimization framework enabling the integration of multi-scale / multi-physics simulation codes to perform structural optimization design for additively manufactured components. Cold spray was selected as the additive manufacturing (AM) process and its constraints were identified and included in the optimization scheme. The developed framework first utilizes topology optimization to maximize stiffness for conceptual design. The subsequent step applies shape optimization to refine the design for stress-life fatigue. The component weight was reduced by 20% while stresses were reduced by 75% and the rigidity was improved by 37%. The framework and analysis codes were implemented using Altair software as well as an in-house loading code. The optimized design was subsequently produced by the cold spray process.

  13. Multi-scale computation methods: Their applications in lithium-ion battery research and development

    NASA Astrophysics Data System (ADS)

    Siqi, Shi; Jian, Gao; Yue, Liu; Yan, Zhao; Qu, Wu; Wangwei, Ju; Chuying, Ouyang; Ruijuan, Xiao

    2016-01-01

    Based upon advances in theoretical algorithms, modeling and simulations, and computer technologies, the rational design of materials, cells, devices, and packs in the field of lithium-ion batteries is being realized incrementally and will at some point trigger a paradigm revolution by combining calculations and experiments linked by a big shared database, enabling accelerated development of the whole industrial chain. Theory and multi-scale modeling and simulation, as supplements to experimental efforts, can help greatly to close some of the current experimental and technological gaps, as well as predict path-independent properties and help to fundamentally understand path-independent performance in multiple spatial and temporal scales. Project supported by the National Natural Science Foundation of China (Grant Nos. 51372228 and 11234013), the National High Technology Research and Development Program of China (Grant No. 2015AA034201), and Shanghai Pujiang Program, China (Grant No. 14PJ1403900).

  14. Predicting the breakdown strength and lifetime of nanocomposites using a multi-scale modeling approach

    NASA Astrophysics Data System (ADS)

    Huang, Yanhui; Zhao, He; Wang, Yixing; Ratcliff, Tyree; Breneman, Curt; Brinson, L. Catherine; Chen, Wei; Schadler, Linda S.

    2017-08-01

    It has been found that doping dielectric polymers with a small amount of nanofiller or molecular additive can stabilize the material under a high field and lead to increased breakdown strength and lifetime. Choosing appropriate fillers is critical to optimizing the material performance, but current research largely relies on experimental trial and error. The employment of computer simulations for nanodielectric design is rarely reported. In this work, we propose a multi-scale modeling approach that employs ab initio, Monte Carlo, and continuum scales to predict the breakdown strength and lifetime of polymer nanocomposites based on the charge trapping effect of the nanofillers. The charge transfer, charge energy relaxation, and space charge effects are modeled in respective hierarchical scales by distinctive simulation techniques, and these models are connected together for high fidelity and robustness. The preliminary results show good agreement with the experimental data, suggesting its promise for use in the computer aided material design of high performance dielectrics.

  15. The trend of the multi-scale temporal variability of precipitation in Colorado River Basin

    NASA Astrophysics Data System (ADS)

    Jiang, P.; Yu, Z.

    2011-12-01

    Hydrological problems such as the estimation of flood and drought frequencies under future climate change are not well addressed, owing to the inability of current climate models to provide reliable prediction (especially for precipitation) at time scales shorter than 1 month. In order to assess the possible impacts that the multi-scale temporal distribution of precipitation may have on the hydrological processes in the Colorado River Basin (CRB), a comparative analysis of the multi-scale temporal variability of precipitation, as well as the trend of extreme precipitation, is conducted in four regions controlled by different climate systems. Multi-scale precipitation variability, including within-storm patterns and intra-annual, inter-annual, and decadal variabilities, will be analyzed to explore the possible trends of storm durations, inter-storm periods, average storm precipitation intensities, and extremes under both long-term natural climate variability and human-induced warming. Furthermore, we will examine the ability of current climate models to simulate the multi-scale temporal variability and extremes of precipitation. On the basis of these analyses, a statistical downscaling method will be developed to disaggregate future precipitation scenarios, providing a more reliable and finer temporal-scale precipitation time series for hydrological modeling. Analysis results and downscaling results will be presented.

  16. Multi-Scale Modeling, Surrogate-Based Analysis, and Optimization of Lithium-Ion Batteries for Vehicle Applications

    NASA Astrophysics Data System (ADS)

    Du, Wenbo

    A common attribute of electric-powered aerospace vehicles and systems such as unmanned aerial vehicles, hybrid- and fully-electric aircraft, and satellites is that their performance is usually limited by the energy density of their batteries. Although lithium-ion batteries offer distinct advantages such as high voltage and low weight over other battery technologies, they are a relatively new development, and thus significant gaps in the understanding of the physical phenomena that govern battery performance remain. As a result of this limited understanding, batteries must often undergo a cumbersome design process involving many manual iterations based on rules of thumb and ad-hoc design principles. A systematic study of the relationship between operational, geometric, morphological, and material-dependent properties and performance metrics such as energy and power density is non-trivial due to the multiphysics, multiphase, and multiscale nature of the battery system. To address these challenges, two numerical frameworks are established in this dissertation: a process for analyzing and optimizing several key design variables using surrogate modeling tools and gradient-based optimizers, and a multi-scale model that incorporates more detailed microstructural information into the computationally efficient but limited macro-homogeneous model. In the surrogate modeling process, multi-dimensional maps for the cell energy density with respect to design variables such as the particle size, ion diffusivity, and electron conductivity of the porous cathode material are created. A combined surrogate- and gradient-based approach is employed to identify optimal values for cathode thickness and porosity under various operating conditions, and quantify the uncertainty in the surrogate model. The performance of multiple cathode materials is also compared by defining dimensionless transport parameters. 
The multi-scale model makes use of detailed 3-D FEM simulations conducted at the particle-level. A monodisperse system of ellipsoidal particles is used to simulate the effective transport coefficients and interfacial reaction current density within the porous microstructure. Microscopic simulation results are shown to match well with experimental measurements, while differing significantly from homogenization approximations used in the macroscopic model. Global sensitivity analysis and surrogate modeling tools are applied to couple the two length scales and complete the multi-scale model.
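    The coupled surrogate- and gradient-based step described above can be illustrated in miniature: sample an expensive simulation at a few design points, fit a polynomial surrogate, then optimize on the cheap surrogate. The quadratic objective and all numbers below are invented stand-ins, not the dissertation's battery model.

```python
# Sketch of surrogate- plus gradient-based design optimization, assuming a
# hypothetical "energy density vs. cathode thickness" objective. The true
# objective would come from a macro-homogeneous cell simulation.
import numpy as np

def expensive_sim(thickness_um):
    # Stand-in for the cell simulation: energy density rises with active
    # material, then falls as transport losses dominate. Invented numbers.
    return -0.02 * (thickness_um - 85.0) ** 2 + 250.0

# Sample the "simulation" at a handful of design points.
samples = np.array([40.0, 60.0, 80.0, 100.0, 120.0])
values = np.array([expensive_sim(t) for t in samples])

# Fit a quadratic surrogate a*t^2 + b*t + c (polyfit returns highest degree first).
a, b, c = np.polyfit(samples, values, 2)

# Gradient ascent on the surrogate (step size chosen for stable convergence).
t = 50.0
for _ in range(200):
    t += 5.0 * (2.0 * a * t + b)   # step along the surrogate's derivative

optimum = t
```

    In practice the surrogate would be refitted as new simulation samples accumulate near the current optimum, and surrogate uncertainty would be quantified as in the dissertation.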

  17. MUSIC: MUlti-Scale Initial Conditions

    NASA Astrophysics Data System (ADS)

    Hahn, Oliver; Abel, Tom

    2013-11-01

    MUSIC generates multi-scale initial conditions with multiple levels of refinement for cosmological ‘zoom-in’ simulations. The code uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel, together with an adaptive multi-grid Poisson solver, to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). MUSIC achieves rms relative errors of the order of 10^-4 for displacements and velocities in the refinement region, improving on previous approaches by about two orders of magnitude in error. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier-space-induced interference ringing.
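    The core idea of generating a Gaussian random field by convolving white noise with a transfer-function kernel can be shown in a toy 1-D setting. This is not the MUSIC algorithm itself (which works in 3-D, in real space, and across refinement levels); the P(k) ~ k^-2 spectrum below is an arbitrary illustrative choice.

```python
# Toy 1-D Gaussian random field: filter unit-variance white noise with a
# transfer function T(k) = sqrt(P(k)), done in Fourier space for brevity.
import numpy as np

rng = np.random.default_rng(42)
n = 1024
noise = rng.standard_normal(n)          # unit-variance white noise

k = np.fft.rfftfreq(n)                  # non-negative sample frequencies
transfer = np.zeros_like(k)
transfer[1:] = k[1:] ** -1.0            # sqrt(P(k)) with P(k) ~ k^-2; zero DC

# Convolution with the kernel = multiplication by T(k) in Fourier space.
field = np.fft.irfft(transfer * np.fft.rfft(noise), n)
```

    The realized power spectrum of `field` follows the imposed P(k) up to sample variance; zeroing the DC mode keeps the field mean-free.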

  18. Multi-Scale Modeling of an Integrated 3D Braided Composite with Applications to Helicopter Arm

    NASA Astrophysics Data System (ADS)

    Zhang, Diantang; Chen, Li; Sun, Ying; Zhang, Yifan; Qian, Kun

    2017-10-01

    A study is conducted with the aim of developing a multi-scale analytical method for designing a composite helicopter arm with a three-dimensional (3D) five-directional braided structure. Based on the analysis of the 3D braided microstructure, a multi-scale finite element model is developed. Finite element analysis of the load capacity of the 3D five-directional braided composite helicopter arm is carried out using the software ABAQUS/Standard. The influences of the braiding angle and loading condition on the stress and strain distribution of the helicopter arm are simulated. The results show that the proposed multi-scale method is capable of accurately predicting the mechanical properties of 3D braided composites, validated by comparison with the stress-strain curves of meso-scale RVCs. Furthermore, it is found that the braiding angle is an important factor affecting the mechanical properties of the 3D five-directional braided composite helicopter arm. Based on the optimized structure parameters, the nearly net-shaped composite helicopter arm is fabricated using a novel resin transfer moulding (RTM) process.

  19. TVB-EduPack—An Interactive Learning and Scripting Platform for The Virtual Brain

    PubMed Central

    Matzke, Henrik; Schirner, Michael; Vollbrecht, Daniel; Rothmeier, Simon; Llarena, Adalberto; Rojas, Raúl; Triebkorn, Paul; Domide, Lia; Mersmann, Jochen; Solodkin, Ana; Jirsa, Viktor K.; McIntosh, Anthony Randal; Ritter, Petra

    2015-01-01

    The Virtual Brain (TVB; thevirtualbrain.org) is a neuroinformatics platform for full brain network simulation based on individual anatomical connectivity data. The framework addresses clinical and neuroscientific questions by simulating multi-scale neural dynamics that range from local population activity to large-scale brain function and related macroscopic signals like electroencephalography and functional magnetic resonance imaging. TVB is equipped with a graphical and a command-line interface to create models that capture the characteristic biological variability to predict the brain activity of individual subjects. To give researchers from various backgrounds a quick start with TVB and brain network modeling in general, we developed an educational module: TVB-EduPack. EduPack offers two educational functionalities that seamlessly integrate into TVB's graphical user interface (GUI): (i) interactive tutorials introduce GUI elements, guide through the basic mechanics of software usage and develop complex use-case scenarios; animations, videos and textual descriptions transport essential principles of computational neuroscience and brain modeling; (ii) an automatic script generator records model parameters and produces input files for TVB's Python programming interface; thereby, simulation configurations can be exported as scripts that allow flexible customization of the modeling process and self-defined batch- and post-processing applications while benefiting from the full power of the Python language and its toolboxes. This article covers the implementation of TVB-EduPack and its integration into TVB architecture. Like TVB, EduPack is an open source community project that lives from the participation and contribution of its users. TVB-EduPack can be obtained as part of TVB from thevirtualbrain.org. PMID:26635597

  20. Simulating New Drop Test Vehicles and Test Techniques for the Orion CEV Parachute Assembly System

    NASA Technical Reports Server (NTRS)

    Morris, Aaron L.; Fraire, Usbaldo, Jr.; Bledsoe, Kristin J.; Ray, Eric; Moore, Jim W.; Olson, Leah M.

    2011-01-01

    The Crew Exploration Vehicle Parachute Assembly System (CPAS) project is engaged in a multi-year design and test campaign to qualify a parachute recovery system for human use on the Orion Spacecraft. Test and simulation techniques have evolved concurrently to keep up with the demands of a challenging and complex system. The primary simulations used for preflight predictions and post-test data reconstructions are the Decelerator System Simulation (DSS), the Decelerator System Simulation Application (DSSA), and the Drop Test Vehicle Simulation (DTV-SIM). The goal of this paper is to provide a roadmap for future programs on the test-technique challenges and obstacles involved in executing a large-scale, multi-year parachute test program. A focus is placed on flight simulation modeling and its correlation with the test techniques executed to obtain parachute performance parameters.

  1. Large-scale and Long-duration Simulation of a Multi-stage Eruptive Solar Event

    NASA Astrophysics Data System (ADS)

    Jiang, Chaowei; Hu, Qiang; Wu, S. T.

    2015-04-01

    We employ a data-driven 3D MHD active region evolution model by using the Conservation Element and Solution Element (CESE) numerical method. This newly developed model retains the full MHD effects, allowing time-dependent boundary conditions and time evolution studies. The time-dependent simulation is driven by measured vector magnetograms and the method of MHD characteristics on the bottom boundary. We have applied the model to investigate the coronal magnetic field evolution of AR11283 which was characterized by a pre-existing sigmoid structure in the core region and multiple eruptions, both in relatively small and large scales. We have succeeded in producing the core magnetic field structure and the subsequent eruptions of flux-rope structures (see https://dl.dropboxusercontent.com/u/96898685/large.mp4 for an animation) as the measured vector magnetograms on the bottom boundary evolve in time with constant flux emergence. The whole process, lasting for about an hour in real time, compares well with the corresponding SDO/AIA and coronagraph imaging observations. From these results, we show the capability of the model, largely data-driven, that is able to simulate complex, topological, and highly dynamic active region evolutions. (We acknowledge partial support of NSF grants AGS 1153323 and AGS 1062050, and data support from SDO/HMI and AIA teams).

  2. Resonance phenomena in a time-dependent, three-dimensional model of an idealized eddy

    NASA Astrophysics Data System (ADS)

    Rypina, I. I.; Pratt, L. J.; Wang, P.; Özgökmen, T. M.; Mezic, I.

    2015-08-01

    We analyze the geometry of Lagrangian motion and material barriers in a time-dependent, three-dimensional, Ekman-driven, rotating cylinder flow, which serves as an idealization for an isolated oceanic eddy and other overturning cells with cylindrical geometry in the ocean and atmosphere. The flow is forced at the top through an oscillating upper lid, and the response depends on the frequency and amplitude of lid oscillations. In particular, the Lagrangian geometry changes near the resonant tori of the unforced flow, whose frequencies are rationally related to the forcing frequencies. Multi-scale analytical expansions are used to simplify the flow in the vicinity of resonant trajectories and to investigate the resonant flow geometries. The resonance condition and scaling can be motivated by simple physical argument. The theoretically predicted flow geometries near resonant trajectories have then been confirmed through numerical simulations in a phenomenological model and in a full solution of the Navier-Stokes equations.

  3. Three-dimensional Dendritic Needle Network model with application to Al-Cu directional solidification experiments

    DOE PAGES

    Tourret, D.; Karma, A.; Clarke, A. J.; ...

    2015-06-11

    We present a three-dimensional (3D) extension of a previously proposed multi-scale Dendritic Needle Network (DNN) approach for the growth of complex dendritic microstructures. Using a new formulation of the DNN dynamics equations for dendritic paraboloid-branches of a given thickness, one can directly extend the DNN approach to 3D modeling. We validate this new formulation against known scaling laws and analytical solutions that describe the early transient and steady-state growth regimes, respectively. Finally, we compare the predictions of the model to in situ X-ray imaging of Al-Cu alloy solidification experiments. The comparison shows a very good quantitative agreement between 3D simulations and thin sample experiments. It also highlights the importance of full 3D modeling to accurately predict the primary dendrite arm spacing, which is significantly over-estimated by 2D simulations.

  4. In-vehicle group activity modeling and simulation in sensor-based virtual environment

    NASA Astrophysics Data System (ADS)

    Shirkhodaie, Amir; Telagamsetti, Durga; Poshtyar, Azin; Chan, Alex; Hu, Shuowen

    2016-05-01

    Human group activity recognition is a very complex and challenging task, especially for Partially Observable Group Activities (POGA) that occur in confined spaces with limited visual observability and often under severe occlusion. In this paper, we present the IRIS Virtual Environment Simulation Model (VESM) for the modeling and simulation of dynamic POGA. More specifically, we address sensor-based modeling and simulation of a specific category of POGA, called In-Vehicle Group Activities (IVGA). In VESM, human-like animated characters, called humanoids, are employed to simulate complex in-vehicle group activities within the confined space of a modeled vehicle. Each articulated humanoid is kinematically modeled with comparable physical attributes and appearances that are linkable to its human counterpart. Each humanoid exhibits harmonious full-body motion, simulating human-like gestures and postures, facial expressions, and hand motions for coordinated dexterity. VESM facilitates the creation of interactive scenarios consisting of multiple humanoids with different personalities and intentions, which are capable of performing complicated human activities within the confined space inside a typical vehicle. In this paper, we demonstrate the efficiency and effectiveness of VESM in terms of its capabilities to seamlessly generate time-synchronized, multi-source, and correlated imagery datasets of IVGA, which are useful for the training and testing of multi-source full-motion video processing and annotation. Furthermore, we demonstrate full-motion video processing of such simulated scenarios under different operational contextual constraints.

  5. Properties of Shocked Polymers: Mbar experiments on Z and multi-scale simulations

    NASA Astrophysics Data System (ADS)

    Mattsson, Thomas R.

    2010-03-01

    Significant progress has been made over the last few years in understanding properties of matter subject to strong shocks and other extreme conditions. High-accuracy multi-Mbar experiments and first-principles theoretical studies together provide detailed insights into the physics and chemistry of high energy-density matter. While comprehensive advances have been made for pure elements like deuterium, helium, and carbon, progress has been slower for equally important, albeit more challenging, materials like molecular crystals, polymers, and foams. Hydrocarbon-based polymer foams are common materials, and in particular they are used in designing shock and inertial confinement fusion experiments. Depending on their initial density, foams shock to relatively higher pressure and temperature compared to shocked dense polymers/plastics. As foams and polymers are shocked, they exhibit both structural and chemical transitions. We will present experimental and theoretical results for shocked polymers in the Mbar regime. By shock impact of magnetically launched flyer plates on poly(4-methyl-1-pentene) foams, we create multi-Mbar pressures in a dense plasma mixture of hydrogen and carbon at temperatures of several eV. Concurrently with executing experiments, we analyze the system by multi-scale simulations, from density functional theory to continuum magneto-hydrodynamics simulations. In particular, density functional theory (DFT) molecular dynamics (MD) and classical MD simulations of the principal shock Hugoniot will be presented in detail for two hydrocarbon polymers: polyethylene (PE) and poly(4-methyl-1-pentene) (PMP).

  6. An approach to value-based simulator selection: The creation and evaluation of the simulator value index tool.

    PubMed

    Rooney, Deborah M; Hananel, David M; Covington, Benjamin J; Dionise, Patrick L; Nykamp, Michael T; Pederson, Melvin; Sahloul, Jamal M; Vasquez, Rachael; Seagull, F Jacob; Pinsky, Harold M; Sweier, Domenica G; Cooke, James M

    2018-04-01

    Currently there is no reliable, standardized mechanism to support health care professionals during the evaluation of and procurement processes for simulators. A tool founded on best practices could facilitate simulator purchase processes. In a 3-phase process, we identified the top factors considered during the simulator purchase process through expert consensus (n = 127), created the Simulator Value Index (SVI) tool, evaluated targeted validity evidence, and evaluated the practical value of the SVI. A web-based survey was sent to simulation professionals. Participants (n = 79) used the SVI and provided feedback. We evaluated the practical value of 4 tool variations by calculating their sensitivity to predict a preferred simulator. Seventeen top factors were identified and ranked. The top 2 were technical stability/reliability of the simulator and customer service, with no practical differences in rank across institution or stakeholder role. Full SVI variations successfully predicted the preferred simulator with good (87%) sensitivity, whereas sensitivity decreased (≤54%) for reduced variations considering only cost and customer service or only cost and technical stability. The majority (73%) of participants agreed that the SVI was helpful at guiding simulator purchase decisions, and 88% agreed the SVI tool would help facilitate discussion with peers and leadership. Our findings indicate the SVI supports the process of simulator purchase using a standardized framework. Sensitivity of the tool improved when factors extended beyond traditionally targeted factors. We propose that the tool will facilitate discussion amongst simulation professionals, provide essential information for finance and procurement professionals, and improve the long-term value of simulation solutions. Limitations and application of the tool are discussed. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Multi-phase CFD modeling of solid sorbent carbon capture system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, E. M.; DeCroix, D.; Breault, R.

    2013-07-01

    Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian–Eulerian and Eulerian–Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state of the art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian–Lagrangian simulations (DDPM) are unstable for the given reactor design; while the BARRACUDA Eulerian–Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian–Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.

  8. Multi-Phase CFD Modeling of Solid Sorbent Carbon Capture System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, Emily M.; DeCroix, David; Breault, Ronald W.

    2013-07-30

    Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian-Eulerian and Eulerian-Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state of the art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian-Lagrangian simulations (DDPM) are unstable for the given reactor design; while the BARRACUDA Eulerian-Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian-Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.

  9. The Interior Structure, Dynamics, and Heliospheric Impact of Reconnection-Driven Solar Coronal Hole Jets

    NASA Astrophysics Data System (ADS)

    Roberts, Merrill Alan

    From bright loop structures and polar plumes to solar flares and coronal mass ejections (CMEs), our Sun has shown itself to be a highly dynamic star over a multitude of spatial and temporal scales. In fact, as the resolutions of our observations have improved, it has become clear that even coronal holes, the Sun's so called dark and quiet regions, are full of activity. Coronal hole (CH) jets are one example of this activity, a solar transient that occurs ubiquitously in coronal hole regions and which may contribute significant mass and energy to the corona and the solar wind. CH jets have been shown to share many properties with their larger and more energetic cousins, flares and CMEs, thereby providing an opportunity to understand these more complex and infrequent solar features. CH jets may also provide a source for microstreams and torsional Alfven waves found in the solar wind and interplanetary medium, as well as insight into basic processes for driving the fast solar wind and heating the corona. The purpose of this work is to deepen our understanding of CH jets by examining state-of-the-art fully 3D MHD simulations of CH jet eruptions. First, we investigate the internal structure and turbulent flows inside a model CH jet through an analysis of the simulation described by Karpen et al. (2017). An analysis of the radial variability within the simulated jet is performed, as well as a multi-scale turbulence analysis. We confirm the occurrence of multi-scale MHD turbulence within the model jet, and show that the resulting jet wake can be divided into three radially stratified regions based on its internal structure. Second, the 3D model space is extended to 60 solar radii and simulated encounters of the soon-to-be-launched Parker Solar Probe (PSP, Fox et al., 2016) mission with our model jet are produced and analyzed in order to identify signatures that may be seen in the eventual PSP observations. 
Our results suggest that PSP should encounter CH jets in situ, and that each of the three jet regions found has unique, identifiable signatures that could be detected by PSP. These findings suggest that CH jets are internally complex, with multi-scale, radially stratified internal structure that evolves as the jet progresses through the heliosphere. PSP will have a unique opportunity to observe this newly predicted and previously unobserved fine structure when it descends into the corona in the 2020s, and our results will serve to interpret the PSP data, as well as provide a means to test the validity of our model by comparison with them.

  10. Multi-Scale Modeling of a Graphite-Epoxy-Nanotube System

    NASA Technical Reports Server (NTRS)

    Frankland, S. J. V.; Riddick, J. C.; Gates, T. S.

    2005-01-01

    A multi-scale method is utilized to determine some of the constitutive properties of a three-component graphite-epoxy-nanotube system. This system is of interest because carbon nanotubes have been proposed as stiffening and toughening agents in the interlaminar regions of carbon fiber/epoxy laminates. The multi-scale method uses molecular dynamics simulation and equivalent-continuum modeling to compute three of the elastic constants of the graphite-epoxy-nanotube system: C11, C22, and C33. The 1-direction is along the nanotube axis, and the graphene sheets lie in the 1-2 plane. It was found that C11 is only 4% larger than C22. The nanotube therefore has a small but positive effect on the constitutive properties in the interlaminar region.

  11. Multi-time Scale Joint Scheduling Method Considering the Grid of Renewable Energy

    NASA Astrophysics Data System (ADS)

    Zhijun, E.; Wang, Weichen; Cao, Jin; Wang, Xin; Kong, Xiangyu; Quan, Shuping

    2018-01-01

    Prediction errors in renewable generation such as wind and solar power complicate power system dispatch. In this paper, a multi-time-scale robust scheduling method is proposed to address this problem. It reduces the impact of clean-energy prediction bias on the power grid by operating on multiple time scales (day-ahead, intraday, real-time) and coordinating the dispatched output of various power sources such as hydropower, thermal power, wind power, and gas power. The method adopts robust scheduling to ensure the robustness of the resulting schedule. By calculating the costs of wind curtailment and load shedding, it transforms robustness into a risk cost and selects the uncertainty set that minimizes the overall cost. The validity of the method is verified by simulation.

  12. EPA RESEARCH HIGHLIGHTS -- MODELS-3/CMAQ OFFERS COMPREHENSIVE APPROACH TO AIR QUALITY MODELING

    EPA Science Inventory

    Regional and global coordinated efforts are needed to address air quality problems that are growing in complexity and scope. Models-3 CMAQ contains a community multi-scale air quality modeling system for simulating urban to regional scale pollution problems relating to troposphe...

  13. Quantification of pulmonary vessel diameter in low-dose CT images

    NASA Astrophysics Data System (ADS)

    Rudyanto, Rina D.; Ortiz de Solórzano, Carlos; Muñoz-Barrutia, Arrate

    2015-03-01

    Accurate quantification of vessel diameter in low-dose Computed Tomography (CT) images is important to study pulmonary diseases, in particular for the diagnosis of vascular diseases and the characterization of morphological vascular remodeling in Chronic Obstructive Pulmonary Disease (COPD). In this study, we objectively compare several vessel diameter estimation methods using a physical phantom. Five solid tubes of differing diameters (from 0.898 to 3.980 mm) were embedded in foam, simulating vessels in the lungs. To measure the diameters, we first extracted the vessels using one of two approaches: vessel enhancement using multi-scale Hessian matrix computation, or explicit segmentation using an intensity threshold. We implemented six methods to quantify the diameter: three estimating diameter as a function of the scale used to calculate the Hessian matrix; two calculating an equivalent diameter from the cross-section area obtained by thresholding the intensity and vesselness response, respectively; and finally, estimating the diameter of the object using the Full Width Half Maximum (FWHM). We find that the accuracy of frequently used methods estimating vessel diameter from the multi-scale vesselness filter depends on the range and the number of scales used. Moreover, these methods still yield a significant error margin on the challenging estimation of the smallest diameters (on the order of, or below, the size of the CT point spread function). Obviously, the performance of the thresholding-based methods depends on the value of the threshold. Finally, we observe that a simple adaptive thresholding approach can achieve a robust and accurate estimation of the smallest vessel diameters.
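    The FWHM estimator mentioned above can be sketched on a 1-D intensity profile with linear interpolation between samples. The Gaussian profile below is synthetic; in the study, the profile would be sampled from the CT image perpendicular to the vessel axis.

```python
# Minimal FWHM diameter sketch: find where a profile crosses half of
# (peak - baseline), interpolating linearly on each flank.
import numpy as np

def fwhm(x, y):
    """Width of the profile y(x) at half of (peak - baseline)."""
    base = y.min()
    half = base + 0.5 * (y.max() - base)
    idx = np.where(y >= half)[0]
    i0, i1 = idx[0], idx[-1]
    # Linearly interpolate the half-maximum crossing on each flank
    # (xp arguments are ordered so they increase, as np.interp requires).
    left = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    right = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return right - left

x = np.linspace(-5.0, 5.0, 2001)              # mm; synthetic sampling grid
sigma = 1.2
profile = np.exp(-x**2 / (2.0 * sigma**2))    # blurred-vessel stand-in
width = fwhm(x, profile)
# For a Gaussian, FWHM = 2*sqrt(2*ln 2)*sigma, i.e. about 2.355*sigma.
```

    On real CT data the profile would first be averaged over several rays to suppress noise, and the point spread function limits how small a diameter this estimator can resolve, as the study reports.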

  14. Crack Growth Simulation and Residual Strength Prediction in Airplane Fuselages

    NASA Technical Reports Server (NTRS)

    Chen, Chuin-Shan; Wawrzynek, Paul A.; Ingraffea, Anthony R.

    1999-01-01

    This is the final report for the NASA-funded project entitled "Crack Growth Prediction Methodology for Multi-Site Damage." The primary objective of the project was to create a capability to simulate curvilinear fatigue crack growth and ductile tearing in aircraft fuselages subjected to widespread fatigue damage. The second objective was to validate the capability by way of comparisons to experimental results. Both objectives have been achieved and the results are detailed herein. In the first part of the report, the crack tip opening angle (CTOA) fracture criterion, obtained and correlated from coupon tests to predict fracture behavior and residual strength of built-up aircraft fuselages, is discussed. Geometrically nonlinear, elastic-plastic, thin shell finite element analyses are used to simulate stable crack growth and to predict residual strength. Both measured and predicted results of laboratory flat panel tests and full-scale fuselage panel tests show substantial reduction of residual strength due to the occurrence of multi-site damage (MSD). Detailed comparisons of stable crack growth history and residual strength between the predicted and experimental results are used to assess the validity of the analysis methodology. In the second part of the report, issues related to crack trajectory prediction in thin shells are discussed; an evolving methodology uses the crack turning phenomenon to improve the structural integrity of aircraft structures. A directional criterion is developed based on the maximum tangential stress theory, but taking into account the effect of T-stress and fracture toughness orthotropy. Possible extensions of the current crack growth directional criterion to handle geometrically and materially nonlinear problems are discussed. The path independent contour integral method for T-stress evaluation is derived and its accuracy is assessed using a p- and hp-version adaptive finite element method.
Curvilinear crack growth is simulated in coupon tests and in full-scale fuselage panel tests. Both T-stress and fracture toughness orthotropy are found to be essential to predict the observed crack paths. The analysis methodology and software program (FRANC3D/STAGS) developed herein allow engineers to maintain aging aircraft economically while ensuring continued airworthiness. Consequently, this work will improve the technology to support the safe operation of the current aircraft fleet as well as the design of more damage-tolerant aircraft for the next-generation fleet.

  15. SENSITIVITY OF THE CMAQ MERCURY MODEL TO GAS-PHASE OXIDATION CHEMISTRY

    EPA Science Inventory

    Simulations of the Community Multi-scale Air Quality (CMAQ) model for mercury have shown the vast majority of the mercury deposited in the United States to be in the form of oxidized mercury. However, most of this simulated oxidized mercury was the result of atmospheric oxidatio...

  16. Toward Fidelity: Simulation-Based Learning for School Principal Preparation and Professional Development

    ERIC Educational Resources Information Center

    Shakeshaft, Charol; Becker, Jonathan; Mann, Dale; Reardon, Martin; Robinson, Kerry

    2013-01-01

    The authors describe a simulation-based set of full-motion video scenarios which require students studying educational leadership to make decisions that solve problems presented in "A Year in the Life of a Middle School Principal." The decision points were designed to reflect the proficiencies emphasized in ISLLC [Interstate School…

  17. Application of Wavelet-Based Methods for Accelerating Multi-Time-Scale Simulation of Bistable Heterogeneous Catalysis

    DOE PAGES

    Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ; ...

    2017-02-16

    Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a wavelet-based algorithm built on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
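    A toy version of this detection idea can be sketched with Haar-type wavelet responses: the modulus of the response peaks where a signal jumps abruptly between states. This is an illustrative stand-in for the paper's Lipschitz-exponent algorithm, with hypothetical function names and no claim to match the authors' implementation.

    ```python
    import numpy as np

    def haar_response(x, scale):
        """Difference of adjacent box averages (a Haar-wavelet-style
        response); its modulus is largest where the signal jumps."""
        c = np.cumsum(np.insert(np.asarray(x, float), 0, 0.0))
        left = (c[scale:-scale] - c[:-2 * scale]) / scale
        right = (c[2 * scale:] - c[scale:-scale]) / scale
        return right - left  # entry j corresponds to position j + scale

    def detect_shift(x, scales=(2, 4, 8, 16)):
        """Flag the most abrupt state shift: the position where the Haar
        response attains its largest modulus over the scanned scales."""
        best_pos, best_mag = None, -1.0
        for s in scales:
            r = np.abs(haar_response(x, s))
            j = int(np.argmax(r))
            if r[j] > best_mag:
                best_pos, best_mag = j + s, r[j]
        return best_pos
    ```

    In the full algorithm, the decay of the wavelet modulus across scales (the Lipschitz exponent) distinguishes a genuine bistable shift from noise; the sketch above only locates the strongest single-scale response.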

  18. EPOS-WP16: A coherent and collaborative network of Solid Earth Multi-scale laboratories

    NASA Astrophysics Data System (ADS)

    Calignano, Elisa; Rosenau, Matthias; Lange, Otto; Spiers, Chris; Willingshofer, Ernst; Drury, Martyn; van Kan-Parker, Mirjam; Elger, Kirsten; Ulbricht, Damian; Funiciello, Francesca; Trippanera, Daniele; Sagnotti, Leonardo; Scarlato, Piergiorgio; Tesei, Telemaco; Winkler, Aldo

    2017-04-01

    Laboratory facilities are an integral part of Earth Science research. The diversity of methods employed in such infrastructures reflects the multi-scale nature of the Earth system and is essential for the understanding of its evolution, for the assessment of geo-hazards and for the sustainable exploitation of geo-resources. In the frame of EPOS (European Plate Observing System), Work Package 16 (WP16) represents a developing community of European Geoscience Multi-scale laboratories. The participant and collaborating institutions (Utrecht University, GFZ, RomaTre University, INGV, NERC, CSIC-ICTJA, CNRS, LMU, C4G-UBI, ETH, CNR*) embody several types of laboratory infrastructure, engaged in different fields of interest of Earth Science: from high temperature and pressure experimental facilities, to electron microscopy, micro-beam analysis, analogue tectonic and geodynamic modelling and paleomagnetic laboratories. The length scales encompassed by these infrastructures range from the nano- and micrometre levels (electron microscopy and micro-beam analysis) to the scale of experiments on centimetre-sized samples, and to analogue model experiments simulating the reservoir scale, the basin scale and the plate scale. The aim of WP16 is to provide two services by the year 2019: first, virtual access to data from laboratories (data service) and, second, physical access to laboratories (transnational access, TNA). Regarding the development of a data service, the current status is such that most data produced by the various laboratory centres and networks are available only in limited "final form" in publications; many data remain inaccessible and/or poorly preserved.
Within EPOS, the TCS Multi-scale laboratories is collecting and harmonizing available and emerging laboratory data on the properties and processes controlling rock-system behaviour at all relevant scales, in order to generate products that are accessible and interoperable through services supporting research into Geo-resources and Geo-storage, Geo-hazards and Earth System Evolution. Regarding the provision of physical access to laboratories, the current situation is such that access to WP16's laboratories is often based on professional relations, available budgets, shared interests and other constraints. In WP16 we aim to reduce the present diversity and non-transparency of access rules and to replace ad-hoc procedures for access with streamlined mechanisms, objective rules and a transparent policy. We work on procedures and mechanisms regulating application, negotiation, evaluation, feedback, selection, admission, approval, feasibility checking, set-up, use, monitoring and dismantling. In the end, each laboratory should have a single point providing clear and transparent information on the facility itself, its services, access policy, data management policy and the legal terms and conditions for use of equipment. Through its role as an intermediary and information broker, EPOS will acquire a wealth of information from Research Infrastructures and users on the establishment of efficient collaboration agreements.

  19. Computer Laboratory for Multi-scale Simulations of Novel Nanomaterials

    DTIC Science & Technology

    2014-09-15

    schemes for multiscale modeling of polymers. Permselective ion-exchange membranes for protective clothing, fuel cells, and batteries are of special...polyelectrolyte membranes (PEM) with chemical warfare agents (CWA) and their simulants and (2) development of new simulation methods and computational...chemical potential using gauge cell method and calculation of density profiles. However, the code does not run in parallel environments. For mesoscale

  20. Capturing readiness to learn and collaboration as explored with an interprofessional simulation scenario: A mixed-methods research study.

    PubMed

    Rossler, Kelly L; Kimble, Laura P

    2016-01-01

    Didactic lecture does not lend itself to teaching interprofessional collaboration. High-fidelity human patient simulation with a focus on clinical situations/scenarios is highly conducive to interprofessional education. Consequently, a need exists for research supporting the incorporation of interprofessional education with high-fidelity patient simulation technology. The purpose of this study was to explore readiness for interprofessional learning and collaboration among pre-licensure health professions students participating in an interprofessional education human patient simulation experience. Using a mixed-methods convergent parallel design, a sample of 53 pre-licensure health professions students enrolled in nursing, respiratory therapy, health administration, and physical therapy programs within a college of health professions participated in high-fidelity human patient simulation experiences. Perceptions of interprofessional learning and collaboration were measured with the revised Readiness for Interprofessional Learning Scale (RIPLS) and the Health Professional Collaboration Scale (HPCS). Focus groups were conducted during the simulation post-briefing to obtain qualitative data. Statistical analysis included non-parametric, inferential statistics. Qualitative data were analyzed using a phenomenological approach. Pre- and post-simulation RIPLS scores demonstrated that pre-licensure health professions students reported significantly more positive attitudes about readiness for interprofessional learning post-simulation in the areas of teamwork and collaboration, negative professional identity, and positive professional identity. Post-simulation HPCS scores revealed that pre-licensure nursing and health administration groups reported greater health collaboration during simulation than physical therapy students. Qualitative analysis yielded three themes: "exposure to experiential learning," "acquisition of interactional relationships," and "presence of chronology in role preparation."
Quantitative and qualitative data converged around the finding that physical therapy students had less positive perceptions of the experience because they viewed physical therapy practice as occurring one-on-one rather than in groups. Findings support that pre-licensure students are ready to engage in interprofessional education through exposure to an experiential format such as high-fidelity human patient simulation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Development of an Image-based Multi-Scale Finite Element Approach to Predict Fatigue Damage in Asphalt Mixtures

    NASA Astrophysics Data System (ADS)

    Arshadi, Amir

    Image-based simulation of complex materials is a very important tool for understanding their mechanical behavior and an effective tool for successful design of composite materials. In this thesis an image-based multi-scale finite element approach is developed to predict the mechanical properties of asphalt mixtures. In this approach the "up-scaling" and homogenization of each scale to the next is critically designed to improve accuracy. In addition to this multi-scale efficiency, this study introduces an approach for consideration of particle contacts at each of the scales in which mineral particles exist. One of the most important pavement distresses, which seriously affects pavement performance, is fatigue cracking. As this cracking generally takes place in the binder phase of the asphalt mixture, the binder fatigue behavior is assumed to be one of the main factors influencing the overall pavement fatigue performance. It is also known that aggregate gradation, mixture volumetric properties, and filler type and concentration can affect damage initiation and progression in asphalt mixtures. This study was conducted to develop a tool to characterize the damage properties of asphalt mixtures at all scales. In the present study, the viscoelastic continuum damage model is implemented into the well-known finite element software ABAQUS via the user material subroutine (UMAT) in order to simulate the state of damage in the binder phase under repeated uniaxial sinusoidal loading. The inputs are based on experimentally derived measurements of the binder properties. For the mastic and mortar scales, artificial 2-dimensional images were generated and used to characterize the properties of those scales. Finally, 2D scanned images of asphalt mixtures are used to study the asphalt mixture fatigue behavior under loading.
In order to validate the proposed model, the experimental test results and the simulation results were compared. Indirect tensile fatigue tests were conducted on asphalt mixture samples. A comparison between experimental results and the results from simulation shows that the model developed in this study is capable of predicting the effect of asphalt binder properties and aggregate micro-structure on mechanical behavior of asphalt concrete under loading.

  2. Tackling some of the most intricate geophysical challenges via high-performance computing

    NASA Astrophysics Data System (ADS)

    Khosronejad, A.

    2016-12-01

    Recently, the world has witnessed significant enhancements in the computing power of supercomputers. Computer clusters, in conjunction with advanced mathematical algorithms, have set the stage for developing and applying powerful numerical tools to tackle some of the most intricate geophysical challenges that today's engineers face. One such challenge is to understand how turbulent flows, in real-world settings, interact with (a) rigid and/or mobile complex bed bathymetry of waterways and sea-beds in coastal areas; (b) objects with complex geometry that are fully or partially immersed; and (c) the free surface of waterways and water surface waves in coastal areas. This understanding is especially important because turbulent flows in real-world environments are often bounded by geometrically complex boundaries, which dynamically deform and give rise to multi-scale and multi-physics transport phenomena, and are characterized by multi-lateral interactions among various phases (e.g. air/water/sediment phases). Herein, I present some of the multi-scale and multi-physics geophysical fluid mechanics processes that I have attempted to study using an in-house high-performance computational model, the so-called VFS-Geophysics. More specifically, I will present simulation results of turbulence/sediment/solute/turbine interactions in real-world settings. Parts of the simulations I present are performed to gain scientific insights into processes such as sand wave formation (A. Khosronejad and F. Sotiropoulos (2014), Numerical simulation of sand waves in a turbulent open channel flow, Journal of Fluid Mechanics, 753:150-216), while others are carried out to predict the effects of climate change and large flood events on societal infrastructure (A. Khosronejad et al. (2016), Large eddy simulation of turbulence and solute transport in a forested headwater stream, Journal of Geophysical Research, doi: 10.1002/2014JF003423).

  3. Multi-scale approach to the modeling of fission gas discharge during hypothetical loss-of-flow accident in gen-IV sodium fast reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behafarid, F.; Shaver, D. R.; Bolotnov, I. A.

    The required technological and safety standards for future Gen IV reactors can only be achieved if advanced simulation capabilities become available which combine high-performance computing with the necessary level of modeling detail and high accuracy of predictions. The purpose of this paper is to present new results of multi-scale three-dimensional (3D) simulations of the inter-related phenomena which occur as a result of fuel element heat-up and cladding failure, including the injection of a jet of gaseous fission products into a partially blocked Sodium Fast Reactor (SFR) coolant channel, and gas/molten-sodium transport along the coolant channels. The computational approach to the analysis of the overall accident scenario is based on using two different inter-communicating computational multiphase fluid dynamics (CMFD) codes: a CFD code, PHASTA, and a RANS code, NPHASE-CMFD. Using the geometry and time history of cladding failure and the gas injection rate, direct numerical simulations (DNS) of two-phase turbulent flow, combined with the Level Set method, have been performed with the PHASTA code. The model allows one to track the evolution of gas/liquid interfaces at a centimeter scale. The simulated phenomena include the formation and breakup of the jet of fission products injected into the liquid sodium coolant. The PHASTA outflow has been averaged over time to obtain mean phasic velocities and volumetric concentrations, as well as the liquid turbulent kinetic energy and turbulence dissipation rate, all of which have served as the input to the core-scale simulations using the NPHASE-CMFD code. A sliding-window time averaging has been used to capture mean flow parameters for transient cases. The results presented in the paper include testing and validation of the proposed models, as well as predictions of fission-gas/liquid-sodium transport along a multi-rod fuel assembly of an SFR during a partial loss-of-flow accident. (authors)

  4. NASA's Information Power Grid: Large Scale Distributed Computing and Data Management

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)

    2001-01-01

    Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole system aircraft simulation and whole system living cell simulation - require integrating applications and data that are developed by different teams of researchers, frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.

  5. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g. neutronics with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors, achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts in adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast in terms of the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. The reduced subspace is then used to solve realistic, large scale forward (UQ) and inverse (DA and TAA) problems. Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications.
Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty in single physics models is extended to large scale multi-physics coupled problems with feedback effects. Moreover, a non-linear surrogate-based UQ approach is developed, used, and compared to the performance of the KL approach and a brute-force Monte Carlo (MC) approach. In addition, an efficient Data Assimilation (DA) algorithm is developed to assimilate information about model parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT - COBRA-TF - ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA feasible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes, and predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty, and experimental effort can subsequently be directed to further improve the uncertainty associated with these sources. In this dissertation a subspace-based, gradient-free, nonlinear algorithm for inverse uncertainty quantification, namely Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models).
Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly-level problems (CASL Progression Problem Number 6) and core-wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem Number 9), modeled and simulated using VERA-CS, which consists of several multi-physics coupled models. The analysis and algorithms developed in this dissertation were encoded and implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
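    The subspace construction at the heart of this approach can be illustrated with a proper orthogonal decomposition (POD) sketch: collect model-run snapshots, take an SVD, and keep the leading left singular vectors that capture a target fraction of the variance. This is a generic stand-in for the dissertation's DoF-reduction algorithms; the function name, random stand-in data, and the 0.999 energy threshold are illustrative assumptions.

    ```python
    import numpy as np

    def reduced_subspace(snapshots, energy=0.999):
        """Identify influential degrees of freedom from snapshot data:
        keep the leading left singular vectors capturing the requested
        fraction of variance (proper orthogonal decomposition)."""
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        frac = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(frac, energy)) + 1
        return U[:, :r]  # orthonormal basis of the reduced subspace

    # usage sketch: a 500-dimensional response driven by 3 hidden directions
    rng = np.random.default_rng(0)
    modes = rng.standard_normal((500, 3))
    snapshots = modes @ rng.standard_normal((3, 40))  # 40 model runs
    basis = reduced_subspace(snapshots)
    ```

    Forward UQ and inverse problems are then posed in the low-dimensional coordinates `basis.T @ state` instead of the full state, which is what makes sampling tractable.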

  6. Observational Signatures of Coronal Heating

    NASA Astrophysics Data System (ADS)

    Dahlburg, R. B.; Einaudi, G.; Ugarte-Urra, I.; Warren, H. P.; Rappazzo, A. F.; Velli, M.; Taylor, B.

    2016-12-01

    Recent research on observational signatures of turbulent heating of a coronal loop will be discussed. The evolution of the loop is studied by means of numerical simulations of the fully compressible three-dimensional magnetohydrodynamic equations using the HYPERION code. HYPERION calculates the full energy cycle involving footpoint convection, magnetic reconnection, nonlinear thermal conduction and optically thin radiation. The footpoints of the loop magnetic field are convected by random photospheric motions. As a consequence, the magnetic field in the loop is energized and develops turbulent nonlinear dynamics characterized by the continuous formation and dissipation of field-aligned current sheets: energy is deposited at small scales, where heating occurs. Dissipation is non-uniformly distributed, so that only a fraction of the coronal mass and volume gets heated at any time. Temperature and density are highly structured at scales which, in the solar corona, remain observationally unresolved: the plasma of the simulated loop is multi-thermal, with highly dynamical hotter and cooler plasma strands scattered throughout the loop at sub-observational scales. Typical simulated coronal loops are 50,000 km in length and have axial magnetic field intensities ranging from 0.01 to 0.04 Tesla. To connect these simulations to observations, the computed number densities and temperatures are used to synthesize the intensities expected in emission lines typically observed with the Extreme ultraviolet Imaging Spectrometer (EIS) on Hinode. These intensities are then employed to compute differential emission measure distributions, which are found to be very similar to those derived from observations of solar active regions.

  7. Fatigue analysis and testing of wind turbine blades

    NASA Astrophysics Data System (ADS)

    Greaves, Peter Robert

    This thesis focuses on fatigue analysis and testing of large, multi-MW wind turbine blades. The blades are one of the most expensive components of a wind turbine, and their mass has cost implications for the hub, nacelle, tower and foundations of the turbine, so it is important that they are not unnecessarily strong. Fatigue is often an important design driver, but fatigue of composites is poorly understood and so large safety factors are often applied to the loads. This has implications for the weight of the blade. Full-scale fatigue testing of blades is required by the design standards, and provides manufacturers with confidence that the blade will be able to survive its service life. This testing is usually performed by resonating the blade in the flapwise and edgewise directions separately, but in service these two loads occur at the same time. A fatigue testing method developed at Narec (the National Renewable Energy Centre) in the UK, in which the flapwise and edgewise directions are excited simultaneously, has been evaluated by comparing the Palmgren-Miner damage sum around the blade cross section after testing with the damage distribution caused by the service life. A method to obtain the resonant test configuration that will result in the optimum mode shapes for the flapwise and edgewise directions was then developed, and simulation software was designed to allow the blade test to be simulated so that realistic comparisons between the damage distributions after different test types could be obtained. During the course of this work the shortcomings of conventional fatigue analysis methods became apparent, and a novel method of fatigue analysis based on multi-continuum theory and the kinetic theory of fracture was developed. This method was benchmarked using physical test data from the OPTIDAT database and was applied to the analysis of a complete blade. A full-scale fatigue test method based on this new analysis approach is also discussed.
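    The Palmgren-Miner damage sum used to compare test and service damage distributions can be sketched in a few lines, assuming a Basquin-type S-N curve N(S) = C * S**(-m); failure is predicted when the sum reaches 1. The constants below are illustrative, not blade data from the thesis.

    ```python
    import numpy as np

    def miner_damage(cycle_counts, stress_ranges, C, m):
        """Palmgren-Miner linear damage sum: sum over load blocks of
        (applied cycles) / (allowable cycles at that stress range)."""
        N_allow = C * np.asarray(stress_ranges, dtype=float) ** (-m)
        return float(np.sum(np.asarray(cycle_counts, dtype=float) / N_allow))

    # usage: two load blocks on a hypothetical material with N = 1e12 * S^-3
    d = miner_damage(cycle_counts=[1e5, 1e4], stress_ranges=[100.0, 200.0],
                     C=1e12, m=3)
    ```

    In a biaxial resonant test, a sum like this would be evaluated at stations around the cross section, and the test configuration tuned so the resulting distribution matches the service-life damage distribution.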

  8. Quantifying uncertainty and computational complexity for pore-scale simulations

    NASA Astrophysics Data System (ADS)

    Chen, C.; Yuan, Z.; Wang, P.; Yang, X.; Zhenyan, L.

    2016-12-01

    Pore-scale simulation is an essential tool for understanding the complex physical processes in many environmental problems, from multi-phase flow in the subsurface to fuel cells. However, in practice, factors such as sample heterogeneity, data sparsity and, in general, our insufficient knowledge of the underlying process render many simulation parameters, and hence the prediction results, uncertain. Meanwhile, most pore-scale simulations (in particular, direct numerical simulation) incur high computational cost due to finely resolved spatio-temporal scales, which further limits our data/sample collection. To address those challenges, we propose a novel framework based on generalized polynomial chaos (gPC) and build a surrogate model representing the essential features of the underlying system. Specifically, we apply the novel framework to analyze the uncertainties of the system behavior based on a series of pore-scale numerical experiments, such as flow and reactive transport in 2D heterogeneous porous media and 3D packed beds. Compared with recent pore-scale uncertainty quantification studies using Monte Carlo techniques, our new framework requires fewer realizations and hence considerably reduces the overall computational cost, while maintaining the desired accuracy.
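    The gPC surrogate idea can be sketched in one dimension: expand a quantity of interest in probabilists' Hermite polynomials of a standard normal input and fit the coefficients by least squares on a set of model runs. The quadratic "model" below is a cheap stand-in for an expensive pore-scale simulation; the function name, order, and sample size are illustrative.

    ```python
    import numpy as np
    from numpy.polynomial import hermite_e as He

    def pce_surrogate(xi, y, order=4):
        """Least-squares gPC fit: coefficients of probabilists' Hermite
        polynomials He_0..He_order for a standard normal input xi."""
        A = He.hermevander(xi, order)       # design matrix, shape (n, order+1)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef

    rng = np.random.default_rng(1)
    xi = rng.standard_normal(2000)          # sampled model inputs
    y = xi**2 + 0.5 * xi                    # stand-in for the expensive model
    coef = pce_surrogate(xi, y)
    # x^2 + 0.5x = He_2(x) + 0.5*He_1(x) + He_0(x), so coef[0] is the QoI mean
    ```

    Once fitted, moments and sensitivities come almost for free from the coefficients (orthogonality of the basis), which is where the savings over brute-force Monte Carlo arise.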

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Lin; Dai, Zhenxue; Gong, Huili

    Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with different optimized integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
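    A minimal 1-D sketch of the transition-probability machinery: assume a Markov-chain facies model in which each hydrofacies persists with a given mean length, and, on leaving, transitions to the other facies in proportion to their volumetric fractions. This simplified construction and the function names are illustrative, not the paper's multi-zone inverse framework.

    ```python
    import numpy as np

    def transition_matrix(mean_lengths, proportions, dh):
        """Discrete transition-probability matrix over lag dh for a 1-D
        Markov-chain facies model: facies k survives one lag with
        probability exp(-dh/L_k); exits are split by volumetric proportion."""
        L = np.asarray(mean_lengths, float)
        p = np.asarray(proportions, float)
        n = L.size
        T = np.zeros((n, n))
        for k in range(n):
            stay = np.exp(-dh / L[k])
            others = np.delete(np.arange(n), k)
            w = p[others] / p[others].sum()
            T[k, k] = stay
            T[k, others] = (1.0 - stay) * w
        return T

    def simulate_facies(T, n_cells, rng):
        """Sequential indicator simulation of one vertical column."""
        states = np.empty(n_cells, dtype=int)
        states[0] = 0
        for i in range(1, n_cells):
            states[i] = rng.choice(T.shape[0], p=T[states[i - 1]])
        return states
    ```

    The inverse step in the paper goes the other way: measured transition frequencies from borehole logs are fitted to recover the mean lengths and proportions zone by zone.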

  10. Simulation-Based Airframe Noise Prediction of a Full-Scale, Full Aircraft

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Fares, Ehab

    2016-01-01

    A previously validated computational approach applied to an 18%-scale, semi-span Gulfstream aircraft model was extended to the full-scale, full-span aircraft in the present investigation. The full-scale flap and main landing gear geometries used in the simulations are nearly identical to those flown on the actual aircraft. The lattice Boltzmann solver PowerFLOW® was used to perform time-accurate predictions of the flow field associated with this aircraft. The simulations were performed at a Mach number of 0.2 with the flap deflected 39 deg. and main landing gear deployed (landing configuration). Special attention was paid to the accurate prediction of major sources of flap tip and main landing gear noise. Computed farfield noise spectra for three selected baseline configurations (flap deflected 39 deg. with and without main gear extended, and flap deflected 0 deg. with gear deployed) are presented. The flap brackets are shown to be important contributors to the farfield noise spectra in the mid- to high-frequency range. Simulated farfield noise spectra for the baseline configurations, obtained using a Ffowcs Williams and Hawkings acoustic analogy approach, were found to be in close agreement with acoustic measurements acquired during the 2006 NASA-Gulfstream joint flight test of the same aircraft.

  11. On computing stress in polymer systems involving multi-body potentials from molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Fu, Yao; Song, Jeong-Hoon

    2014-08-01

    The Hardy stress definition has been restricted to pair potentials and embedded-atom method potentials due to the basic assumptions in the derivation of a symmetric microscopic stress tensor. The force decomposition required in the Hardy stress expression becomes obscure for multi-body potentials. In this work, we demonstrate the invariance of the Hardy stress expression for a polymer system modeled with multi-body interatomic potentials, including interactions of up to four atoms, by applying central force decomposition of the atomic force. The balance of momentum has been demonstrated to be valid theoretically and tested under various numerical simulation conditions. The validity of momentum conservation justifies the extension of the Hardy stress expression to multi-body potential systems. The computed Hardy stress has been observed to converge to the virial stress of the system with increasing spatial averaging volume. This work provides a feasible and reliable linkage between the atomistic and continuum scales for multi-body potential systems.
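    For the pair-potential case, the virial stress that the Hardy estimate converges to can be written down directly. The sketch below uses a Lennard-Jones dimer and one common sign convention; the paper's central-force decomposition of three- and four-body terms is not shown:

```python
import numpy as np

def lj_force(rij):
    """Force on atom i from atom j for a Lennard-Jones pair (eps = sigma = 1)."""
    r2 = rij @ rij
    sr6 = r2 ** -3
    return (24.0 * (2.0 * sr6 ** 2 - sr6) / r2) * rij

def virial_stress(pos, vel, mass, volume):
    """Virial stress tensor (one common sign convention): kinetic term
    plus the pairwise r_ij (x) f_ij term, divided by the averaging volume."""
    s = -mass * sum(np.outer(v, v) for v in vel)
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            rij = pos[i] - pos[j]
            s += np.outer(rij, lj_force(rij))
    return s / volume

# Dimer at the LJ minimum separation 2**(1/6): pair forces vanish, so
# with zero velocities the stress should be (numerically) zero.
pos = np.array([[0.0, 0.0, 0.0], [2.0 ** (1 / 6), 0.0, 0.0]])
sig = virial_stress(pos, np.zeros((2, 3)), mass=1.0, volume=1.0)
print(np.round(sig, 12))
```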

  12. RANS Simulation (Virtual Blade Model [VBM]) of Single Full Scale DOE RM1 MHK Turbine

    DOE Data Explorer

    Javaherchi, Teymour; Aliseda, Alberto

    2013-04-10

    Attached are the .cas and .dat files, along with the required User Defined Functions (UDFs) and the look-up table of lift and drag coefficients, for the Reynolds-Averaged Navier-Stokes (RANS) simulation of a single full scale DOE RM1 turbine implemented in the ANSYS FLUENT CFD package. In this case study the flow field around and in the wake of the full scale DOE RM1 turbine is simulated using the Blade Element Model (a.k.a. Virtual Blade Model) by solving the RANS equations coupled with the k-\omega turbulence closure model. It should be highlighted that in this simulation the actual geometry of the rotor blade is not modeled. The effect of the rotating turbine blades is modeled using Blade Element Theory. This simulation provides an accurate estimate of the device's performance and of the structure of its turbulent far wake. Due to the simplifications implemented for modeling the rotating blades, the VBM cannot capture details of the flow field in the near-wake region of the device.
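    The blade-element force decomposition that the VBM relies on can be sketched for a single element. Geometry and coefficients below are illustrative, induction is neglected, and cl/cd are fixed, whereas the actual model iterates with a lift/drag look-up table like the one attached to this record:

```python
import math

def element_loads(r, chord, omega, u_inf, cl, cd, rho=1025.0):
    """Per-unit-span thrust and torque on one blade element, resolving
    lift and drag through the inflow angle of the undisturbed velocity
    triangle (no induction correction)."""
    phi = math.atan2(u_inf, omega * r)       # inflow angle
    w2 = u_inf ** 2 + (omega * r) ** 2       # relative speed squared
    q = 0.5 * rho * w2 * chord               # dynamic pressure * chord
    dT = q * (cl * math.cos(phi) + cd * math.sin(phi))      # axial force
    dQ = q * (cl * math.sin(phi) - cd * math.cos(phi)) * r  # torque
    return dT, dQ

# Illustrative element: r = 5 m, 1 m chord, 1 rad/s rotor, 2 m/s current
dT, dQ = element_loads(r=5.0, chord=1.0, omega=1.0, u_inf=2.0, cl=1.0, cd=0.02)
print(round(dT), round(dQ))
```

    In the VBM, such element loads are summed over the rotor disk and applied as momentum sources to the RANS solution instead of resolving the blade geometry.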

  13. Multi-wavelength and multiband RE-doped optical fiber source array for WDM-GPON applications

    NASA Astrophysics Data System (ADS)

    Perez-Sanchez, G. G.; Bertoldi-Martins, I.; Gallion, P.; Gosset, C.; Álvarez-Chávez, J. A.

    2013-12-01

    In this paper, a multiband, multi-wavelength, all-fiber source array consisting of an 810 nm pump laser diode, three fiber splitters and three segments of Er-, Tm- and Nd-doped fiber is proposed for PON applications. In the set-up, cascaded pairs of standard fiber gratings are used to extract the required multiple wavelengths within their corresponding bands. A thorough description of the design parameters, optical array details and full simulation results, such as the full multi-wavelength spectrum, peak and average powers for each generated wavelength, linewidth at FWHM for each generated signal, and individual and overall conversion efficiency, is included in the manuscript.

  14. Using multi-scale entropy and principal component analysis to monitor gears degradation via the motor current signature analysis

    NASA Astrophysics Data System (ADS)

    Aouabdi, Salim; Taibi, Mahmoud; Bouras, Slimane; Boutasseta, Nadir

    2017-06-01

    This paper describes an approach for identifying localized gear tooth defects, such as pitting, using phase currents measured from an induction machine driving the gearbox. A new anomaly-detection tool is based on the multi-scale entropy (MSE) algorithm SampEn, which allows correlations in signals to be identified over multiple time scales. The approach combines motor current signature analysis (MCSA) with principal component analysis (PCA), comparing observed values with those predicted from a model built using nominally healthy data. The simulation results show that the proposed method is able to detect gear tooth pitting in current signals.
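    The coarse-graining plus sample-entropy computation at the heart of MSE can be sketched directly (parameter values m = 2 and r = 0.2*std are the common defaults, not necessarily the paper's):

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn: -log of the conditional probability that subsequences
    matching for m points (within tolerance r) also match for m+1."""
    x = np.asarray(x, float)
    r = r_frac * x.std()

    def matches(length):
        t = np.lib.stride_tricks.sliding_window_view(x, length)
        hits = 0
        for i in range(len(t) - 1):
            d = np.abs(t[i + 1:] - t[i]).max(axis=1)  # Chebyshev distance
            hits += int((d <= r).sum())
        return hits

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a and b else np.inf

def multiscale_entropy(x, scales=(1, 2, 4)):
    """Coarse-grain by non-overlapping averaging, then SampEn per scale."""
    x = np.asarray(x, float)
    out = []
    for s in scales:
        g = x[: len(x) // s * s].reshape(-1, s).mean(axis=1)
        out.append(sample_entropy(g))
    return out

rng = np.random.default_rng(0)
mse = multiscale_entropy(rng.standard_normal(1000))
print([round(v, 2) for v in mse])   # entropy profile of white noise
```

    In the paper's pipeline, the entropy profile of each current signal would then be projected with PCA and compared against the healthy-data model.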

  15. Manned remote work station development article

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The two prime objectives of the Manned Remote Work Station (MRWS) Development Article Study are, first, to evaluate the MRWS flight article roles and associated design concepts for fundamental requirements and embody key technology developments into a simulation program; and, second, to provide detailed manufacturing drawings and schedules for a simulator development test article. An approach is outlined which establishes flight article requirements based on past studies of the Solar Power Satellite, orbital construction support equipment, construction bases and near-term shuttle operations. Simulation objectives are established for those technology issues that can best be addressed on a simulator. Concepts for full-scale and sub-scale simulators are then studied to establish an overall approach to studying MRWS requirements. Emphasis then shifts to design and specification of a full-scale development test article.

  16. Terascale High-Fidelity Simulations of Turbulent Combustion with Detailed Chemistry: Spray Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rutland, Christopher J.

    2009-04-26

    The Terascale High-Fidelity Simulations of Turbulent Combustion (TSTC) project is a multi-university collaborative effort to develop a high-fidelity turbulent reacting flow simulation capability utilizing terascale, massively parallel computer technology. The main paradigm of the approach is direct numerical simulation (DNS) featuring the highest temporal and spatial accuracy, allowing quantitative observations of the fine-scale physics found in turbulent reacting flows as well as providing a useful tool for development of sub-models needed in device-level simulations. Under this component of the TSTC program the simulation code named S3D, developed and shared with coworkers at Sandia National Laboratories, has been enhanced with new numerical algorithms and physical models to provide predictive capabilities for turbulent liquid fuel spray dynamics. Major accomplishments include improved fundamental understanding of mixing and auto-ignition in multi-phase turbulent reactant mixtures and turbulent fuel injection spray jets.

  17. Cost-consequence analysis of multimodal interventions with environmental components for pediatric asthma in the state of Maryland.

    PubMed

    Jassal, Mandeep S; Diette, Gregory B; Dowdy, David W

    2013-08-01

    Applied environmental strategies for asthma control are often expensive, but may save longer-term healthcare costs. Whether these savings outweigh additional costs of implementing these strategies is uncertain. We conducted a systematic review to estimate the expenditures and savings of environmental interventions for asthma in the state of Maryland. Direct costs included hospitalizations, emergency room, and clinic visits. Indirect expenditures included costs of lost work productivity and travel incurred during the usage of healthcare services. We used decision analysis, assuming a hypothetical cohort of the approximately 49,290 pediatric individuals in Maryland with persistent asthma, to compare costs and benefits of environmental asthma interventions against the standard of care (no intervention) from the societal perspective. Three interventions among nine articles met the inclusion criteria for the systematic review: 1) environmental education using medical professionals; 2) education using non-medical personnel; and 3) multi-component strategy involving education with non-medical personnel, allergen-impermeable covers, and pest management. All interventions were found to be cost-saving relative to the standard of care. Home environmental education using non-medical professionals yielded the highest net savings of $14.1 million (95% simulation interval (SI): $-0.283 million, $19.4 million), while the multi-component intervention resulted in the lowest net savings of $8.1 million (95% SI: $-4.9 million, $15.9 million). All strategies were most sensitive to the baseline number of hospitalizations in those not receiving targeted interventions for asthma. Limited environmental reduction strategies for asthma are likely to be cost-saving to the healthcare system in Maryland and should be considered for broader scale-up in other economically similar settings.

  18. Participatory approaches to understanding practices of flood management across borders

    NASA Astrophysics Data System (ADS)

    Bracken, L. J.; Forrester, J.; Oughton, E. A.; Cinderby, S.; Donaldson, A.; Anness, L.; Passmore, D.

    2012-04-01

    The aim of this paper is to outline and present initial results from a study designed to identify principles of and practices for adaptive co-management strategies for resilience to flooding in borderlands, using participatory methods. Borderlands are the complex and sometimes undefined spaces existing at the interface of different territories; the concept draws attention towards messy connections and disconnections (Strathern 2004; Sassen 2006). For this project the borderlands concerned are those between professional and lay knowledge, between responsible agencies, and between one nation and another. Research was focused on the River Tweed catchment, located on the Scottish-English border. This catchment is subject to complex environmental designations and rural development regimes that make integrated management of the whole catchment difficult. A multi-method approach was developed using semi-structured interviews, Q methodology and participatory GIS in order to capture wide-ranging practices for managing flooding and the judgements behind these practices, and to 'scale up' participation in the study. Professionals and local experts were involved in the research. The methodology generated a useful set of options for flood management, with research outputs easily understood by key management organisations and the wider public alike. There was wide endorsement of alternative flood management solutions from both managers and local experts. The role of location was particularly important for ensuring communication and data sharing between flood managers from different organisations and more wide-ranging stakeholders. There were complex issues around scale: both the mismatch between communities and evidence of flooding, and the mismatch between governance and the scale of intervention for natural flood management. The multi-method approach was essential in capturing practice and the complexities around the governance of flooding. The involvement of key flood management organisations was integral to making the research of relevance to professionals.

  19. Understanding hydraulic fracturing: a multi-scale problem.

    PubMed

    Hyman, J D; Jiménez-Martínez, J; Viswanathan, H S; Carey, J W; Porter, M L; Rougier, E; Karra, S; Kang, Q; Frash, L; Chen, L; Lei, Z; O'Malley, D; Makedonska, N

    2016-10-13

    Despite the impact that hydraulic fracturing has had on the energy sector, the physical mechanisms that control its efficiency and environmental impacts remain poorly understood in part because the length scales involved range from nanometres to kilometres. We characterize flow and transport in shale formations across and between these scales using integrated computational, theoretical and experimental efforts/methods. At the field scale, we use discrete fracture network modelling to simulate production of a hydraulically fractured well from a fracture network that is based on the site characterization of a shale gas reservoir. At the core scale, we use triaxial fracture experiments and a finite-discrete element model to study dynamic fracture/crack propagation in low permeability shale. We use lattice Boltzmann pore-scale simulations and microfluidic experiments in both synthetic and shale rock micromodels to study pore-scale flow and transport phenomena, including multi-phase flow and fluids mixing. A mechanistic description and integration of these multiple scales is required for accurate predictions of production and the eventual optimization of hydrocarbon extraction from unconventional reservoirs. Finally, we discuss the potential of CO2 as an alternative working fluid, both in fracturing and re-stimulating activities, beyond its environmental advantages. This article is part of the themed issue 'Energy and the subsurface'. © 2016 The Author(s).

  20. Understanding hydraulic fracturing: a multi-scale problem

    PubMed Central

    Hyman, J. D.; Jiménez-Martínez, J.; Viswanathan, H. S.; Carey, J. W.; Porter, M. L.; Rougier, E.; Karra, S.; Kang, Q.; Frash, L.; Chen, L.; Lei, Z.; O’Malley, D.; Makedonska, N.

    2016-01-01

    Despite the impact that hydraulic fracturing has had on the energy sector, the physical mechanisms that control its efficiency and environmental impacts remain poorly understood in part because the length scales involved range from nanometres to kilometres. We characterize flow and transport in shale formations across and between these scales using integrated computational, theoretical and experimental efforts/methods. At the field scale, we use discrete fracture network modelling to simulate production of a hydraulically fractured well from a fracture network that is based on the site characterization of a shale gas reservoir. At the core scale, we use triaxial fracture experiments and a finite-discrete element model to study dynamic fracture/crack propagation in low permeability shale. We use lattice Boltzmann pore-scale simulations and microfluidic experiments in both synthetic and shale rock micromodels to study pore-scale flow and transport phenomena, including multi-phase flow and fluids mixing. A mechanistic description and integration of these multiple scales is required for accurate predictions of production and the eventual optimization of hydrocarbon extraction from unconventional reservoirs. Finally, we discuss the potential of CO2 as an alternative working fluid, both in fracturing and re-stimulating activities, beyond its environmental advantages. This article is part of the themed issue ‘Energy and the subsurface’. PMID:27597789

  1. Foreign Language Teachers' Professional Development in Information Age

    NASA Astrophysics Data System (ADS)

    Fan, Xiying; Wu, Gang

    Cultivation of students' learning autonomy has raised new challenges for teachers' professional development: it must be dynamic, continuous, lifelong and full-scale, with emphasis on the creativity and constancy of teachers' quality development. Teachers' professional development can take the following approaches: studying theories about foreign language teaching with the aid of modern information technology; organizing online teaching research activities supported by information technology; and carrying out peer observation, dialogue and teaching reflection in an internet environment, fostering scholarly teachers.

  2. On the Scaling Laws for Jet Noise in Subsonic and Supersonic Flow

    NASA Technical Reports Server (NTRS)

    Vu, Bruce; Kandula, Max

    2003-01-01

    The scaling laws for the simulation of noise from subsonic and ideally expanded supersonic jets are examined with regard to their applicability to deduce full scale conditions from small-scale model testing. Important parameters of scale model testing for the simulation of jet noise are identified, and the methods of estimating full-scale noise levels from simulated scale model data are addressed. The limitations of cold-jet data in estimating high-temperature supersonic jet noise levels are discussed. It is shown that the jet Mach number (jet exit velocity/sound speed at jet exit) is a more general and convenient parameter for noise scaling purposes than the ratio of jet exit velocity to ambient speed of sound. A similarity spectrum is also proposed, which accounts for jet Mach number, angle to the jet axis, and jet density ratio. The proposed spectrum reduces nearly to the well-known similarity spectra proposed by Tam for the large-scale and the fine-scale turbulence noise in the appropriate limit.
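    The core frequency-scaling step implied by this record is matching the Strouhal number between model and full scale; the numbers below are illustrative, not from the paper:

```python
def full_scale_frequency(f_model, d_model, d_full, v_model, v_full):
    """Map a model-scale spectral frequency to full scale by matching the
    Strouhal number St = f*D/V between the two jets."""
    return f_model * (d_model / v_model) * (v_full / d_full)

# A hypothetical 1/10-scale model tested at the full-scale jet velocity:
# spectra shift down in frequency by the geometric scale factor.
f_full = full_scale_frequency(f_model=10000.0, d_model=0.05, d_full=0.5,
                              v_model=300.0, v_full=300.0)
print(round(f_full, 6))
```

    Amplitude corrections (for distance, jet density and temperature) are separate steps treated in the paper and are not sketched here.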

  3. Wavelet-based time series bootstrap model for multidecadal streamflow simulation using climate indicators

    NASA Astrophysics Data System (ADS)

    Erkyihun, Solomon Tassew; Rajagopalan, Balaji; Zagona, Edith; Lall, Upmanu; Nowak, Kenneth

    2016-05-01

    A model to generate stochastic streamflow projections conditioned on quasi-oscillatory climate indices such as the Pacific Decadal Oscillation (PDO) and Atlantic Multi-decadal Oscillation (AMO) is presented. Recognizing that each climate index has underlying band-limited components that contribute most of the energy of the signals, we first pursue a wavelet decomposition of the signals to identify and reconstruct these features from annually resolved historical data and proxy-based paleoreconstructions of each climate index covering the period from 1650 to 2012. A K-Nearest Neighbor block bootstrap approach is then developed to simulate the total signal of each of these climate index series while preserving its time-frequency structure and marginal distributions. Finally, given the simulated climate signal time series, a K-Nearest Neighbor bootstrap is used to simulate annual streamflow series conditional on the joint state space defined by the simulated climate index for each year. We demonstrate this method by applying it to simulation of streamflow at the Lees Ferry gauge on the Colorado River using indices of two large-scale climate forcings, PDO and AMO, which are known to modulate the Colorado River Basin (CRB) hydrology at multidecadal time scales. Skill in stochastic simulation of multidecadal projections of flow using this approach is demonstrated.
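    The K-Nearest Neighbor resampling step can be sketched in its simplest form: jump to the historical successor of one of the k nearest neighbours of the current value, with rank-based weights. The synthetic `index` series is a hypothetical stand-in for a reconstructed climate index, and the wavelet decomposition and streamflow-conditioning stages are not reproduced:

```python
import numpy as np

def knn_block_bootstrap(series, n_steps, k=5, seed=0):
    """Simulate a series by repeatedly sampling the historical successor
    of one of the k nearest neighbours of the current value."""
    rng = np.random.default_rng(seed)
    s = np.asarray(series, float)
    w = 1.0 / np.arange(1, k + 1)
    w /= w.sum()                          # nearer neighbours more likely
    out = [s[rng.integers(len(s) - 1)]]   # random historical starting value
    for _ in range(n_steps - 1):
        nearest = np.argsort(np.abs(s[:-1] - out[-1]))[:k]
        out.append(s[nearest[rng.choice(k, p=w)] + 1])
    return np.array(out)

# Hypothetical stand-in for a reconstructed quasi-oscillatory climate index:
rng = np.random.default_rng(1)
index = np.sin(np.linspace(0.0, 12 * np.pi, 360)) + 0.3 * rng.standard_normal(360)
sim = knn_block_bootstrap(index, n_steps=100)
print(round(float(sim.mean()), 2))
```

    Because every simulated value is drawn from the historical record, the marginal distribution is preserved by construction, which is the property the abstract emphasizes.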

  4. Large Scale Document Inversion using a Multi-threaded Computing System

    PubMed Central

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2018-01-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general-purpose computing. We can utilize the GPU in computation as a massively parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays a vast amount of information is flooding into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, and e-commerce product data and reviews, are produced or collected every moment with dramatic growth in size. Although the inverted index is a useful data structure that can be used for full-text searches or document retrieval, a large number of documents will require a tremendous amount of time to create the index. The performance of document inversion can be improved by a multi-thread or multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high-performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts: • Information systems➝Information retrieval • Computing methodologies➝Massively parallel and high-performance simulations. PMID:29861701
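    Document inversion itself is simple to state; a sequential reference version is shown below (the paper's contribution is performing this step as a hash-based SPMD kernel on the GPU, which is not reproduced here):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the sorted ids of the documents containing it."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for term in text.lower().split():
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

docs = ["GPU computing with CUDA",
        "document inversion on the GPU",
        "full text search"]
idx = build_inverted_index(docs)
print(idx["gpu"])  # -> [0, 1]
```

    The per-document loop carries no dependency between documents, which is what makes the problem a natural fit for the GPU parallelization the abstract describes.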

  5. Large Scale Document Inversion using a Multi-threaded Computing System.

    PubMed

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2017-06-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general-purpose computing. We can utilize the GPU in computation as a massively parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays a vast amount of information is flooding into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, and e-commerce product data and reviews, are produced or collected every moment with dramatic growth in size. Although the inverted index is a useful data structure that can be used for full-text searches or document retrieval, a large number of documents will require a tremendous amount of time to create the index. The performance of document inversion can be improved by a multi-thread or multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high-performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. • Information systems➝Information retrieval • Computing methodologies➝Massively parallel and high-performance simulations.

  6. Mega-Scale Simulation of Multi-Layer Devices-- Formulation, Kinetics, and Visualization

    DTIC Science & Technology

    1994-07-28

    prototype code STRIDE, also initially developed under ARO support. The focus of the ARO supported research activities has been in the areas of multi ... FORTRAN-77. During its fifteen-year lifespan several generations of researchers have modified the code. Due to this continual development, the ... behavior. The replacement of the linear solver had no effect on the remainder of the code. We replaced the existing solver with a distributed multi-frontal

  7. Status and future of MUSE

    NASA Astrophysics Data System (ADS)

    Harfst, S.; Portegies Zwart, S.; McMillan, S.

    2008-12-01

    We present MUSE, a software framework for combining existing computational tools from different astrophysical domains into a single multi-physics, multi-scale application. MUSE facilitates the coupling of existing codes written in different languages by providing inter-language tools and by specifying an interface between each module and the framework that represents a balance between generality and computational efficiency. This approach allows scientists to use combinations of codes to solve highly-coupled problems without the need to write new codes for other domains or significantly alter their existing codes. MUSE currently incorporates the domains of stellar dynamics, stellar evolution and stellar hydrodynamics for studying generalized stellar systems. We have now reached a "Noah's Ark" milestone, with (at least) two available numerical solvers for each domain. MUSE can treat multi-scale and multi-physics systems in which the time- and size-scales are well separated, like simulating the evolution of planetary systems, small stellar associations, dense stellar clusters, galaxies and galactic nuclei. In this paper we describe two examples calculated using MUSE: the merger of two galaxies and an N-body simulation with live stellar evolution. In addition, we demonstrate an implementation of MUSE on a distributed computer which may also include special-purpose hardware, such as GRAPEs or GPUs, to accelerate computations. The current MUSE code base is publicly available as open source at http://muse.li.

  8. Toward an operational tool to simulate green roof hydrological impact at the basin scale: a new version of the distributed rainfall-runoff model Multi-Hydro.

    PubMed

    Versini, Pierre-Antoine; Gires, Auguste; Tchinguirinskaia, Ioulia; Schertzer, Daniel

    2016-10-01

    Currently widespread in new urban projects, green roofs have shown a positive impact on urban runoff at the building scale: decrease and slow-down of the peak discharge, and decrease of runoff volume. The present work aims to study their possible impact at the catchment scale, more compatible with stormwater management issues. For this purpose, a specific module dedicated to simulating the hydrological behaviour of a green roof has been developed in the distributed rainfall-runoff model (Multi-Hydro). It has been applied on a French urban catchment where most of the building roofs are flat and assumed to accept the implementation of a green roof. Catchment responses to several rainfall events covering a wide range of meteorological situations have been simulated. The simulation results show that green roofs can significantly reduce runoff volume and the magnitude of peak discharge (up to 80%) depending on the rainfall event and the initial saturation of the substrate. Additional tests have been made to assess the sensitivity of this response to the spatial distribution of both green roofs and precipitation. It appears that the total area of greened roofs is more important than their locations. On the other hand, peak discharge reduction seems to be clearly dependent on the spatial distribution of precipitation.
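    The substrate behaviour described here can be caricatured as a single bucket: rainfall fills a store, overflow becomes runoff, and a small per-step loss stands in for drainage and evapotranspiration. All parameter values and the hyetograph below are illustrative, not taken from Multi-Hydro:

```python
def green_roof_runoff(rain_mm, capacity_mm=30.0, initial_mm=10.0, loss_mm=0.2):
    """Single-bucket sketch of a green-roof substrate: rain fills the
    store, overflow becomes runoff, and a small constant loss per step
    stands in for drainage/evapotranspiration."""
    store, runoff = initial_mm, []
    for p in rain_mm:
        store = max(store - loss_mm, 0.0) + p
        spill = max(store - capacity_mm, 0.0)   # overflow once the store fills
        store -= spill
        runoff.append(spill)
    return runoff

event = [0, 2, 8, 15, 10, 4, 0, 0]   # hypothetical hyetograph (mm per step)
q = green_roof_runoff(event)
print([round(v, 1) for v in q])      # runoff starts only once the store fills
```

    Even this caricature reproduces the qualitative findings of the record: runoff volume drops relative to rainfall, and the reduction depends on the initial saturation of the substrate.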

  9. A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.

    PubMed

    Budinich, Marko; Bourdon, Jérémie; Larhlimi, Abdelhalim; Eveillard, Damien

    2017-01-01

    Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at the community level, were emphasized. We expect this approach to be used for integrating genomic information in microbial ecosystems. The resulting models will provide insights into behaviors (including diversity) that take place at the ecosystem scale.

  10. Multi-Scale Sizing of Lightweight Multifunctional Spacecraft Structural Components

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.

    2005-01-01

    This document is the final report for the project entitled, "Multi-Scale Sizing of Lightweight Multifunctional Spacecraft Structural Components," funded under the NRA entitled "Cross-Enterprise Technology Development Program" issued by the NASA Office of Space Science in 2000. The project was funded in 2001, and spanned a four year period from March, 2001 to February, 2005. Through enhancements to and synthesis of unique, state-of-the-art structural mechanics and micromechanics analysis software, a new multi-scale tool has been developed that enables design, analysis, and sizing of advanced lightweight composite and smart materials and structures from the full vehicle, to the stiffened structure, to the micro (fiber and matrix) scales. The new software tool has broad, cross-cutting value to current and future NASA missions that will rely on advanced composite and smart materials and structures.

  11. Consensus for linear multi-agent system with intermittent information transmissions using the time-scale theory

    NASA Astrophysics Data System (ADS)

    Taousser, Fatima; Defoort, Michael; Djemai, Mohamed

    2016-01-01

    This paper investigates the consensus problem for linear multi-agent system with fixed communication topology in the presence of intermittent communication using the time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens at a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to intermittent information transmissions. The time-scale theory provides a powerful tool to combine continuous-time and discrete-time cases and study the consensus protocol under a unified framework. Using this theory, some conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
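    The intermittent-communication setting can be illustrated with a fixed undirected graph whose coupling is switched on only part of the time; the periodic on/off schedule and all numbers below are illustrative, and the paper's time-scale analysis is far more general:

```python
import numpy as np

# Laplacian of an undirected path graph on 4 agents
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])

def simulate(x0, steps=2000, dt=0.01, duty=0.5):
    """Euler integration of x' = -L x, with the coupling active only on a
    fraction `duty` of the steps (periodic intermittent communication)."""
    x = np.array(x0, float)
    for k in range(steps):
        if (k % 10) < duty * 10:   # agents exchange information
            x -= dt * (L @ x)
        # otherwise: no interaction, states hold
    return x

x = simulate([3.0, -1.0, 4.0, 0.0])
print(np.round(x, 3))   # states approach the initial average 1.5
```

    Because the graph is undirected, the state average is invariant even under the switching, so consensus (when it occurs) is on the average of the initial conditions.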

  12. INTERDISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Consensus of Multi-Agent Systems with Prestissimo Scale-Free Networks

    NASA Astrophysics Data System (ADS)

    Yang, Hong-Yong; Lu, Lan; Cao, Ke-Cai; Zhang, Si-Ying

    2010-04-01

    In this paper, the relations between the network topology and the moving consensus of multi-agent systems are studied. A consensus-prestissimo scale-free network model with static preferential-consensus attachment is presented on the rewired links of the regular network. The effects of the static preferential-consensus BA network on the algebraic connectivity of the topology graph are compared with the regular network. The robustness gain to delay is analyzed for variable network topologies of the same scale. The time to reach consensus is studied for the dynamic network with and without communication delays. Computer simulations validate that the speed of convergence of multi-agent systems can be greatly improved in the preferential-consensus BA network model with different configurations.

  13. Evaluating impacts of different longitudinal driver assistance systems on reducing multi-vehicle rear-end crashes during small-scale inclement weather.

    PubMed

    Li, Ye; Xing, Lu; Wang, Wei; Wang, Hao; Dong, Changyin; Liu, Shanwen

    2017-10-01

    Multi-vehicle rear-end (MVRE) crashes during small-scale inclement (SSI) weather cause high fatality rates on freeways and cannot be prevented by traditional speed-limit strategies. This study aimed to reduce MVRE crash risks during SSI weather using different longitudinal driver assistance systems (LDAS). The factors contributing to MVRE crashes during SSI weather were first analyzed. Then four LDAS, namely forward collision warning (FCW), autonomous emergency braking (AEB), adaptive cruise control (ACC) and cooperative ACC (CACC), were modeled on a unified platform, the Intelligent Driver Model (IDM). Simulation experiments were designed and a large number of simulations were conducted to evaluate the safety effects of each LDAS. Results indicate that the FCW and ACC systems perform poorly in reducing MVRE crashes during SSI weather: the slight improvement in sight distance provided by FCW and the perception-reaction time limitation of ACC cause them to fail to avoid MVRE crashes in most scenarios. The AEB system performs better, owing to automatic perception and reaction as well as full braking when SSI weather is encountered. The CACC system performs best, because wireless communication provides a larger effective sight distance and a sub-second time delay. Sensitivity analyses also indicated that a larger number of vehicles and larger speed changes after encountering SSI weather have negative impacts on safety performance. The results of this study provide useful information for crash prevention during SSI weather. Copyright © 2017 Elsevier Ltd. All rights reserved.
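    The IDM platform named above follows a standard published car-following law; a minimal sketch, with illustrative parameter values rather than the study's calibration:

```python
import math

# Intelligent Driver Model (IDM) acceleration law. Parameter values are
# illustrative defaults, not the values used in the study.
def idm_acceleration(v, v_lead, gap,
                     v0=33.3,    # desired speed [m/s]
                     T=1.5,      # safe time headway [s]
                     a_max=1.0,  # maximum acceleration [m/s^2]
                     b=2.0,      # comfortable deceleration [m/s^2]
                     s0=2.0):    # minimum standstill gap [m]
    dv = v - v_lead                                   # approach rate
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a_max * b))
    return a_max * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)

free_road = idm_acceleration(v=10.0, v_lead=10.0, gap=1e6)   # accelerates
closing_in = idm_acceleration(v=20.0, v_lead=0.0, gap=30.0)  # brakes hard
```

    The desired-gap term s* grows with the approach rate, which is why the same law reproduces both free-flow acceleration and emergency braking; the four LDAS variants differ in what information (sight distance, delay) feeds this law.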

  14. Multithreaded Stochastic PDES for Reactions and Diffusions in Neurons.

    PubMed

    Lin, Zhongwei; Tropper, Carl; Mcdougal, Robert A; Patoary, Mohammand Nazrul Ishlam; Lytton, William W; Yao, Yiping; Hines, Michael L

    2017-07-01

    Cells exhibit stochastic behavior when the number of molecules is small. Hence a stochastic reaction-diffusion simulator capable of working at scale can provide a more accurate view of molecular dynamics within the cell. This paper describes a parallel discrete event simulator, Neuron Time Warp-Multi Thread (NTW-MT), developed for the simulation of reaction-diffusion models of neurons. To the best of our knowledge, this is the first parallel discrete event simulator oriented towards stochastic simulation of chemical reactions in a neuron. The simulator was developed as part of the NEURON project. NTW-MT is optimistic and thread-based, and attempts to capitalize on the multi-core architectures used in high-performance machines. It uses a multi-level queue for the pending event set and a single roll-back message in place of individual anti-messages, dispersing contention and decreasing the overhead of processing rollbacks. Global Virtual Time is computed asynchronously both within and among processes, avoiding the overhead of synchronizing threads. Memory usage is managed so as to avoid locking and unlocking during allocation and de-allocation and to maximize cache locality. We verified our simulator on a calcium buffer model. We examined its performance on a calcium wave model, comparing it to a process-based optimistic simulator and a threaded simulator that uses a single priority queue per thread. Our multi-threaded simulator achieves superior performance to both. Finally, we demonstrated the scalability of our simulator on a larger CICR model and a more detailed CICR model.

  15. Assessment of long-term WRF–CMAQ simulations for understanding direct aerosol effects on radiation "brightening" in the United States

    EPA Science Inventory

    Long-term simulations with the coupled WRF–CMAQ (Weather Research and Forecasting–Community Multi-scale Air Quality) model have been conducted to systematically investigate the changes in anthropogenic emissions of SO2 and NOx over the past 16 years (1995–2010) ...

  16. Process and Learning Outcomes from Remotely-Operated, Simulated, and Hands-on Student Laboratories

    ERIC Educational Resources Information Center

    Corter, James E.; Esche, Sven K.; Chassapis, Constantin; Ma, Jing; Nickerson, Jeffrey V.

    2011-01-01

    A large-scale, multi-year, randomized study compared learning activities and outcomes for hands-on, remotely-operated, and simulation-based educational laboratories in an undergraduate engineering course. Students (N = 458) worked in small-group lab teams to perform two experiments involving stress on a cantilever beam. Each team conducted the…

  17. Dynamic Evaluation of Two Decades of WRF-CMAQ Ozone Simulations over the Contiguous United States (2017 CMAS)

    EPA Science Inventory

    Dynamic evaluation of two decades of ozone simulations performed with the fully coupled Weather Research and Forecasting (WRF)–Community Multi-scale Air Quality (CMAQ) model over the contiguous United States is conducted to assess how well the changes in observed ozone air quality are simulated by the model. The changes induced by variations in meteorology and...

  18. Dynamic Evaluation of Two Decades of WRF-CMAQ Ozone Simulations over the Contiguous United States (2017 MAC-MAQ Conference Presentation)

    EPA Science Inventory

    Dynamic evaluation of two decades of ozone simulations performed with the fully coupled Weather Research and Forecasting (WRF)–Community Multi-scale Air Quality (CMAQ) model over the contiguous United States is conducted to assess how well the changes in observed ozone air ...

  19. Continental-scale temperature covariance in proxy reconstructions and climate models

    NASA Astrophysics Data System (ADS)

    Hartl-Meier, Claudia; Büntgen, Ulf; Smerdon, Jason; Zorita, Eduardo; Krusic, Paul; Ljungqvist, Fredrik; Schneider, Lea; Esper, Jan

    2017-04-01

    Inter-continental temperature variability over the past millennium has been reported to be more coherent in climate model simulations than in multi-proxy-based reconstructions, a finding that undermines the representation of spatial variability in either of these approaches. We assess the covariance of summer temperatures among Northern Hemisphere continents by comparing tree-ring based temperature reconstructions with state-of-the-art climate model simulations over the past millennium. We find inter-continental temperature covariance to be larger in tree-ring-only reconstructions compared to those derived from multi-proxy networks, thus enhancing the agreement between proxy- and model-based spatial representations. A detailed comparison of simulated temperatures, however, reveals substantial spread among the models. Over the past millennium, inter-continental temperature correlations are driven by the cooling after major volcanic eruptions in 1257, 1452, 1601, and 1815. The coherence of these synchronizing events appears to be elevated in several climate simulations relative to their own covariance baselines and the proxy reconstructions, suggesting these models overestimate the amplitude of cooling in response to volcanic forcing at large spatial scales.

  20. Simulation of Degraded Properties of 2D plain Woven C/SiC Composites under Preloading Oxidation Atmosphere

    NASA Astrophysics Data System (ADS)

    Chen, Xihui; Sun, Zhigang; Sun, Jianfen; Song, Yingdong

    2017-12-01

    In this paper, a numerical model which incorporates an oxidation damage model and a finite element model of 2D plain woven composites is presented for simulating the oxidation behavior of 2D plain woven C/SiC composites under a preloading oxidation atmosphere. The equal proportional reduction method is first proposed to calculate the residual moduli and strength of unidirectional C/SiC composites. A multi-scale method is then developed to simulate the residual elastic moduli and strength of 2D plain woven C/SiC composites, and is able to predict them accurately. The simulated residual elastic moduli and strength of 2D plain woven C/SiC composites under a preloading oxidation atmosphere show good agreement with experimental results. Furthermore, the preload, oxidation time, temperature and fiber volume fraction of the composite are investigated to show their influence on the residual elastic modulus and strength of 2D plain woven C/SiC composites.

  1. The pivotal role of angiogenesis in a multi-scale modeling of tumor growth exhibiting the avascular and vascular phases.

    PubMed

    Salavati, Hooman; Soltani, M; Amanpour, Saeid

    2018-05-06

    The mechanisms involved in tumor growth mainly operate in the microenvironment, where interactions across the intracellular, intercellular and extracellular scales mediate tumor dynamics. In this work, we present a multi-scale model of solid tumor dynamics to simulate avascular and vascular growth as well as tumor-induced angiogenesis. The extracellular and intercellular scales are modeled using partial differential equations and the cellular Potts model, respectively, while a few biochemical and biophysical rules control the dynamics at the intracellular level. In addition, the growth of melanoma tumors is modeled in an in-vivo animal study to evaluate the simulation. The simulation shows that the model successfully reproduces a complete picture of the processes involved in tumor growth, including avascular and vascular growth and angiogenesis. The model incorporates the phenotypes of cancerous cells, including proliferating, quiescent and necrotic cells, as well as endothelial cells during angiogenesis. The results clearly demonstrate the pivotal effect of angiogenesis on the progression of cancerous cells. The model also exhibits important events in tumor-induced angiogenesis, such as anastomosis. Moreover, the computational trend of tumor growth closely follows the observations of the experimental study. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. A Bandwidth-Optimized Multi-Core Architecture for Irregular Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    This paper presents an architecture template for next-generation high performance computing systems specifically targeted to irregular applications. We start our work by considering that future generation interconnection and memory bandwidth full-system numbers are expected to grow by a factor of 10. In order to keep up with such a communication capacity, while still resorting to fine-grained multithreading as the main way to tolerate unpredictable memory access latencies of irregular applications, we show how overall performance scaling can benefit from the multi-core paradigm. At the same time, we also show how such an architecture template must be coupled with specific techniques in order to optimize bandwidth utilization and achieve the maximum scalability. We propose a technique based on memory references aggregation, together with the related hardware implementation, as one of such optimization techniques. We explore the proposed architecture template by focusing on the Cray XMT architecture and, using a dedicated simulation infrastructure, validate the performance of our template with two typical irregular applications. Our experimental results prove the benefits provided by both the multi-core approach and the bandwidth optimization reference aggregation technique.

  3. PKI security in large-scale healthcare networks.

    PubMed

    Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos

    2012-06-01

    During the past few years, many PKI (Public Key Infrastructure) architectures have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these architectures face a plethora of challenges, especially when deployed over large-scale healthcare networks. In this paper, we propose a PKI to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI addresses the trust issues that arise in a large-scale healthcare network spanning multiple PKI domains.

  4. A Harder Rain is Going to Fall: Challenges for Actionable Projections of Extremes

    NASA Astrophysics Data System (ADS)

    Collins, W.

    2014-12-01

    Hydrometeorological extremes are projected to increase in both severity and frequency as the Earth's surface continues to warm in response to anthropogenic emissions of greenhouse gases. These extremes will directly affect the availability and reliability of water and other critical resources. The most comprehensive suite of multi-model projections has been assembled under the Coupled Model Intercomparison Project version 5 (CMIP5) and assessed in the Fifth Assessment (AR5) of the Intergovernmental Panel on Climate Change (IPCC). In order for these projections to be actionable, the projections should exhibit consistency and fidelity down to the local length and timescales required for operational resource planning, for example the scales relevant for water allocations from a major watershed. In this presentation, we summarize the length and timescales relevant for resource planning and then use downscaled versions of the IPCC simulations over the contiguous United States to address three questions. First, over what range of scales is there quantitative agreement between the simulated historical extremes and in situ measurements? Second, does this range of scales in the historical and future simulations overlap with the scales relevant for resource management and adaptation? Third, does downscaling enhance the degree of multi-model consistency at scales smaller than the typical global model resolution? We conclude by using these results to highlight requirements for further model development to make the next generation of models more useful for planning purposes.

  5. A quantum wave based compact modeling approach for the current in ultra-short DG MOSFETs suitable for rapid multi-scale simulations

    NASA Astrophysics Data System (ADS)

    Hosenfeld, Fabian; Horst, Fabian; Iñíguez, Benjamín; Lime, François; Kloes, Alexander

    2017-11-01

    Source-to-drain (SD) tunneling degrades device performance in MOSFETs with channel lengths below 10 nm. Modeling quantum mechanical effects, including SD tunneling, has therefore gained importance, especially for compact-model developers. The non-equilibrium Green's function (NEGF) method has become the state of the art for nano-scale device simulation in recent years. In a multi-scale simulation approach it is necessary to bridge the gap between compact models, with their fast and efficient calculation of the device current, and numerical device models, which capture the quantum effects of nano-scale devices. In this work, an NEGF-based analytical model for nano-scale double-gate (DG) MOSFETs is introduced. The model consists of a closed-form potential solution from a classical compact model and a 1D NEGF formalism for calculating the device current, taking quantum mechanical effects into account. The potential calculation omits the iterative coupling and allows a straightforward current calculation. The model is based on a ballistic NEGF approach, with backscattering treated as a second-order effect in closed form. The accuracy and scalability of the non-iterative DG MOSFET model are assessed against numerical NanoMOS TCAD data for various channel lengths. With the help of this model, short-channel and temperature effects are investigated.

  6. Multi-scale simulations of black hole accretion in barred galaxies. Self-gravitating disk models

    NASA Astrophysics Data System (ADS)

    Jung, M.; Illenseer, T. F.; Duschl, W. J.

    2018-06-01

    Due to the non-axisymmetric potential of the central bar, in addition to their characteristic arms and bar, barred spiral galaxies form a variety of structures within the thin gas disk, such as nuclear rings, inner spirals, and dust lanes. These structures in the inner kiloparsec are extremely important in order to explain and understand the rate of black hole feeding. The aim of this work is to investigate the influence of stellar bars in spiral galaxies on the thin self-gravitating gas disk. We focus on the accretion of gas onto the central supermassive black hole and its time-dependent evolution. We conducted multi-scale simulations simultaneously resolving the galactic disk and the accretion disk around the central black hole. In all the simulations we varied the initial gas disk mass. As an additional parameter we chose either the gas temperature for isothermal simulations or the cooling timescale for non-isothermal simulations. Accretion was either driven by a gravitationally unstable or clumpy accretion disk or by energy dissipation in strong shocks. Most of the simulations show a strong dependence of the accretion rate at the outer boundary of the central accretion disk (r < 300 pc) on the gas flow at kiloparsec scales. The final black hole masses reach up to 109 M⊙ after 1.6 Gyr. Our models show the expected influence of the Eddington limit and a decline in growth rate at the corresponding sub-Eddington limit.

  7. Advances and issues from the simulation of planetary magnetospheres with recent supercomputer systems

    NASA Astrophysics Data System (ADS)

    Fukazawa, K.; Walker, R. J.; Kimura, T.; Tsuchiya, F.; Murakami, G.; Kita, H.; Tao, C.; Murata, K. T.

    2016-12-01

    Planetary magnetospheres are very large, while phenomena within them occur on meso- and micro-scales, ranging from tens of planetary radii down to kilometers. To understand the dynamics of these multi-scale systems, numerical simulations have been performed on supercomputer systems. We have long studied the magnetospheres of Earth, Jupiter and Saturn using 3-dimensional magnetohydrodynamic (MHD) simulations; however, we have not yet resolved phenomena near the limits of the MHD approximation, in particular meso-scale phenomena that can still be addressed with MHD. Recently we performed an MHD simulation of Earth's magnetosphere on the K computer, the first 10-PFlops supercomputer, and obtained multi-scale flow vorticity for both northward and southward IMF. Furthermore, we have access to supercomputer systems with Xeon, SPARC64, and vector-type CPUs and can compare simulation results across these systems. Finally, we have compared the results of our parameter survey of the magnetosphere with observations from the HISAKI spacecraft. We have encountered a number of difficulties in using the latest supercomputer systems effectively. First, the size of simulation output has increased greatly; a simulation group now produces over 1 PB of output, whose storage and analysis are difficult. The traditional way to analyze simulation results is to move them to the investigator's home computer, which takes over three months on an end-to-end 10 Gbps network; in practice, obstacles at some nodes, such as firewalls, can increase the transfer time to over one year. Another issue is post-processing: it is hard to handle even a few TB of simulation output due to the memory limitations of a post-processing computer. To overcome these issues, we have developed and introduced parallel network storage, a highly efficient network protocol, and CUI-based visualization tools. In this study, we will show the latest simulation results using the petascale supercomputer and problems arising from the use of these supercomputer systems.
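    The quoted transfer time can be sanity-checked with simple arithmetic; the 10% effective-throughput figure below is an assumption chosen to illustrate how a nominal 10 Gbps link stretches a 1 PB transfer into the multi-month range:

```python
# Back-of-envelope check of moving ~1 PB of simulation output over a
# nominally 10 Gbps end-to-end network path.
PB_BITS = 1e15 * 8            # one petabyte in bits
LINK_BPS = 10e9               # 10 Gbps link rate

ideal_days = PB_BITS / LINK_BPS / 86400      # ~9.3 days at 100% utilization
# Sustained WAN throughput is often an order of magnitude below the nominal
# rate (protocol overheads, shared links, firewalls); 10% is an assumed
# figure that lands in the months-scale range reported in the text.
effective_days = ideal_days / 0.10           # ~93 days at 10% efficiency
```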

  8. Cooperation of return-to-work professionals: the challenges of multi-actor work disability management.

    PubMed

    Liukko, Jyri; Kuuva, Niina

    2017-07-01

    This article explores which concrete factors hinder or facilitate the cooperation of return-to-work (RTW) professionals in a complex system of multiple stakeholders. The empirical material consists of in-depth interviews with 24 RTW professionals from various organizations involved in work disability management in Finland. The interviews were analyzed using thematic content analysis. The study revealed several kinds of challenges in the cooperation of the professionals. These were related to two partly interrelated themes: communication and distribution of responsibility. The most difficult problems were connected to the cooperation between public employment offices and other stakeholders. However, the study distinguished notable regional differences depending primarily on the scale of the local network. The main areas of improvement proposed by the interviewees were related to better networking of case managers and expansion of expertise. The article argues for the importance of systematic networking and stresses the role of public employment services in the multi-actor management of work disabilities. The article contributes to existing work disability case management models by suggesting the employment administration system as an important component in addition to health care, workplace and insurance systems. The study also highlights the need for expansion of expertise in the field. Implications for Rehabilitation Cooperation between RTW professionals in public employment offices and other organizations involved in work disability management was considered inadequate. In order to improve the cooperation of RTW professionals, the stakeholders need to create more systematic ways of communication and networking with professionals in other organizations. There is a need to expand the expertise in work disability management and rehabilitation, partly by increasing the role of other professionals than physicians.

  9. A real-time multi-scale 2D Gaussian filter based on FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Haibo; Gai, Xingqin; Chang, Zheng; Hui, Bin

    2014-11-01

    Multi-scale 2-D Gaussian filters are widely used in feature extraction (e.g. SIFT, edge detection), image segmentation, image enhancement, noise removal, multi-scale shape description, etc. However, their computational complexity remains an issue for real-time image processing systems. To address this problem, we propose an FPGA-based framework for multi-scale 2-D Gaussian filtering. First, a full-hardware architecture based on a parallel pipeline was designed to achieve a high throughput rate. Second, to save multipliers, the 2-D convolution is separated into two 1-D convolutions. Third, a dedicated first-in-first-out memory, named CAFIFO (Column Addressing FIFO), was designed to avoid error propagation induced by glitches on the clock signal. Finally, a shared-memory framework was designed to reduce memory costs. As a demonstration, we realized a three-scale 2-D Gaussian filter on a single Altera Cyclone III FPGA chip. Experimental results show that the proposed framework can compute multi-scale 2-D Gaussian filtering within one pixel clock period and is thus suitable for real-time image processing. Moreover, the main principle can be generalized to other convolution-based operators, such as the Gabor filter and the Sobel operator.
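    The multiplier-saving separation of the 2-D convolution into two 1-D passes can be sketched in software (a NumPy reference model, not the FPGA implementation):

```python
import numpy as np

# A (2r+1)x(2r+1) Gaussian convolution equals two 1-D passes (rows, then
# columns), cutting multiplies per pixel from (2r+1)^2 to 2(2r+1).
def gaussian_kernel_1d(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_filter_separable(img, sigma, radius):
    k = gaussian_kernel_1d(sigma, radius)
    rows = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")

def gaussian_filter_direct(img, sigma, radius):
    # Reference: direct 2-D windowed sum with the full outer-product kernel.
    k = gaussian_kernel_1d(sigma, radius)
    K = np.outer(k, k)
    padded = np.pad(img, radius)            # zero padding, as 'same' does
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 2*radius + 1,
                                      j:j + 2*radius + 1] * K)
    return out
```

    Both paths produce identical results because the Gaussian kernel is the outer product of two symmetric 1-D kernels; the hardware design exploits the same identity to share multipliers across scales.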

  10. Assessing medical students' performance in core competencies using multiple admission programs for colleges and universities: from the perspective of multi-source feedback.

    PubMed

    Fang, Ji-Tseng; Ko, Yu-Shien; Chien, Chu-Chun; Yu, Kuang-Hui

    2013-01-01

    Since 1994, Taiwanese medical universities have employed a multiple-application method comprising "recommendations and screening" and "admission application." The purpose of this study is to examine whether medical students admitted through different admission programs perform differently. To evaluate the six core competencies for medical students proposed by the Accreditation Council for Graduate Medical Education (ACGME), this study employed various assessment tools, including student opinion feedback, multi-source feedback (MSF), course grades, and examination results. MSF comprises self-assessment, peer assessment, nursing staff assessment, visiting staff assessment, and chief resident assessment scales. In the subscales, Cronbach's alpha values were higher than 0.90, indicating good reliability. Research participants consisted of 182 students from the School of Medicine at Chang Gung University. Regarding students' average grade for the medical ethics course, the performance of students enrolled through school recommendations exceeded that of students enrolled through the National College University Entrance Examination (NCUEE) (p = 0.011), and all considered "teamwork" the most important competency. Students from different entry pipelines showed no significant difference on the "communication," "work attitude," "medical knowledge," and "teamwork" assessment scales. The improvement rate of the students enrolled through school recommendations was better than that of the students enrolled through the NCUEE in the "professional skills," "medical core competencies," "communication," and "teamwork" items of the self-assessment and peer assessment scales, whereas the students enrolled through the NCUEE scored better on those items of the visiting staff and chief resident assessment scales. Collectively, the performance of the students enrolled through recommendations was slightly better than that of the students enrolled through the NCUEE, although statistical significance was found only in certain grades.

  11. Geant4 Computing Performance Benchmarking and Monitoring

    DOE PAGES

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; ...

    2015-12-23

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  12. Modeling of ultrasonic wave propagation in composite laminates with realistic discontinuity representation.

    PubMed

    Zelenyak, Andreea-Manuela; Schorer, Nora; Sause, Markus G R

    2018-02-01

    This paper presents a method for embedding realistic defect geometries of a fiber reinforced material in a finite element modeling environment in order to simulate active ultrasonic inspection. When ultrasonic inspection is used experimentally to investigate the presence of defects in composite materials, the microscopic defect geometry may cause signal characteristics that are difficult to interpret. Hence, modeling of this interaction is key to improving our understanding and interpretation of the acquired ultrasonic signals. To model the true interaction of the ultrasonic wave field with such defect structures as pores, cracks or delaminations, a realistic three-dimensional geometry reconstruction is required. We present a 3D-image based reconstruction process which converts computed tomography data into adequate surface representations ready to be embedded for processing with finite element methods. Subsequent modeling using these geometries uses a multi-scale and multi-physics simulation approach which results in quantitative A-scan ultrasonic signals that can be directly compared with experimental signals. Besides the properties of the composite material, a full transducer implementation, piezoelectric conversion and simultaneous modeling of the attached circuit are therefore included. Comparison between simulated and experimental signals shows very good agreement in electrical voltage amplitude and signal arrival time, and thus validates the proposed modeling approach. Simulating ultrasound wave propagation in a medium with a realistic geometry clearly shows how the disturbance of the waves differs and finally allows more realistic modeling of A-scans. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Source Attribution of Near-surface Ozone in the Western US: Improved Estimates by TF HTAP2 Multi-model Experiment and Multi-scale Chemical Data Assimilation

    NASA Astrophysics Data System (ADS)

    Huang, M.; Bowman, K. W.; Carmichael, G. R.; Lee, M.; Park, R.; Henze, D. K.; Chai, T.; Flemming, J.; Lin, M.; Weinheimer, A. J.; Wisthaler, A.; Jaffe, D. A.

    2014-12-01

    Near-surface ozone in the western US can be sensitive to background pollutants transported from the free troposphere over the eastern Pacific, as well as to various local emission sources. Accurately estimating ozone source contributions in this region has strong policy relevance as air quality standards become more stringent. Here we improve modeled contributions from local and non-local sources to western US ozone based on the HTAP2 (Task Force on Hemispheric Transport of Air Pollution) multi-model experiment, along with multi-scale chemical data assimilation. We simulate western US air quality using the STEM regional model on a 12 km horizontal-resolution grid during the NASA ARCTAS field campaign period in June 2008. The STEM simulations use time-varying boundary conditions downscaled from global GEOS-Chem model simulations. The standard GEOS-Chem simulation overall underpredicted ozone at 1-5 km over the eastern Pacific, resulting in underestimated contributions from transported background pollutants to surface ozone inland. These negative biases can be reduced by using output from several global models participating in the HTAP2 experiment, all of which ran with the HTAP2 harmonized emission inventory and also calculated the contributions from East Asian anthropogenic emissions. We demonstrate that the biases in the GEOS-Chem boundary conditions can be reduced more efficiently by assimilating satellite ozone profiles from the Tropospheric Emission Spectrometer (TES) instrument using the three-dimensional variational (3D-Var) approach. Based on these TES-constrained GEOS-Chem boundary conditions, we then update regional nitrogen dioxide and isoprene emissions in STEM through four-dimensional variational (4D-Var) assimilation of Ozone Monitoring Instrument (OMI) nitrogen dioxide columns and NASA DC-8 aircraft isoprene measurements. The 4D-Var assimilation spatially redistributes the emissions of nitrogen oxides and isoprene from various US sources and simultaneously updates the modeled ozone and its US source contributions. Compared with the available independent measurements during this period (e.g., ozone observed on the DC-8 aircraft and at EPA and Mt. Bachelor monitoring stations), the modeled ozone fields after the multi-scale assimilation show overall improvement.

  14. Microstructure Design of Tempered Martensite by Atomistically Informed Full-Field Simulation: From Quenching to Fracture

    PubMed Central

    Borukhovich, Efim; Du, Guanxing; Stratmann, Matthias; Boeff, Martin; Shchyglo, Oleg; Hartmaier, Alexander; Steinbach, Ingo

    2016-01-01

Martensitic steels form a material class with a versatile range of properties that can be selected by varying the processing chain. In order to study and design the desired processing with minimal experimental effort, modeling tools are required. In this work, a full processing cycle from quenching through tempering to mechanical testing is simulated with a single modeling framework that combines the features of the phase-field method and a coupled chemo-mechanical approach. In order to perform the mechanical testing, the mechanical part is extended to the large-deformation case and coupled to crystal plasticity and a linear damage model. The quenching process is governed by the austenite-martensite transformation. In the tempering step, carbon segregation to the grain boundaries and the resulting cementite formation occur. During mechanical testing, the obtained material sample undergoes a large deformation that leads to local failure. The initial formation of the damage zones is observed to happen next to the carbides, while the final damage morphology follows the martensite microstructure. This multi-scale approach can be applied to design optimal microstructures depending on processing and material composition. PMID:28773791

  15. A multi-scaled approach for simulating chemical reaction systems.

    PubMed

    Burrage, Kevin; Tian, Tianhai; Burrage, Pamela

    2004-01-01

In this paper we give an overview of some very recent work on the stochastic simulation of multi-scaled systems involving chemical reactions, and present a new approach. In many biological systems (such as genetic regulation and cellular dynamics) there is a mix between small numbers of key regulatory proteins, and medium and large numbers of molecules. In addition, it is important to be able to follow the trajectories of individual molecules by taking proper account of the randomness inherent in such a system. We describe different types of simulation techniques (including the stochastic simulation algorithm, Poisson Runge-Kutta methods and the balanced Euler method) for treating simulations in the three different reaction regimes: slow, medium and fast. We then review some recent techniques on the treatment of coupled slow and fast reactions for stochastic chemical kinetics and present a new approach which couples the three regimes mentioned above. We then apply this approach to a biologically inspired problem involving the expression and activity of LacZ and LacY proteins in E. coli, and conclude with a discussion on the significance of this work. Copyright 2004 Elsevier Ltd.
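The exact stochastic simulation algorithm referred to above (Gillespie's direct method) can be sketched in a few lines. This is a minimal, generic version; the species name and the toy birth-death rates are illustrative, not taken from the paper:

```python
import random

def ssa(x, reactions, propensity, t_end, seed=1):
    """Gillespie direct-method SSA sketch.

    x          -- dict of species counts (mutated in place)
    reactions  -- list of stoichiometry-change dicts, one per reaction
    propensity -- function (x, j) -> rate of reaction j
    """
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        rates = [propensity(x, j) for j in range(len(reactions))]
        total = sum(rates)
        if total == 0.0:
            break                              # no reaction can fire
        t += rng.expovariate(total)            # exponential waiting time
        r, acc = rng.random() * total, 0.0     # pick reaction j with prob rates[j]/total
        for j, a in enumerate(rates):
            acc += a
            if r < acc:
                for species, dv in reactions[j].items():
                    x[species] += dv
                break
    return t, x

# Toy birth-death process: 0 -> A at rate 1.0, A -> 0 at rate 0.1 per molecule
reactions = [{"A": +1}, {"A": -1}]
prop = lambda x, j: 1.0 if j == 0 else 0.1 * x["A"]
t, state = ssa({"A": 0}, reactions, prop, t_end=100.0)
```

Exact trajectories like this are what become prohibitively slow for large molecule numbers, motivating the Poisson Runge-Kutta and hybrid regime-coupling methods the abstract describes.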

  16. NUMERICAL SIMULATION OF NANOINDENTATION AND PATCH CLAMP EXPERIMENTS ON MECHANOSENSITIVE CHANNELS OF LARGE CONDUCTANCE IN ESCHERICHIA COLI

    PubMed Central

    Tang, Yuye; Chen, Xi; Yoo, Jejoong; Yethiraj, Arun; Cui, Qiang

    2010-01-01

A hierarchical simulation framework that integrates information from all-atom simulations into a finite element model at the continuum level is established to study the mechanical response of a mechanosensitive channel of large conductance (MscL) in the bacterium Escherichia coli (E. coli) embedded in a vesicle formed by the dipalmitoylphosphatidycholine (DPPC) lipid bilayer. Sufficient structural details of the protein are built into the continuum model, with key parameters and material properties derived from molecular mechanics simulations. The multi-scale framework is used to analyze the gating of MscL when the lipid vesicle is subjected to nanoindentation and patch clamp experiments, and the detailed structural transitions of the protein are obtained explicitly as a function of external load; it is currently impossible to derive such information based solely on all-atom simulations. The gating pathways of E. coli MscL qualitatively agree with results from previous patch clamp experiments. The gating mechanisms under complex indentation-induced deformation are also predicted. This versatile hierarchical multi-scale framework may be further extended to study the mechanical behaviors of cells and biomolecules, as well as to guide and stimulate biomechanics experiments. PMID:21874098

  17. Studying the secondary coolant circuit rupture protection algorithm for the Novovoronezh NPP Unit 5 on a full-scale training simulator

    NASA Astrophysics Data System (ADS)

    Kharchenko, K. S.; Vitkovskii, I. L.

    2014-02-01

Performance of the secondary coolant circuit rupture algorithm in different operating modes of the Novovoronezh NPP Unit 5 is considered by carrying out studies on a full-scale training simulator. Shortcomings of the algorithm that cause excessive protection actuations are identified, and recommendations for eliminating them are outlined.

  18. The Late Integrated Sachs-Wolfe Effect and its detectability in galaxy-redshift surveys

    NASA Astrophysics Data System (ADS)

    Valencia-Díaz, D. R.; Muñoz-Cuartas, J. C.

    2017-07-01

The late Integrated Sachs-Wolfe (ISW) effect is experienced by Cosmic Microwave Background (CMB) photons due to the presence of Large-Scale Structure (LSS) in an expanding Universe, and can be measured through the temperature fluctuations of the CMB. In this work we use numerical simulations of structure formation to study the detectability of the ISW effect. Our method comprises the estimation of the density field through a Cloud-In-Cell mass assignment scheme. With the help of Fourier transforms we estimate the time derivative of the gravitational potential field in Fourier and in coordinate space. Finally, this field is integrated numerically to obtain the ISW contribution. We study the time derivative of the potential in two approaches. First, an exact solution that makes use of the full velocity field. Second, a linear approximation related to the linear theory for the formation of LSS. We apply the method to three cosmological simulations. First, a box of 400 h-1 Mpc; second, the MultiDark1 simulation; third, the MultiDark-Planck simulation. For all cases we obtain results consistent with those expected in the literature for a ΛCDM cosmology: with the exact solution the temperature fluctuation is near ±30 μK; the linear approximation shows a signal in the expected range of ±20 μK. This positive detection on simulations is important for establishing what to expect when working with observational data, and has implications given the lack of consensus about the detection of the ISW effect in previous works. Acknowledgements: This work was supported by Colciencias and Universidad de Antioquia, Convenio Beca-Pasantía Joven Investigador Convocatoria 645 de 2014.
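The Cloud-In-Cell mass assignment step used above can be illustrated with a minimal 1D sketch (the paper works on 3D simulation boxes; the function name and grid parameters here are hypothetical). Each particle's mass is shared between its two nearest cells with linear weights, so total mass is conserved:

```python
import numpy as np

def cic_density(positions, n_cells, box_size):
    """1D Cloud-In-Cell mass assignment with periodic boundaries."""
    rho = np.zeros(n_cells)
    dx = box_size / n_cells
    s = positions / dx - 0.5             # position in cell-centred coordinates
    i = np.floor(s).astype(int)          # left neighbouring cell
    w = s - i                            # linear weight given to the right cell
    np.add.at(rho, i % n_cells, 1.0 - w)         # periodic wrap-around
    np.add.at(rho, (i + 1) % n_cells, w)
    return rho / dx                      # number density per unit length

pos = np.array([0.5, 1.25, 3.9])
rho = cic_density(pos, n_cells=4, box_size=4.0)
# rho.sum() * dx recovers the 3 deposited particles
```

The resulting gridded density is what gets Fourier-transformed to build the potential and its time derivative.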

  19. Scaling NS-3 DCE Experiments on Multi-Core Servers

    DTIC Science & Technology

    2016-06-15

    that work well together. 3.2 Simulation Server Details We ran the simulations on a Dell® PowerEdge M520 blade server[8] running Ubuntu Linux 14.04...To minimize the amount of time needed to complete all of the simulations, we planned to run multiple simulations at the same time on a blade server...MacBook was running the simulation inside a virtual machine (Ubuntu 14.04), while the blade server was running the same operating system directly on

  20. [What is the attitude of doctors to the current model of primary care?].

    PubMed

    Llor Esteban, B; Saturno Hernández, P J; Gascón Canovas, J J; Sáez Navarro, C; Sánchez Ortuño, M

    2001-11-30

To determine the attitude of doctors towards the current model of primary care and to assess its relationship with sociodemographic and/or work variables. Multi-centre cross-sectional study. Health centres in Area II of the Murcia region. Participants. All general practitioners, family doctors and paediatricians in the health centres mentioned (54 in all). The "Scale of attitudes towards the contents of primary health care" by Ballesteros et al. was used as the tool of evaluation. This scale provides both a total score and a specific score for each of its 7 dimensions. In general, doctors' attitudes were favourable (4.1 points average out of 5). We found a less favourable attitude in the dimension "Inclusion of second-level professionals in primary care", with family doctors most in agreement. The professionals working in centres on the periphery and those without tenure had a more positive attitude towards the current model; no association was found for the remaining variables. Understanding professionals' attitudes and the variables related to them may serve as a basis for designing intervention strategies aimed at improving the quality of primary care and for the positive evolution of professionals working in PC.

  1. Staff regard towards working with substance users: a European multi-centre study.

    PubMed

    Gilchrist, Gail; Moskalewicz, Jacek; Slezakova, Silvia; Okruhlica, Lubomir; Torrens, Marta; Vajd, Rajko; Baldacchino, Alex

    2011-06-01

    To compare regard for working with different patient groups (including substance users) among different professional groups in different health-care settings in eight European countries. A multi-centre, cross-sectional comparative study. Primary care, general psychiatry and specialist addiction services in Bulgaria, Greece, Italy, Poland, Scotland, Slovakia, Slovenia and Spain. A multi-disciplinary convenience sample of 866 professionals (physicians, psychiatrists, psychologists, nurses and social workers) from 253 services. The Medical Condition Regard Scale measured regard for working with different patient groups. Multi-factor between-subjects analysis of variance determined the factors associated with regard for each condition by country and all countries. Regard for working with alcohol (mean score alcohol: 45.35, 95% CI 44.76, 45.95) and drug users (mean score drugs: 43.67, 95% CI 42.98, 44.36) was consistently lower than for other patient groups (mean score diabetes: 50.19, 95% CI 49.71, 50.66; mean score depression: 51.34, 95% CI 50.89, 51.79) across all countries participating in the study, particularly among staff from primary care compared to general psychiatry or specialist addiction services (P<0.001). After controlling for sex of staff, profession and duration of time working in profession, treatment entry point and country remained the only statistically significant variables associated with regard for working with alcohol and drug users. Health professionals appear to ascribe lower status to working with substance users than helping other patient groups, particularly in primary care; the effect is larger in some countries than others. © 2011 The Authors, Addiction © 2011 Society for the Study of Addiction.

  2. Nationwide Buildings Energy Research enabled through an integrated Data Intensive Scientific Workflow and Advanced Analysis Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleese van Dam, Kerstin; Lansing, Carina S.; Elsethagen, Todd O.

    2014-01-28

Modern workflow systems enable scientists to run ensemble simulations at unprecedented scales and levels of complexity, allowing them to study system sizes previously impossible to achieve, due to the inherent resource requirements needed for the modeling work. However as a result of these new capabilities the science teams suddenly also face unprecedented data volumes that they are unable to analyze with their existing tools and methodologies in a timely fashion. In this paper we will describe the ongoing development work to create an integrated data intensive scientific workflow and analysis environment that offers researchers the ability to easily create and execute complex simulation studies and provides them with different scalable methods to analyze the resulting data volumes. The integration of simulation and analysis environments is hereby not only a question of ease of use, but supports fundamental functions in the correlated analysis of simulation input, execution details and derived results for multi-variant, complex studies. To this end the team extended and integrated the existing capabilities of the Velo data management and analysis infrastructure, the MeDICi data intensive workflow system and RHIPE the R for Hadoop version of the well-known statistics package, as well as developing a new visual analytics interface for the result exploitation by multi-domain users. The capabilities of the new environment are demonstrated on a use case that focuses on the Pacific Northwest National Laboratory (PNNL) building energy team, showing how they were able to take their previously local scale simulations to a nationwide level by utilizing data intensive computing techniques not only for their modeling work, but also for the subsequent analysis of their modeling results. 
As part of the PNNL research initiative PRIMA (Platform for Regional Integrated Modeling and Analysis), the team performed an initial 3-year study of building energy demands for the US Eastern Interconnect domain, which they are now planning to extend to predict demand for the complete century. The initial work raised their data volumes from a few GB to 400 GB for the 3-year study, with tens of TB expected for the full century.

  3. Using the ePortfolio to Complement Standardized Testing in a Healthcare Professional Program: Better Education or More Busy Work?

    ERIC Educational Resources Information Center

    Chan, Clarence

    2012-01-01

    This article evaluates the full-scale integration of the ePortfolio into a healthcare professional program in an open admissions community college in the United States. The Physical Therapist Assistant program in question struggles to balance the dynamic tension between preparing students for a summative multiple-choice licensing examination and…

  4. A numerical multi-scale model to predict macroscopic material anisotropy of multi-phase steels from crystal plasticity material definitions

    NASA Astrophysics Data System (ADS)

    Ravi, Sathish Kumar; Gawad, Jerzy; Seefeldt, Marc; Van Bael, Albert; Roose, Dirk

    2017-10-01

A numerical multi-scale model is being developed to predict the anisotropic macroscopic material response of multi-phase steel. The embedded microstructure is given by a meso-scale Representative Volume Element (RVE), which holds the most relevant features, such as phase distribution, grain orientation and morphology, in sufficient detail to describe the multi-phase behavior of the material. A Finite Element (FE) mesh of the RVE is constructed using statistical information from individual phases such as grain size distribution and ODF. The material response of the RVE is obtained for selected loading/deformation modes through numerical FE simulations in Abaqus. For the elasto-plastic response of the individual grains, single crystal plasticity based plastic potential functions are proposed as Abaqus material definitions. The plastic potential functions are derived using the Facet method for individual phases in the microstructure at the level of single grains. The proposed method is a new modeling framework and the results presented in terms of macroscopic flow curves are based on the building blocks of the approach, while the model would eventually facilitate the construction of an anisotropic yield locus of the underlying multi-phase microstructure derived from a crystal plasticity based framework.

  5. Runtime Performance and Virtual Network Control Alternatives in VM-Based High-Fidelity Network Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J

    2012-01-01

In prior work (Yoginath and Perumalla, 2011; Yoginath, Perumalla and Henz, 2012), the motivation, challenges and issues were articulated in favor of virtual time ordering of Virtual Machines (VMs) in network simulations hosted on multi-core machines. Two major components in the overall virtualization challenge are (1) virtual timeline establishment and scheduling of VMs, and (2) virtualization of inter-VM communication. Here, we extend prior work by presenting scaling results for the first component, with experiment results on up to 128 VMs scheduled in virtual time order on a single 12-core host. We also explore the solution space of design alternatives for the second component, and present performance results from a multi-threaded, multi-queue implementation of inter-VM network control for synchronized execution with VM scheduling, incorporated in our NetWarp simulation system.

  6. Hybrid stochastic simplifications for multiscale gene networks.

    PubMed

    Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu

    2009-09-07

    Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene networks dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3] which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach.

  7. Thin film growth by 3D multi-particle diffusion limited aggregation model: Anomalous roughening and fractal analysis

    NASA Astrophysics Data System (ADS)

    Nasehnejad, Maryam; Nabiyouni, G.; Gholipour Shahraki, Mehran

    2018-03-01

In this study a 3D multi-particle diffusion limited aggregation method is employed to simulate growth of rough surfaces with fractal behavior in the electrodeposition process. A deposition model is used in which the radial motion of the particles with probability P competes with random motion with probability 1 - P. Thin film growth is simulated for different values of the probability P (related to the electric field) and the thickness of the layer (related to the number of deposited particles). The influence of these parameters on the morphology, kinetics of roughening and fractal dimension of the simulated surfaces has been investigated. The results show that the surface roughness increases with increasing deposition time and that the scaling exponents exhibit a complex behavior known as anomalous scaling. It seems that in the electrodeposition process, radial motion of the particles toward the growing seeds may be an important mechanism leading to anomalous scaling. The results also indicate that larger values of the probability P result in a smoother topography with a more densely packed structure. We suggest a dynamic scaling ansatz for the interface width as a function of deposition time, scan length and probability. Two different methods, "cube counting" and "roughness", are employed to evaluate the fractal dimension of the simulated surfaces. The results of both methods show that increasing the probability P or decreasing the deposition time increases the fractal dimension of the simulated surfaces. All obtained fractal dimensions are close to 2.5, as in the diffusion limited aggregation model.
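The competition between directed and random motion described above can be illustrated with a crude 2D columnar toy model (this is a sketch under simplifying assumptions, not the paper's 3D multi-particle code; `p_radial` stands in for the field-driven probability P):

```python
import random

def dla_deposit(n_particles, size=60, p_radial=0.3, seed=2):
    """Columnar 2D toy deposition: a walker released above the growth front
    drifts straight down with probability p_radial (directed "radial" motion)
    and otherwise takes a random lattice step; it sticks on reaching the
    deposit in its column.  Returns the height profile."""
    rng = random.Random(seed)
    height = [0] * size                  # deposit height per column
    for _ in range(n_particles):
        x = rng.randrange(size)
        y = max(height) + 5              # release above the current front
        while True:
            if rng.random() < p_radial:
                step = (0, -1)           # drift toward the substrate
            else:
                step = rng.choice([(1, 0), (-1, 0), (0, -1), (0, 1)])
            x = (x + step[0]) % size     # periodic lateral boundaries
            y += step[1]
            if y <= height[x]:           # contact with the deposit: stick
                height[x] += 1
                break
    return height

h = dla_deposit(200)
```

Sweeping `p_radial` in such a toy reproduces the qualitative trend in the abstract: stronger directed motion yields a smoother, denser front.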

  8. Large scale cardiac modeling on the Blue Gene supercomputer.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U; Weiss, Daniel L; Seemann, Gunnar; Dössel, Olaf; Pitman, Michael C; Rice, John J

    2008-01-01

Multi-scale, multi-physical heart models have not yet been able to include a high degree of accuracy and resolution with respect to model detail and spatial resolution due to computational limitations of current systems. We propose a framework to compute large scale cardiac models. Decomposition of anatomical data in segments to be distributed on a parallel computer is carried out by optimal recursive bisection (ORB). The algorithm takes into account a computational load parameter which has to be adjusted according to the cell models used. The diffusion term is realized by the monodomain equations. The anatomical data-set was given by both ventricles of the Visible Female data-set in a 0.2 mm resolution. Heterogeneous anisotropy was included in the computation. Model weights as input for the decomposition and load balancing were set to (a) 1 for tissue and 0 for non-tissue elements; (b) 10 for tissue and 1 for non-tissue elements. Scaling results for 512, 1024, 2048, 4096 and 8192 computational nodes were obtained for 10 ms simulation time. The simulations were carried out on an IBM Blue Gene/L parallel computer. A 1 s simulation was then carried out on 2048 nodes for the optimal model load. Load balances did not differ significantly across computational nodes even if the number of data elements distributed to each node differed greatly. Since the ORB algorithm did not take into account computational load due to communication cycles, the speedup is close to optimal for the computation time but not optimal overall due to the communication overhead. However, the simulation times were reduced from 87 minutes on 512 to 11 minutes on 8192 nodes. This work demonstrates that it is possible to run simulations of the presented detailed cardiac model within hours for the simulation of a heart beat.
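The weighted recursive bisection idea can be sketched in 1D (an illustrative reduction under stated assumptions; the cardiac model bisects 3D anatomical data, and the function name here is hypothetical): split the element list where the running load crosses the proportional target, then recurse on each side.

```python
def orb_partition(weights, n_parts):
    """1D sketch of optimal recursive bisection (ORB): split a list of
    element weights into n_parts contiguous segments of near-equal load."""
    if n_parts == 1:
        return [list(weights)]
    left_parts = n_parts // 2
    target = sum(weights) * left_parts / n_parts
    acc, cut = 0.0, 0
    for i, w in enumerate(weights):
        if acc + w / 2 > target:         # cut where running load crosses target
            break
        acc += w
        cut = i + 1
    # keep enough elements on each side for the remaining recursion
    cut = max(left_parts, min(cut, len(weights) - (n_parts - left_parts)))
    return (orb_partition(weights[:cut], left_parts) +
            orb_partition(weights[cut:], n_parts - left_parts))

# weighting (b) from the abstract: 10 for tissue, 1 for non-tissue elements
loads = [10, 10, 1, 1, 10, 10, 1, 1, 10, 10, 1, 1]
parts = orb_partition(loads, 4)
```

As in the paper, balancing the load weights (rather than the element counts) is what keeps per-node compute times comparable even when nodes receive very different numbers of elements.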

  9. Assessing self-care and social function using a computer adaptive testing version of the pediatric evaluation of disability inventory.

    PubMed

    Coster, Wendy J; Haley, Stephen M; Ni, Pengsheng; Dumas, Helene M; Fragala-Pinkham, Maria A

    2008-04-01

    To examine score agreement, validity, precision, and response burden of a prototype computer adaptive testing (CAT) version of the self-care and social function scales of the Pediatric Evaluation of Disability Inventory compared with the full-length version of these scales. Computer simulation analysis of cross-sectional and longitudinal retrospective data; cross-sectional prospective study. Pediatric rehabilitation hospital, including inpatient acute rehabilitation, day school program, outpatient clinics; community-based day care, preschool, and children's homes. Children with disabilities (n=469) and 412 children with no disabilities (analytic sample); 38 children with disabilities and 35 children without disabilities (cross-validation sample). Not applicable. Summary scores from prototype CAT applications of each scale using 15-, 10-, and 5-item stopping rules; scores from the full-length self-care and social function scales; time (in seconds) to complete assessments and respondent ratings of burden. Scores from both computer simulations and field administration of the prototype CATs were highly consistent with scores from full-length administration (r range, .94-.99). Using computer simulation of retrospective data, discriminant validity, and sensitivity to change of the CATs closely approximated that of the full-length scales, especially when the 15- and 10-item stopping rules were applied. In the cross-validation study the time to administer both CATs was 4 minutes, compared with over 16 minutes to complete the full-length scales. Self-care and social function score estimates from CAT administration are highly comparable with those obtained from full-length scale administration, with small losses in validity and precision and substantial decreases in administration time.
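The adaptive administration loop behind such a CAT can be sketched under a 1-parameter (Rasch) IRT model; this is a generic illustration with a fixed item-count stopping rule (cf. the 15-, 10- and 5-item rules), not the PEDI-CAT's actual algorithm, and all names and the crude ability update are assumptions:

```python
import math
import random

def cat_session(theta_true, item_difficulties, max_items=10, seed=4):
    """Sketch of a computer adaptive test: pick the item whose difficulty is
    closest to the current ability estimate (most informative under Rasch),
    simulate a response, nudge the estimate, stop after max_items items."""
    rng = random.Random(seed)
    remaining = list(item_difficulties)
    theta, responses = 0.0, []
    for _ in range(max_items):
        b = min(remaining, key=lambda d: abs(d - theta))   # best next item
        remaining.remove(b)
        p = 1.0 / (1.0 + math.exp(-(theta_true - b)))      # Rasch response prob.
        u = 1 if rng.random() < p else 0
        responses.append((b, u))
        # crude gradient step toward the maximum-likelihood ability estimate
        theta += 0.5 * (u - 1.0 / (1.0 + math.exp(-(theta - b))))
    return theta, responses

theta_hat, resp = cat_session(1.0, [k * 0.25 for k in range(-12, 13)],
                              max_items=10)
```

The administration-time savings reported in the abstract come precisely from this mechanism: a short, targeted item sequence approximates the score of the full-length scale.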

  10. Effect of thematic map misclassification on landscape multi-metric assessment.

    PubMed

    Kleindl, William J; Powell, Scott L; Hauer, F Richard

    2015-06-01

Advancements in remote sensing and computational tools have increased our awareness of large-scale environmental problems, thereby creating a need for monitoring, assessment, and management at these scales. Over the last decade, several watershed and regional multi-metric indices have been developed to assist decision-makers with planning actions at these scales. However, these tools use remote-sensing products that are subject to land-cover misclassification, and these errors are rarely incorporated in the assessment results. Here, we examined the sensitivity of a landscape-scale multi-metric index (MMI) to error from thematic land-cover misclassification and the implications of this uncertainty for resource management decisions. Through a case study, we used a simplified floodplain MMI assessment tool, whose metrics were derived from Landsat thematic maps, to initially provide results that were naive to thematic misclassification error. Using a Monte Carlo simulation model, we then incorporated map misclassification error into our MMI, resulting in four important conclusions: (1) each metric had a different sensitivity to error; (2) within each metric, the bias between the error-naive metric scores and simulated scores that incorporate potential error varied in magnitude and direction depending on the underlying land cover at each assessment site; (3) collectively, when the metrics were combined into a multi-metric index, the effects were attenuated; and (4) the index bias indicated that our naive assessment model may overestimate floodplain condition of sites with limited human impacts and, to a lesser extent, either over- or underestimated floodplain condition of sites with mixed land use.
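The Monte Carlo error-propagation step can be sketched generically: resample each site's pixels through a misclassification (confusion) matrix and compare the error-naive metric score to the simulated distribution. This is an illustrative sketch of the general approach, not the paper's MMI; the function name, confusion probabilities, and toy metric are all assumptions:

```python
import random

def mc_metric_bias(true_fractions, confusion, metric,
                   n_sims=500, n_pixels=500, seed=3):
    """Propagate thematic-map misclassification into a metric score.

    true_fractions -- dict: class -> fraction of the site's pixels
    confusion      -- dict: true class -> {mapped class: probability}
    metric         -- function (class-fraction dict) -> score
    Returns (naive_score, mean_simulated_score)."""
    rng = random.Random(seed)
    classes = list(true_fractions)
    naive = metric(true_fractions)
    sims = []
    for _ in range(n_sims):
        counts = {c: 0 for c in classes}
        for c in classes:                         # map each pixel through the
            for _ in range(round(true_fractions[c] * n_pixels)):   # confusion
                mapped = rng.choices(classes,
                                     weights=[confusion[c][m] for m in classes])[0]
                counts[mapped] += 1
        total = sum(counts.values())
        sims.append(metric({c: counts[c] / total for c in classes}))
    return naive, sum(sims) / len(sims)

# Toy floodplain metric: score = mapped fraction of "natural" cover
conf = {"natural": {"natural": 0.9, "developed": 0.1},
        "developed": {"natural": 0.2, "developed": 0.8}}
naive, simulated = mc_metric_bias({"natural": 0.7, "developed": 0.3},
                                  conf, lambda f: f["natural"])
```

With these toy numbers the simulated mean falls below the naive score, mirroring the paper's finding that an error-naive assessment can overestimate the condition of largely natural sites.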

  11. A mixed parallel strategy for the solution of coupled multi-scale problems at finite strains

    NASA Astrophysics Data System (ADS)

    Lopes, I. A. Rodrigues; Pires, F. M. Andrade; Reis, F. J. P.

    2018-02-01

    A mixed parallel strategy for the solution of homogenization-based multi-scale constitutive problems undergoing finite strains is proposed. The approach aims to reduce the computational time and memory requirements of non-linear coupled simulations that use finite element discretization at both scales (FE^2). In the first level of the algorithm, a non-conforming domain decomposition technique, based on the FETI method combined with a mortar discretization at the interface of macroscopic subdomains, is employed. A master-slave scheme, which distributes tasks by macroscopic element and adopts dynamic scheduling, is then used for each macroscopic subdomain composing the second level of the algorithm. This strategy allows the parallelization of FE^2 simulations in computers with either shared memory or distributed memory architectures. The proposed strategy preserves the quadratic rates of asymptotic convergence that characterize the Newton-Raphson scheme. Several examples are presented to demonstrate the robustness and efficiency of the proposed parallel strategy.

  12. Application of finite elements heterogeneous multi-scale method to eddy currents non destructive testing of carbon composites material

    NASA Astrophysics Data System (ADS)

    Khebbab, Mohamed; Feliachi, Mouloud; El Hadi Latreche, Mohamed

    2018-03-01

In this paper, a simulation of eddy current non-destructive testing (EC NDT) on a unidirectional carbon fiber reinforced polymer is performed; for this, a magneto-dynamic formulation in terms of the magnetic vector potential is solved using the finite element heterogeneous multi-scale method (FE HMM). The goal of FE HMM is to compute the homogenized solution without calculating the homogenized tensor explicitly; the solution is based only on the physical characteristics known in the micro domain. This feature is well adapted to EC NDT for evaluating defects in carbon composite material at the microscopic scale, where defect detection is performed by coil impedance measurement; the measured value is intimately linked to the material characteristics at the microscopic level. Based on this, our model can handle different defects such as cracks, inclusions, internal electrical conductivity changes, heterogeneities, etc. The simulation results were compared with the solution obtained for the homogenized material using a mixture law, and good agreement was found.

  13. Multi-scale computational modeling of developmental biology.

    PubMed

    Setty, Yaki

    2012-08-01

Normal development of multicellular organisms is regulated by a highly complex process in which a set of precursor cells proliferate, differentiate and move, forming over time a functioning tissue. To handle their complexity, developmental systems can be studied over distinct scales. The dynamics of each scale is determined by the collective activity of entities at the scale below it. I describe a multi-scale computational approach for modeling developmental systems and detail the methodology through a synthetic example of a developmental system that retains key features of real developmental systems. I discuss the simulation of the system as it emerges from cross-scale and intra-scale interactions and describe how an in silico study can be carried out by modifying these interactions in a way that mimics in vivo experiments. I highlight biological features of the results through a comparison with findings in Caenorhabditis elegans germline development and finally discuss the applications of the approach to real developmental systems and propose future extensions. The source code of the model of the synthetic developmental system can be found at www.wisdom.weizmann.ac.il/~yaki/MultiScaleModel. yaki.setty@gmail.com Supplementary data are available at Bioinformatics online.

  14. Self-folding and aggregation of amyloid nanofibrils

    NASA Astrophysics Data System (ADS)

    Paparcone, Raffaella; Cranford, Steven W.; Buehler, Markus J.

    2011-04-01

Amyloids are highly organized protein filaments, rich in β-sheet secondary structures, that self-assemble to form dense plaques in brain tissues affected by severe neurodegenerative disorders (e.g. Alzheimer's Disease). Identified as natural functional materials in bacteria, in addition to their remarkable mechanical properties, amyloids have also been proposed as a platform for novel biomaterials in nanotechnology applications including nanowires, liquid crystals, scaffolds and thin films. Despite recent progress in understanding amyloid structure and behavior, the latent self-assembly mechanism and the underlying adhesion forces that drive the aggregation process remain poorly understood. On the basis of previous full atomistic simulations, here we report a simple coarse-grain model to analyze the competition between adhesive forces and elastic deformation of amyloid fibrils. We use a simple model system to investigate self-assembly mechanisms of fibrils, focused on the formation of self-folded nanorackets and nanorings, and thereby address a critical issue in linking the biochemical (Angstrom) to micrometre scales relevant for larger-scale states of functional amyloid materials. We investigate the effect of varying the interfibril adhesion energy on the structure and stability of self-folded nanorackets and nanorings and demonstrate that these aggregated amyloid fibrils are stable in such states even when the fibril-fibril interaction is relatively weak, given that the constituting amyloid fibril length exceeds a critical fibril length-scale of several hundred nanometres. We further present a simple approach to directly determine the interfibril adhesion strength from geometric measures. 
In addition to providing insight into the physics of aggregation of amyloid fibrils our model enables the analysis of large-scale amyloid plaques and presents a new method for the estimation and engineering of the adhesive forces responsible of the self-assembly process of amyloidnanostructures, filling a gap that previously existed between full atomistic simulations of primarily ultra-short fibrils and much larger micrometre-scale amyloid aggregates. Via direct simulation of large-scale amyloid aggregates consisting of hundreds of fibrils we demonstrate that the fibril length has a profound impact on their structure and mechanical properties, where the critical fibril length-scale derived from our analysis of self-folded nanorackets and nanorings defines the structure of amyloid aggregates. A multi-scale modeling approach as used here, bridging the scales from Angstroms to micrometres, opens a wide range of possible nanotechnology applications by presenting a holistic framework that balances mechanical properties of individual fibrils, hierarchical self-assembly, and the adhesive forces determining their stability to facilitate the design of de novoamyloid materials.
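The critical fibril length above admits a simple energy-balance reading: a self-folded state becomes favourable once the adhesion energy gained along the fibril-fibril contact outweighs the bending energy stored in the fold. The sketch below is an editorial toy estimate in that spirit, not the authors' model; the bending stiffness EI and adhesion energy per length gamma are illustrative values only.

```python
import math

# Toy estimate (not the authors' model): balance loop bending energy
# pi*EI/R against adhesion gain gamma per unit contact length, and
# minimise over the loop radius R. The total energy relative to a
# straight fibril, pi*EI/R - gamma*(L - 2*pi*R), is negative for some R
# exactly when L exceeds L_c = 2*pi*sqrt(2*EI/gamma).

def critical_fold_length(EI, gamma):
    """Fibril length above which a self-folded state is energetically favourable."""
    return 2.0 * math.pi * math.sqrt(2.0 * EI / gamma)

EI = 1.0e-25      # bending stiffness, N*m^2 (illustrative)
gamma = 1.0e-10   # adhesion energy per unit length, J/m (illustrative)
L_c = critical_fold_length(EI, gamma)
print(f"critical length ~ {L_c * 1e9:.0f} nm")   # a few hundred nanometres
```

With these invented inputs the estimate lands in the several-hundred-nanometre range quoted in the abstract, which is the point of the exercise rather than a quantitative claim.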

  15. Full particle-in-cell simulations of kinetic equilibria and the role of the initial current sheet on steady asymmetric magnetic reconnection

    NASA Astrophysics Data System (ADS)

    Dargent, J.; Aunai, N.; Belmont, G.; Dorville, N.; Lavraud, B.; Hesse, M.

    2016-06-01

    Tangential current sheets are ubiquitous in space plasmas and yet hard to describe with a kinetic equilibrium. In this paper, we use a semi-analytical model, the BAS model, which provides a steady ion distribution function for a tangential asymmetric current sheet, and we prove that an ion kinetic equilibrium produced by this model remains steady in a fully kinetic particle-in-cell simulation even if the electron distribution function does not satisfy the time-independent Vlasov equation. We then apply this equilibrium to look at the dependence of magnetic reconnection simulations on their initial conditions. We show that, as the current sheet evolves from a symmetric to an asymmetric upstream plasma, the reconnection rate is impacted and the X line and the electron flow stagnation point separate from one another and start to drift. For the simulated systems, we investigate the overall evolution of the reconnection process via the classical signatures discussed in the literature and searched for in the Magnetospheric MultiScale data. We show that they seem robust and do not depend on the specific details of the internal structure of the initial current sheet.

  16. A multi-scale network method for two-phase flow in porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khayrat, Karim, E-mail: khayratk@ifd.mavt.ethz.ch; Jenny, Patrick

    Pore-network models of porous media are useful in the study of pore-scale flow in porous media. In order to extract macroscopic properties from flow simulations in pore-networks, it is crucial that the networks are large enough to be considered representative elementary volumes. However, existing two-phase network flow solvers are limited to relatively small domains. For this purpose, a multi-scale pore-network (MSPN) method, which takes into account flow-rate effects and can simulate larger domains compared to existing methods, was developed. In our solution algorithm, a large pore network is partitioned into several smaller sub-networks. The algorithm to advance the fluid interfaces within each subnetwork consists of three steps. First, a global pressure problem on the network is solved approximately using the multiscale finite volume (MSFV) method. Next, the fluxes across the subnetworks are computed. Lastly, using the fluxes as boundary conditions, a dynamic two-phase flow solver is used to advance the solution in time. Simulation results of drainage scenarios at different capillary numbers and unfavourable viscosity ratios are presented and used to validate the MSPN method against solutions obtained by an existing dynamic network flow solver.
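The three-step loop described above (global pressure solve, inter-network fluxes, local interface advance) can be sketched on a toy 1D pore chain. This is an editorial illustration of the loop's structure only: the plain linear solve stands in for the MSFV step, and the conductances and saturation update rule are invented.

```python
import numpy as np

# Toy sketch of the three-step MSPN loop on a 1D pore chain.

n = 8                            # pores in a 1D chain
g = np.ones(n - 1)               # throat conductances (uniform toy values)
p_in, p_out = 1.0, 0.0           # boundary pressures
sat = np.zeros(n)                # invading-phase saturation per pore
sat[0] = 1.0                     # invasion starts at the inlet

def pressure_solve(g):
    """Step 1: global pressure problem (stand-in for the MSFV solve)."""
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0    # fixed-pressure boundary pores
    b[0], b[-1] = p_in, p_out
    for i in range(1, n - 1):    # mass balance at interior pores
        A[i, i - 1] = -g[i - 1]
        A[i, i] = g[i - 1] + g[i]
        A[i, i + 1] = -g[i]
    return np.linalg.solve(A, b)

for step in range(3):
    p = pressure_solve(g)                    # step 1: pressure field
    flux = g * (p[:-1] - p[1:])              # step 2: inter-pore fluxes
    sat[1:] = np.clip(sat[1:] + flux * sat[:-1], 0.0, 1.0)  # step 3: advance

print(np.round(sat, 3))
```

The invading phase progressively fills downstream pores, mimicking how interface positions are advanced once fluxes across subnetwork boundaries are known.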

  17. A multi-scale residual-based anti-hourglass control for compatible staggered Lagrangian hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kucharik, M.; Scovazzi, Guglielmo; Shashkov, Mikhail Jurievich

    Hourglassing is a well-known pathological numerical artifact affecting the robustness and accuracy of Lagrangian methods. There exist a large number of hourglass control/suppression strategies. In the community of staggered compatible Lagrangian methods, the approach of sub-zonal pressure forces is among the most widely used. However, this approach is known to add numerical strength to the solution, which can cause potential problems in certain types of simulations, for instance in simulations of various instabilities. To avoid this complication, we have adapted the multi-scale residual-based stabilization typically used in the finite element approach to the staggered compatible framework. In this study, we describe two discretizations of the new approach, demonstrate their properties, and compare them with the method of sub-zonal pressure forces on selected numerical problems.

  18. Multi-time scale energy management of wind farms based on comprehensive evaluation technology

    NASA Astrophysics Data System (ADS)

    Xu, Y. P.; Huang, Y. H.; Liu, Z. J.; Wang, Y. F.; Li, Z. Y.; Guo, L.

    2017-11-01

    A novel energy management scheme for wind farms is proposed in this paper. Firstly, a novel comprehensive evaluation system is proposed to quantify the economic properties of each wind farm, making the energy management more economical and reasonable. Then, a multi-time-scale scheduling method is proposed to develop the energy management scheme. The day-ahead schedule optimizes the unit commitment of thermal power generators. The intraday schedule optimizes the power generation plan for all thermal power generating units, hydroelectric generating sets, and wind power plants. Finally, the power generation plan can be revised in a timely manner during on-line scheduling. The paper concludes with simulations conducted on a real provincial integrated energy system in northeast China. The simulation results validate the proposed model and the corresponding solution algorithms.
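The staged hierarchy described above can be sketched as a rolling-horizon loop: commit units against a day-ahead forecast, then re-dispatch on-line as the realised wind deviates. The greedy commitment rule, capacities, and demand/wind numbers below are invented for illustration and are not the paper's evaluation model.

```python
# Toy rolling-horizon sketch: day-ahead unit commitment followed by an
# on-line revision once actual wind is known. All numbers are invented.

CAP = 50.0                              # capacity per thermal unit, MW
demand = [90.0, 120.0, 150.0]           # forecast demand per period, MW
wind_forecast = [30.0, 40.0, 20.0]      # day-ahead wind forecast, MW

def day_ahead_commitment(demand, wind):
    """Commit the fewest units whose capacity covers forecast net demand."""
    plan = []
    for d, w in zip(demand, wind):
        need = max(d - w, 0.0)
        on = 0
        while on * CAP < need:
            on += 1
        plan.append(on)
    return plan

def online_revision(committed, d, wind_actual):
    """Re-dispatch the committed units as the realised wind becomes known."""
    net = max(d - wind_actual, 0.0)
    return [min(net / committed, CAP)] * committed

commitment = day_ahead_commitment(demand, wind_forecast)
wind_actual = [25.0, 55.0, 20.0]        # realised wind differs from forecast
dispatch = [online_revision(c, d, w)
            for c, d, w in zip(commitment, demand, wind_actual)]
print(commitment)
print(dispatch)
```

The point of the sketch is the separation of time scales: the integer commitment decision is fixed day-ahead, while the continuous dispatch is revised on-line.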

  19. A multi-scale residual-based anti-hourglass control for compatible staggered Lagrangian hydrodynamics

    DOE PAGES

    Kucharik, M.; Scovazzi, Guglielmo; Shashkov, Mikhail Jurievich; ...

    2017-10-28

    Hourglassing is a well-known pathological numerical artifact affecting the robustness and accuracy of Lagrangian methods. There exist a large number of hourglass control/suppression strategies. In the community of staggered compatible Lagrangian methods, the approach of sub-zonal pressure forces is among the most widely used. However, this approach is known to add numerical strength to the solution, which can cause potential problems in certain types of simulations, for instance in simulations of various instabilities. To avoid this complication, we have adapted the multi-scale residual-based stabilization typically used in the finite element approach to the staggered compatible framework. In this study, we describe two discretizations of the new approach, demonstrate their properties, and compare them with the method of sub-zonal pressure forces on selected numerical problems.

  20. Tunable nano-scale graphene-based devices in mid-infrared wavelengths composed of cylindrical resonators

    NASA Astrophysics Data System (ADS)

    Asgari, Somayyeh; Ghattan Kashani, Zahra; Granpayeh, Nosrat

    2018-04-01

    Three optical devices, a refractive index sensor, a power splitter, and a 4-channel multi/demultiplexer based on graphene cylindrical resonators, are proposed, analyzed, and simulated numerically using the finite-difference time-domain method. The proposed sensor operates on the principle of the shift in resonance wavelength with a change in the refractive index of dielectric materials. The sensor sensitivity has been numerically derived. In addition, the performances of the power splitter and the multi/demultiplexer, based on the variation of the resonance wavelengths of the cylindrical resonators, have been thoroughly investigated. The simulation results are in good agreement with the theoretical ones. Our studies demonstrate that these graphene-based, ultra-compact, nano-scale devices can be developed into photonic integrated devices, optical switches, and logic gates.
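A sensor of the kind described above is commonly characterised by its sensitivity S = Δλ/Δn, the resonance-wavelength shift per refractive-index unit. The sketch below shows the finite-difference estimate; the mid-infrared wavelength and index values are illustrative, not the paper's simulated data.

```python
# Finite-difference estimate of sensor sensitivity S = d(lambda)/dn.
# The resonance wavelengths and refractive indices below are invented.

indices = [1.00, 1.05, 1.10]           # refractive index of the dielectric
resonances_um = [8.00, 8.40, 8.80]     # resonance wavelength, micrometres

S = (resonances_um[-1] - resonances_um[0]) / (indices[-1] - indices[0])
print(f"sensitivity ~ {S:.1f} um/RIU")  # micrometres per refractive-index unit
```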

  1. Nonlinear Analysis and Scaling Laws for Noncircular Composite Structures Subjected to Combined Loads

    NASA Technical Reports Server (NTRS)

    Hilburger, Mark W.; Rose, Cheryl A.; Starnes, James H., Jr.

    2001-01-01

    Results from an analytical study of the response of a built-up, multi-cell noncircular composite structure subjected to combined internal pressure and mechanical loads are presented. Nondimensional parameters and scaling laws based on a first-order shear-deformation plate theory are derived for this noncircular composite structure. The scaling laws are used to design sub-scale structural models for predicting the structural response of a full-scale structure representative of a portion of a blended-wing-body transport aircraft. Because of the complexity of the full-scale structure, some of the similitude conditions are relaxed for the sub-scale structural models. Results from a systematic parametric study are used to determine the effects of relaxing selected similitude conditions on how effectively the sub-scale structural model response characteristics predict those of the full-scale structure.

  2. Issues and opportunities: beam simulations for heavy ion fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, A

    1999-07-15

    UCRL-JC-134975 PREPRINT code offering 3-D, axisymmetric, and ''transverse slice'' (steady flow) geometries, with a hierarchy of models for the ''lattice'' of focusing, bending, and accelerating elements. Interactive and script-driven code steering is afforded through an interpreter interface. The code runs with good parallel scaling on the T3E. Detailed simulations of machine segments and of complete small experiments, as well as simplified full-system runs, have been carried out, partially benchmarking the code. A magnetoinductive model, with module impedance and multi-beam effects, is under study. Experiments include an injector scalable to multi-beam arrays, a high-current beam transport and acceleration experiment, and a scaled final-focusing experiment. These ''phase I'' projects are laying the groundwork for the next major step in HIF development, the Integrated Research Experiment (IRE). Simulations aimed directly at the IRE must enable us to: design a facility with maximum power on target at minimal cost; set requirements for hardware tolerances, beam steering, etc.; and evaluate proposed chamber propagation modes. Finally, simulations must enable us to study all issues which arise in the context of a fusion driver, and must facilitate the assessment of driver options. In all of this, maximum advantage must be taken of emerging terascale computer architectures, requiring an aggressive code development effort. An organizing principle should be pursuit of the goal of integrated and detailed source-to-target simulation, with moment-based methods used for analysis of the beam dynamics in the various machine concepts for purposes of design, waveform synthesis, steering algorithm synthesis, etc. 
Three classes of discrete-particle models should be coupled: (1) electrostatic/magnetoinductive PIC simulations should track the beams from the source through the final-focusing optics, passing details of the time-dependent distribution function to (2) electromagnetic or magnetoinductive PIC or hybrid PIC/fluid simulations in the fusion chamber (which would finally pass their particle trajectory information to the radiation-hydrodynamics codes used for target design); in parallel, (3) detailed PIC, delta-f, core/test-particle, and perhaps continuum Vlasov codes should be used to study individual sections of the driver and chamber very carefully; consistency may be assured by linking data from the PIC sequence, and knowledge gained may feed back into that sequence.

  3. BIOMAP A Daily Time Step, Mechanistic Model for the Study of Ecosystem Dynamics

    NASA Astrophysics Data System (ADS)

    Wells, J. R.; Neilson, R. P.; Drapek, R. J.; Pitts, B. S.

    2010-12-01

    BIOMAP simulates competition between two Plant Functional Types (PFT) at any given point in the conterminous U.S. using a time series of daily temperature (mean, minimum, maximum), precipitation, humidity, light and nutrients, with PFT-specific rooting within a multi-layer soil. The model employs a 2-layer canopy biophysics, Farquhar photosynthesis, the Beer-Lambert Law for light attenuation and a mechanistic soil hydrology. In essence, BIOMAP is a re-built version of the biogeochemistry model, BIOME-BGC, into the form of the MAPSS biogeography model. Specific enhancements are: 1) the 2-layer canopy biophysics of Dolman (1993); 2) the unique MAPSS-based hydrology, which incorporates canopy evaporation, snow dynamics, infiltration and saturated and unsaturated percolation with ‘fast’ flow and base flow and a ‘tunable aquifer’ capacity, a metaphor of Darcy’s Law; and 3) a unique MAPSS-based stomatal conductance algorithm, which simultaneously incorporates vapor pressure and soil water potential constraints based on physiological information; and many other improvements. Over small domains the PFTs can be parameterized as individual species to investigate fundamental vs. potential niche theory; while, at more coarse scales the PFTs can be rendered as more general functional groups. Since all of the model processes are intrinsically leaf to plot scale (physiology to PFT competition), it essentially has no ‘intrinsic’ scale and can be implemented on a grid of any size, taking on the characteristics defined by the homogeneous climate of each grid cell. Currently, the model is implemented on the VEMAP 1/2 degree, daily grid over the conterminous U.S. Although both the thermal and water-limited ecotones are dynamic, following climate variability, the PFT distributions remain fixed. Thus, the model is currently being fitted with a ‘reproduction niche’ to allow full dynamic operation as a Dynamic General Vegetation Model (DGVM). 
While global simulations of both climate and ecosystems must be done at coarse grid resolutions, smaller domains require higher resolution for the simulation of natural resource processes at the landscape scale and of on-the-ground management practices. Via a combined multi-agency and private conservation effort we have implemented a Nested Scale Experiment (NeScE) that ranges from 1/2 degree resolution (global, ca. 50 km) to ca. 8 km (North America) and 800 m (conterminous U.S.). Our first DGVM, MC1, has been implemented at all 3 scales. We are just beginning to implement BIOMAP into NeScE, with its unique features and daily time step, as a counterpoint to MC1. We believe it will be more accurate at all resolutions, providing better simulations of vegetation distribution, carbon balance, runoff, fire regimes and drought impacts.
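The Beer-Lambert attenuation step mentioned above takes the familiar canopy-model form I = I0·exp(−k·LAI). The sketch below applies it through the model's two canopy layers; the extinction coefficient and leaf area index values are illustrative, not BIOMAP's calibrated parameters.

```python
import math

# Beer-Lambert light attenuation through a 2-layer canopy:
# I = I0 * exp(-k * LAI). All parameter values below are invented.

def light_below(I0, k, lai):
    """Irradiance beneath a canopy layer with leaf area index `lai`."""
    return I0 * math.exp(-k * lai)

I0 = 1000.0                            # W/m^2 above the canopy
k = 0.5                                # extinction coefficient
I_mid = light_below(I0, k, 1.5)        # below the upper-canopy PFT (LAI 1.5)
I_floor = light_below(I_mid, k, 2.0)   # below the lower-canopy PFT (LAI 2.0)
print(round(I_mid, 1), round(I_floor, 1))
```

Chaining the two calls is what a 2-layer scheme amounts to: the lower PFT competes for whatever light the upper layer transmits.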

  4. One-fiftieth scale model studies of 40-by 80-foot and 80-by 120-foot wind tunnel complex at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Schmidt, Gene I.; Rossow, Vernon J.; Vanaken, Johannes M.; Parrish, Cynthia L.

    1987-01-01

    The features of a 1/50-scale model of the National Full-Scale Aerodynamics Complex are first described. An overview is then given of some results from the various tests conducted with the model to aid in the design of the full-scale facility. It was found that the model tunnel simulated accurately many of the operational characteristics of the full-scale circuits. Some characteristics predicted by the model were, however, noted to differ from previous full-scale results by about 10%.

  5. RACORO continental boundary layer cloud investigations. Part I: Case study development and ensemble large-scale forcings

    DOE PAGES

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; ...

    2015-06-19

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which is lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. The simulations capture many of the general features observed, but the state-of-the-art forcings were limited in representing details of cloud onset, as well as the tight gradients and rapid transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary layer clouds.
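The lognormal condensation of aircraft aerosol size distributions described above can be sketched with a standard least-squares fit. The mode parameters, size range, and noise level below are synthetic, not RACORO measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a single-mode lognormal to a synthetic aerosol number size
# distribution, as one would to condense aircraft measurements.

def dN_dlogD(logD, N, logDg, sg):
    """Single-mode lognormal size distribution in log10-diameter space."""
    return N / (np.sqrt(2.0 * np.pi) * sg) * np.exp(
        -((logD - logDg) ** 2) / (2.0 * sg ** 2))

rng = np.random.default_rng(0)
logD = np.linspace(-2.0, 0.0, 40)          # diameters 0.01-1 micrometres
truth = (500.0, -1.0, 0.25)                # N (cm^-3), log10 Dg, width
y = dN_dlogD(logD, *truth) * (1.0 + 0.05 * rng.standard_normal(logD.size))

popt, _ = curve_fit(dN_dlogD, logD, y, p0=(300.0, -1.5, 0.3))
N_fit, logDg_fit, sg_fit = popt
print(f"N ~ {N_fit:.0f} cm^-3, Dg ~ {10**logDg_fit:.3f} um, width ~ {sg_fit:.2f}")
```

The three fitted numbers (total number, median diameter, geometric width) are exactly the "concise representation" a model ingests in place of the full measured distribution.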

  6. Evaluation of a 40 to 1 scale model of a low pressure engine

    NASA Technical Reports Server (NTRS)

    Cooper, C. E., Jr.; Thoenes, J.

    1972-01-01

    An evaluation of a scale model of a low pressure rocket engine which is used for secondary injection studies was conducted. Specific objectives of the evaluation were to: (1) assess the test conditions required for full scale simulations; (2) recommend fluids to be used for both primary and secondary flows; and (3) recommend possible modifications to be made to the scale model and its test facility to achieve the highest possible degree of simulation. A discussion of the theoretical and empirical scaling laws which must be observed to apply scale model test data to full scale systems is included. A technique by which the side forces due to secondary injection can be analytically estimated is presented.

  7. Coaches’ Perceptions of Competence and Acknowledgement of Training Needs Related to Professional Competences

    PubMed Central

    Santos, Sofia; Mesquita, Isabel; Graça, Amândio; Rosado, António

    2010-01-01

    The purpose of the present study was to examine coaches’ perceptions of competence and acknowledgement of training needs related to professional competences according to professional experience and academic education. The participants were 343 coaches from several sports, who answered a questionnaire that included a scale focused on perceptions of competence and another scale on acknowledgement of training needs. An exploratory factor analysis with Maximum Likelihood Factoring and Oblimin rotation was used to identify emergent factors. Comparisons of coaches’ perceptions as a function of coaching experience and academic background were made using one-way ANOVA and Tukey’s post hoc multiple comparisons. Factor analysis of coaches’ perceptions of competence and acknowledgement of training needs revealed three main areas of competences: competences related to annual and multi-annual planning; competences related to orientation towards practice and competition; and personal and coaching education competences. Coaches’ perceptions were influenced by their experience, as less experienced coaches rated themselves at lower levels of competence and with more training needs; coaches with higher education, in Physical Education or other fields, also perceived themselves as more competent than coaches with no higher education. Finally, the majority of the coaches perceived themselves to be competent but nevertheless indicated training needs, which provides important feedback for coach education. This suggests that coaches are interested in increasing their knowledge and competence in a broad range of areas, which should be considered in future coach education programs. 
Key points Coaches’ perceptions of competence and acknowledgement of training needs resulted in three main areas: competences related to annual and multi-annual planning; competences related to practice and competition orientation; and personal and coaching education competences. The professional tasks for which coaches reported the greatest need were related to training orientation. Coaches with higher education degrees (P.E. or others) perceived themselves as more competent than coaches with no higher education. Less experienced coaches perceived themselves as less competent than highly experienced coaches, and pointed out more training needs in issues related to practice and competition orientation and to annual and multi-annual planning. PMID:24149387
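The group comparison described above (one-way ANOVA followed by Tukey's post hoc test) can be sketched as follows. The competence scores are synthetic stand-ins on the study's 1-4 scale, and `scipy.stats.tukey_hsd` requires SciPy 1.8 or later.

```python
import numpy as np
from scipy.stats import f_oneway, tukey_hsd   # tukey_hsd: SciPy >= 1.8

# One-way ANOVA plus Tukey post hoc on perceived-competence scores
# across three experience groups. The scores are synthetic.

rng = np.random.default_rng(1)
low = np.clip(rng.normal(2.6, 0.4, 40), 1.0, 4.0)    # low experience
mid = np.clip(rng.normal(2.9, 0.4, 40), 1.0, 4.0)    # moderate experience
high = np.clip(rng.normal(3.2, 0.4, 40), 1.0, 4.0)   # high experience

F, p = f_oneway(low, mid, high)
posthoc = tukey_hsd(low, mid, high)
print(f"ANOVA: F = {F:.1f}, p = {p:.2g}")
print(f"Tukey low vs high: p = {posthoc.pvalue[0, 2]:.2g}")
```

The omnibus ANOVA answers whether any group differs; the Tukey matrix then identifies which pairs differ while controlling the family-wise error rate.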

  8. Multi-Scale Simulation of High Energy Density Ionic Liquids

    DTIC Science & Technology

    2007-06-19

    and simulation of ionic liquids (ILs). A polarizable model was developed to simulate ILs more accurately at the atomistic level. A multiscale coarse...propellant, 1-hydroxyethyl-4-amino-1,2,4-triazolium nitrate (HEATN), were studied with the all-atom polarizable model. The mechanism suggested for HEATN...with this AFOSR-supported project, a polarizable forcefield for ionic liquids such as 1-ethyl-3-methylimidazolium nitrate (EMIM+/NO3-) was

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merzari, E.; Yuan, Haomin; Kraus, A.

    The NEAMS program aims to develop an integrated multi-physics simulation capability “pellet-to-plant” for the design and analysis of future generations of nuclear power plants. In particular, the Reactor Product Line code suite's multi-resolution hierarchy is being designed to ultimately span the full range of length and time scales present in relevant reactor design and safety analyses, as well as scale from desktop to petaflop computing platforms. Flow-induced vibration (FIV) is a widespread problem in energy systems because they rely on fluid movement for energy conversion. Vibrating structures may be damaged as fatigue or wear occurs. Given the importance of reliable components in the nuclear industry, flow-induced vibration has long been a major concern in the safety and operation of nuclear reactors. In particular, nuclear fuel rods and steam generators have been known to suffer from flow-induced vibration and related failures. Advanced reactors, such as integral Pressurized Water Reactors (PWRs) considered for Small Modular Reactors (SMRs), often rely on innovative component designs to meet cost and safety targets. One component that is the subject of advanced designs is the steam generator, some designs of which forego the usual shell-and-tube architecture in order to fit within the primary vessel. In addition to being more cost- and space-efficient, such steam generators need to be more reliable, since failure of the primary vessel represents a potential loss of coolant and a safety concern. A significant amount of data exists on flow-induced vibration in shell-and-tube heat exchangers, and heuristic methods are available to predict its occurrence based on a set of given assumptions. In contrast, advanced designs have far less data available. 
Advanced modeling and simulation based on coupled structural and fluid simulations have the potential to predict flow-induced vibration in a variety of designs, reducing the need for expensive experimental programs, especially at the design stage. Over the past five years, the Reactor Product Line has developed the integrated multi-physics code suite SHARP. The goal of developing such a tool is to perform multi-physics neutronics, thermal/fluid, and structural mechanics modeling of the components inside the full reactor core or portions of it with a user-specified fidelity. In particular, SHARP contains the high-fidelity single-physics codes Diablo for structural mechanics and Nek5000 for fluid mechanics calculations. Both codes are state-of-the-art, highly scalable tools that have been extensively validated. These tools form a strong basis on which to build a flow-induced vibration modeling capability. In this report we discuss one-way coupled calculations performed with Nek5000 and Diablo aimed at simulating available FIV experiments in helical steam generators in the turbulent buffeting regime. In this regime one-way coupling is judged sufficient because the pressure loads do not cause substantial displacements. It is also the most common source of vibration in helical steam generators at the low flows expected in integral PWRs. The legacy data is obtained from two datasets developed at Argonne and B&W.

  10. Numerical Modelling of Tsunami Generated by Deformable Submarine Slides: Parameterisation of Slide Dynamics for Coupling to Tsunami Propagation Model

    NASA Astrophysics Data System (ADS)

    Smith, R. C.; Collins, G. S.; Hill, J.; Piggott, M. D.; Mouradian, S. L.

    2015-12-01

    Numerical modelling informs risk assessment of tsunami generated by submarine slides; however, for large-scale slides modelling can be complex and computationally challenging. Many previous numerical studies have approximated slides as rigid blocks moving according to prescribed motion. However, wave characteristics are strongly dependent on the motion of the slide, and previous work has recommended that more accurate representation of slide dynamics is needed. We have used the finite-element, adaptive-mesh CFD model Fluidity to perform multi-material simulations of deformable submarine slide-generated waves at real-world scales for a 2D scenario in the Gulf of Mexico. Our high-resolution approach represents slide dynamics with good accuracy, compared to other numerical simulations of this scenario, but precludes tracking of wave propagation over large distances. To enable efficient modelling of further propagation of the waves, we investigate an approach that extracts information about the slide evolution from our multi-material simulations in order to drive a single-layer wave propagation model, also using Fluidity, which is much less computationally expensive. The extracted submarine slide geometry and position as a function of time are parameterised using simple polynomial functions. The polynomial functions are used to inform a prescribed velocity boundary condition in a single-layer simulation, mimicking the effect the submarine slide motion has on the water column. The approach is verified by successful comparison of wave generation in the single-layer model with that recorded in the multi-material, multi-layer simulations. We then extend this approach to 3D for further validation of the methodology (using the Gulf of Mexico scenario proposed by Horrillo et al., 2013) and to consider the effect of lateral spreading. 
This methodology is then used to simulate a series of hypothetical submarine slide events in the Arctic Ocean (based on evidence of historic slides) and examine the hazard posed to the UK coast.
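The parameterisation step described above, fitting simple polynomials to the extracted slide motion and differentiating them to obtain the prescribed boundary velocity, can be sketched as follows. The position samples are synthetic stand-ins for output of the multi-material runs.

```python
import numpy as np

# Fit a low-order polynomial to a slide-front position history, then
# differentiate it to get the velocity used as a prescribed boundary
# condition in the cheaper single-layer model. Data are synthetic.

t = np.linspace(0.0, 120.0, 25)                  # time, s
x = 0.9 * t**2 / (1.0 + 0.01 * t) + 5.0 * t      # toy slide-front position, m

coeffs = np.polyfit(t, x, 3)                     # cubic fit of x(t)
v_coeffs = np.polyder(coeffs)                    # v(t) = dx/dt

err = np.max(np.abs(np.polyval(coeffs, t) - x)) / np.max(np.abs(x))
print(f"max relative fit error: {err:.2%}")
print(f"prescribed v(60 s) = {np.polyval(v_coeffs, 60.0):.1f} m/s")
```

A low-order fit is attractive here precisely because the boundary condition only needs the smooth, bulk motion of the slide, not every detail of its deformation.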

  11. Timing and Mode of Landscape Response to Glacial-Interglacial Climate Forcing From Fluvial Fill Terrace Sediments: Humahuaca Basin, E Cordillera, NW Argentina

    NASA Astrophysics Data System (ADS)

    Schildgen, T. F.; Robinson, R. A. J.; Savi, S.; Bookhagen, B.; Tofelde, S.; Strecker, M. R.

    2014-12-01


  12. Multi-phase models for water and thermal management of proton exchange membrane fuel cell: A review

    NASA Astrophysics Data System (ADS)

    Zhang, Guobin; Jiao, Kui

    2018-07-01

    The 3D (three-dimensional) multi-phase CFD (computational fluid dynamics) model is widely utilized in optimizing water and thermal management of PEM (proton exchange membrane) fuel cells. However, a satisfactory 3D multi-phase CFD model that can simulate the detailed gas-liquid two-phase flow in channels and precisely reflect its effect on performance has yet to be developed, owing to coupling difficulties and computational cost. Meanwhile, the agglomerate model of the CL (catalyst layer) should also be added to 3D CFD models so as to better reflect the concentration loss and optimize CL structure at the macroscopic scale. Besides, the effect of thermal management is perhaps underestimated in current 3D multi-phase CFD simulations due to the lack of coolant channels in the computational domain and the use of constant-temperature boundary conditions. Therefore, 3D CFD simulations at the cell and stack levels with convection boundary conditions are suggested to simulate water and thermal management more accurately. Nevertheless, with the rapid development of PEM fuel cells, current 3D CFD simulations are far from meeting practical demands, especially at high current density, at low to zero humidity, and for recently developed novel designs such as metal foam flow fields, 3D fine-mesh flow fields, and anode circulation.

  13. Development of an inter-professional educational program for home care professionals: Evaluation of short-term effects in suburban areas.

    PubMed

    Tsuchiya, Rumiko; Yoshie, Satoru; Kawagoe, Shohei; Hirahara, Satoshi; Onishi, Hirotaka; Murayama, Hiroshi; Nishinaga, Masanori; Iijima, Katsuya; Tsuji, Tetsuo

    2017-01-01

    Objective To examine the short-term effects of an inter-professional educational program developed for physicians and other home care specialists to promote home care in the community.Methods From March 2012 to January 2013, an inter-professional educational program (IEP) was held four times in three suburban areas (Kashiwa city and Matsudo city in Chiba prefecture, and the Omori district of Ota ward). This program aimed to motivate physicians to increase the number of home visits and to encourage home care professionals working in the same community areas to work together by promoting inter-professional work (IPW). The participants were physicians, home-visit nurses, and other home care professionals recommended by community-level professional associations. The participants attended a 1.5-day multi-professional IEP. Pre- and post-program questionnaires were used to collect information on home care knowledge and practical skills (26 indexes, 1-4 scale), attitudes toward home care practice (4 indexes, 1-6 scale), and IPW (13 indexes, 1-4 scale). Data from participants that lacked a label identifying the type of professional were excluded, and only cases with both pre-test and post-test responses were used in the analysis. A Wilcoxon signed-rank test and a paired t-test were conducted to compare pre- and post-program questionnaire responses, stratified for physicians and other professionals, and the effect size was calculated.Results The total number of participants across the four programs was 256, and data from 162 (63.3%) were analyzed. The physicians numbered 19 (11.7%), while other professionals numbered 143 (88.3%). Attending this program helped participants obtain home care knowledge of IPW and a practical view of home care. 
Furthermore, the IPW indexes comprised two factors, cooperation and interaction: non-physician home care professionals increased their interactions with physicians, other professionals increased their cooperation with other professionals, and physicians increased their cooperation with other physicians.Conclusion The short-term effects on motivating physicians to increase home visits were limited. However, physicians obtained a practical view of home care by attending the IEP. Also, the participation of physicians and other home care professionals in this program triggered the beginning of IPW in suburban areas. This program is feasible when adapted for regional differences.

  14. Gyrokinetic simulations of DIII-D near-edge L-mode plasmas

    NASA Astrophysics Data System (ADS)

    Neiser, Tom; Jenko, Frank; Carter, Troy; Schmitz, Lothar; Merlo, Gabriele; Told, Daniel; Banon Navarro, Alejandro; McKee, George; Yan, Zheng

    2017-10-01

    In order to understand the L-H transition, a good understanding of the L-mode edge region is necessary. We perform nonlinear gyrokinetic simulations of a DIII-D L-mode discharge with the GENE code in the near-edge region, which we define as ρtor >= 0.8. At ρ = 0.9, ion-scale simulations reproduce experimental heat fluxes within the uncertainty of the experiment. At ρ = 0.8, electron-scale simulations reproduce the experimental electron heat flux, while ion-scale simulations do not reproduce the respective ion heat flux due to a strong poloidal zonal flow. However, we reproduce both electron and ion heat fluxes by increasing the local ion temperature gradient by 80%. Local fitting to the CER data in the domain 0.7 <= ρ <= 0.9 is compatible with such an increase in ion temperature gradient within the error bars. Ongoing multi-scale simulations are investigating whether radial electron streamers could dampen the poloidal zonal flows at ρ = 0.8 and increase the radial ion-scale flux. Supported by U.S. DOE under Contract Numbers DE-FG02-08ER54984, DE-FC02-04ER54698, and DE-AC02-05CH11231.

  15. A Novel Probabilistic Multi-Scale Modeling and Sensing Framework for Fatigue Life Prediction of Aerospace Structures and Materials: DCT Project

    DTIC Science & Technology

    2012-08-25

    Excerpted references: M. Anahid, M. K. Samal, and S. Ghosh, dwell fatigue crack nucleation model based on crystal plasticity finite element simulations (submitted); accelerated crystal plasticity FEM simulations (submitted).

  16. MouseNet database: digital management of a large-scale mutagenesis project.

    PubMed

    Pargent, W; Heffner, S; Schäble, K F; Soewarto, D; Fuchs, H; Hrabé de Angelis, M

    2000-07-01

    The Munich ENU Mouse Mutagenesis Screen is a large-scale mutant production, phenotyping, and mapping project. It encompasses two animal breeding facilities and a number of screening groups located in the general area of Munich. A central database is required to manage and process the immense amount of data generated by the mutagenesis project. This database, which we named MouseNet(c), runs on a Sybase platform and will ultimately store and process all data from the entire project. In addition, the system comprises a portfolio of functions needed to support the workflow management of the core facility and the screening groups. MouseNet(c) will make all of the data available to the participating screening groups, and later to the international scientific community. MouseNet(c) will consist of three major software components: an Animal Management System (AMS), a Sample Tracking System (STS), and a Result Documentation System (RDS). MouseNet(c) provides the following major advantages: it is accessible from different client platforms via the Internet; it is a full-featured multi-user system (including access restriction and data locking mechanisms); it relies on a professional RDBMS (relational database management system) running on a UNIX server platform; and it supplies workflow functions and a variety of plausibility checks.

  17. Modeling multi-scale aerosol dynamics and micro-environmental air quality near a large highway intersection using the CTAG model.

    PubMed

    Wang, Yan Jason; Nguyen, Monica T; Steffens, Jonathan T; Tong, Zheming; Wang, Yungang; Hopke, Philip K; Zhang, K Max

    2013-01-15

    A new methodology, referred to as the multi-scale structure, integrates "tailpipe-to-road" (i.e., on-road domain) and "road-to-ambient" (i.e., near-road domain) simulations to elucidate the environmental impacts of particulate emissions from traffic sources. The multi-scale structure is implemented in the CTAG model to 1) generate process-based on-road emission rates of ultrafine particles (UFPs) by explicitly simulating the effects of exhaust properties, traffic conditions, and meteorological conditions and 2) characterize the impacts of traffic-related emissions on micro-environmental air quality near a highway intersection in Rochester, NY. The performance of CTAG, evaluated against field measurements, shows adequate agreement in capturing the dispersion of carbon monoxide (CO) and the number concentrations of UFPs in the near-road micro-environment. As a proof-of-concept case study, we also apply CTAG to separate the relative impacts of the shutdown of a large coal-fired power plant (CFPP) and the adoption of ultra-low-sulfur diesel (ULSD) on UFP concentrations in the intersection micro-environment. Although CTAG is still computationally expensive compared to the widely used parameterized dispersion models, it has the potential to advance our capability to predict the impacts of UFP emissions and the spatial/temporal variations of air pollutants in complex environments. Furthermore, for on-road simulations, CTAG can serve as a process-based emission model; combining the on-road and near-road simulations, CTAG becomes a "plume-in-grid" model for mobile emissions. The processed emission profiles can potentially improve regional air quality and climate predictions accordingly. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. Deformation and Failure of a Multi-Wall Carbon Nanotube Yarn Composite

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.; Jefferson, Gail D.; Frankland, Sarah-Jane V.

    2008-01-01

    Forests of multi-walled carbon nanotubes can be twisted and manipulated into continuous fibers or yarns that exhibit many of the characteristics of traditional textiles. Macro-scale analysis and test may provide strength and stiffness predictions for a composite composed of a polymer matrix and low-volume fraction yarns. However, due to the nano-scale of the carbon nanotubes, it is desirable to use atomistic calculations to consider tube-tube interactions and the influence of simulated twist on the effective friction coefficient. This paper reports laboratory test data on the mechanical response of a multi-walled, carbon nanotube yarn/polymer composite from both dynamic and quasi-static tensile tests. Macroscale and nano-scale analysis methods are explored and used to define some of the key structure-property relationships. The measured influence of hot-wet aging on the tensile properties is also reported.

  19. Users matter: multi-agent systems model of high performance computing cluster users.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    North, M. J.; Hood, C. S.; Decision and Information Sciences

    2005-01-01

    High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.

  20. Experimental techniques and computational methods toward the estimation of the effective two-phase flow coefficients and multi-scale heterogeneities of soils

    NASA Astrophysics Data System (ADS)

    Tsakiroglou, C. D.; Aggelopoulos, C. A.; Sygouni, V.

    2009-04-01

    A hierarchical, network-type, dynamic simulator of the immiscible displacement of water by oil in heterogeneous porous media is developed to simulate the rate-controlled displacement of two fluids at the soil column scale. A cubic network is constructed, where each node is assigned a permeability chosen randomly from a distribution function. The intensity of heterogeneities is quantified by the width of the permeability distribution function. The capillary pressure at each node is calculated by combining a generalized Leverett J-function with a Corey-type model. Information about the heterogeneity of soils at the pore network scale is obtained by combining mercury intrusion porosimetry (MIP) data with back-scattered scanning electron microscope (BSEM) images [1]. In order to estimate the two-phase flow properties of nodes (relative permeability and capillary pressure functions, permeability distribution function), immiscible and miscible displacement experiments are performed on undisturbed soil columns. The transient responses of measured variables (pressure drop, fluid saturation averaged over five successive segments, solute concentration averaged over three cross-sections) are fitted with models accounting for the preferential flow paths at the micro- (multi-region model) and macro-scale (multi-flowpath model) arising from multi-scale heterogeneities [2,3]. When simulating the immiscible displacement of water by oil (drainage) in a large network, at each time step the fluid saturation and pressure of each node are calculated by formulating mass balances at each node, accounting for capillary, viscous, and gravity forces, and solving the resulting system of coupled equations. At each iteration of the algorithm, the pressure drop is selected so that the total flow rate of the injected fluid is kept constant. 
The dynamic large-scale network simulator is used (1) to examine the sensitivity of the transient responses of the axial distribution of fluid saturation and total pressure drop across the network to the permeability distribution function, spatial correlations of permeability, and capillary number, and (2) to estimate the effective (up-scaled) relative permeability functions at the soil column scale. In an attempt to clarify potential effects of the permeability distribution and spatial permeability correlations on the transient responses of the pressure drop across a soil column, signal analysis with wavelets is performed [4] on experimental and simulated results. The transient variation of signal energy and frequency of pressure drop fluctuations at the wavelet domain are correlated with macroscopic properties such as the effective water and oil relative permeabilities of the porous medium, and microscopic properties such as the variation of the permeability distribution of oil-occupied nodes. Toward the solution of the inverse problem, a general procedure is suggested to identify macro-heterogeneities from the fast analysis of pressure drop signals. References 1. Tsakiroglou, C.D. and M.A. Ioannidis, "Dual porosity modeling of the pore structure and transport properties of a contaminated soil", Eur. J. Soil Sci., 59, 744-761 (2008). 2. Aggelopoulos, C.A., and C.D. Tsakiroglou, "Quantifying the Soil Heterogeneity from Solute Dispersion Experiments", Geoderma, 146, 412-424 (2008). 3. Aggelopoulos, C.A., and C.D. Tsakiroglou, "A multi-flow path approach to model immiscible displacement in undisturbed heterogeneous soil columns", J. Contam. Hydrol., in press (2009). 4. Sygouni, V., C.D. Tsakiroglou, and A.C. Payatakes, "Using wavelets to characterize the wettability of porous materials", Phys. Rev. E, 76, 056304 (2007).
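
    The node-level closure described above (a generalized Leverett J-function combined with a Corey-type saturation function) can be sketched as follows; the functional form and every constant here are illustrative assumptions, not the study's actual closure:

    ```python
    import numpy as np

    def capillary_pressure(sw, k, phi, sigma=0.03, theta=0.0,
                           pe=1.0, swr=0.1, lam=2.0):
        """Node capillary pressure from a Leverett J-function scaled by
        sqrt(phi/k), with a Corey-type dependence on water saturation.
        All parameter values are illustrative placeholders."""
        # effective (reduced) water saturation, clipped away from zero
        sw_eff = np.clip((sw - swr) / (1.0 - swr), 1e-6, 1.0)
        j = pe * sw_eff ** (-1.0 / lam)              # Corey-type J(Sw)
        return sigma * np.cos(theta) * np.sqrt(phi / k) * j
    ```

    In a network of this kind, each node evaluates such a function with its own randomly assigned permeability, so low-permeability nodes resist drainage with higher capillary entry pressures.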

  1. Multi-thread parallel algorithm for reconstructing 3D large-scale porous structures

    NASA Astrophysics Data System (ADS)

    Ju, Yang; Huang, Yaohui; Zheng, Jiangtao; Qian, Xu; Xie, Heping; Zhao, Xi

    2017-04-01

    Geomaterials inherently contain many discontinuous, multi-scale, geometrically irregular pores, forming a complex porous structure that governs their mechanical and transport properties. The development of an efficient reconstruction method for representing porous structures can significantly contribute toward providing a better understanding of the governing effects of porous structures on the properties of porous materials. In order to improve the efficiency of reconstructing large-scale porous structures, a multi-thread parallel scheme was incorporated into the simulated annealing reconstruction method. In the method, four correlation functions, which include the two-point probability function, the linear-path functions for the pore phase and the solid phase, and the fractal system function for the solid phase, were employed for better reproduction of the complex well-connected porous structures. In addition, a random sphere packing method and a self-developed pre-conditioning method were incorporated to cast the initial reconstructed model and select independent interchanging pairs for parallel multi-thread calculation, respectively. The accuracy of the proposed algorithm was evaluated by examining the similarity between the reconstructed structure and a prototype in terms of their geometrical, topological, and mechanical properties. Comparisons of the reconstruction efficiency of porous models with various scales indicated that the parallel multi-thread scheme significantly shortened the execution time for reconstruction of a large-scale well-connected porous model compared to a sequential single-thread procedure.
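
    One of the correlation functions named above, the two-point probability function, can be sketched in a few lines. This is a simplified, single-direction version with periodic shifts, for illustration only:

    ```python
    import numpy as np

    def two_point_probability(img, max_r):
        """Two-point probability S2(r) of the pore phase (img == 1) along
        the x-axis of a periodic binary image: the probability that two
        points separated by r both fall in the pore phase."""
        img = np.asarray(img, dtype=float)
        return np.array([np.mean(img * np.roll(img, r, axis=1))
                         for r in range(max_r)])

    # S2(0) recovers the porosity of a synthetic binary medium.
    rng = np.random.default_rng(1)
    sample = (rng.random((64, 64)) < 0.4).astype(int)
    s2 = two_point_probability(sample, 8)
    ```

    In an annealing reconstruction, such functions are evaluated for the reconstructed model after each trial pixel interchange, and the swap is accepted or rejected according to how much it reduces the mismatch with the target medium's statistics.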

  2. A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses

    PubMed Central

    Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria

    2013-01-01

    Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. 
Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is therefore an excellent tool for multi-scale simulations. PMID:23894367
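
    The surrogate idea above, learning receptor-activation curves from a corpus of prior Monte Carlo runs so that new conditions need no new simulation, can be sketched with a deliberately tiny stand-in. The curve shape, the linear-scaling assumption, and all values are illustrative, not the paper's actual pipeline:

    ```python
    import numpy as np

    # Toy corpus standing in for expensive Monte Carlo runs: open-receptor
    # fraction versus time for several transmitter release amounts.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 5.0, 50)

    def mc_like_curve(amount):
        """Surrogate for one costly Monte Carlo simulation of a synapse."""
        return amount * t * np.exp(-t) + rng.normal(0.0, 0.005, t.size)

    amounts = np.array([0.5, 1.0, 1.5, 2.0])
    curves = np.array([mc_like_curve(a) for a in amounts])

    # Learning stage, reduced here to linear least squares: assume the open
    # fraction scales linearly with release amount and fit one shared time
    # profile across the whole corpus.
    profile, *_ = np.linalg.lstsq(amounts[:, None], curves, rcond=None)

    def predict(amount):
        """Predict an open-receptor curve without a new Monte Carlo run."""
        return amount * profile[0]
    ```

    The paper's method uses a richer learned model with cross-validation folds, but the economics are the same: training cost is paid once over the corpus, after which each prediction is a cheap function evaluation.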

  3. Multi-material 3D Models for Temporal Bone Surgical Simulation.

    PubMed

    Rose, Austin S; Kimbell, Julia S; Webster, Caroline E; Harrysson, Ola L A; Formeister, Eric J; Buchman, Craig A

    2015-07-01

    A simulated, multicolor, multi-material temporal bone model created using 3-dimensional (3D) printing should prove both safe and beneficial in training for actual temporal bone surgical cases. As the process of additive manufacturing, or 3D printing, has become more practical and affordable, a number of applications for the technology in the field of Otolaryngology-Head and Neck Surgery have been considered. One area of promise is temporal bone surgical simulation. Three-dimensional representations of human temporal bones were created from temporal bone computed tomography (CT) scans using biomedical image processing software. Multi-material models were then printed and dissected in a temporal bone laboratory by attending and resident otolaryngologists. A 5-point Likert scale was used to grade the models for their anatomical accuracy and suitability as a simulation of cadaveric and operative temporal bone drilling. The models produced for this study demonstrate significant anatomic detail and a likeness to human cadaver specimens for drilling and dissection. Simulated temporal bones created by this process have potential benefit in surgical training, preoperative simulation for challenging otologic cases, and the standardized testing of temporal bone surgical skills. © The Author(s) 2015.

  4. Development of the Human Factors Skills for Healthcare Instrument: a valid and reliable tool for assessing interprofessional learning across healthcare practice settings.

    PubMed

    Reedy, Gabriel B; Lavelle, Mary; Simpson, Thomas; Anderson, Janet E

    2017-10-01

    A central feature of clinical simulation training is human factors skills, providing staff with the social and cognitive skills to cope with demanding clinical situations. Although these skills are critical to safe patient care, assessing their learning is challenging. This study aimed to develop, pilot and evaluate a valid and reliable structured instrument to assess human factors skills, which can be used pre- and post-simulation training, and is relevant across a range of healthcare professions. Through consultation with a multi-professional expert group, we developed and piloted a 39-item survey with 272 healthcare professionals attending training courses across two large simulation centres in London, one specialising in acute care and one in mental health, both serving healthcare professionals working across acute and community settings. Following psychometric evaluation, the final 12-item instrument was evaluated with a second sample of 711 trainees. Exploratory factor analysis revealed a 12-item, one-factor solution with good internal consistency (α = 0.92). The instrument had discriminant validity, with newly qualified trainees scoring significantly lower than experienced trainees (t(98) = 4.88, p < 0.001), and was sensitive to change following training in acute and mental health settings, across professional groups (p < 0.001). Confirmatory factor analysis revealed an adequate model fit (RMSEA = 0.066). The Human Factors Skills for Healthcare Instrument provides a reliable and valid method of assessing trainees' human factors skills self-efficacy across acute and mental health settings. This instrument has the potential to improve the assessment and evaluation of human factors skills learning in both uniprofessional and interprofessional clinical simulation training.
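
    The internal-consistency statistic reported above, Cronbach's α, is computed from a respondents-by-items score matrix. A minimal sketch with synthetic data (the real study's item scores are of course not reproduced here):

    ```python
    import numpy as np

    def cronbach_alpha(scores):
        """Cronbach's alpha for a (respondents x items) score matrix:
        alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_var = scores.var(axis=0, ddof=1).sum()   # per-item variances
        total_var = scores.sum(axis=1).var(ddof=1)    # variance of total score
        return k / (k - 1) * (1.0 - item_var / total_var)
    ```

    Perfectly correlated items drive α toward 1, while unrelated items drive it toward 0, which is why α = 0.92 supports a one-factor interpretation of the 12-item instrument.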

  5. Large-Scale Simulation of Multi-Asset Ising Financial Markets

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2017-03-01

    We perform a large-scale simulation of an Ising-based financial market model that includes 300 asset time series. The financial system simulated by the model shows a fat-tailed return distribution and volatility clustering, and exhibits unstable periods indicated by the volatility index, measured as the average of absolute returns. Moreover, we determine that the cumulative risk fraction, which measures the system risk, changes during high-volatility periods. We also calculate the inverse participation ratio (IPR) and its higher-power version, IPR6, from the absolute-return cross-correlation matrix. Finally, we show that the IPR and IPR6 also change during high-volatility periods.
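
    The IPR diagnostics named above measure how localized the eigenvectors of the absolute-return cross-correlation matrix are. A minimal sketch on synthetic returns (the data and scale are illustrative, not the 300-asset simulation):

    ```python
    import numpy as np

    def participation_ratios(returns):
        """IPR and IPR6 of the eigenvectors of the cross-correlation matrix
        of absolute returns. For a normalized eigenvector v, IPR = sum v^4
        ranges from 1/N (delocalized) to 1 (localized on one asset)."""
        c = np.corrcoef(np.abs(returns))     # assets x assets correlation
        _, vecs = np.linalg.eigh(c)
        ipr = np.sum(vecs ** 4, axis=0)      # localisation per eigenvector
        ipr6 = np.sum(vecs ** 6, axis=0)     # higher-power version
        return ipr, ipr6

    # Synthetic returns for 10 "assets" over 300 steps (illustrative only).
    rng = np.random.default_rng(2)
    ipr, ipr6 = participation_ratios(rng.normal(size=(10, 300)))
    ```

    Tracking these quantities over rolling windows is what lets shifts in eigenvector localization flag high-volatility periods.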

  6. Toward GEOS-6, A Global Cloud System Resolving Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Putman, William M.

    2010-01-01

    NASA is committed to observing and understanding the weather and climate of our home planet through the use of multi-scale modeling systems and space-based observations. Global climate models have evolved to take advantage of the influx of multi- and many-core computing technologies and the availability of large clusters of multi-core microprocessors. GEOS-6 is a next-generation cloud-system-resolving atmospheric model that will place NASA at the forefront of scientific exploration of our atmosphere and climate. Model simulations with GEOS-6 will produce a realistic representation of our atmosphere on the scale of typical satellite observations, bringing a visual comprehension of model results to a new level among climate enthusiasts. In preparation for GEOS-6, the agency's flagship Earth System Modeling Framework has been enhanced to support cutting-edge high-resolution global climate and weather simulations. Improvements include a cubed-sphere grid that exposes parallelism, a non-hydrostatic finite-volume dynamical core, and algorithms designed for co-processor technologies, among others. GEOS-6 represents a fundamental advancement in the capability of global Earth system models. The ability to directly compare global simulations at the resolution of spaceborne satellite images will lead to algorithm improvements and better utilization of space-based observations within the GEOS data assimilation system.

  7. Fast Decentralized Averaging via Multi-scale Gossip

    NASA Astrophysics Data System (ADS)

    Tsianos, Konstantinos I.; Rabbat, Michael G.

    We are interested in the problem of computing the average consensus in a distributed fashion on random geometric graphs. We describe a new algorithm called Multi-scale Gossip, which employs a hierarchical decomposition of the graph to partition the computation into tractable sub-problems. Using only pairwise messages of fixed size that travel at most O(n^{1/3}) hops, our algorithm is robust and has a communication cost of O(n log log n log(1/ε)) transmissions, which is order-optimal up to the logarithmic factor in n. Simulated experiments verify the good expected performance on graphs of many thousands of nodes.
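
    The primitive underlying such schemes is plain randomized pairwise gossip: two neighbours repeatedly replace their values with the pairwise average until every node holds the global mean. Multi-scale Gossip applies this within each sub-problem of its hierarchical decomposition; the sketch below is the basic primitive only, with an illustrative graph:

    ```python
    import random

    def pairwise_gossip(values, edges, rounds=5000, seed=0):
        """Randomized pairwise gossip: at each round a random edge (i, j)
        is activated and both endpoints adopt the average of their values.
        The sum is conserved, so all nodes converge to the global mean."""
        rng = random.Random(seed)
        x = list(values)
        for _ in range(rounds):
            i, j = rng.choice(edges)               # activate a random edge
            x[i] = x[j] = (x[i] + x[j]) / 2.0      # exchange one pairwise average
        return x

    # Six nodes on a ring; every node converges to the global average 2.5.
    ring = [(k, (k + 1) % 6) for k in range(6)]
    out = pairwise_gossip([0.0, 1.0, 2.0, 3.0, 4.0, 5.0], ring)
    ```

    On large random geometric graphs this flat scheme mixes slowly, which is exactly the cost the hierarchical decomposition is designed to avoid.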

  8. Simulation in interprofessional education for patient-centred collaborative care.

    PubMed

    Baker, Cynthia; Pulling, Cheryl; McGraw, Robert; Dagnone, Jeffrey Damon; Hopkins-Rosseel, Diana; Medves, Jennifer

    2008-11-01

    This paper is a report of preliminary evaluations of an interprofessional education through simulation project, focusing on learner and teacher reactions to the pilot modules. Approaches to interprofessional education vary widely. Studies indicate, however, that active, experiential learning facilitates it. Patient simulators require learners to incorporate knowing, being and doing in action. A theoretically based competency framework was developed to guide interprofessional education using simulation. The framework includes a typology of shared, complementary and profession-specific competencies. Each competency type is associated with an intraprofessional, multiprofessional, or interprofessional teaching modality and with the professional composition of learner groups. The project is guided by an action research approach in which ongoing evaluation generates knowledge to modify and further develop the project. Preliminary evaluations of the first pilot module, cardiac resuscitation rounds, among 101 nursing students, 42 medical students and 70 junior medical residents were conducted in 2005-2007 using a questionnaire with rating scales and open-ended questions. Another 20 medical students, 7 junior residents and 45 nursing students completed a questionnaire based on the Interdisciplinary Education Perception Scale. Simulation-based learning provided students with interprofessional activities they saw as relevant for their future as practitioners. They embraced both the interprofessional and simulation components enthusiastically. Attitudinal scores and responses were consistently positive among both medical and nursing students. Interprofessional education through simulation offers a promising approach to preparing future healthcare professionals for the collaborative models of healthcare delivery being developed internationally.

  9. SEASONAL MODELING OF THE EXPORT OF POLLUTANTS FROM NORTH AMERICA USING THE MULTI-SCALE AIR QUALITY SIMULATION PLATFORM (MAQSIP)

    EPA Science Inventory

    Attention in recent years has focused on the trans-boundary transport of ozone and fine particulate matter between the United States and Mexico and Canada and across state boundaries in the United States. In a similar manner, but on a larger spatial scale, the export of pollutant...

  10. Multi-scale evaluation of the environmental controls on burn probability in a southern Sierra Nevada landscape

    Treesearch

    Sean A. Parks; Marc-Andre Parisien; Carol Miller

    2011-01-01

    We examined the scale-dependent relationship between spatial fire likelihood or burn probability (BP) and some key environmental controls in the southern Sierra Nevada, California, USA. Continuous BP estimates were generated using a fire simulation model. The correspondence between BP (dependent variable) and elevation, ignition density, fuels and aspect was evaluated...

  11. Overview of the NASA Subsonic Rotary Wing Aeronautics Research Program in Rotorcraft Crashworthiness

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Kellas, Sotiris; Fuchs, Yvonne T.

    2009-01-01

    This paper provides an overview of rotorcraft crashworthiness research being conducted at NASA Langley Research Center under sponsorship of the Subsonic Rotary Wing (SRW) Aeronautics Program. The research is focused in two areas: development of an externally deployable energy attenuating concept and improved prediction of rotorcraft crashworthiness. The deployable energy absorber (DEA) is a composite honeycomb structure, with a unique flexible hinge design that allows the honeycomb to be packaged and remain flat until needed for deployment. The capabilities of the DEA have been demonstrated through component crush tests and vertical drop tests of a retrofitted fuselage section onto different surfaces or terrain. The research on improved prediction of rotorcraft crashworthiness is focused in several areas including simulating occupant responses and injury risk assessment, predicting multi-terrain impact, and utilizing probabilistic analysis methods. A final task is to perform a system-integrated simulation of a full-scale helicopter crash test onto a rigid surface. A brief description of each research task is provided along with a summary of recent accomplishments.

  12. Overview of the NASA Subsonic Rotary Wing Aeronautics Research Program in Rotorcraft Crashworthiness

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Fuchs, Yvonne T.; Kellas, Sotiris

    2008-01-01

    This paper provides an overview of rotorcraft crashworthiness research being conducted at NASA Langley Research Center under sponsorship of the Subsonic Rotary Wing (SRW) Aeronautics Program. The research is focused in two areas: development of an externally deployable energy attenuating concept and improved prediction of rotorcraft crashworthiness. The deployable energy absorber (DEA) is a composite honeycomb structure, with a unique flexible hinge design that allows the honeycomb to be packaged and remain flat until needed for deployment. The capabilities of the DEA have been demonstrated through component crush tests and vertical drop tests of a retrofitted fuselage section onto different surfaces or terrain. The research on improved prediction of rotorcraft crashworthiness is focused in several areas including simulating occupant responses and injury risk assessment, predicting multi-terrain impact, and utilizing probabilistic analysis methods. A final task is to perform a system-integrated simulation of a full-scale helicopter crash test onto a rigid surface. A brief description of each research task is provided along with a summary of recent accomplishments.

  13. Development of fine-resolution analyses and expanded large-scale forcing properties. Part II: Scale-awareness and application to single-column model experiments

    DOE PAGES

    Feng, Sha; Vogelmann, Andrew M.; Li, Zhijin; ...

    2015-01-20

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy’s Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multi-scale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component over the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 (CAM5) is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.

  14. SNAVA-A real-time multi-FPGA multi-model spiking neural network simulation architecture.

    PubMed

    Sripad, Athul; Sanchez, Giovanny; Zapata, Mireya; Pirrone, Vito; Dorta, Taho; Cambria, Salvatore; Marti, Albert; Krishnamourthy, Karthikeyan; Madrenas, Jordi

    2018-01-01

    The Spiking Neural Networks (SNN) for Versatile Applications (SNAVA) simulation platform is a scalable and programmable parallel architecture that supports real-time, large-scale, multi-model SNN computation. This parallel architecture is implemented in modern Field-Programmable Gate Array (FPGA) devices to provide high-performance execution and the flexibility to support large-scale SNN models. Flexibility is defined in terms of programmability, which allows easy synapse and neuron implementation. This has been achieved by using special-purpose Processing Elements (PEs) for computing SNNs, and by analyzing and customizing the instruction set according to the processing needs to achieve maximum performance with minimum resources. The parallel architecture is interfaced with customized Graphical User Interfaces (GUIs) to configure the SNN's connectivity, to compile the neuron-synapse model and to monitor SNN activity. Our contribution intends to provide a tool that allows prototyping SNNs faster than on CPU/GPU architectures but significantly cheaper than fabricating a customized neuromorphic chip. This could be potentially valuable to the computational neuroscience and neuromorphic engineering communities. Copyright © 2017 Elsevier Ltd. All rights reserved.
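
    As a toy illustration of the kind of neuron model a programmable SNN platform like this can host, the following is a minimal leaky integrate-and-fire update in explicit-Euler form; the function and all constants are illustrative assumptions, not part of SNAVA:

```python
def lif_step(v, i_in, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    # Explicit-Euler update of a leaky integrate-and-fire membrane potential.
    # All constants are illustrative, not SNAVA defaults.
    v = v + dt * (-v / tau + i_in)
    if v >= v_thresh:
        return v_reset, True   # threshold crossed: spike and reset
    return v, False
```

    In a hardware mapping, each PE would iterate this update over its assigned neurons every simulation tick.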

  15. RANS Simulation (Virtual Blade Model [VBM]) of Array of Three Coaxial Lab Scaled DOE RM1 MHK Turbine with 5D Spacing

    DOE Data Explorer

    Javaherchi, Teymour

    2016-06-08

    Attached are the .cas and .dat files along with the required User Defined Functions (UDFs) and look-up table of lift and drag coefficients for the Reynolds-Averaged Navier-Stokes (RANS) simulation of three coaxially located lab-scaled DOE RM1 turbines implemented in the ANSYS FLUENT CFD package. The lab-scaled DOE RM1 is a re-designed geometry, based on the full-scale DOE RM1 design, producing the same power output as the full-scale model while operating at matched Tip Speed Ratio values at laboratory-achievable Reynolds numbers (see attached paper). In this case study the flow field around and in the wake of the lab-scaled DOE RM1 turbines in a coaxial array is simulated using the Blade Element Model (a.k.a. Virtual Blade Model) by solving the RANS equations coupled with the k-ω turbulence closure model. It should be highlighted that in this simulation the actual geometry of the rotor blade is not modeled. The effect of the rotating turbine blades is modeled using Blade Element Theory. This simulation provides an accurate estimate of the performance of each device and of the structure of their turbulent far wake. The results of these simulations were validated against in-house experimental data. Simulations for other turbine configurations are available upon request.
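
    A Virtual Blade Model replaces the resolved blade geometry with momentum sources computed from tabulated lift and drag coefficients. The following is a generic textbook blade-element sketch of the sectional forces, not the actual FLUENT UDF; the function name and inputs are assumptions for illustration:

```python
def blade_element_forces(rho, u_axial, u_tang, chord, cl, cd):
    # Per-unit-span lift and drag on a blade element from the local
    # relative velocity; cl/cd would come from the look-up table.
    w2 = u_axial ** 2 + u_tang ** 2      # relative speed squared
    q = 0.5 * rho * w2 * chord           # dynamic pressure times chord
    return q * cl, q * cd                # (lift, drag) per unit span
```

    In a VBM these sectional forces are azimuthally averaged and deposited as source terms in the momentum equations over the rotor disk.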

  16. Multi-scale enhancement of climate prediction over land by improving the model sensitivity to vegetation variability

    NASA Astrophysics Data System (ADS)

    Alessandri, A.; Catalano, F.; De Felice, M.; Hurk, B. V. D.; Doblas-Reyes, F. J.; Boussetta, S.; Balsamo, G.; Miller, P. A.

    2017-12-01

    Here we demonstrate, for the first time, that the implementation of a realistic representation of vegetation in Earth System Models (ESMs) can significantly improve climate simulation and prediction across multiple time-scales. The effective sub-grid vegetation fractional coverage varies seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence, and therefore affects biophysical parameters such as the surface resistance to evapotranspiration, albedo, roughness length, and soil field capacity. To adequately represent this effect in the EC-Earth ESM, we included an exponential dependence of the vegetation cover on the Leaf Area Index. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (20th Century) simulations and retrospective predictions to the decadal (5-year), seasonal (2-4 month) and weather (4-day) time-scales, we show a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, the western US, Eastern Europe, Russia and eastern Siberia, due to the implemented time-varying shadowing effect of tree vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover consistently corrects the winter warm biases and improves the climate change sensitivity, the decadal potential predictability, and the skill of forecasts at seasonal and weather time-scales. Significant improvements in the prediction of 2 m temperature and rainfall are also shown over transitional land-surface hot spots. Both the potential predictability at the decadal time-scale and seasonal-forecast skill are enhanced over the Sahel, the North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in surface evapotranspiration. These results are discussed in a peer-reviewed paper accepted for publication in Climate Dynamics (Alessandri et al., 2017; doi:10.1007/s00382-017-3766-y).
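
    The exponential dependence of vegetation cover on Leaf Area Index mentioned above is commonly written in Lambert-Beer form; a minimal sketch, with an illustrative extinction coefficient k rather than the value used in EC-Earth:

```python
import math

def veg_cover(lai, k=0.5):
    # Effective fractional vegetation cover as an exponential (Lambert-Beer)
    # function of leaf area index; k is an illustrative extinction coefficient.
    return 1.0 - math.exp(-k * lai)
```

    Cover rises steeply with LAI at first and saturates toward 1 for dense canopies, which is what makes the fraction vary seasonally with leaf growth and senescence.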

  17. Full-Scale Numerical Modeling of Turbulent Processes in the Earth's Ionosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eliasson, B.; Stenflo, L.; Department of Physics, Linkoeping University, SE-581 83 Linkoeping

    2008-10-15

    We present a full-scale simulation study of ionospheric turbulence by means of a generalized Zakharov model based on the separation of variables into high-frequency and slow time scales. The model includes realistic length scales of the ionospheric profile and of the electromagnetic and electrostatic fields, and uses ionospheric plasma parameters relevant for high-latitude radio facilities such as EISCAT and HAARP. A nested-grid numerical method has been developed to resolve the different length scales while avoiding severe restrictions on the time step. The simulation demonstrates the parametric decay of the ordinary mode into Langmuir and ion-acoustic waves, followed by Langmuir wave collapse and short-scale caviton formation, as observed in ionospheric heating experiments.
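
    For reference, the dimensionless Zakharov system that such generalized models build on couples the slowly varying Langmuir field envelope E to the slow electron-density perturbation n (the paper's generalized model adds realistic ionospheric profiles and electromagnetic effects on top of this core):

```latex
% Dimensionless Zakharov system: E is the envelope of the high-frequency
% Langmuir field, n the slow ion-density perturbation.
i\,\partial_t E + \nabla^2 E = n\,E,
\qquad
\partial_t^2 n - \nabla^2 n = \nabla^2 |E|^2 .
```

    The ponderomotive term on the right of the second equation drives density cavities, which trap Langmuir waves and lead to the collapse and caviton formation described above.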

  18. FULL-SCALE TESTS OF THE MULTI-CHAMBERED TREATMENT TANK (MCTT)

    EPA Science Inventory

    The MCTT was developed to control toxicants in stormwater from critical source areas. During monitoring, the pilot-scale MCTT provided median reductions of >90% for toxicity, lead, zinc, and most organic toxicants. Suspended solids were reduced by 83% and COD by 60%. T...

  19. Three-dimensional multi-scale model of deformable platelets adhesion to vessel wall in blood flow

    PubMed Central

    Wu, Ziheng; Xu, Zhiliang; Kim, Oleg; Alber, Mark

    2014-01-01

    When a blood vessel ruptures or gets inflamed, the human body responds by rapidly forming a clot to restrict the loss of blood. Platelet aggregation at the injury site of the blood vessel, occurring via platelet–platelet adhesion, tethering and rolling on the injured endothelium, is a critical initial step in blood clot formation. A novel three-dimensional multi-scale model is introduced and used in this paper to simulate receptor-mediated adhesion of deformable platelets at the site of vascular injury under different shear rates of blood flow. The novelty of the model lies in a new approach to coupling submodels at three biological scales crucial for early clot formation: a hybrid cell-membrane submodel to represent the physiological elastic properties of a platelet, a stochastic receptor–ligand binding submodel to describe cell adhesion kinetics, and a lattice Boltzmann submodel for simulating blood flow. The model implementation on a GPU cluster significantly improved simulation performance. Predictive model simulations revealed that platelet deformation, interactions between platelets in the vicinity of the vessel wall, and the number of functional GPIbα platelet receptors played significant roles in platelet adhesion to the injury site. Variation of the number of functional GPIbα platelet receptors as well as changes in platelet stiffness can represent the effects of specific drugs reducing or enhancing platelet activity. Therefore, predictive simulations can improve the search for new drug targets and help to make treatment of thrombosis patient-specific. PMID:24982253

  20. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models at sufficient spatial and temporal resolutions. Geological heterogeneity and uncertainties further increase the challenges in modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational time than those in homogeneous media, and uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with thousands of processors have become available to scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models at higher resolutions within a reasonable time. However, to make them a useful tool, it is essential to tackle several practical obstacles to utilizing large numbers of processors effectively in general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector and scalar) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, computational performance was measured for three simulations with multi-million-cell grid models, including a simulation of the dissolution-diffusion-convection process, which requires high spatial and temporal resolution to simulate the growth of small convective fingers of CO2-dissolved water into larger ones at reservoir scale. The performance measurements confirmed that both simulators exhibit excellent scalability, showing almost linear speedup with the number of processors up to over ten thousand cores. This generally allows us to perform coupled multi-physics (THC) simulations on high-resolution geologic models with multi-million-cell grids in a practical time (e.g., less than a second per time step).

  1. Evaluating multi-level models to test occupancy state responses of Plethodontid salamanders

    USGS Publications Warehouse

    Kroll, Andrew J.; Garcia, Tiffany S.; Jones, Jay E.; Dugger, Catherine; Murden, Blake; Johnson, Josh; Peerman, Summer; Brintz, Ben; Rochelle, Michael

    2015-01-01

    Plethodontid salamanders are diverse and widely distributed taxa and play critical roles in ecosystem processes. Due to salamander use of structurally complex habitats, and because only a portion of a population is available for sampling, evaluation of sampling designs and estimators is critical to provide strong inference about Plethodontid ecology and responses to conservation and management activities. We conducted a simulation study to evaluate the effectiveness of multi-scale and hierarchical single-scale occupancy models in the context of a Before-After Control-Impact (BACI) experimental design with multiple levels of sampling. Also, we fit the hierarchical single-scale model to empirical data collected for Oregon slender and Ensatina salamanders across two years on 66 forest stands in the Cascade Range, Oregon, USA. All models were fit within a Bayesian framework. Estimator precision in both models improved with increasing numbers of primary and secondary sampling units, underscoring the potential gains accrued when adding secondary sampling units. Both models showed evidence of estimator bias at low detection probabilities and low sample sizes; this problem was particularly acute for the multi-scale model. Our results suggested that sufficient sample sizes at both the primary and secondary sampling levels could ameliorate this issue. Empirical data indicated Oregon slender salamander occupancy was associated strongly with the amount of coarse woody debris (posterior mean = 0.74; SD = 0.24); Ensatina occupancy was not associated with amount of coarse woody debris (posterior mean = -0.01; SD = 0.29). Our simulation results indicate that either model is suitable for use in an experimental study of Plethodontid salamanders provided that sample sizes are sufficiently large. However, hierarchical single-scale and multi-scale models describe different processes and estimate different parameters. 
As a result, we recommend careful consideration of study questions and objectives prior to sampling data and fitting models.
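
    The detection/occupancy structure underlying such simulation studies can be sketched in a few lines: a latent occupancy state per site plus imperfect detection per visit. All parameter values below are illustrative assumptions, not the study's settings; the naive estimator shows why low detection probability biases occupancy estimates low:

```python
import random

def simulate_detections(n_sites=66, n_visits=4, psi=0.6, p=0.3, seed=1):
    # Single-season occupancy data: each site is occupied with probability
    # psi; occupied sites are detected on each visit with probability p.
    rng = random.Random(seed)
    data = []
    for _ in range(n_sites):
        z = rng.random() < psi                                   # latent state
        y = sum(rng.random() < p for _ in range(n_visits)) if z else 0
        data.append(y)
    return data

def naive_occupancy(data):
    # Share of sites with at least one detection; biased low when p is small.
    return sum(y > 0 for y in data) / len(data)
```

    Hierarchical occupancy models estimate psi and p jointly instead of conflating them as the naive estimator does.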

  2. On computing stress in polymer systems involving multi-body potentials from molecular dynamics simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Yao, E-mail: fu5@mailbox.sc.edu, E-mail: jhsong@cec.sc.edu; Song, Jeong-Hoon, E-mail: fu5@mailbox.sc.edu, E-mail: jhsong@cec.sc.edu

    2014-08-07

    The Hardy stress definition has been restricted to pair potentials and embedded-atom method potentials due to the basic assumptions in the derivation of a symmetric microscopic stress tensor. The force decomposition required in the Hardy stress expression becomes obscure for multi-body potentials. In this work, we demonstrate the invariance of the Hardy stress expression for a polymer system modeled with multi-body interatomic potentials, including interactions of up to four atoms, by applying central force decomposition of the atomic force. The balance of momentum has been demonstrated to be valid theoretically and tested under various numerical simulation conditions. The validity of momentum conservation justifies the extension of the Hardy stress expression to multi-body potential systems. The computed Hardy stress has been observed to converge to the virial stress of the system with increasing spatial averaging volume. This work provides a feasible and reliable linkage between the atomistic and continuum scales for multi-body potential systems.
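
    The pair-potential case that such expressions reduce to can be sketched directly: the following computes the potential (configurational) part of the virial stress for a generic pair force. This is an illustration of the pair-potential baseline only, with assumed names and inputs, not the paper's central force decomposition for multi-body potentials:

```python
import itertools

def pair_virial_stress(positions, pair_force, volume):
    # Potential part of the virial stress tensor for pair interactions:
    #   sigma[a][b] = (1/V) * sum_{i<j} r_ij[a] * f_ij[b],
    # where f_ij is the force on particle i due to particle j.
    dim = len(positions[0])
    sigma = [[0.0] * dim for _ in range(dim)]
    for ri, rj in itertools.combinations(positions, 2):
        rij = [a - b for a, b in zip(ri, rj)]
        fij = pair_force(ri, rj)
        for a in range(dim):
            for b in range(dim):
                sigma[a][b] += rij[a] * fij[b] / volume
    return sigma
```

    Central force decomposition generalizes this by splitting each multi-body force into central pair terms so the same r-and-f contraction applies.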

  3. Forward and adjoint spectral-element simulations of seismic wave propagation using hardware accelerators

    NASA Astrophysics Data System (ADS)

    Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri

    2015-04-01

    Improving the resolution of tomographic images is crucial to answering important questions about the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach, in which seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities to further improve the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing, together with advances in multi-core central processing units (CPUs), can greatly accelerate scientific applications. There are two main choices of language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL was adopted mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code-generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and to generate optimized source code for both the CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware usage.

  4. Simulating multiprimary LCDs on standard tri-stimulus LC displays

    NASA Astrophysics Data System (ADS)

    Lebowsky, Fritz; Vonneilich, Katrin; Bonse, Thomas

    2008-01-01

    Large-scale, direct-view TV screens, in particular those based on liquid crystal technology, are beginning to use subpixel structures with more than three subpixels to implement a multi-primary display with up to six primaries. Since their input color space is likely to remain tri-stimulus RGB, we first focus on some fundamental constraints. Among them, we elaborate simplified gamut-mapping architectures as well as color filter geometry, transparency, and chromaticity coordinates in color space. Based on a 'display-centric' RGB color space tetrahedrization combined with linear interpolation, we describe a simulation framework which enables optimization for up to 7 primaries. We evaluated the performance by mapping the multi-primary design back onto an RGB LC display gamut without building a prototype multi-primary display. As long as we kept the RGB-equivalent output signal within the display gamut, we could analyze all desirable multi-primary configurations with regard to colorimetric variance and visually perceived quality. Not only does our simulation tool enable us to verify a novel concept, it also demonstrates how carefully one needs to design a multi-primary display for LCD TV applications.

  5. Assessing self-care and social function using a computer adaptive testing version of the Pediatric Evaluation of Disability Inventory Accepted for Publication, Archives of Physical Medicine and Rehabilitation

    PubMed Central

    Coster, Wendy J.; Haley, Stephen M.; Ni, Pengsheng; Dumas, Helene M.; Fragala-Pinkham, Maria A.

    2009-01-01

    Objective To examine score agreement, validity, precision, and response burden of a prototype computer adaptive testing (CAT) version of the Self-Care and Social Function scales of the Pediatric Evaluation of Disability Inventory (PEDI) compared to the full-length version of these scales. Design Computer simulation analysis of cross-sectional and longitudinal retrospective data; cross-sectional prospective study. Settings Pediatric rehabilitation hospital, including inpatient acute rehabilitation, day school program, outpatient clinics; community-based day care, preschool, and children’s homes. Participants Four hundred sixty-nine children with disabilities and 412 children with no disabilities (analytic sample); 38 children with disabilities and 35 children without disabilities (cross-validation sample). Interventions Not applicable. Main Outcome Measures Summary scores from prototype CAT applications of each scale using 15-, 10-, and 5-item stopping rules; scores from the full-length Self-Care and Social Function scales; time (in seconds) to complete assessments and respondent ratings of burden. Results Scores from both computer simulations and field administration of the prototype CATs were highly consistent with scores from full-length administration (all r’s between .94 and .99). Using computer simulation of retrospective data, discriminant validity and sensitivity to change of the CATs closely approximated that of the full-length scales, especially when the 15- and 10-item stopping rules were applied. In the cross-validation study the time to administer both CATs was 4 minutes, compared to over 16 minutes to complete the full-length scales. Conclusions Self-care and Social Function score estimates from CAT administration are highly comparable to those obtained from full-length scale administration, with small losses in validity and precision and substantial decreases in administration time. PMID:18373991
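
    A CAT with an item-count stopping rule can be sketched as a loop that repeatedly picks the most informative remaining item and updates the ability estimate. This toy version (crude damped update, hypothetical item bank) only illustrates the 15-/10-/5-item stopping-rule idea, not the PEDI-CAT item-selection or scoring algorithm:

```python
def run_cat(answer_fn, item_bank, stop_after=10):
    # Toy adaptive testing loop with an item-count stopping rule:
    # administer the item whose difficulty is closest to the current
    # ability estimate, nudge the estimate, stop after N items.
    theta = 0.0                       # current ability estimate
    remaining = sorted(item_bank)     # item difficulties
    administered = []
    for _ in range(min(stop_after, len(remaining))):
        item = min(remaining, key=lambda b: abs(b - theta))
        remaining.remove(item)
        correct = answer_fn(item)
        theta += 0.5 * (1 if correct else -1) / (len(administered) + 1)
        administered.append((item, correct))
    return theta, administered
```

    Shorter stopping rules trade precision for speed, which is exactly the trade-off the 15-, 10-, and 5-item comparisons above quantify.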

  6. Scientific Discovery through Advanced Computing in Plasma Science

    NASA Astrophysics Data System (ADS)

    Tang, William

    2005-03-01

    Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research during the 21st Century. For example, the Department of Energy's ``Scientific Discovery through Advanced Computing'' (SciDAC) Program was motivated in large measure by the fact that formidable scientific challenges in its research portfolio could best be addressed by utilizing the combination of the rapid advances in super-computing technology together with the emergence of effective new algorithms and computational methodologies. The imperative is to translate such progress into corresponding increases in the performance of the scientific codes used to model complex physical systems such as those encountered in high temperature plasma research. If properly validated against experimental measurements and analytic benchmarks, these codes can provide reliable predictive capability for the behavior of a broad range of complex natural and engineered systems. This talk reviews recent progress and future directions for advanced simulations with some illustrative examples taken from the plasma science applications area. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by the combination of access to powerful new computational resources together with innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning a huge range in time and space scales. In particular, the plasma science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). 
A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of plasma turbulence in magnetically-confined high temperature plasmas. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to the computational science area.

  7. Recent Advances in Transferable Coarse-Grained Modeling of Proteins

    PubMed Central

    Kar, Parimal; Feig, Michael

    2017-01-01

    Computer simulations are indispensable tools for studying the structure and dynamics of biological macromolecules. Biochemical processes occur on different scales of length and time, and atomistic simulations cannot cover the relevant spatiotemporal scales at which cellular processes occur. To address this challenge, coarse-grained (CG) modeling of biological systems is employed. Over the last few years, many CG models for proteins have continued to be developed. However, many of them are not transferable between different systems and different environments. In this review, we discuss those CG protein models that are transferable and that retain chemical specificity. We restrict ourselves to CG models of soluble proteins only. We also briefly review recent progress made in multi-scale hybrid all-atom/coarse-grained simulations of proteins. PMID:25443957

  8. Aerosol-cloud interactions in a multi-scale modeling framework

    NASA Astrophysics Data System (ADS)

    Lin, G.; Ghan, S. J.

    2017-12-01

    Atmospheric aerosols play an important role in changing the Earth's climate by scattering and absorbing solar and terrestrial radiation and by interacting with clouds. However, quantification of aerosol effects remains one of the most uncertain aspects of current and future climate projection. Much of the uncertainty results from the multi-scale nature of aerosol-cloud interactions, which is very challenging to represent in traditional global climate models (GCMs). In contrast, the multi-scale modeling framework (MMF) provides a viable solution, which explicitly resolves cloud and precipitation in a cloud-resolving model (CRM) embedded in each GCM grid column. In the MMF version of the Community Atmosphere Model version 5 (CAM5), aerosol processes are treated with a parameterization called Explicit Clouds Parameterized Pollutants (ECPP). It uses the cloud/precipitation statistics derived from the CRM to treat the cloud processing of aerosols on the GCM grid. However, this approach treats clouds on the CRM grid but aerosols on the GCM grid, which is inconsistent with the reality that aerosol-cloud interactions occur on the cloud scale. To overcome this limitation, we propose a new aerosol treatment in the MMF, Explicit Clouds Explicit Aerosols (ECEP), in which both clouds and aerosols are resolved explicitly on the CRM grid. We first applied the MMF with ECPP to the Accelerated Climate Modeling for Energy (ACME) model to obtain an MMF version of ACME, and we also developed an alternative version of ACME-MMF with ECEP. Based on these two models, we conducted two simulations: one with ECPP and the other with ECEP. Preliminary results showed that the ECEP simulations tend to predict higher aerosol concentrations than the ECPP simulations, because vertical transport from the surface to the upper atmosphere is more efficient while wet removal is less efficient. We also found that cloud droplet number concentrations differ between the two simulations due to differences in cloud droplet lifetime. Next, we will explore how the ECEP treatment affects anthropogenic aerosol forcing, particularly the aerosol indirect forcing, by comparing present-day and pre-industrial simulations.

  9. Resolving the Multi-scale Behavior of Geochemical Weathering in the Critical Zone Using High Resolution Hydro-geochemical Models

    NASA Astrophysics Data System (ADS)

    Pandey, S.; Rajaram, H.

    2015-12-01

    This work investigates hydrologic and geochemical interactions in the Critical Zone (CZ) using high-resolution reactive transport modeling. Reactive transport models can be used to predict the response of geochemical weathering and solute fluxes in the CZ to changes in a dynamic environment, such as those pertaining to human activities and climate change in recent years. The scales of hydrology and geochemistry in the CZ range from days to eons in time and centimeters to kilometers in space. Here, we present results of a multi-dimensional, multi-scale hydro-geochemical model to investigate the role of subsurface heterogeneity in the formation of mineral weathering fronts in the CZ, which requires consideration of many of these spatio-temporal scales. The model is implemented using PFLOTRAN, an open-source subsurface flow and reactive transport code that utilizes parallelization over multiple processing nodes and provides a strong framework for simulating weathering in the CZ. The model is set up to simulate weathering dynamics in mountainous catchments representative of the Colorado Front Range. Model parameters were constrained based on hydrologic, geochemical, and geophysical observations from the Boulder Creek Critical Zone Observatory (BcCZO). Simulations were performed in fractured rock systems and compared with systems of heterogeneous and homogeneous permeability fields. Tracer simulations revealed that the mean residence time of solutes was drastically reduced as fracture density increased. In simulations that include mineral reactions, distinct signatures of transport limitations on weathering arose when discrete flow paths were included. This transport limitation was related to both advective and diffusive processes in the highly heterogeneous systems (i.e., fractured media and correlated random permeability fields with σ_ln k > 3).
The well-known time-dependence of mineral weathering rates was found to be most pronounced in the fractured systems, with a departure from the maximum system-averaged dissolution rate occurring after ~100 kyr, followed by a gradual decrease in the reaction rate with time that persists beyond 10^4 kyr.

  10. Numerical Simulations of STOVL Hot Gas Ingestion in Ground Proximity Using a Multigrid Solution Procedure

    NASA Technical Reports Server (NTRS)

    Wang, Gang

    2003-01-01

    A multigrid solution procedure for the numerical simulation of turbulent flows in complex geometries has been developed. A Full Multigrid-Full Approximation Scheme (FMG-FAS) is incorporated into the continuity and momentum equations, while the scalars are decoupled from the multigrid V-cycle. A standard k-epsilon turbulence model with wall functions has been used to close the governing equations. The numerical solution is accomplished by solving for the Cartesian velocity components either with a traditional grid-staggering arrangement or with a multiple-velocity grid-staggering arrangement. The two solution methodologies are evaluated for relative computational efficiency. The solution procedure with the traditional staggering arrangement is subsequently applied to calculate the flow and temperature fields around a model Short Take-off and Vertical Landing (STOVL) aircraft hovering in ground proximity.

  11. FY10 Report on Multi-scale Simulation of Solvent Extraction Processes: Molecular-scale and Continuum-scale Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wardle, Kent E.; Frey, Kurt; Pereira, Candido

    2014-02-02

    This task is aimed at predictive modeling of solvent extraction processes in typical extraction equipment through multiple simulation methods at various scales of resolution. We have conducted detailed continuum fluid dynamics simulations at the process-unit level as well as simulations of the molecular-level physical interactions which govern extraction chemistry. Through a combination of information gained from simulations at each of these two tiers, along with advanced techniques such as the Lattice Boltzmann Method (LBM) which can bridge the two scales, we can develop the tools to work towards predictive simulation of solvent extraction at the equipment scale (Figure 1). The goal of such a tool, along with enabling optimized design and operation of extraction units, would be to allow prediction of stage extraction efficiency under specified conditions. Simulation efforts on each of the two scales are described below. As the initial application of FELBM in the work performed during FY10 has been on annular mixing, it is discussed in the context of the continuum scale. In the future, however, it is anticipated that the real value of FELBM will be in its use as a tool for sub-grid model development through highly refined DNS-like multiphase simulations, facilitating exploration and development of droplet models, including breakup and coalescence, which will be needed for the large-scale simulations where droplet-level physics cannot be resolved. In this area, it can have a significant advantage over traditional CFD methods, as its high computational efficiency allows exploration of significantly greater physical detail, especially as computational resources increase in the future.

  12. Eddy Fluxes and Sensitivity of the Water Cycle to Spatial Resolution in Idealized Regional Aquaplanet Model Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagos, Samson M.; Leung, Lai-Yung R.; Gustafson, William I.

    2014-02-28

    A multi-scale moisture budget analysis is used to identify the mechanisms responsible for the sensitivity of the water cycle to spatial resolution in idealized regional aquaplanet simulations. In the higher-resolution simulations, moisture transport by eddy fluxes dries the boundary layer, enhancing evaporation and precipitation. This effect of the eddies, which is underestimated by the physics parameterizations in the low-resolution simulations, is found to be responsible for the sensitivity of the water cycle both directly and through its upscale effect on the mean circulation. Correlations among moisture transport by eddies at adjacent ranges of scales provide the potential for reducing this sensitivity by representing the unresolved eddies by their marginally resolved counterparts.
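
    The eddy contribution in a budget of this kind is the covariance term of a Reynolds decomposition, ⟨qv⟩ = ⟨q⟩⟨v⟩ + ⟨q′v′⟩. A toy illustration with invented sample values (not data from the study):

```python
# Reynolds decomposition of a moisture flux into mean-flow and eddy parts.
# The sample values below are made up purely for illustration.

def mean(xs):
    return sum(xs) / len(xs)

def eddy_flux(q, v):
    """Return (total, mean-flow, eddy) components of the moisture flux."""
    qbar, vbar = mean(q), mean(v)
    total = mean([qi * vi for qi, vi in zip(q, v)])
    return total, qbar * vbar, total - qbar * vbar

q = [8.0, 9.5, 7.0, 10.5, 8.0]   # specific humidity samples (g/kg)
v = [1.0, 2.0, 0.5, 2.5, 1.0]    # vertical velocity samples (cm/s)
total, mean_part, eddy_part = eddy_flux(q, v)
# the decomposition is exact: total == mean_part + eddy_part, and the
# positive eddy part reflects the correlation between q and v here
```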

  13. Multi-phase SPH modelling of violent hydrodynamics on GPUs

    NASA Astrophysics Data System (ADS)

    Mokos, Athanasios; Rogers, Benedict D.; Stansby, Peter K.; Domínguez, José M.

    2015-11-01

    This paper presents the acceleration of multi-phase smoothed particle hydrodynamics (SPH) using a graphics processing unit (GPU), enabling large numbers of particles (10-20 million) to be simulated on a single GPU card. With novel hardware architectures such as a GPU, the optimum approach to implementing a multi-phase scheme presents some new challenges: many more particles must be included in the calculation, and the speeds of sound in the phases differ greatly, with the largest determining the time step, so efficient computation is essential. To take full advantage of the hardware acceleration provided by a single GPU for a multi-phase simulation, four different algorithms are investigated: conditional statements, binary operators, separate particle lists and an intermediate global function. Runtime results show that the optimum approach employs separate cell and neighbour lists for each phase; the profiler shows that this approach reduces both memory transactions and arithmetic operations, giving significant runtime gains. The four algorithms are compared against the optimised single-phase GPU code DualSPHysics for 2-D and 3-D simulations, which indicates that the multi-phase functionality carries a significant computational overhead. A comparison with an optimised CPU code shows a speed-up of an order of magnitude over an OpenMP simulation with 8 threads and two orders of magnitude over a single-thread simulation. The multi-phase SPH GPU code is demonstrated on a 3-D dam break impacting an obstacle, showing better agreement with experimental results than an equivalent single-phase code. It also enables a convergence study to be undertaken on a single GPU with a number of particles that would otherwise have required large high-performance computing resources.
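
    The time-step constraint mentioned above is easy to illustrate: in weakly compressible SPH, the acoustic CFL condition is governed by the largest numerical sound speed among the phases. A sketch with invented values (not DualSPHysics' actual time-step criterion, which also accounts for viscous and force terms):

```python
# Acoustic CFL time-step limit in weakly compressible multi-phase SPH.
# All numbers below are illustrative, not taken from the paper.

def acoustic_dt(h, sound_speeds, cfl=0.3):
    """Time step limited by the fastest phase: dt = CFL * h / max(c)."""
    return cfl * h / max(sound_speeds)

h = 0.005                      # smoothing length (m), illustrative
c_water, c_air = 40.0, 120.0   # numerical sound speeds (m/s), illustrative
dt_single = acoustic_dt(h, [c_water])
dt_multi = acoustic_dt(h, [c_water, c_air])
# with these values dt_multi is 3x smaller than dt_single, so the
# multi-phase run needs ~3x more steps for the same simulated time
```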

  14. Multi-scale Modeling of Arctic Clouds

    NASA Astrophysics Data System (ADS)

    Hillman, B. R.; Roesler, E. L.; Dexheimer, D.

    2017-12-01

    The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scales of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud-system-resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded in each gridcell of a traditional GCM, replacing the cloud and convective parameterizations to simulate more of these important processes explicitly. This approach is attractive in that it allows more explicit simulation of small-scale processes while also allowing interaction between the small- and large-scale processes. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore its limitations using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.

  15. Hi-fidelity multi-scale local processing for visually optimized far-infrared Herschel images

    NASA Astrophysics Data System (ADS)

    Li Causi, G.; Schisano, E.; Liu, S. J.; Molinari, S.; Di Giorgio, A.

    2016-07-01

    In the context of the "Hi-Gal" multi-band full-plane mapping program for the Galactic Plane, as imaged by the Herschel far-infrared satellite, we have developed a semi-automatic tool that produces high-definition, high-quality color maps optimized for visual perception of extended features, like bubbles and filaments, against large background variations. We project the map tiles of three selected bands onto a 3-channel panorama spanning the central 130 degrees of galactic longitude by 2.8 degrees of galactic latitude, at a pixel scale of 3.2", in Cartesian galactic coordinates. We then process this image piecewise, applying a custom multi-scale local stretching algorithm reinforced by a local multi-scale color balance. Finally, we apply an edge-preserving contrast enhancement to perform artifact-free detail sharpening. With this tool we have produced a stunning giga-pixel color image of the far-infrared Galactic Plane, made publicly available with the recent release of the Hi-Gal mosaics and compact source catalog.
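
    A local stretching algorithm of the kind described maps each region of the image through its own intensity transform, so faint structure survives large background gradients that would saturate a single global stretch. A toy 1-D, single-scale version in pure Python (the Hi-Gal pipeline is far more elaborate; tile size and percentiles here are invented):

```python
# Per-tile percentile stretch: each tile is linearly rescaled between its
# own low/high percentiles, so local contrast is preserved regardless of
# the background level. Toy 1-D sketch, illustrative parameters only.

def percentile(vals, p):
    s = sorted(vals)
    return s[min(len(s) - 1, int(p / 100 * len(s)))]

def local_stretch(signal, tile=8, lo_p=5, hi_p=95):
    """Stretch each tile to [0, 1] between its own low/high percentiles."""
    out = []
    for start in range(0, len(signal), tile):
        t = signal[start:start + tile]
        lo, hi = percentile(t, lo_p), percentile(t, hi_p)
        span = (hi - lo) or 1.0          # guard against flat tiles
        out.extend([min(1.0, max(0.0, (v - lo) / span)) for v in t])
    return out

# a faint ripple riding on a strong background ramp
signal = [x * 10.0 + (1.0 if x % 2 else 0.0) for x in range(32)]
stretched = local_stretch(signal)
# every output value lies in [0, 1] regardless of the background level
```

    A production version would blend overlapping tiles (and multiple scales) to avoid visible seams at tile boundaries.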

  16. Multi-Chambered Treatment Train (MCTT) For Treating Stormwater Runoff From Highly Polluted Source Areas

    EPA Science Inventory

    A full-scale Multi-Chambered Treatment Train (MCTT) stormwater treatment system was tested in Taiwan during the spring and summer of 2007. The MCTT was installed in a parking lot in Ping-Lin, Northern Taiwan. The site is 85% impervious and has a drainage area to the MCTT unit of...

  17. A full scale hydrodynamic simulation of pyrotechnic combustion

    NASA Astrophysics Data System (ADS)

    Kim, Bohoon; Jang, Seung-Gyo; Yoh, Jack

    2017-06-01

    A full-scale hydrodynamic simulation requiring accurate reproduction of shock-induced detonation was conducted for the design of an energetic component system. A series of small-scale gap tests and detailed hydrodynamic simulations were used to validate the reactive flow model for predicting shock propagation in a train configuration and to quantify the shock sensitivity of the energetic materials. The energetic component system is composed of four main components: a donor unit (HNS + HMX), a bulkhead (STS), an acceptor explosive (RDX), and a propellant (BKNO3) for gas generation. The pressurized gases generated by the burning propellant were purged into a 10 cc release chamber to study the inherent oscillatory flow induced by the interference between shock and rarefaction waves. The pressure fluctuations measured in experiment and calculation were investigated to further validate the distinctive peak at a specific characteristic frequency (ωc = 8.3 kHz). In this paper, a step-by-step numerical description of the detonation of the high explosive components, the deflagration of the propellant component, and the deformation of the metal component is given in order to facilitate the proper implementation of the outlined formulation into a shock physics code for a full-scale hydrodynamic simulation of the energetic component system.

  18. Framework for multi-resolution analyses of advanced traffic management strategies [summary].

    DOT National Transportation Integrated Search

    2017-01-01

    Transportation planning relies extensively on software that can simulate and predict travel behavior in response to alternative transportation networks. However, different software packages view traffic at different scales. Some programs are based on...

  19. Performance of Rehabilitated Lightweight Aggregate Asphalt Concrete Pavements Under Wet and Heated Model Mobile Load Simulator Trafficking: A Comparative Study with the TxMLS

    DOT National Transportation Integrated Search

    2000-03-01

    One-third-scale Model Mobile Load Simulator Mk3 (MMLS3) tests were conducted on US 281 in Jacksboro, Texas, adjacent to the full-scale Texas Mobile Load Simulator (TxMLS). The objectives were to investigate the moisture susceptibility and relative pe...

  20. A domain-specific analysis system for examining nuclear reactor simulation data for light-water and sodium-cooled fast reactors

    DOE PAGES

    Billings, Jay Jay; Deyton, Jordan H.; Forest Hull, S.; ...

    2015-07-17

    Building new fission reactors in the United States presents many technical and regulatory challenges. Chief among the technical challenges is the need to share and present results from new high-fidelity, high-performance simulations in an easily consumable way. Given that modern multi-scale, multi-physics simulations can generate petabytes of data, this will require the development of new techniques and methods to reduce the data to familiar quantities of interest with a more reasonable resolution and size. Furthermore, some of the results from these simulations may be new quantities for which visualization and analysis techniques are not immediately available in the community and need to be developed. Our paper describes a new system for managing high-performance simulation results in a domain-specific way that naturally exposes quantities of interest for light-water and sodium-cooled fast reactors. It enables easy qualitative and quantitative comparisons between simulation results through a graphical user interface and cross-platform, multi-language input-output libraries for use by developers working with the data. One example comparing results from two different simulation suites for a single assembly in a light-water reactor is presented, along with a detailed discussion of the system's requirements and design.
